0 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>One of the scientific disciplines behind the Science of Science studies is the study of scientific networks. We are concerned with scientific networks as social networks whose connections represent either co-authorship (collaboration) or citations. Different measures of network analysis can be applied to these networks, such as centrality (to find influential authors) or the clustering coefficient (to show the connectedness of a group of scientists). A major limitation of earlier studies has been the lack of completeness of the data set: a study on co-authorship may not necessarily have all the citation links, which limits applying different types of analyses to the same set of nodes. To overcome this, we have worked on publicly available citation metadata to develop a workflow for creating different types of scientific networks. Our focus is primarily on the identification of prominent authors through centrality analysis, and on whether this can be achieved using open metadata. To present our approach, we have analysed the Scientometrics journal as a case study. We are not concerned with a bibliometric study of any particular field; rather, we aim to provide a replicable workflow (in the form of Python scripts) for applying network analysis to OpenCitations data. With the increasing popularity of open access and open metadata, we hypothesise that this workflow will provide an avenue for understanding science in multiple dimensions.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Identifying prominent authors (gurus) of a field is one of the primary concerns of young researchers in that particular field. Likewise, other researchers tend to follow research published by the gurus of the field.</ns0:p><ns0:p>In this work, we aim to utilise open metadata <ns0:ref type='bibr' target='#b30'>(Peroni et al., 2015)</ns0:ref>, made available through Crossref, together with the open-source NetworkX <ns0:ref type='bibr' target='#b9'>(Hagberg et al., 2008)</ns0:ref> and SNAP <ns0:ref type='bibr' target='#b17'>(Leskovec and Sosič, 2016)</ns0:ref> Python libraries for network analysis. Python is used because of its popularity with researchers, as per the survey results of <ns0:ref type='bibr' target='#b1'>AlNoamany and Borghi (2018)</ns0:ref>. This article provides minimal details of a case study analysing the collaboration network of the Scientometrics journal metadata for 10 years, starting from 2003. All steps are described so that the study can be replicated. This work lays the groundwork for further analyses of a similar type on different journals, sets of journals or a subject category using open metadata.</ns0:p><ns0:p>Defining a guru of a field is not an easy task, and any definition will be highly subjective. To this end, we define a guru using the centrality measures of social network analysis. The different centrality measures are depicted in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> <ns0:ref type='bibr' target='#b25'>(Newman, 2010)</ns0:ref>. The following description was inspired by <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref>. Simply put, any author with a high citation count may be considered a guru; this can be captured using degree centrality. Another way of identifying a highly cited individual is to see whose papers are cited in the top percentile within the domain, but we currently limit such definitions to the degree centrality of articles. However, it is not always the case that all highly cited authors are equally influential. Those who are cited by other influential authors may also be termed influential, even though they may or may not have a high citation count. Likewise, any author collaborating frequently with influential authors would also have high influence in that field of study. This recursive notion of influence is well captured by eigenvector centrality. Another centrality measure, betweenness centrality, would define an author as prominent in the field if the author collaborates with individuals from different clusters within the domain. The centrality measures of closeness and farness capture the extent to which an author is, on average, close to or far from other authors within the network, respectively.</ns0:p><ns0:p>Such analyses can be applied to a variety of scientific networks, such as the article citation network, the author citation network or the author collaboration network. These networks can be created using different data sources. Some data sources (such as Crossref) allow fetching the metadata of articles cited by an article or of those that cite it. This allows expanding the breadth of the network. In Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> the neighbours of node (n) (namely nodes (k), (l), (m), (o) and (p)) form its ego network.</ns0:p>
<ns0:p>The remaining document is organised as follows: after giving some preliminary definitions, we briefly describe the pipeline architecture in the methodology before moving to the detailed steps of acquiring data. First, we provide details of the citation index downloaded from the web and loaded in memory.</ns0:p><ns0:p>Next, we discuss how the data are fetched and filtered. Further, we provide a case study for finding gurus.</ns0:p><ns0:p>Lastly, we conclude with details of how this work can be further expanded.</ns0:p></ns0:div>
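To make these definitions concrete, the following minimal sketch computes the discussed measures with NetworkX on a hypothetical toy graph (the author labels a–f are made up); the same library calls are applied to the real networks later in the workflow.

```python
import networkx as nx

# Small collaboration-style toy graph with hypothetical authors a-f.
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"),
              ("c", "d"), ("d", "e"), ("d", "f")])

print(nx.degree_centrality(G))       # highly connected authors
print(nx.eigenvector_centrality(G))  # connected to well-connected authors
print(nx.betweenness_centrality(G))  # bridges between groups
print(nx.closeness_centrality(G))    # close, on average, to everyone else
# Farness is simply the inverse view of closeness (low closeness = high farness).
print(nx.clustering(G))              # how tightly knit each author's neighbours are
```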
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Visualising bibliometric data as a network is not new; <ns0:ref type='bibr' target='#b32'>Price (1965)</ns0:ref> introduced such work more than 50 years ago. Most recent studies are on co-authorship networks <ns0:ref type='bibr' target='#b0'>(Abbasi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b23'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b16'>Lee, 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Singh and Jolad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref>; however, others have focused on citation networks for authors <ns0:ref type='bibr' target='#b7'>(Ding, 2011;</ns0:ref><ns0:ref type='bibr' target='#b23'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b41'>Xu and Pekelis, 2015;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref> or journals <ns0:ref type='bibr' target='#b39'>(Waltman and Yan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leydesdorff et al., 2018)</ns0:ref>. Only a couple of studies have utilised more than one scientific network for analysis <ns0:ref type='bibr' target='#b23'>(Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref>. Traditionally, bibliometric analysis has been done using WoS and Scopus <ns0:ref type='bibr' target='#b38'>(Waltman and Larivière, 2020)</ns0:ref>, and a similar pattern is seen in these studies, where the data sources are primarily WoS <ns0:ref type='bibr' target='#b7'>(Ding, 2011;</ns0:ref><ns0:ref type='bibr' target='#b23'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b39'>Waltman and Yan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leydesdorff et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b22'>Massucci and Docampo, 2019)</ns0:ref> or Scopus <ns0:ref type='bibr' target='#b0'>(Abbasi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b16'>Lee, 2019)</ns0:ref>; however, some recent studies have focused on open access data sources <ns0:ref type='bibr' target='#b34'>(Singh and Jolad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Van den Besselaar and Sandström, 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref>. Other data sources such as PubMed, CiteSeerX and ACL are not discussed in this article, as they are mostly used for text analysis instead of network analysis. Below we provide a brief account of work done on scientific networks using centrality measures in the past decade.</ns0:p><ns0:p>Details are summarised in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> in chronological order. Some earlier studies, such as <ns0:ref type='bibr' target='#b26'>(Newman, 2004)</ns0:ref>, are not discussed here so as to include only recent studies. <ns0:ref type='bibr' target='#b7'>Ding (2011)</ns0:ref> proposed to analyse the author citation network with weighted PageRank. The author showed that the proposed strategy outperforms the conventional h-index and related citation count measures in predicting prize winners. <ns0:ref type='bibr' target='#b0'>Abbasi et al. (2012)</ns0:ref> discussed the use of betweenness centrality, compared to degree and closeness centrality, as a measure of attracting more collaborators. They used a temporal co-authorship network in the steel research domain, with data manually curated and downloaded from Scopus. <ns0:ref type='bibr' target='#b29'>Ortega (2014)</ns0:ref> analysed the ego networks of 500 co-authors and concluded that centrality measures are correlated with bibliometric indicators.
They used the clustering coefficient, degree and betweenness centrality as local metrics, while some global-level metrics were also analysed for a holistic view of the ego network. It is one of the early studies using MAG.</ns0:p><ns0:p>Two book chapters provide hands-on details about centrality measures <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref> and PageRank <ns0:ref type='bibr' target='#b39'>(Waltman and Yan, 2014)</ns0:ref> using WoS data. <ns0:ref type='bibr' target='#b23'>Milojević (2014)</ns0:ref> constructed the author collaboration network and calculated degree, betweenness, eigenvector and closeness centrality. <ns0:ref type='bibr' target='#b39'>Waltman and Yan (2014)</ns0:ref> provide details for applying PageRank to a journal citation network. <ns0:ref type='bibr' target='#b41'>Xu and Pekelis (2015)</ns0:ref> used a manually curated dataset of authors from China and Taiwan in the field of Chinese Language Interpreting Studies. They applied PageRank and degree centrality to find influential authors within different clusters identified using community detection. <ns0:ref type='bibr' target='#b18'>Leydesdorff et al. (2018)</ns0:ref> used betweenness centrality in a journal citation network as a measure of the multidisciplinarity of a journal. A journal is usually cited from within its own subject category, but journals cited by or citing other fields are considered a bridge between subject categories. The authors combined their approach with a diversity measure and evaluated it on data from JCR. <ns0:ref type='bibr' target='#b16'>Lee (2019)</ns0:ref> provides a case study of performance evaluation for young researchers by analysing their collaboration network. Using statistical analysis, the frequency of collaborators, measured by degree centrality, is shown to correspond with future publication count. This is akin to <ns0:ref type='bibr' target='#b20'>Li et al. (2019)</ns0:ref>, who conclude that collaboration of young scientists with top-ranked co-authors greatly increases the probability of future success. <ns0:ref type='bibr' target='#b22'>Massucci and Docampo (2019)</ns0:ref> apply the PageRank algorithm to a university citation network.</ns0:p><ns0:p>Working on five different subject categories, they show that their framework is more robust than existing university rankings while retaining a high correlation with these accepted rankings. <ns0:ref type='bibr' target='#b34'>Singh and Jolad (2019)</ns0:ref> utilised data from APS journals to form a collaboration network of Indian physicists. On this co-authorship network, they applied different centrality measures and report the overlapping top authors. <ns0:ref type='bibr' target='#b35'>Van den Besselaar and Sandström (2019)</ns0:ref> discuss the potential use of the clustering coefficient and eigenvector centrality of the ego networks of researchers and their supervisors. These measures provide a metric for gauging the independence of a researcher. They used a small-scale study of 4 pairs of researchers handpicked for their comparison. Although the authors agree that there are numerous ways to capture independence, the use of the clustering coefficient and eigenvector centrality could be a potential tool for evaluating independence over a large data set. <ns0:ref type='bibr' target='#b37'>Waheed et al. (2019)</ns0:ref> discuss the use of centrality measures on multiple scientific networks (author collaboration, author citation and article citation) to improve article recommendation. They filter the citation network to 5 levels in the cited-by and citing directions.
Using a large network available from AMiner, they propose a hybrid strategy for recommendations using different centrality measures on each network.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> provides a summary of these studies, stating the data source used to create the scientific network as well as the measures that were applied for analysis. Case studies similar to our work are also available for the proprietary data sources of WoS <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref> and Scopus <ns0:ref type='bibr' target='#b33'>(Rose and Kitchin, 2019)</ns0:ref>. Further, a set of graphical tools is also available, as discussed by <ns0:ref type='bibr' target='#b24'>Moral-Muñoz et al. (2020)</ns0:ref> in a recent survey, but most tools do not provide access to Crossref, apart from <ns0:ref type='bibr' target='#b36'>(Van Eck and Waltman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b5'>Chen, 2005)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Chen (2005) discusses the identification of highly cited clusters of a scientific network, as well as the identification of pivotal points in the network using betweenness centrality. The author uses clinical evidence data associated with reducing the risk of heart disease to illustrate the approach, and discusses the design of the CiteSpace tool and its new feature for identifying pivotal points. Betweenness centrality is used to identify pathways between thematic clusters, because studying these pathways shows how two clusters differ. Nodes with high betweenness centrality are good candidates for pivotal points in a scientific network. We take a similar approach, but instead of a graphical software tool we propose Python scripts, which give more flexibility for advanced analysis. For a detailed survey of tools, we refer the interested reader to <ns0:ref type='bibr' target='#b24'>(Moral-Muñoz et al., 2020)</ns0:ref>.</ns0:p><ns0:p>One of the recent studies that provides replicable Python scripts <ns0:ref type='bibr' target='#b33'>(Rose and Kitchin, 2019)</ns0:ref> focuses on using Scopus data for network analysis. They provide a scripted interface for researchers to perform useful analyses.</ns0:p></ns0:div>
<ns0:div><ns0:p>Although accessing Scopus is possible with an Elsevier Developer API key, it requires institutional or authenticated access. Such access is often not possible, especially in developing countries <ns0:ref type='bibr' target='#b13'>(Herzog et al., 2020)</ns0:ref>. Although our work is similar to <ns0:ref type='bibr' target='#b33'>Rose and Kitchin (2019)</ns0:ref> in that it provides a scripted interface for researchers, it differs in two aspects. Firstly, we are working with OpenCitations data using Crossref. Secondly, we have not provided an API interface that needs maintenance and support, since we believe that the Crossref, NetworkX and SNAP APIs fulfil this purpose.</ns0:p><ns0:p>Overall, these studies show that applying centrality measures is a useful form of analysis in bibliometrics; however, these approaches are mostly not scalable and would require considerable effort to apply the same analysis to bigger networks. In some cases the tools limit the size of the network analysed, whereas in other studies the data are manually curated. In comparison to our work, most studies are limited to one type of network, and the way the dataset is acquired prevents extending the analysis to other types of networks.</ns0:p><ns0:p>As observed in our representative literature review, hardly any study has used multiple networks or mentioned how they can be curated from the same data source. Although with WoS and Scopus data it is theoretically possible to create all networks, other data sources usually provide a dump with limited metadata, thereby confining authors' studies to those limitations.</ns0:p><ns0:p>On the other hand, publicly available metadata has its own limitations when it comes to completeness and verification of the available data. <ns0:ref type='bibr' target='#b14'>Iorio et al. (2019)</ns0:ref> conclude that using OpenCitations data for evaluation purposes is not yet sufficient due to the unavailability of complete data; however, more than half of the data are available in comparison to WoS and Scopus. Similar evaluations have also been done by <ns0:ref type='bibr' target='#b27'>Nishioka and Färber (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b21'>Martín-Martín et al. (2020)</ns0:ref>. Further, there are different approaches to augment the current OpenCitations data <ns0:ref type='bibr' target='#b6'>(Daquino et al., 2018;</ns0:ref><ns0:ref type='bibr'>Heibi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Peroni and Shotton, 2020)</ns0:ref>.</ns0:p><ns0:p>Using open metadata is gaining popularity. <ns0:ref type='bibr' target='#b15'>Kamińska (2018)</ns0:ref> discusses a case study of using OpenCitations data for visualising a citation network. <ns0:ref type='bibr' target='#b42'>Zhu et al. (2019)</ns0:ref> used COCI to evaluate book scholarship. We hypothesise that, with the scripted workflow provided below, it will be easier for the wider community to adopt OpenCitations data for bibliometric analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This section provides details of a systematic workflow from data fetching to analysis. To apply centrality analysis on the author collaboration and author citation networks, a series of steps is required to create these networks using the OpenCitations data, which provide the article citation network. All scripts were executed on a Windows Server machine with a quad-core AMD Opteron(TM) 6272 processor and 128 GB of RAM. It is worth noting that only the initial processing of the data requires heavy computation and memory, and only once. Afterwards, the data are converted to a compressed binary format using libraries for processing large networks, and the workflow can thus run on any standard laptop. Below we provide details of the workflow to create scientific networks for SCIM. A generic query on Crossref returned a huge amount of data, so its analysis was outside the scope of this article; we aim to provide details of our extended analysis in an upcoming publication and not clutter this workflow with unnecessary details. Although this case study is limited to the data of SCIM, we have made every effort to keep the process automated, such that applying the same scripts requires minimal changes for other journals or sets of journals.</ns0:p><ns0:p>An overview of the process is depicted in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, and further details about each of the following steps are provided separately. Each step is divided into three sub-steps for clarity and batch execution.</ns0:p><ns0:p>Step 1 The first step is to download the citation index provided as COCI <ns0:ref type='bibr'>(Heibi et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Step 2 The second step is to download the metadata for the provided ISSN through Crossref.</ns0:p><ns0:p>Step 3 The third step is to fetch the ego network from the COCI data for the DOIs of the respective ISSN.</ns0:p><ns0:p>Step 4 The fourth step is to merge these data to create the different scientific network(s).</ns0:p><ns0:p>Step 5 Finally, the last step is to apply the centrality analysis on these networks.</ns0:p><ns0:p>A minimal set of Python scripts is provided as Supplemental Files, not only for replication of the current study but also for its reuse with other ISSNs or other network types for bibliometric analyses.</ns0:p><ns0:p>Details are provided below for the understanding of this study and can be accessed online <ns0:ref type='bibr' target='#b4'>(Butt and Faizi, 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fetching citation network</ns0:head><ns0:p>A summary of the sub-steps to fetch the citation network is shown in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. Below we define the sub-steps to convert the COCI data for use with Python network-processing libraries. This step is computation and memory intensive but needs to be performed only once. Convert COCI data to edge list This step converts the COCI data to an edge list format, an easy-to-process format with two nodes on each row signifying an edge. This format is supported by SNAP <ns0:ref type='bibr' target='#b17'>(Leskovec and Sosič, 2016)</ns0:ref>, which is used for processing huge network data such as COCI. After this step the edge list file is approximately 35 GB. We convert the COCI from comma-separated values (CSV) to space-separated values containing only the citing and cited columns, which is the only format supported by SNAP for bulk upload. Some formatting corrections are also made to remove extra CR/LF characters and quotes, since these hamper the loading process of SNAP. We have tried to load the same files with other libraries that are relatively more intuitive but not as powerful as SNAP <ns0:ref type='bibr' target='#b17'>(Leskovec and Sosič, 2016)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>However, we later discuss how these data can be used with other libraries, and we provide scripts to convert the data to a format that is supported by the majority of network-processing libraries.</ns0:p><ns0:p>Save COCI as binary Loading the 35 GB edge list in memory using SNAP takes approximately 5.5 hours. Since the edge labels in the COCI data are DOIs, they are stored as strings. However, this slows down further processing, so the strings are converted to a hash file. Two binary files are generated when loading the COCI data in SNAP. The first is the DOIDirected.graph file, which contains the directed citation network of COCI with integer node labels. The second is DOIMapping.hash, which maps each integer node label to its respective DOI. We save the loaded graph as binary files for further computations. Loading the binary file in memory takes a few minutes, compared to a few hours for loading the CSV data, with the downside that the additional columns of COCI are currently not utilised. To keep things simple for novice and non-technical users, DOIMapping.hash is simply a node list where each node number is mapped to its label (DOI), while DOIDirected.graph is an edge list over node numbers. This is the part that makes SNAP less intuitive but more powerful: computations are much faster when integer labels are used, but a mapping to string labels is also provided for human consumption.</ns0:p></ns0:div>
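A minimal sketch of this step, assuming the snap-stanford Python bindings and a placeholder file name (coci_edges.txt) for the converted edge list; exact function signatures and the availability of Save on the string hash can vary between SNAP releases.

```python
import snap

# Load the directed citation graph from a space-separated edge list whose
# node labels are DOI strings; Mapping=True also returns a string<->id hash.
graph, doi_map = snap.LoadEdgeListStr(snap.PNGraph, "coci_edges.txt", 0, 1, True)

# Save the integer-labelled graph and the DOI mapping as binary files
# (the DOIDirected.graph / DOIMapping.hash files described above).
fout = snap.TFOut("DOIDirected.graph")
graph.Save(fout)
fout.Flush()
fout = snap.TFOut("DOIMapping.hash")
doi_map.Save(fout)
fout.Flush()

# Reloading the binary graph takes minutes instead of hours.
graph = snap.TNGraph.Load(snap.TFIn("DOIDirected.graph"))
print("nodes:", graph.GetNodes(), "edges:", graph.GetEdges())
```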
<ns0:div><ns0:head>Fetching Crossref metadata</ns0:head><ns0:p>A summary of the sub-steps to download Crossref metadata is shown in Figure <ns0:ref type='figure'>4</ns0:ref>. Below we define the sub-steps to fetch the citation metadata and convert it into lists of authors and DOIs. Although these steps only build the API string to fetch data for a single journal, it is possible to fetch data with other filters and queries using Crossref. Details are provided in the Crossref documentation, and the metadata downloaded via different filters has a similar format, which makes this script reusable for a variety of tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 4.</ns0:head><ns0:p>Step 2 of the workflow with details of fetching metadata from the Crossref API. Sub-steps are applied sequentially.</ns0:p><ns0:p>Create Crossref API string Crossref limits a single query to 1000 records for an ISSN. For queries with more than 1000 records, multiple API strings are needed, which are created automatically.</ns0:p><ns0:p>The Crossref data of SCIM, comprising a total of 1857 records, is fetched via the Crossref API. These records are fetched by two API requests to create the JSON of SCIM.</ns0:p></ns0:div>
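The API strings can be built as in the following sketch; the ISSN value, the date range and the use of offset-based paging are illustrative assumptions (the supplemental scripts may construct the strings differently).

```python
import math
import requests

def crossref_api_strings(issn, from_date, until_date, rows=1000):
    """Build the paged Crossref API query strings for one journal ISSN.

    Crossref caps a single request at 1000 records, so one URL is
    generated per page of `rows` results."""
    filt = f"from-pub-date:{from_date},until-pub-date:{until_date}"
    base = f"https://api.crossref.org/journals/{issn}/works?filter={filt}"
    # First ask only for the total number of matching records.
    total = requests.get(f"{base}&rows=0").json()["message"]["total-results"]
    pages = math.ceil(total / rows)
    return [f"{base}&rows={rows}&offset={page * rows}" for page in range(pages)]

# Example: Scientometrics (ISSN assumed to be 0138-9130) for 2003-2012,
# which yields two API requests for the ~1857 records mentioned above.
for url in crossref_api_strings("0138-9130", "2003-01-01", "2012-12-31"):
    print(url)
```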
<ns0:div><ns0:head>Fetch author(s) list from data</ns0:head><ns0:p>Once the data are fetched from Crossref as JSON, we populate the list of authors extracted from the previously downloaded JSON. It is important to note that we do not apply any technique for author name disambiguation and rely on Crossref to provide correct author names.</ns0:p><ns0:p>Although this is problematic for further analysis, in the long run corrected data from a single source is much more efficient than using different methods of cleaning. A similar approach is taken by MAG <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Fetch DOI list from data Once the data are fetched from Crossref as JSON, we populate the list of DOIs extracted from the previously downloaded JSON. Although fetching the DOIs is redundant, its replica script is provided to show that an analysis starting from only a provided DOI list is also possible.</ns0:p></ns0:div>
<ns0:div><ns0:p>So the previous two sub-steps can be skipped if analysing a specific journal is not needed; if the list of DOIs is fetched from an external source, it can easily be incorporated into this workflow.</ns0:p></ns0:div>
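A sketch of this extraction, assuming the Crossref responses have been merged into a single JSON file (scim_crossref.json is a placeholder name) that follows the standard Crossref /works response structure.

```python
import json

def extract_dois_and_authors(crossref_json_path):
    """Pull the DOI list and per-article author lists out of a saved
    Crossref works response (the JSON downloaded in Step 2)."""
    with open(crossref_json_path, encoding="utf-8") as fh:
        items = json.load(fh)["message"]["items"]

    dois, authors_per_doi = [], {}
    for item in items:
        doi = item.get("DOI")
        if not doi:
            continue
        dois.append(doi)
        # Author names are kept exactly as given; no disambiguation is applied.
        authors_per_doi[doi] = [
            f"{a.get('family', '')}, {a.get('given', '')}".strip(", ")
            for a in item.get("author", [])
        ]
    return dois, authors_per_doi

dois, authors = extract_dois_and_authors("scim_crossref.json")
print(len(dois), "articles;", sum(len(v) for v in authors.values()), "author mentions")
```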
<ns0:div><ns0:head>Creating ego network</ns0:head><ns0:p>A summary of the sub-steps to create the ego network is shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>. Below we define the sub-steps to create the ego network; this step can be iterated zero or more times to grow the network as desired. It is not used in the case study; however, we provide the details in this section to show that, with publicly accessible metadata, it is relatively easy to scale our approach. Further, this step justifies our choice of SNAP over other network-processing libraries: creating the ego network is not only fast but also intuitive to code, thanks to the variety of functions in the extensive library documentation that make it easy to access the nodes in both directions of an edge. Also, the integer labels make the computation faster than string labels would. Crossref dump for egonet We provide the fetching of Crossref data for all DOIs of the article ego network created in the previous step: first we download all the data and then process it to create the network.</ns0:p><ns0:p>Depending on the size of the network, the number of ego levels and the connectivity bandwidth available, this process can take from a few hours to days. Once a local copy of the data is available, this delay can be reduced. Since we do not have access to a complete dump of Crossref, we could not verify whether these same scripts can be reused as-is, but we assume that only a few changes would be required to access the data locally.</ns0:p></ns0:div>
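A sketch of one ego-network iteration with SNAP, assuming the COCI graph has been reloaded from its binary file and the journal DOIs have already been mapped to integer node ids; the function name is hypothetical.

```python
import snap

def one_level_ego(graph, seed_ids):
    """Return the subgraph induced by the seed articles plus every article
    they cite or are cited by (one ego level in both edge directions).

    `graph` is the COCI citation graph loaded from DOIDirected.graph;
    `seed_ids` is a list of integer node ids for the journal's DOIs."""
    keep = set(seed_ids)
    for nid in seed_ids:
        node = graph.GetNI(nid)
        # Articles cited by the seed article (outgoing edges).
        keep.update(node.GetOutNId(i) for i in range(node.GetOutDeg()))
        # Articles citing the seed article (incoming edges).
        keep.update(node.GetInNId(i) for i in range(node.GetInDeg()))
    nid_v = snap.TIntV()
    for nid in keep:
        nid_v.Add(nid)
    return snap.GetSubGraph(graph, nid_v)

# Calling this again on the node ids of the returned subgraph grows the
# ego network by one more level each time.
```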
<ns0:div><ns0:head>DOI and author list extraction</ns0:head><ns0:p>We provide the creation of the ego network for authors. This is similar to the extraction for the SCIM nodes downloaded earlier; however, here we add the connecting nodes fetched in the subgraph above and download their respective author details.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creating scientific network(s)</ns0:head><ns0:p>A summary of the sub-steps to create the scientific networks is shown in Figure <ns0:ref type='figure'>6</ns0:ref>. Once all the data are pre-processed, this step creates the different types of networks. We could also add the bibliographic coupling and co-citation networks to this list, but they are ignored for two reasons. First, we did not find much evidence of centrality analysis on these networks. Secondly, the processing time for creating these networks from a very large citation network is much longer than creating the author collaboration or author citation network. These networks are simply created by making an edge list of authors who have collaborated with or cited each other.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 6.</ns0:head><ns0:p>Step 4 of the workflow with details of creating different scientific networks. Sub-steps are applied sequentially.</ns0:p></ns0:div>
<ns0:div><ns0:head>Create article citation network</ns0:head><ns0:p>Once the list of DOIs is available, it is used to fetch the subgraph of the article citation network for these DOIs. We provide details of fetching the article citation network as a subgraph from COCI and of saving the same graph as a binary file for further analysis. The CSV file can be used with any graph-processing library (such as NetworkX), while the binary file can be read using SNAP.</ns0:p><ns0:p>Create author collaboration network Author collaboration is identified via the list of co-authors from the JSON data fetched via Crossref. These refined data are further used in the case study in the subsequent section. It is important to note that the count of authors at this sub-step may differ from the next sub-step of creating the author citation network, since the list of co-authors in Crossref is provided as a list of names and we do not include further metadata about these authors.</ns0:p><ns0:p>Create author citation network Using the subgraph of the article citation network, the respective edges are made between authors to create the author citation network. All co-authors are linked, i.e., the full counting method is used. If the partial counting method is to be used, this script needs to be modified; however, our workflow is not affected by the choice of partial or full counting, and we have picked the simpler one for brevity <ns0:ref type='bibr' target='#b8'>(Glanzel, 2003)</ns0:ref>. In any case, this network supplements the analysis on the collaboration network constructed in the previous step, as well as on the article citation network that was originally provided.</ns0:p></ns0:div>
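A sketch of building both author networks, assuming the author dictionary from the Crossref step and the (citing, cited) DOI pairs exported from the COCI subgraph; the actual supplemental scripts may use SNAP rather than NetworkX here.

```python
import itertools
import networkx as nx

def build_author_networks(authors, article_edges):
    """authors: dict mapping DOI -> list of author names (from Crossref).
    article_edges: iterable of (citing_doi, cited_doi) pairs from the COCI subgraph."""
    # Author collaboration network: one edge per co-author pair of a paper.
    collab = nx.Graph()
    for names in authors.values():
        collab.add_edges_from(itertools.combinations(set(names), 2))

    # Author citation network (full counting): every author of the citing
    # paper is linked to every author of the cited paper.
    citation = nx.DiGraph()
    for citing_doi, cited_doi in article_edges:
        for src in authors.get(citing_doi, []):
            for dst in authors.get(cited_doi, []):
                citation.add_edge(src, dst)
    return collab, citation
```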
<ns0:div><ns0:head>Centrality analysis</ns0:head><ns0:p>A summary of the sub-steps to apply centrality analysis is shown in Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>. Below we define the sub-steps to apply different centrality measures on the scientific networks. This is one of the common methods employed in bibliometric analysis; however, other methods of social network analysis can also be applied at this step. A tool or wrapper API may restrict the functionality at this point, whereas this work can be extended to use any function in existing network-processing libraries. Since using graphical tools is easier than scripts, a future extension of this study could be a front-end tool for ease of use. Below we provide details of how the different centrality measures applied by other studies can be reproduced. Each of the measures is separated into a different listing, along with loading and initialisation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Applying centrality measures on article citation network</ns0:head><ns0:p>The article citation network is a Directed Acyclic Graph (DAG). Most centrality analyses are not meaningful on a DAG, so two measures are presented. First, degree centrality identifies highly cited articles; finding the authors of these articles is also possible, but not shown for simplicity. Secondly, influence in a DAG is captured via the recursive definition of Katz centrality, which is computed using the NetworkX library.</ns0:p></ns0:div><ns0:div><ns0:head>Applying centrality measures on author citation network</ns0:head><ns0:p>The author citation network is cyclic in nature. Three measures are presented, namely highly cited authors (degree centrality), influential authors (eigenvector centrality) and authors working in multiple domains (betweenness centrality).</ns0:p></ns0:div>
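A sketch of the two article-level measures with NetworkX, assuming the article citation subgraph was exported to an edge list; the file name and the Katz alpha value are illustrative placeholders.

```python
import networkx as nx

# Article citation subgraph: edges point from the citing DOI to the cited DOI.
article_graph = nx.read_edgelist("scim_citations.edgelist",
                                 create_using=nx.DiGraph, comments="#")

# Degree (in-degree) centrality: highly cited articles.
top_cited = sorted(nx.in_degree_centrality(article_graph).items(),
                   key=lambda kv: kv[1], reverse=True)[:10]

# Katz centrality captures recursive influence on a DAG; alpha must stay
# below 1/lambda_max, so a small conservative value is assumed here.
katz = nx.katz_centrality(article_graph, alpha=0.005, beta=1.0)
top_influential = sorted(katz.items(), key=lambda kv: kv[1], reverse=True)[:10]

print(top_cited)
print(top_influential)
```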
<ns0:div><ns0:head>Applying centrality measures on author collaboration network</ns0:head><ns0:p>The author collaboration network is cyclic in nature, and most centrality analyses are possible. Five measures are presented, namely highly collaborative authors (degree centrality), influential collaborators (eigenvector centrality), authors working in multiple groups (betweenness centrality), well-knitted authors (closeness centrality), and solo authors (farness centrality). The ranks captured here are presented in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>; this was done manually by sorting the individual lists on the respective centrality scores and identifying the rank positions.</ns0:p></ns0:div>
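A sketch of the collaboration-network measures with NetworkX, assuming the collaboration graph was saved as a tab-separated edge list (a hypothetical file name); farness is reported as the inverse view of closeness rather than via a dedicated NetworkX function.

```python
import networkx as nx

# Hypothetical edge list saved in Step 4 for the author collaboration network.
collab = nx.read_edgelist("scim_collaboration.edgelist", delimiter="\t")

measures = {
    "degree": nx.degree_centrality(collab),                      # highly collaborative authors
    "eigenvector": nx.eigenvector_centrality(collab, max_iter=1000),  # influential collaborators
    "betweenness": nx.betweenness_centrality(collab),            # authors bridging groups
    "closeness": nx.closeness_centrality(collab),                # well-knitted authors
}
# Farness is treated as the inverse of closeness: authors with the lowest
# closeness scores are the most peripheral (solo) authors.

for name, scores in measures.items():
    top5 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]
    print(name, top5)
```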
<ns0:div><ns0:head>Batch execution</ns0:head><ns0:p>All Python scripts can be executed through a sample batch file by modifying the ISSN and date range. This batch processing will also be useful for developing a front-end tool, as well as for modifying the sequence as per user need.</ns0:p><ns0:p>CASE STUDY USING SCIM <ns0:ref type='bibr' target='#b23'>Milojević (2014)</ns0:ref> analysed the collaboration network using WoS data of SCIM for 10 years starting from 2003. The outcome of that analysis was provided in a table listing the authors ranked in the top 5 for any of the centrality scores, together with their respective ranks. To verify whether our workflow can capture a similar pattern, we provide our results in a similar tabular form.</ns0:p><ns0:p>For each centrality measure we give the rank reported in <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref> using WoS data and compare it with the rank obtained in our study using OpenCitations data. We observe that the ranks of authors for degree, betweenness and closeness centrality are more or less similar; however, further analysis is required to determine the reason for the difference in eigenvector centrality ranks. Such an analysis is outside the scope of this study.</ns0:p><ns0:p>Ranks in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> were entered manually after processing the information separately. Author names are sorted in the same sequence as provided in the original study, along with their respective ranks. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> has four sections, for degree, betweenness, eigenvector and closeness centrality, respectively. Each section has two columns, with the left column showing the rank from <ns0:ref type='bibr' target='#b23'>Milojević (2014)</ns0:ref> and the right column showing the rank calculated for the same author using our workflow. It is pertinent to note that a very hands-on approach is provided by <ns0:ref type='bibr' target='#b23'>Milojević (2014)</ns0:ref>; however, due to the access restrictions of WoS and its</ns0:p></ns0:div>
<ns0:div><ns0:p>unaffordability for developing countries, such useful analyses are limited to researchers of specific institutes holding a subscription <ns0:ref type='bibr' target='#b38'>(Waltman and Larivière, 2020)</ns0:ref>.</ns0:p><ns0:p>This highlights the importance of our workflow, which gives access to any individual who can download the publicly available metadata. Further, we do not discuss the reasons why a specific author tops a list or what each centrality measure signifies; the interested reader is referred to <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref>. However, we intend to provide a detailed analysis in a separate publication using ego networks.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Based on the numerous studies discussed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, it is evident that centrality measures are a popular way of identifying prominent authors. The results of the case study also show that accessing the metadata of publishers which submit their metadata to Crossref as public access (such as Springer) does not hamper the analysis. However, the same would not be true for publishers whose metadata are not yet public although available with Crossref (such as Elsevier).</ns0:p><ns0:p>Scientific networks rely on the completeness of data, and although the field has existed for more than 50 years <ns0:ref type='bibr' target='#b32'>(Price, 1965)</ns0:ref>, the limitations on data access have kept it from reaching its true potential.</ns0:p><ns0:p>We expect that, with the availability of publicly available metadata <ns0:ref type='bibr' target='#b38'>(Waltman and Larivière, 2020</ns0:ref>) and a workflow to access it, such as the one presented in this study, a researcher from any field will be able to analyse the prominent authors of that field. The workflow can further be used for identifying reviewers for a potential study (based on its references), as well as by a graduate student looking for a PhD supervisor.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>Once the citation network is fetched and saved as a binary file, the time it takes to analyse the author list of a journal is well under an hour, barring the time to create the ego network, which requires downloading Crossref files for each DOI. This provides a means of fast and interactive analysis for researchers of any field. This study currently does not provide a detailed analysis of the ego network; however, the brief comparison justifies the importance of a systematic metadata-harvesting workflow. For the case study, some manual work was also done to sort and format the results; this can be scripted in the future, as it does not hamper the workflow and can be performed as a standalone step.</ns0:p></ns0:div>
<ns0:div><ns0:p>Likewise, techniques for author name disambiguation or partial counting have not been included, but for effective analysis these need to be incorporated in the future.</ns0:p><ns0:p>We further aim to enhance this work to filter Crossref data based on subject categories instead of journal ISSN. This would enhance the capability and usefulness of the analysis for individual researchers. A web-based portal is also under construction, where the user will be able to select the date range along with other filters, and the system will initiate the scripts at the back end. This way, users who are not familiar with programming can also benefit from this analysis.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A toy network showing different nodes with high centrality for different measures. A. shows high farness centrality since the node (a) has the maximum average distance to other nodes. B. shows a high clustering coefficient since the neighbours of the node (c) are all connected as well. C. shows high betweenness centrality since the highest number of shortest paths go through nodes (i) and (k), as they bridge two parts of the network. D. shows high degree centrality as both the nodes (e) and (n) have 5 neighbours. E. shows high eigenvector centrality since node (e) is connected to many neighbours with a relatively higher degree. F. shows high closeness centrality as the average distance from nodes (i), (j) and (k) to other nodes is minimal.</ns0:figDesc><ns0:graphic coords='3,141.73,172.49,413.57,232.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Workflow to identify the gurus of any field. The pyramid shows the refinement of data at every step. COCI contains approx. 625 M edges, which are refined to the ego network of the subset of nodes fetched for the respective ISSN. Finally, the top of the pyramid shows the output in the form of a few nodes identified with high centrality.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.59,175.53' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Step 1 of the workflow with details of creating the citation network. Sub-steps are applied sequentially.</ns0:figDesc><ns0:graphic coords='8,141.73,379.10,413.59,157.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Step 3 of the workflow with details of creating the ego network. Sub-steps are applied sequentially and may be iterated over to create the next level of the ego network.</ns0:figDesc><ns0:graphic coords='10,141.73,216.53,413.59,153.36' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7.Step 5 of the workflow with details of centrality measures that are applied on different scientific networks. Sub-steps may be applied as required as there is no dependency within steps.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.59,249.31' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='9,141.73,355.50,413.59,150.71' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Review of studies applying social network analysis on scientific networks.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Bibliometric</ns0:cell><ns0:cell>Scientific Network(s)</ns0:cell><ns0:cell>Social Network Analysis</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Data Source</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ding (2011)</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Author Citation</ns0:cell><ns0:cell>Weighted PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Abbasi et al.</ns0:cell><ns0:cell>Scopus</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2012)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Ortega (2014)</ns0:cell><ns0:cell>MAG</ns0:cell><ns0:cell>Co-Author Ego Network</ns0:cell><ns0:cell>Clustering Coefficient, Degree</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>and Betweenness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Milojević</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Author Collaboration and Cita-</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2014)</ns0:cell><ns0:cell /><ns0:cell>tion, Article Citation</ns0:cell><ns0:cell>ness, Eigenvector Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Waltman and</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Journal Citation Network</ns0:cell><ns0:cell>PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Yan (2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Xu and Peke-</ns0:cell><ns0:cell>Manual</ns0:cell><ns0:cell>Author Citation</ns0:cell><ns0:cell>PageRank and Degree Central-</ns0:cell></ns0:row><ns0:row><ns0:cell>lis (2015)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ity</ns0:cell></ns0:row><ns0:row><ns0:cell>Leydesdorff</ns0:cell><ns0:cell>WoS/JCR</ns0:cell><ns0:cell>Journal Citation</ns0:cell><ns0:cell>Betweenness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>et al. 
(2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lee (2019)</ns0:cell><ns0:cell>Scopus</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Degree and Betweenness Cen-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>trality, Clustering Coefficient</ns0:cell></ns0:row><ns0:row><ns0:cell>Massucci and</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Institutional Citation</ns0:cell><ns0:cell>PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Docampo</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Singh and Jo-</ns0:cell><ns0:cell>APS Journals</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Centrality, Community Detec-</ns0:cell></ns0:row><ns0:row><ns0:cell>lad (2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell></ns0:row><ns0:row><ns0:cell>Van den Besse-</ns0:cell><ns0:cell>Manual</ns0:cell><ns0:cell>Researchers Ego Network</ns0:cell><ns0:cell>Clustering coefficient, eigenvec-</ns0:cell></ns0:row><ns0:row><ns0:cell>laar and Sand-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tor Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>ström (2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Waheed et al.</ns0:cell><ns0:cell>DBLP, ACM,</ns0:cell><ns0:cell>Author Collaboration and Cita-</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell>MAG</ns0:cell><ns0:cell>tion, Article Citation, Co-citation</ns0:cell><ns0:cell>ness, Eigenvector Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>and Bibliographic Coupling</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of ranks by previous study</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Collaborator</ns0:cell><ns0:cell cols='8'>Degree Rank Betweenness Rank Eigenvector Rank Closeness Rank</ns0:cell></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell cols='3'>Prev Curr Prev</ns0:cell><ns0:cell>Curr</ns0:cell><ns0:cell>Prev</ns0:cell><ns0:cell>Curr</ns0:cell><ns0:cell>Prev</ns0:cell><ns0:cell>Curr</ns0:cell></ns0:row><ns0:row><ns0:cell>Glanzel, W</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Rousseau, R</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>DeMoya-Anegon, F</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>Klingsporn, B</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>89</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>174</ns0:cell><ns0:cell>144</ns0:cell></ns0:row><ns0:row><ns0:cell>Ho, Ys</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>2096</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>613</ns0:cell><ns0:cell>575</ns0:cell></ns0:row><ns0:row><ns0:cell>Thijs, B</ns0:cell><ns0:cell>63</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>1710</ns0:cell></ns0:row><ns0:row><ns0:cell>Schubert,A</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>24</ns0:cell></ns0:row><ns0:row><ns0:cell>Debackere, K</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Schlemmer, B</ns0:cell><ns0:cell>670</ns0:cell><ns0:cell>832</ns0:cell><ns0:cell>382</ns0:cell><ns0:cell>962</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>808</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>37</ns0:cell></ns0:row><ns0:row><ns0:cell>Meyer, M</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Leydesdorff, L</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>44</ns0:cell></ns0:row><ns0:row><ns0:cell>Rafols,I</ns0:cell><ns0:cell>1058</ns0:cell><ns0:cell>387</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>239</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>49</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Editor,
PeerJ CS
18 Nov 2020
We would like to thank the editor and all reviewers for their time and effort for detailed
comments. We have made appropriate changes to address the concerns.
In summary, code and workflow documentation is made available via GitHub which is the
primary focus of the manuscript. The analysis presented has been reshaped as a case study.
We believe that the manuscript will now be as per the expectation of esteemed reviewers.
Thanks,
Bilal, Rafi and Sabih
Reviewer 1
Basic reporting
The basic reporting is good, the very fact that the authors provide Python scripts shall be
commended. The English is more or less OK, but the use or misuse of the capital letters is
surprising. I show only a few errors:
'apply Network Analysis using open citation'
'Eigen centrality'?? You do not mean here Manfred Eigen, don't you?
'python script'
>> Response: Agreed. Capitalization is corrected
Some awkward constructions
'use case' >> Response: The term “use case” is replaced with “case study” on line 137
' instead of a software tool, we propose to use python libraries' But Python libraries ARE
software tools >> Response: corrected to mean graphical tool on line 169
' identification of highly cited clusters of scientific clusters' >> Response: correction made as
scientific networks instead of scientific clusters on line 163
Experimental design
The research goal is clear, the methods are described with sufficient details >> Response:
Appreciated
Validity of the findings
Dubious. Although the technique is valid, the validation protocol is problematic. >> Response:
As per overall comments, the comparative analysis is replaced as a case study.
Comments for the author
In this work B.H. Butt, M. Rafi, and M. Sabih demonstrate a software tool to download data from
the Open Citation database and to process it. They demonstrate an example of such
processing - measuring various kinds of centrality of the authors of a certain journal, Scientometrics.
While the protocol of downloading the whole Open Citation database shall be commended, the
scientific benefits that the authors draw from the processing of this database are not clear
enough. Hence, I do not suggest publishing this paper in its present form. However, if the
authors report only their downloading protocol and do not report data processing, such abridged
paper is publishable.
>> Response: As per overall comments, the comparative analysis is replaced as a case
study.
Detailed comments.
The opening sentence of the paper 'Identifying prominent authors (Gurus) of any field is one of
the primary focus for researchers in that particular field' is blatantly wrong. A serious researcher
working in some field shall know all prominent authors in his field without citation analysis of the
corresponding databases. To identify prominent authors through citation databases- this is
usually done by beginners or by researchers in adjacent fields.
>> Response: corrected for young researchers on line 27
Some of the statements of the paper look like typos:
'Eigen centrality'- as if the author thinks that Eigen is a person.
>> Response: correction made as eigenvector centrality in manuscript
'APS The American Phytopathological Society'- This is a very naive error. The abbreviation of
the American Phytopathological Society is indeed APS, but in the field of informatics, APS
means the American Physical Society and this is what the Ref. Singh and Jolad deals with. >>
Response: correction made as American Physical Society www.aps.org on line 88
'Data Source, primarily, is WoS or Scopus'. Two of the authors come from the Department of
Computer Science; why are they not familiar with CiteSeer?
>> Response: Data sources mentioned were related to scientific networks for which
CiteSeer was not found in related articles. Details about some popular data sources not
used are also mentioned in the revised manuscript on lines 111 & 112.
Using their downloaded database, the authors choose to analyze the authors of the
Scientometrics Journal. The goal is to identify the most prominent authors. However, one can't
make such an analysis based on one journal - one needs to analyze the whole scientific field.
There are several journals in this field: Scientometrics, Journal of Informetrics, Quantitative
Science Studies, Journal of the American Society for Information Science and Technology.
Prominent scientists in the field of information science publish in these journals but due to
rivalry between the journals, there is a certain association between some scientists and some
journals. Moreover, European scientists tend to publish in European journals. Hence, by
analyzing the citation network of one journal, it is impossible to find all prominent figures in the field - one shall analyze all journals altogether. >> Response: Agreed. The focus of the revised manuscript
is the workflow, so this is not incorporated. However, with the provided scripts it will be
possible to run it for multiple ISSNs in order to apply the analysis to a set of journals.
Reviewer 2
Basic reporting
The authors go in great detail to explain what they have done, yet the article is lacking on
several points. The language is not always used appropriately, several figures are
low-resolution, and most definitions are given without reference to the related papers nor
formally (e.g., for centralities). >> Response: Language is improved, image resolution
enhanced and reference is added for the definitions of centrality analysis (Newman, 2010)
on line 38.
Experimental design
The research questions are well-defined, yet unfortunately they do not focus on novel
contributions.
>> Response: The RQs are rephrased as per overall comments on lines 98-99.
RQ1 explores whether network centrality measures can be used to detect popular authors, or
'gurus', which is something that has been extensively explored in previous work.
>> Response: Our aim in providing it as an RQ was to emphasize its importance; however, the
RQ has been rephrased to appropriately represent the contribution of the workflow on lines 98-99.
RQ2 is, instead, more novel in that it attempts to use an open citation index, COCI, and compare
its results with WoS. Yet, this study has also been recently performed (in much greater detail):
https://arxiv.org/abs/2004.14329
>> Response: As per the overall comments, the comparative analysis has been reshaped as a case
study. The reference provided by the esteemed reviewer was published around 3 weeks after the
submission of this manuscript; we have cited it appropriately. It is similar to our work in
comparing data sources, but its focus is on overall coverage rather than on the limitations that
come with a lack of coverage of open-access metadata.
Validity of the findings
I do not have much to add in terms of findings, as the main issue with the paper is in its lack of
novelty. I would, nevertheless, suggest to the authors to share their code (per se, a great thing
to do) using online persistent repositories such as Zenodo.
>> Response: code is being shared via GitHub
Reviewer: Ludo Waltman
Basic reporting
See my general comments.
Experimental design
See my general comments.
Validity of the findings
See my general comments.
Comments for the author
The contribution of this paper is in providing a set of Python scripts for performing scientometric
network analyses in a reproducible manner based on open data sources. The paper does not
aim to make a substantive contribution by providing new scientometric insights. I value the work
presented in the paper. However, the authors need to be more clear about the contribution and
the scope of their paper. For instance, I believe the following sentence needs to be removed
from the abstract: “We have shown that centrality analysis is a useful measure for identifying
prominent authors.” The paper does not show this. Likewise, the authors claim to answer the
following research question: “Is it possible to identify prominent authors (Gurus) of any field, by
applying Centrality measures on Scientific Networks?” I don’t believe the paper answers this
question (except by summarizing some earlier literature, but this is not an original contribution).
It therefore seems to me that this research question needs to be removed from the paper. The
discussion section at the end of the paper also needs to be revised accordingly.
>> Response: The suggestions have been incorporated. Our aim in providing it as an RQ was to
emphasize its importance; however, the RQ has been rephrased to appropriately represent the
contribution of the workflow on lines 98-99.
“Does the coverage of CrossRef (for Open Access publishers), hamper the Network Analysis as
compared to WoS, or it can be replicated?”: This research question needs to be rephrased.
Crossref provides data not only for open access publishers but also for subscription-based
publishers, and for both types of publishers most data is openly available. Also, since the
empirical analysis presented in the paper focuses on a single journal (Scientometrics), the
paper offers only a partial answer to the question whether WoS-based analyses can be
replicated using Crossref data.
>> Response: RQ is now removed.
The comparative analysis section is hard to understand. According to the authors, Milojevic
(2014) “have fetched the data at least 5 years earlier than us, therefore, the total citation count
is different”. This is difficult to understand. It is not clear to me how exactly the authors collected
their data, and in particular for which time period data was collected. I don’t understand why the
authors didn’t organize their data collection in such a way that it is as similar as possible to the
data collection performed by Milojevic (2014). Having data sets that are as similar as possible is
essential for a meaningful comparative analysis. If there are basic differences in the time
periods covered by two data sets, I don’t see the value of performing a comparative analysis.
>> Response: Citations accrued after the data collection by Milojevic (2014) were also part of
our analysis, as we did not apply additional filtering, in order to keep the workflow targeted.
Confusing sentences have been removed for clarity.
Ego networks play a central role in the paper, but the paper doesn’t provide a proper
explanation of what an ego network is. A one-sentence explanation is provided on p. 2, but this
is not sufficient. The authors should provide a more extensive discussion of ego networks and
their relevance in scientometric analyses.
>> Response: Incorporated with toy network explanation in Figure 1 on lines 50 to 54.
Also, while most of the paper is about analyzing ego networks, these networks are not
considered at all in the empirical part of the paper. It would be very helpful if the authors could
add a section to their paper in which they give a practical example of an analysis of an ego
network.
>> Response: Ego networks are normally restricted to a single ego node; however, the provided
scripts generate and merge ego networks for all nodes. This creates a network similar to the
existing networks, with richer detail. A similar type of analysis is possible for ego networks,
hence additional details were not provided. We have modified the text to reflect this
understanding. The following table is from an upcoming publication that intends to explore the
ego network in more detail. Nodes at the second level refer to the articles (or authors) that are
referenced by the original article(s) or that cite the original article(s). This provides a
holistic view of the field instead of the limited data of only the articles within a specific
journal.
Network Type         | # Nodes at first level | # Edges at first level | # Nodes at second level | # Edges at second level | Second level edge list
Article Citation     | 1,284                  | 2,312                  | 166,023                 | 274,866                 | Fetched (13 MB)
Author Collaboration | 2,446                  | 4,176                  | 250,000                 | 400,000                 | Estimated (30 MB)
Author Citation      | 1,985                  | 14,250                 | 200,000                 | 1,000,000               | Estimated (50 MB)
Table 1: Dataset Details
“the results of a case study based on WoS data is reproduced to confirm that accessing
metadata of publishers, which submit metadata to CrossRef as Open Access (such as Springer)
does not hamper analysis as compared to WoS. However, the same would not be true for
Publishers whose data is not yet available with CrossRef (such as Elsevier).”: The information in
these sentences is not entirely correct. Both Springer and Elsevier make basic metadata such
as titles and author lists of publications openly available in Crossref. They also both make
reference lists of publications available in Crossref. The only difference between the two
publishers is that Springer makes reference lists openly available, while Elsevier keeps
reference lists closed.
>> Response: Thank you for the clarification. We have modified the text to reflect this
understanding on lines 378 to 380.
Most readers won’t understand the discussion on partial vs. full counting on p. 9. This requires
some additional explanation.
>> Response: appropriate reference is provided for interested readers on line 322.
Throughout the paper ‘CrossRef’ should be written as ‘Crossref’.
>> Response: Incorporated in overall manuscript.
The authors may be interested in a special issue on bibliographic data sources published in the
first issue of Quantitative Science Studies: https://www.mitpressjournals.org/toc/qss/1/1. This
special issue offers a lot of information on data sources such as WoS, Crossref, and
OpenCitations. >> Response: This special issue is indeed very helpful and has been
included in the bibliography on lines 81, 82, 83, 85, 89, 107, 176, 194, 270, 370 and 383.
Editor's Decision Major Revisions
Dear authors,
Thanks for submitting your work at PeerJ Computer Science. Three independent experts have
assessed your work, and you can find their reviews attached. There are several points of
interest in your work, which are counterbalanced by significative issues. All agree that the article
submitted is not acceptable for publication in the present form, and needs extensive rewriting
before being ready for publication.
The main argument is about its main contribution. According to what you said, the contribution is
twofold. On the one hand, to provide a workflow for retrieving open citation data and open
bibliographic metadata for bibliometric/scientometric studies. On the other hand, you run an
analysis using the data retrieved.
All the reviewers agree that the second part of the contribution, i.e. the analysis, should not be
the focus of the work due to several flaws. However, all of them have praised the first part of the
contribution, i.e. the workflow to run to download data.
Thus, my suggestion is to remove the part about the analysis and to solely focus on the
workflow for downloading and processing the data.
>> Response: Analysis part has been reshaped as a case study
Of course, there are issues that should also be addressed in the workflow part. Several of them
are highlighted by the reviewers. In addition to those, I would add that making scripts available
is not enough for claiming about the replicability of the workflow the scripts implement, but other
resources must be made available as well. The whole workflow should be carefully described in
the paper, with examples of use, a discussion of possible applications, some measures with
respect to the quality of the networks created (see the issues about the author-based network
below), etc.
I would also suggest the following additions:
- availability: all the code should be available in an online repository (e.g. GitHub)
>> Response: Incorporated via GitHub
- reusability: all the code should be released with appropriate open source licence to be reused
by anyone
>> Response: Incorporated on GitHub
- workflow documentation: the repository must include appropriate documentation to enable a
programmer to understand which data he/she needs and how to run all the Python scripts; better if they are defined as a proper protocol (e.g. see protocols.io) and cited in the article
>> Response: Incorporated on GitHub
- code documentation: all the code developed should be appropriately documented in order to
understand how to use the various Python scripts and functions
>> Response: Incorporated on GitHub
- citation: the code developed should be citable, and thus it should be referred by using a DOI
(e.g. via GitHub+Zenodo)
- general principles: since the article describes a software, software citation principles
(https://peerj.com/articles/cs-86/) should be followed
>> Response: Python scripts are uploaded on GitHub and is cited on line 222
In addition to that, there is an aspect of the workflow, which is not clear to me, and that
concerns the author collaboration and citation networks. As far as I know, Crossref does not
have a systematic way to disambiguate authors by using some identifiers. Indeed, Crossref has
some ORCIDs specified in some entities, but the main part of them do not have any. Thus the
problem is that, in the creation of the network that involves authors:
- how do you deal with the disambiguation of the authors?
- are two authors with the same name recognised as the same author?
- are two authors having the same family name and the same initial of the given name
recognised as the same author?
- how you deal with homonymous authors?
All these issues should be taken into account when building such author-based networks in
order to avoid mistakes.
>> Response: Author name disambiguation is not the focus of this work; hence, as mentioned in
the original manuscript, the name strings provided by Crossref were used. We had added details
about this in the original manuscript so that users of our scripts are aware of this limitation.
Also, I strongly suggest to properly check the text, since the reviewers have highlighted several
issues in using English to this respect.
>> Response: Changes have been made as pointed out by reviewers
Some typos:
- Open Citation data -> OpenCitations data >> Response: Incorporated
" | Here is a paper. Please give your review comments after reading it. |
1 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>One of the scientific disciplines behind the Science of Science studies is the study of scientific networks. We are concerned with scientific networks as a social network with connections representing either co-authorship (collaboration) or citations. Different measures of network analysis can be applied to these networks such as centrality (to find influential authors) or clustering coefficient (to show the connectedness of a group of scientists). Any study on co-authorship may not need all the citation links, however, for a holistic view citation links may also be needed. One of the major limitations of the earlier studies has been the lack of completeness of the data set. This limits the analyses of different types on the same set of nodes. To overcome this, we have worked on publicly available citation metadata to develop a workflow to create different types of scientific networks. Our focus is primarily on the identification of prominent authors through centrality analysis, as well as whether this can be achieved using open metadata. To present our approach, we have analysed Scientometrics journal as a case study. We are not concerned with bibliometrics study of any field rather we aim to provide a replicable workflow (in form of Python scripts) to apply network analysis using OpenCitatons data.</ns0:p><ns0:p>With the increasing popularity of open access and open metadata, we hypothesise that this workflow shall provide an avenue for understanding science in multiple dimensions.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Identifying prominent authors (gurus) of any field is one of the primary focus for young researchers in that particular field. Likewise, other researchers tend to follow research published by gurus of the field. To achieve this objective, using network analysis, comprehensive access to citation metadata is required. This can be accomplished using publicly available citation metadata using Crossref <ns0:ref type='bibr' target='#b12'>(Hendricks et al., 2020)</ns0:ref>. However, applying network analysis on this data requires a series of steps that may not be intuitive to a common researcher. We aim to provide these steps, with thorough details, so that it is easy for a common researcher to supplement it with different analyses. The workflow presented in this article is part of a larger study on the influence of scholarly research artefacts. To this end, we primarily limit our focus on the research goal to have a systematic workflow to identify prominent authors (gurus) using publicly available metadata for citations. In this work, we aim to utilise open metadata <ns0:ref type='bibr' target='#b30'>(Peroni et al., 2015)</ns0:ref>, made available using Crossref, and utilise open access NetworkX <ns0:ref type='bibr' target='#b9'>(Hagberg et al., 2008)</ns0:ref> and SNAP <ns0:ref type='bibr' target='#b17'>(Leskovec and Sosič, 2016)</ns0:ref> libraries of Python for network analysis. Python is used based on its popularity with researchers as per survey results by <ns0:ref type='bibr' target='#b1'>AlNoamany and Borghi (2018)</ns0:ref>. This article provides minimal details of a case study for analysing collaboration network of Scientometrics journal metadata, for 10 years starting from 2003. All steps are described for replication of this study. This work shall lay the groundwork for further analyses of similar type on different journals, set of journals or a subject category using open metadata.</ns0:p><ns0:p>Defining a guru of the field is not an easy task, and any definition will be highly subjective. To this end, we focus on the definition of guru using the centrality measures of social network analysis. Details PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science of different centrality measures are depicted in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> <ns0:ref type='bibr' target='#b25'>(Newman, 2010)</ns0:ref>. The following description was inspired by <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref>. Simply said, any author with a high citation count may be considered the guru. This can be achieved using degree centrality. Although another way of identifying a highly cited individual is to see whose paper is cited in top percentile within the domain we currently limit such definitions to degree centrality of articles. However, it is not always the case that all highly cited authors are equally influential. Those who are cited by other influential authors may also be termed as influential even though they may or may not have high citation count. Likewise, any author collaborating frequently with influential authors would also have some high influence in that field of study. This recursive influence definition is well captured by eigenvector centrality. 
Another centrality measure, namely betweenness centrality would define an author as prominent in the field if the author is a collaborator with individuals of different clusters within the domain. Centrality measures of closeness and farness measure the extent to which an author is on average close to or far from other authors within the network, respectively.</ns0:p><ns0:p>Such analyses can be applied on a variety of scientific networks such as article citation network, author citation network or author collaboration network. These networks can be created using different data sources. Some data sources (such as Crossref) allows to fetch the metadata of articles cited by the article or that cited the original article. This allows expanding the breadth of the network. In Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> C. shows high betweenness centrality since the highest number of shortest paths will go from the node (i) and (k) since they are bridging two parts of the network. D. shows high degree centrality as both the nodes (e) and (n) have 5 neighbours. E. shows high eigenvector centrality since node (e) is connected to many neighbours with a relatively higher degree. F. shows high closeness centrality as the average distance from nodes (i), (j) and (k) are minimum to other nodes.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Visualising bibliometric data as a network is not new, <ns0:ref type='bibr' target='#b32'>Price (1965)</ns0:ref> introduced the work more than 50 years ago. Most recent studies are on co-authorship network <ns0:ref type='bibr' target='#b0'>(Abbasi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b22'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b16'>Lee, 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Singh and Jolad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref>, however others have focused on citation network for authors <ns0:ref type='bibr' target='#b7'>(Ding, 2011;</ns0:ref><ns0:ref type='bibr' target='#b22'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b41'>Xu and Pekelis, 2015;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref> or journal <ns0:ref type='bibr' target='#b39'>(Waltman and Yan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leydesdorff et al., 2018)</ns0:ref>. Only a couple of studies have utilised more than one Scientific Network for analysis <ns0:ref type='bibr' target='#b22'>(Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref>. Traditionally bibliometric analysis has been done using WoS and Scopus <ns0:ref type='bibr' target='#b38'>(Waltman and Larivière, 2020)</ns0:ref>, and a similar case is seen in these studies where the data sources, primarily are WoS <ns0:ref type='bibr' target='#b7'>(Ding, 2011;</ns0:ref><ns0:ref type='bibr' target='#b22'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b39'>Waltman and Yan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>Leydesdorff et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Massucci and Docampo, 2019)</ns0:ref> or Scopus <ns0:ref type='bibr' target='#b0'>(Abbasi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b16'>Lee, 2019)</ns0:ref>, however, some recent studies have focused on open access data sources <ns0:ref type='bibr' target='#b34'>(Singh and Jolad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Van den Besselaar and Sandström, 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Waheed et al., 2019)</ns0:ref>. Other data sources such as PubMed, CiteSeerX and ACL are not discussed in this article as they are mostly used for text analysis instead of network analysis. Below we provide a brief account of work done on scientific networks using centrality measures in the past decade.</ns0:p><ns0:p>Details are summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> in chronological order. Some earlier studies such as <ns0:ref type='bibr' target='#b26'>(Newman, 2004)</ns0:ref> are not discussed here to only include recent studies. <ns0:ref type='bibr' target='#b7'>Ding (2011)</ns0:ref> proposed to analyse the author citation network with weighted PageRank. The author showed that their proposed strategy outperforms the conventional h-index and related citation count measures on predicting prize winners. <ns0:ref type='bibr' target='#b0'>Abbasi et al. (2012)</ns0:ref> discussed the use of betweenness centrality as a measure of getting more collaborators compared to degree and closeness centrality. They have used temporal co-authorship network in the steel research domain. Data was manually curated and downloaded from Scopus. <ns0:ref type='bibr' target='#b29'>Ortega (2014)</ns0:ref> analysed 500 co-authors' ego network and conclude that centrality measures are correlated with bibliometric indicators. 
They have used clustering coefficient, degree and betweenness centrality as local metrics while some global level metrics were also analysed due to a holistic view of ego network. It is one of the early studies using MAG.</ns0:p><ns0:p>Two book chapters provide hands-on details about centrality measures <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref> and PageRank <ns0:ref type='bibr' target='#b39'>(Waltman and Yan, 2014)</ns0:ref> using WoS data. <ns0:ref type='bibr' target='#b22'>Milojević (2014)</ns0:ref> constructed the author collaboration network and calculated degree, betweenness, eigenvector and closeness centrality. <ns0:ref type='bibr' target='#b39'>Waltman and Yan (2014)</ns0:ref> provides details for applying PageRank on journal citation network. <ns0:ref type='bibr' target='#b41'>Xu and Pekelis (2015)</ns0:ref> used a manually curated dataset for authors of China and Taiwan in the field of Chinese Language Interpreting Studies. They have applied PageRank and degree centrality to find influential authors within different clusters identified using community detection. <ns0:ref type='bibr' target='#b18'>Leydesdorff et al. (2018)</ns0:ref> have used betweenness centrality as a measure of multidisciplinary of a journal using a journal citation network. Any journal is usually cited from its subject category but the journals cited/citing the other fields are considered a bridge between the subject categories. Authors have limited their approach with a diversity measure and evaluated it on data from JCR. <ns0:ref type='bibr' target='#b16'>Lee (2019)</ns0:ref> provide a case study for young researchers performance evaluation by analysing the collaboration network of these researchers. Using statistical analysis frequency of collaborators measured by degree centrality is shown to correspond with future publication count. This is akin to <ns0:ref type='bibr' target='#b19'>Li et al. (2019)</ns0:ref> who concludes that collaboration of young scientist with top-ranked co-authors has a huge probability of future success. <ns0:ref type='bibr' target='#b21'>Massucci and Docampo (2019)</ns0:ref> applies the PageRank algorithm on a university citation network.</ns0:p><ns0:p>Working on five different subject categories they show that their framework is more robust than existing university rankings while holding a high correlation with these accepted rankings. <ns0:ref type='bibr' target='#b34'>Singh and Jolad (2019)</ns0:ref> utilised data of APS journals to form collaboration network of Indian physicist. In this co-authorship network, they have applied different centrality measures and report the overlapping top authors. <ns0:ref type='bibr' target='#b35'>Van den Besselaar and Sandström (2019)</ns0:ref> discuss the potential use of clustering coefficient and eigenvector centrality of ego network of researchers and their supervisor. These measures provide a metric for gauging the independence of a researcher. They have used a small scale study of 4 pair of researchers handpicked for their comparison. Although the authors agree that there are numerous ways to capture independence, however, the use of clustering coefficient and eigenvector centrality could be a potential tool for evaluating independence over a large data set. <ns0:ref type='table' target='#tab_1'>2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:ref> Manuscript to be reviewed Computer Science citation network to 5 levels in cited-by and citing directions. 
Using a large network available at AMiner they proposed a hybrid strategy for recommendations using different centrality measures on each network.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> provides a summary of these studies stating the data source used to create the scientific network, as well as the measures which were applied for analysis. Case studies similar to our work are also available on the proprietary data source of WoS <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref> and Scopus <ns0:ref type='bibr' target='#b33'>(Rose and Kitchin, 2019)</ns0:ref>. Further, a set of graphical tools are also available as discussed by <ns0:ref type='bibr' target='#b24'>Moral-Muñoz et al. (2020)</ns0:ref> in a recent survey but most tools do not give access for Crossref apart from <ns0:ref type='bibr' target='#b36'>(Van Eck and Waltman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b5'>Chen, 2005)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Study</ns0:head><ns0:p>Chen (2005) discusses identification of highly cited clusters of a scientific network. Also discusses the identification of pivotal points in the scientific network using betweenness centrality. The author uses clinical evidence data associated with reducing risks of heart disease to illustrate the approach. They have discussed the design of citeSpace tool and its new feature for identifying pivotal points. They used betweenness centrality to identify pathways between thematic clusters because by studying these pathways identifies how two clusters differ. High betweenness centrality nodes are good for pivotal points in a scientific network. We intend to approach similarly but instead of a graphical software tool, we propose to use Python scripts which give more flexibility for advance analysis. For a detailed survey of tools, we would refer the interested reader to <ns0:ref type='bibr' target='#b24'>(Moral-Muñoz et al., 2020)</ns0:ref>.</ns0:p><ns0:p>One of the recent studies that provide replicable Python scripts <ns0:ref type='bibr' target='#b33'>(Rose and Kitchin, 2019)</ns0:ref> focuses on using Scopus data for network analysis. They have provided a scripted interface for researchers to</ns0:p></ns0:div>
<ns0:div><ns0:head>5/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science perform useful analysis. Although accessing Scopus is possible with Elsevier Developer API Key but it requires institutional or authenticated access. Such access is not possible, especially for developing countries <ns0:ref type='bibr' target='#b13'>(Herzog et al., 2020)</ns0:ref>. Although our work is similar to <ns0:ref type='bibr' target='#b33'>Rose and Kitchin (2019)</ns0:ref> that it provides a scripted interface for researchers, it is different in two aspects. Firstly, we are working with OpenCitatons data using Crossref. Secondly, we have not provided an API interface that needs maintenance and support since we believe that Crossref, NetworkX and SNAP APIs fulfil the purpose.</ns0:p><ns0:p>Overall these studies show that applying centrality measures is a useful analysis in bibliometrics, however, these approaches are mostly not scalable and would require considerable effort to apply the same analysis on bigger networks. In some cases, the tools limit the size of network analysed, whereas in other studies the data are manually curated. In comparison to our work most studies are limited to one type of network and the way dataset is acquired limits the analysis to expand to another type of networks.</ns0:p><ns0:p>As mentioned above in our representative literature review it is observed that rarely any study has used multiple networks or mentioned how it can be curated with the same data source. Although with WoS and Scopus data it is theoretically possible to create all networks with other data sources a dump is usually provided with limited metadata, thereby limiting the authors to confine their studies to this limitation.</ns0:p><ns0:p>On the other hand, publicly available metadata has its limitations when it comes to completeness and verification of available data. <ns0:ref type='bibr' target='#b14'>Iorio et al. (2019)</ns0:ref> concludes that using OpenCitatons data for evaluation purpose is not enough due to unavailability of complete data, however more than half of data are available in comparison to WoS and Scopus. A similar evaluation is also done by <ns0:ref type='bibr' target='#b27'>Nishioka and Färber (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b20'>Martín-Martín et al. (2020)</ns0:ref>. Further, there are different approaches to augment the current OpenCitatons data <ns0:ref type='bibr' target='#b6'>(Daquino et al., 2018;</ns0:ref><ns0:ref type='bibr'>Heibi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Peroni and Shotton, 2020)</ns0:ref>.</ns0:p><ns0:p>Using open metadata are gaining popularity. <ns0:ref type='bibr' target='#b15'>(Kamińska, 2018)</ns0:ref> discusses a case study for using</ns0:p><ns0:p>OpenCitatons data for visualising citation network. <ns0:ref type='bibr' target='#b42'>(Zhu et al., 2019)</ns0:ref> has used COCI to evaluate books scholarship. We hypothesise that with a scripted workflow provided below it would be easier for masses to adopt to OpenCitatons data for bibliometric analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This section provides details of a systematic workflow from data fetching to analysis. To apply centrality analysis on the author collaboration and author citation networks a series of steps are required to create these networks using the OpenCitatons data which provide the article citation network. All scripts were executed on Windows Server machine having Quad-Core AMD Opteron(TM) Processor 6272 with 128 GB RAM installed. It is interesting to note that only the initial processing of data requires heavy computation and memory once. Later, the data are converted to a compressed binary format using libraries for processing large networks and thus can run on any standard laptop machine. Below we provide details of the workflow to create scientific networks for SCIM. A generic query on Crossref provided a huge amount of data so their analysis was outside the scope of this current article. We aim to provide details of our extended analysis in an upcoming publication and not clutter this workflow with unnecessary details. Although this case study is limited to data of SCIM, we have made every effort to keep the process automated such that applying the same script require minimum changes for other journals or set of journals.</ns0:p><ns0:p>Overview of the process is depicted in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> and further details about each of the following step are provided separately. Each step is distributed with three sub-steps for clarity and batch execution.</ns0:p><ns0:p>Step 1 The first step is to download the citation index provided as COCI <ns0:ref type='bibr'>(Heibi et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Step 2 The second step is to download the metadata for provided ISSN through Crossref.</ns0:p><ns0:p>Step 3 The third step is to fetch the ego network from COCI data for the DOIs of respective ISSN.</ns0:p><ns0:p>Step 4 The fourth step is to merge these data to create a different scientific network(s).</ns0:p><ns0:p>Step 5 Finally, the last step is to apply the centrality analysis on these networks.</ns0:p><ns0:p>Minimal set of Python scripts are provided as Supplemental Files, for not only replication of the current study, but also reuse of this study for other ISSN or other network types for bibliometric analyses.</ns0:p><ns0:p>Details are provided below for the understanding of this study and can be accessed online <ns0:ref type='bibr' target='#b4'>(Butt and Faizi, 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fetching citation network</ns0:head><ns0:p>Summary of the sub-steps to fetch citation network is shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. Below we define the sub-steps to convert the COCI data to be used in Python libraries for network processing. This step is computation and memory intensive but needs to be performed only once. Convert COCI data to edge list This step is needed to convert the COCI data to an edge list format. It is an easy to process format with two nodes on each row signifying an edge. This format is supported by SNAP <ns0:ref type='bibr' target='#b17'>(Leskovec and Sosič, 2016)</ns0:ref> which is used for processing huge network data such as COCI. After this step edge list file is approx 35 GB. We convert the COCI from comma-separated-values (CSV) to space-separated-values having only citing and cited column. This is the only format supported by SNAP for bulk upload. Some formatting corrections are also made for removing extra CR/LF and quotes since it hampers the loading process of SNAP. We have tried to load the same files with other libraries which are</ns0:p></ns0:div>
<ns0:div><ns0:head>7/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science relatively more intuitive but not as powerful as SNAP <ns0:ref type='bibr' target='#b17'>(Leskovec and Sosič, 2016)</ns0:ref>. However, we later discuss how this data can be used with other libraries and provide scripts to convert data to a format that is supported by the majority of network processing libraries.</ns0:p><ns0:p>Save COCI as binary Loading 35 GB edge list in-memory using SNAP takes approx 5.5 hours. Since the edge labels are DOI in the COCI data, therefore they are saved as strings. However, this slows down further processing so strings are converted to a hash file. There are two binary files generated when loading the COCI data in SNAP. First is DOIDirected.graph file which contains the directed citation network of COCI with integer node labels. Second is DOIMapping.hash which maps the integer node label to respective DOI. We save loaded graph as binary files for further computations. Loading binary file in-memory takes a few minutes as compared to a few hours for loading CSV data with the downside that additional columns of COCI are currently not being utilised. To keep things simple for novice and non-technical user DOIMapping.hash is simply a node list where node number is mapped to its label (DOI) while the DOIDirected.graph is an edge list on node number. This is the part which makes SNAP less intuitive but more powerful since computations are much faster when integer labels are used but for human consumption, a mapping to string labels is also provided.</ns0:p></ns0:div>
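To make this step concrete, the following is a minimal sketch of the conversion and loading described above. It assumes the COCI dump rows expose 'citing' and 'cited' DOI columns and that the machine has enough RAM to hold the DOI-to-integer map; all file names are placeholders, and the scripts in the repository remain the reference implementation.

import csv
import snap

doi_to_id = {}  # in-memory DOI -> integer id map; needs a large-RAM machine for the full COCI dump

def node_id(doi):
    # assign the next free integer id the first time a DOI is seen
    if doi not in doi_to_id:
        doi_to_id[doi] = len(doi_to_id)
    return doi_to_id[doi]

coci_parts = ["coci_part1.csv"]  # list all downloaded COCI CSV files here
with open("coci_edges_int.txt", "w") as out:
    for part in coci_parts:
        with open(part, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                citing, cited = row["citing"].strip().lower(), row["cited"].strip().lower()
                if citing and cited:
                    out.write("%d %d\n" % (node_id(citing), node_id(cited)))

with open("DOIMapping.tsv", "w", encoding="utf-8") as m:  # integer node id -> DOI lookup
    for doi, nid in doi_to_id.items():
        m.write("%d\t%s\n" % (nid, doi))

# load the integer edge list into SNAP and save it as a binary file for fast reloading
g = snap.LoadEdgeList(snap.PNGraph, "coci_edges_int.txt", 0, 1, ' ')
fout = snap.TFOut("DOIDirected.graph")
g.Save(fout)
fout.Flush()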
<ns0:div><ns0:head>Fetching Crossref metadata</ns0:head><ns0:p>Summary of the sub-steps to download Crossref metadata are shown in Figure <ns0:ref type='figure'>4</ns0:ref>. Below we define the sub-steps to fetch the citation metadata and converting it to list of authors and DOIs. Although these steps only provide API string to fetch data for a single journal, however, it is possible to fetch data with other filters and query using Crossref. Details are provided in Crossref documentation, and the metadata downloaded via different filters is in a similar format which makes this script reusable for a variety of tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 4.</ns0:head><ns0:p>Step 2 of the workflow with details of fetching metadata from Crossref API. Sub-steps are applied sequentially.</ns0:p><ns0:p>Create Crossref API string Crossref limits a one time query to 1000 records for a single ISSN. For queries with more than 1000 records, multiple API strings are needed which are created automatically.</ns0:p><ns0:p>Crossref data of SCIM is fetched via Crossref API which contains total 1857 records. These records are fetched by two API requests to create JSON of SCIM.</ns0:p></ns0:div>
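As an illustration of the API-string construction, here is a minimal sketch using the public Crossref REST API with the requests library. The ISSN, date range, contact e-mail and output file name are example values only; the repository scripts remain the reference implementation.

import json
import requests

ISSN, FROM, UNTIL = "0138-9130", "2003-01-01", "2012-12-31"  # example: SCIM, 2003-2012
base = "https://api.crossref.org/journals/%s/works" % ISSN
params = {
    "filter": "from-pub-date:%s,until-pub-date:%s" % (FROM, UNTIL),
    "rows": 1000,                       # Crossref returns at most 1000 records per request
    "offset": 0,
    "mailto": "you@example.org",        # polite-pool contact address (placeholder)
}

items = []
while True:
    msg = requests.get(base, params=params, timeout=60).json()["message"]
    items.extend(msg["items"])
    if params["offset"] + params["rows"] >= msg["total-results"]:
        break
    params["offset"] += params["rows"]  # build the next API string automatically

with open("%s.json" % ISSN, "w", encoding="utf-8") as f:
    json.dump(items, f)
print(len(items), "records downloaded")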
<ns0:div><ns0:head>Fetch author(s) list from data</ns0:head><ns0:p>Once data are fetched from Crossref as JSON we populate the list of authors. We extract authors from the previous downloaded JSON. It is important to note that we do not apply any technique for author name disambiguation and rely on Crossref to provide correct author names.</ns0:p><ns0:p>Although this is problematic for further analysis, in the long run, corrected data from a single source is much efficient than using different methods of cleaning. A similar approach is provided by MAG <ns0:ref type='bibr' target='#b40'>(Wang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Fetch DOI list from data Once data are fetched from Crossref as JSON we populate the list of DOI.</ns0:p><ns0:p>DOIs are extracted from the previously downloaded JSON. Although the purpose of fetching DOI is redundant but it's replica script is created to suggest that analysis with only provided DOI list is also</ns0:p></ns0:div>
<ns0:div><ns0:p>possible. So the previous two sub-steps can be ignored if analysing a specific journal is not needed. If the list of DOIs is fetched from an external source then it can be easily incorporated in this workflow.</ns0:p></ns0:div>
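A minimal sketch of these two extraction sub-steps is given below. It assumes the JSON file written in the previous step holds the list of Crossref work items (file names are placeholders), and it deliberately keeps the author names exactly as Crossref provides them, with no disambiguation, as noted above.

import json

with open("0138-9130.json", encoding="utf-8") as f:   # file written in the previous step
    items = json.load(f)

# DOI list (lower-cased so it matches the DOI labels used for the COCI graph)
dois = sorted({it["DOI"].lower() for it in items if "DOI" in it})
with open("scim_dois.txt", "w") as f:
    f.write("\n".join(dois))

# DOI -> list of raw "Family, Given" author strings, exactly as provided by Crossref
paper_authors = {}
for it in items:
    names = []
    for a in it.get("author", []):
        names.append(("%s, %s" % (a.get("family", ""), a.get("given", ""))).strip(", "))
    paper_authors[it["DOI"].lower()] = names

with open("paper_authors.json", "w", encoding="utf-8") as f:
    json.dump(paper_authors, f)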
<ns0:div><ns0:head>Creating ego network</ns0:head><ns0:p>Summary of the sub-steps to create ego network are shown in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>. Below we define the sub-steps to create Ego Network. This step can be iterated zero or more times to grow the network as desired. This step is not used in the case study, however, we provide the details in this section to show that with publicly accessible metadata it is relatively easier to scale our approach. Further, this step justifies our approach of using SNAP over other network processing libraries since the process of creating the ego network is not only fast but intuitive to code due to a variety of functions available in the extensive library documentation that makes it easier to access the nodes in both directions of an edge. Also, the integer labels make the computation faster than using string labels. Crossref dump for egonet We provide the fetching of Crossref data for all DOIs of article ego network created in the previous step. This way first we download all data and then process it to create the network.</ns0:p><ns0:p>Depending on the size of the network and the number of ego levels, as well as connectivity bandwidth available this process can take from a few hours to days. Once a local copy of data is available this delay can be reduced. Since we do not have access to complete dump of Crossref we could not identify whether these same scripts can be reused but we assume that there would be few changes required to access the data locally.</ns0:p></ns0:div>
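The following sketch shows how a one-hop article ego network (references plus citing papers) can be pulled out of the saved COCI binary with SNAP, using the id mapping from step 1; repeating the expansion loop grows the network by one more level. File names are placeholders, and DOI casing is assumed to be normalised to lower case on both sides.

import snap

fin = snap.TFIn("DOIDirected.graph")
g = snap.TNGraph.Load(fin)                      # directed COCI citation graph from step 1

doi_to_id = {}
with open("DOIMapping.tsv", encoding="utf-8") as m:
    for line in m:
        nid, doi = line.rstrip("\n").split("\t")
        doi_to_id[doi] = int(nid)

journal_dois = open("scim_dois.txt").read().split()
seed = {doi_to_id[d] for d in journal_dois if d in doi_to_id}

ego = set(seed)
for nid in seed:                                # run this loop again over `ego` for the next level
    ni = g.GetNI(nid)
    ego.update(ni.GetOutNId(i) for i in range(ni.GetOutDeg()))   # papers this DOI cites
    ego.update(ni.GetInNId(i) for i in range(ni.GetInDeg()))     # papers citing this DOI

nidv = snap.TIntV()
for nid in ego:
    nidv.Add(nid)
sub = snap.GetSubGraph(g, nidv)                 # article citation ego network (edges among ego nodes)
print(sub.GetNodes(), "nodes,", sub.GetEdges(), "edges")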
<ns0:div><ns0:head>DOI and author list extraction</ns0:head><ns0:p>We provide the creation of the ego network for authors. This is similar to nodes of SCIM downloaded earlier. However, here we add the connecting nodes fetched in subgraph above and download their respective author details.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creating scientific network(s)</ns0:head><ns0:p>Summary of the sub-steps to create scientific networks are shown in Figure <ns0:ref type='figure'>6</ns0:ref>. Once all the data are pre-processed this step creates different types of network. We can also add bibliographic coupling and co-citation network within the list but they are ignored for two reasons. First, we did not find much evidence of centrality analysis on these networks. Secondly, the processing time for creating these networks for a very large citation network is relatively much longer than creating author collaboration or author citation network. These networks are simply created by making an edge list for authors who have collaborated or cited each other.</ns0:p></ns0:div>
<ns0:div><ns0:note type='other'>Figure 6.</ns0:note><ns0:p>Step 4 of the workflow with details of creating different scientific networks. Sub-steps are applied sequentially.</ns0:p></ns0:div>
<ns0:div><ns0:head>Create article citation network</ns0:head><ns0:p>Once the list of DOI is available it is used to fetch subgraph of article citation network for these DOIs. We provide details of fetching article citation network as a subgraph from COCI. Further, it saves the same graph as a binary file for further analysis. Also, the CSV file can be used with any graph processing library (such as NetworkX) while binary file can be read using SNAP.</ns0:p><ns0:p>Create author collaboration network Author collaboration is identified via a list of co-authors from JSON data fetched via Crossref. This refined data are further used in the case study in the subsequent section. It is important to note that the count of authors at this sub-step may vary from next sub-step of creating author citation network since the list of co-authors in Crossref is provided as a list of names and we do not include further metadata about these authors.</ns0:p><ns0:p>Create author citation network Using the subgraph of article citation network respective edges are made for authors to create author citation network. All co-authors are linked to use full counting method.</ns0:p><ns0:p>In case method of partial counting is to be utilised then this script needs to be modified. However, our workflow is not affected by the use of a partial or full counting method and hence we have picked simpler one for brevity <ns0:ref type='bibr' target='#b8'>(Glanzel, 2003)</ns0:ref>. In any case, this network shall supplement the analysis on a collaboration network that was constructed in the previous step, as well as article citation network that was originally provided.</ns0:p></ns0:div>
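A minimal sketch of the last two constructions is shown below with NetworkX. It assumes the author map from the Crossref step was saved as paper_authors.json and the article citation subgraph was exported as a two-column, tab-separated DOI edge list (both file names are placeholders); full counting is used, so every co-author of a citing paper is linked to every co-author of the cited paper.

import json
from itertools import combinations
import networkx as nx

with open("paper_authors.json", encoding="utf-8") as f:
    paper_authors = json.load(f)                       # DOI -> list of raw author-name strings
cite_edges = [line.split("\t") for line in open("scim_article_edges.txt").read().splitlines() if line]

# author collaboration network: one weighted edge per co-author pair
collab = nx.Graph()
for names in paper_authors.values():
    for a, b in combinations(sorted(set(names)), 2):
        if collab.has_edge(a, b):
            collab[a][b]["weight"] += 1
        else:
            collab.add_edge(a, b, weight=1)

# author citation network, full counting: every citing co-author -> every cited co-author
acite = nx.DiGraph()
for citing_doi, cited_doi in cite_edges:
    for a in paper_authors.get(citing_doi, []):
        for b in paper_authors.get(cited_doi, []):
            acite.add_edge(a, b)

nx.write_edgelist(collab, "author_collaboration.tsv", delimiter="\t", data=["weight"])
nx.write_edgelist(acite, "author_citation.tsv", delimiter="\t", data=False)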
<ns0:div><ns0:head>Centrality analysis</ns0:head><ns0:p>Summary of the sub-steps to apply centrality analysis are shown in Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>. Below we define the sub-steps to apply different centrality measures on the scientific networks. This is one of the common method employed in the bibliometric analysis, however other methods of SNA can also be applied at this step. Any tool or wrapper API may restrict the functionality at this point, however, this work can be extended to use any functions in existing network processing libraries. Since using graphical tools is easier than the script so a future application of this study could be about creating a front end tool for ease of use. Below we provide details about how the different centrality measures applied by different studies can be accomplished. Each of the measures is separated in the different listing along with loading and initialisation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Applying centrality measures on article citation network</ns0:head><ns0:p>The article citation network is a Directed Acyclic Graph (DAG). Most centrality analyses are not meaningful on DAG. Two measures are presented.</ns0:p><ns0:p>First, degree centrality provides highly cited articles. Finding authors of these articles is also possible, however not provided for simplicity. Secondly, influence definition in DAG is captured via the recursive definition of Katz centrality which is also provided using NetworkX library. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Step 5 of the workflow with details of centrality measures that are applied on different scientific networks. Sub-steps may be applied as required as there is no dependency within steps.</ns0:p><ns0:p>(eigenvector centrality) and authors working in multiple domains (betweenness centrality).</ns0:p></ns0:div>
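A short sketch of these two measures with NetworkX is given below. The edge list name is a placeholder, edges are assumed to point from citing to cited DOI (so in-degree counts citations received), and the Katz damping factor alpha is only an illustrative value that must stay below the reciprocal of the largest adjacency eigenvalue.

import networkx as nx

g = nx.read_edgelist("scim_article_edges.txt", delimiter="\t",
                     create_using=nx.DiGraph(), data=False)

# degree centrality on the citation DAG: in-degree = citations received per article
cited_count = dict(g.in_degree())
top_cited = sorted(cited_count, key=cited_count.get, reverse=True)[:5]
print("most cited articles:", top_cited)

# Katz centrality captures the recursive influence definition on the DAG;
# the direction convention may need reversing depending on how influence is defined
katz = nx.katz_centrality(g, alpha=0.01, max_iter=5000)
top_katz = sorted(katz, key=katz.get, reverse=True)[:5]
print("highest Katz centrality:", top_katz)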
<ns0:div><ns0:head>Applying centrality measures on author collaboration network</ns0:head><ns0:p>The author collaboration network has cyclic nature and most centrality analyses are possible. Five measures are presented, namely highly collaborative authors (degree centrality), influential collaborators (eigenvector centrality), authors working in multiple groups (betweenness centrality), well-knitted authors (closeness centrality), and solo authors (farness centrality). Ranks captured here are presented in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. This work was done manually by sorting individual lists on respective centrality scores and identifying their rank position.</ns0:p></ns0:div>
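The sketch below illustrates how the five rankings behind Table 2 can be reproduced with NetworkX, assuming the collaboration edge list written earlier (file name is a placeholder). Farness is approximated here simply as the reverse ordering of closeness, and the author names are only examples taken from Table 2.

import networkx as nx

g = nx.read_edgelist("author_collaboration.tsv", delimiter="\t",
                     data=(("weight", int),))
gc = g.subgraph(max(nx.connected_components(g), key=len))   # distance-based measures need a connected graph

scores = {
    "degree":      nx.degree_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
    "eigenvector": nx.eigenvector_centrality(g, max_iter=1000),
    "closeness":   nx.closeness_centrality(gc),
}
# one simple proxy for farness: authors ranked by how low their closeness is
scores["farness"] = {n: -c for n, c in scores["closeness"].items()}

def rank(table, author):
    ordered = sorted(table, key=table.get, reverse=True)
    return ordered.index(author) + 1 if author in table else None

for name in ["Glanzel, W", "Rousseau, R", "Leydesdorff, L"]:   # illustrative names from Table 2
    print(name, {m: rank(t, name) for m, t in scores.items()})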
<ns0:div><ns0:head>Batch execution</ns0:head><ns0:p>All python scripts can be executed through a sample batch file by modifying the ISSN and date range. This batch processing will also be useful for developing a front-end tool, as well as modifying the sequence as per user need.</ns0:p></ns0:div>
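As an illustration, a batch run can also be orchestrated directly from Python; the script names below are hypothetical placeholders for the files in the repository and would need to be replaced with the actual file names.

import subprocess
import sys

ISSN, FROM, UNTIL = "0138-9130", "2003-01-01", "2012-12-31"    # example values
steps = [
    ["python", "step2_fetch_crossref.py", ISSN, FROM, UNTIL],  # hypothetical script names
    ["python", "step3_build_ego_network.py", ISSN],
    ["python", "step4_build_networks.py", ISSN],
    ["python", "step5_centrality_analysis.py", ISSN],
]
for cmd in steps:   # step 1 (COCI download and conversion) is assumed to have been run once beforehand
    print("running:", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit("step failed: %s" % cmd[1])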
<ns0:div><ns0:head>CASE STUDY USING SCIM</ns0:head><ns0:p>Milojević (2014) analysed collaboration network using WoS data of SCIM for 10 years starting from the year 2003. The outcome of their analysis was provided in a table having authors that had top 5 ranks in either of the centrality scores. The respective rank of those authors was also provided. To verify whether or not our workflow can capture a similar pattern we provide the results in a similar tabular form.</ns0:p><ns0:p>For each of the centrality measure we provide the rank given in <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref> using WoS data, as well as compare it with the rank obtained in our study using OpenCitatons data. We observe that the rank of authors for the degree, betweenness and closeness centrality is more or less similar, however, further analysis is required to inquire the reason for the difference of eigenvector centrality ranks. Such an analysis is outside the scope of this study.</ns0:p><ns0:p>Ranks in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> are entered manually after processing the information separately. Author names are sorted in the same sequence as provided in the original study along with their respective ranks. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> has four sections for the degree, betweenness, eigenvector and closeness centrality, respectively. Each section has two columns with the left column showing rank from <ns0:ref type='bibr' target='#b22'>Milojević (2014)</ns0:ref> and the right column shows the rank calculated for the same author using our workflow. It is pertinent to note that a very hand-on approach is provided by <ns0:ref type='bibr' target='#b22'>Milojević (2014)</ns0:ref>, however, due to access restriction of WoS and its</ns0:p></ns0:div>
<ns0:div><ns0:head>11/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science unaffordability for developing countries, such useful analysis are only limited to researchers of specific institutes having subscription <ns0:ref type='bibr' target='#b38'>(Waltman and Larivière, 2020)</ns0:ref>.</ns0:p><ns0:p>This highlights the importance of our workflow to provide access to any individual who can download the publicly available metadata. Further, we do not discuss the reasons for why a specific author has topped the list and what the centrality measure signifies, and the interested reader is referred to <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref>. However, we intend to provide a detailed analysis in a separate publication using ego networks. <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref> and current study for each centrality measure. Table is divided into 4 sections for each centrality measure with the left column in each section showing the rank from <ns0:ref type='bibr' target='#b22'>(Milojević, 2014)</ns0:ref>, and the right column showing the rank calculated by our workflow.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>Scientific networks rely on completion of data, and although the field has existed for more than 50 years <ns0:ref type='bibr' target='#b32'>(Price, 1965)</ns0:ref>, however, the limitations on data access have not helped to reach its true potential. We aim that with the availability of publicly available metadata <ns0:ref type='bibr' target='#b38'>(Waltman and Larivière, 2020)</ns0:ref> and a workflow to access it, such as the one presented in this study, a researcher from any field will be able to analyse the prominent authors. Based on numerous studies discussed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> it is evident that centrality measures are a popular way of identifying prominent authors. It can further be used for identifying reviewers for a potential study (based on its references), as well as a graduate student finding a PhD supervisor. Once the citation network is fetched and saved as a binary file the time it takes to analyse authors list in a journal is well under an hour, barring the time to create ego network as it requires downloading Crossref files for each DOI. This provides a means for fast and interactive analysis for researchers of any field. This study currently does not provide a detailed analysis of the ego network, however, a brief comparison justifies the importance of systematic metadata harvesting workflow. For case study, some manual work was also done to sort and format the results, however, it can also be scripted in future as it does not hamper the workflow and can be performed as a standalone. Likewise, techniques for author name disambiguation or partial counting have not been included but for effective analysis, these need to be incorporated in future.</ns0:p><ns0:p>We further aim to enhance this work to filter Crossref data based on subject categories instead of journal ISSN. It would enhance the capability and usefulness of this analysis for individual researchers. A web-based portal is also under construction where the user may be able to select the date range along with other filters and the system which initiates the scripts at the back-end. This way the users who are not familiar with programming can also benefit from this analysis.</ns0:p><ns0:p>12/14</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>neighbours of node (n) (namely node (k), (l), (m), (o) and (p)) will form its ego network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A toy network showing different nodes with high centrality for different measures. A. shows high farness centrality since the node (a) has the maximum average distance to other nodes. B. shows high clustering coefficient since neighbours of the node (c) are all connected as well.C. shows high betweenness centrality since the highest number of shortest paths will go from the node (i) and (k) since they are bridging two parts of the network. D. shows high degree centrality as both the nodes (e) and (n) have 5 neighbours. E. shows high eigenvector centrality since node (e) is connected to many neighbours with a relatively higher degree. F. shows high closeness centrality as the average distance from nodes (i), (j) and (k) are minimum to other nodes.</ns0:figDesc><ns0:graphic coords='3,141.73,279.84,413.57,232.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b37'>Waheed et al. (2019)</ns0:ref> discusses the use of centrality measures on multiple scientific networks of author collaboration, author citation and article citation to improve article recommendation. They filter the 4/14 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Workflow to identify gurus of any Field. A pyramid shows the refinement of data at every step. COCI contains approx. 625 M edges which are refined to ego network for subset nodes fetched for respective ISSN. Finally, the top of the pyramid shows the output in form of a few nodes identified with high centrality.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.59,175.53' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Step 1 of the workflow with details of creating the citation network. Sub-steps are applied sequentially.</ns0:figDesc><ns0:graphic coords='8,141.73,379.10,413.59,157.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5.Step 3 of the workflow with details of creating the ego network. Sub-steps are applied sequentially, and may be iterated over to create next level of ego network.</ns0:figDesc><ns0:graphic coords='10,141.73,216.53,413.59,153.36' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Applying centrality measures on author citation networkThe author citation network has cyclic nature. Three measures are presented, namely highly cited authors (degree centrality), influential authors 10/14 PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:2:0:NEW 1 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7.Step 5 of the workflow with details of centrality measures that are applied on different scientific networks. Sub-steps may be applied as required as there is no dependency within steps.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.59,249.31' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='9,141.73,355.50,413.59,150.71' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Review of studies applying social network analysis on scientific networks.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Bibliometric</ns0:cell><ns0:cell>Scientific Network(s)</ns0:cell><ns0:cell>Social Network Analysis</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Data Source</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ding (2011)</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Author Citation</ns0:cell><ns0:cell>Weighted PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Abbasi et al.</ns0:cell><ns0:cell>Scopus</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2012)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Ortega (2014)</ns0:cell><ns0:cell>MAG</ns0:cell><ns0:cell>Co-Author Ego Network</ns0:cell><ns0:cell>Clustering Coefficient, Degree</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>and Betweenness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Milojević</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Author Collaboration and Cita-</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2014)</ns0:cell><ns0:cell /><ns0:cell>tion, Article Citation</ns0:cell><ns0:cell>ness, Eigenvector Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Waltman and</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Journal Citation Network</ns0:cell><ns0:cell>PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Yan (2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Xu and Peke-</ns0:cell><ns0:cell>Manual</ns0:cell><ns0:cell>Author Citation</ns0:cell><ns0:cell>PageRank and Degree Central-</ns0:cell></ns0:row><ns0:row><ns0:cell>lis (2015)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ity</ns0:cell></ns0:row><ns0:row><ns0:cell>Leydesdorff</ns0:cell><ns0:cell>WoS/JCR</ns0:cell><ns0:cell>Journal Citation</ns0:cell><ns0:cell>Betweenness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>et al. 
(2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lee (2019)</ns0:cell><ns0:cell>Scopus</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Degree and Betweenness Cen-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>trality, Clustering Coefficient</ns0:cell></ns0:row><ns0:row><ns0:cell>Massucci and</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Institutional Citation</ns0:cell><ns0:cell>PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Docampo</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Singh and Jo-</ns0:cell><ns0:cell>APS Journals</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Centrality, Community Detec-</ns0:cell></ns0:row><ns0:row><ns0:cell>lad (2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell></ns0:row><ns0:row><ns0:cell>Van den Besse-</ns0:cell><ns0:cell>Manual</ns0:cell><ns0:cell>Researchers Ego Network</ns0:cell><ns0:cell>Clustering coefficient, eigenvec-</ns0:cell></ns0:row><ns0:row><ns0:cell>laar and Sand-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tor Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>ström (2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Waheed et al.</ns0:cell><ns0:cell>DBLP, ACM,</ns0:cell><ns0:cell>Author Collaboration and Cita-</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell>MAG</ns0:cell><ns0:cell>tion, Article Citation, Co-citation</ns0:cell><ns0:cell>ness, Eigenvector Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>and Bibliographic Coupling</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of ranks by previous study</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Collaborator</ns0:cell><ns0:cell cols='8'>Degree Rank Betweenness Rank Eigenvector Rank Closeness Rank</ns0:cell></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell cols='3'>Prev Curr Prev</ns0:cell><ns0:cell>Curr</ns0:cell><ns0:cell>Prev</ns0:cell><ns0:cell>Curr</ns0:cell><ns0:cell>Prev</ns0:cell><ns0:cell>Curr</ns0:cell></ns0:row><ns0:row><ns0:cell>Glanzel, W</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Rousseau, R</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>DeMoya-Anegon, F</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>Klingsporn, B</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>89</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>174</ns0:cell><ns0:cell>144</ns0:cell></ns0:row><ns0:row><ns0:cell>Ho, Ys</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>2096</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>613</ns0:cell><ns0:cell>575</ns0:cell></ns0:row><ns0:row><ns0:cell>Thijs, B</ns0:cell><ns0:cell>63</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>1710</ns0:cell></ns0:row><ns0:row><ns0:cell>Schubert,A</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>24</ns0:cell></ns0:row><ns0:row><ns0:cell>Debackere, K</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Schlemmer, B</ns0:cell><ns0:cell>670</ns0:cell><ns0:cell>832</ns0:cell><ns0:cell>382</ns0:cell><ns0:cell>962</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>808</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>37</ns0:cell></ns0:row><ns0:row><ns0:cell>Meyer, M</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Leydesdorff, L</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>44</ns0:cell></ns0:row><ns0:row><ns0:cell>Rafols,I</ns0:cell><ns0:cell>1058</ns0:cell><ns0:cell>387</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>239</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>49</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Editor,
PeerJ CS
1 Jan 2021
We acknowledge the time and effort given by the editor and all reviewers in improving the
manuscript.
We believe that the manuscript will now be acceptable for publication in PeerJ CS.
Thanks,
Bilal, Rafi and Sabih
Editor's Decision Minor Revisions
Dear authors,
Thanks for submitting your revised work at PeerJ Computer Science. The same three
independent experts who assessed your initial submission were able to review your revision
again, and their reviews are attached. All of them praised the work and the extensive rewriting
you did in the article. However, there are still some issues to address before this article is
acceptable for publication in PeerJ Computer Science.
>> Response: Authors acknowledge the effort put in by the editor for a timely conduct of
the review process.
Please read carefully the reviewers' comments and address all of them in the new revision of
your work. Please let me know if you need more time for preparing the revision.
>> Response: All suggested changes have been addressed, as below.
Thanks again for having submitted to PeerJ Computer Science.
Have a nice day :-)
Silvio Peroni
https://orcid.org/0000-0003-0530-4305
Reviewer 1
Basic reporting
Although the authors reworked their submission according to suggestions of the reviewers, I find
too many inconsistencies to recommend its publication in its present form.
>> Response: Authors acknowledge the critical feedback provided by the reviewer that
has shaped the current study.
While the review of previous studies and description of the workflow are good, the formulation of
the research goals and the case study are problematic.
>> Response: The research question has been merged into the introduction section.
Indeed, the abstract formulates the research goal as developing a methodology to construct a
multinetwork from the same dataset. Namely, citation network overlayed with authorship
network and collaboration network. This is an ambitious program and the authors seem to be
able to show the workflow, algorithm how they perform this task. Such algorithms are known
and the authors shall be commended for their thorough description of how these algorithms
work together, in tandem or pipeline.
>> Response: Appreciated.
The authors present the example of such workflow, the case study. They downloaded the
authors writing to Scientometrics, calculated different centralities associated with them, and
compared their measurements to previous study (Milojevic, 2014). Table 1 show dramatic
difference between their ranking of prominent authors and those of Milojevic, In other words, the
authors shoot themselves in the foot, since discrepancies in Table 1 invalidate their algorithm.
My feeling is that either description of some details of their algorithm is missing, or they do not
explain in which aspect their measurements are different from those of Milojevic.
>> Response: As mentioned at line 369 of the previous submission, we were unable to
perform a detailed comparison due to the unaffordability of the WoS dataset.
This highlights the importance of our workflow, which provides access to any individual who
can download the publicly available metadata. Ranks for the different measures (other than
eigenvector centrality) follow a similar pattern, apart from a couple of anomalies that
could not be investigated further.
Experimental design
good.
>> Response: Appreciated.
Validity of the findings
Problematic. Probably, some details of the algorithm are missing.
>> Response: In our first submission we did mention a possible reason for the difference in
eigenvector centrality ranks (the use of a PageRank implementation with a damping factor
of 0.85). We later removed it because the comparative analysis was reshaped as a case
study (as per the overall comments) and the case study presents only a sample execution of
our workflow. We believe that such details may confuse the reader.
Reviewer 2
Basic reporting
See below.
Experimental design
See below.
Validity of the findings
See below.
Comments for the author
Dear authors,
thank you for a thoughtful and extensive revision of your article. I believe its goals and scope
are now clear, as well as its contribution. The article is also now well-embedded into previous
literature, and the publication of the code on GitHub is crucial.
>> Response: Authors acknowledge the thoughtful comments of the reviewer.
While I remain skeptical about the actual scientific contribution of this work, which I consider
somewhat narrow, I believe that the authors have substantially improved on their previous
submission and, if the editor considers their work of interest to PeerJ readers, I now support
acceptance.
>> Response: We hypothesize that the workflow, once released as open source, will
give the community a means of supplementing it with different analyses.
Reviewer: Ludo Waltman
Basic reporting
See my general comments.
Experimental design
See my general comments.
Validity of the findings
See my general comments.
Comments for the author
I would like to thank the authors for the improvements they have made to their paper. Before I
can recommend this paper for publication, there are some further improvements that I consider
to be necessary.
>> Response: Authors acknowledge the invaluable comments provided that have
improved this study.
The introduction of the paper, in particular the first paragraph of the introduction, needs to
provide a better explanation of what the paper is about. The introduction is largely focused on
discussing the problem of identifying ‘gurus’. This gives the incorrect impression that the paper
may provide in-depth analyses of different approaches to identifying gurus. The introduction
does not make sufficiently clear that the paper is almost entirely about providing a standardized
workflow for identifying gurus. This needs to be made much more clear in the introduction.
>> Response: The beginning of the introduction section has been modified.
The subsection ‘Research question’ in the section ‘Preliminaries’ can best be removed. If the
authors want to specify an explicit research question, it is best to do so in the introduction.
>> Response: The research question has been merged into the introduction.
The distinction between the section ‘Discussion’ and the section ‘Conclusion and future work’ is
not very clear. My recommendation is to merge the two sections in a single section.
>> Response: Sections have been merged.
“Any study on co-authorship may not necessarily have all the citation links.”: I don’t understand
this sentence. To study co-authorship, there is no need to have information about citation links.
A study of co-authorship requires data on co-authorship. It doesn’t require citation data.
>> Response: This is correct; however, the purpose of providing the workflow is to fetch all
data needed for a holistic view. A researcher looking for citation details of highly
collaborative authors may not be able to rely on the one-dimensional data of
co-authorship alone. The sentences in the abstract have been rephrased.
“the same would not be true for publishers whose metadata are not yet public although available
with Crossref (such as Elsevier).”: As I pointed out in my previous review report, this is incorrect.
Elsevier does make metadata openly available in Crossref. The only exception is citation data,
which Elsevier currently doesn’t make openly available in Crossref. (By the way, this will change
soon. See https://www.elsevier.com/connect/advancing-responsible-research-assessment.)
>> Response: These sentences have been removed.
" | Here is a paper. Please give your review comments after reading it. |
2 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>One of the disciplines behind the science of science is the study of scientific networks. This work focuses on scientific networks as social networks whose nodes and connections can take different forms. Nodes can represent authors, articles or journals, while connections can represent citation, co-citation or co-authorship. One of the challenges in creating scientific networks is the lack of a publicly available, comprehensive data set, which limits the variety of analyses that can be performed on the same set of nodes across different scientific networks. To supplement such analyses we have worked with publicly available citation metadata from Crossref and OpenCitations. Using these data, a workflow is developed to create scientific networks.</ns0:p><ns0:p>Analysis of these networks gives insights into academic research and scholarship. Different techniques of social network analysis have been applied in the literature to study these networks, including centrality analysis, community detection, and the clustering coefficient.</ns0:p><ns0:p>We have used metadata of the Scientometrics journal, as a case study, to present our workflow. We performed a sample run of the proposed workflow to identify prominent authors using centrality analysis. This work is not a bibliometric study of any field; rather, it presents replicable Python scripts to perform network analysis. With the increasing popularity of open access and open metadata, we hypothesise that this workflow shall provide an avenue for understanding scientific scholarship in multiple dimensions.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Scientific networks provide useful information in understanding the dynamics of science <ns0:ref type='bibr' target='#b34'>(Price, 1965)</ns0:ref>.</ns0:p><ns0:p>With the advent of numerous bibliographic data sources <ns0:ref type='bibr' target='#b40'>(Waltman and Larivière, 2020)</ns0:ref>, it is now possible to analyse different scientific networks. The proposed study focuses on the article citation network, the author citation network, and the co-authorship network. Usually, studies that focus on co-authorship do not require citation information. However, having citation links would enable a more complete and holistic view of the possible relations among authors <ns0:ref type='bibr' target='#b45'>(Zingg et al., 2020)</ns0:ref>. To achieve this objective, comprehensive access to citation metadata is required. This can be accomplished using publicly available citation metadata accessible via Crossref <ns0:ref type='bibr' target='#b13'>(Hendricks et al., 2020)</ns0:ref>. However, applying network analysis on these data requires a series of steps that may not be intuitive. The proposed study furnishes details of these steps so that it is easy to supplement them with different analyses.</ns0:p><ns0:p>Social network analysis techniques are applied to study scientific networks, including the article and author citation networks and the author collaboration network. Usually, these networks are built using different data sources. However, our workflow can create all these networks using OpenCitations data and Crossref. The workflow presented in this article is part of a study on the influence of scholarly research artefacts. To this end, we primarily limit our research goal to providing a systematic workflow for analysing scientific networks. In this work, we aim to utilise open metadata <ns0:ref type='bibr' target='#b32'>(Peroni et al., 2015)</ns0:ref>, made available using Crossref. Also, we utilise open source Python libraries for network analysis, namely NetworkX <ns0:ref type='bibr' target='#b10'>(Hagberg et al., 2008)</ns0:ref> and SNAP <ns0:ref type='bibr' target='#b18'>(Leskovec and Sosič, 2016)</ns0:ref>. Python is used based on its popularity with researchers, as per the survey results by <ns0:ref type='bibr' target='#b2'>AlNoamany and Borghi (2018)</ns0:ref>. Although graphical software has an ease of use, we prefer to provide the workflow as a set of Python scripts to facilitate advanced analysis.</ns0:p><ns0:p>Details of batch execution of the workflow scripts are available on GitHub for researchers with a programming background <ns0:ref type='bibr' target='#b5'>(Butt and Faizi, 2020)</ns0:ref>. This article outlines the details of a case study analysing the collaboration network built from Scientometrics journal metadata. All steps are documented for the replication of this study. This work shall lay the groundwork for analysing scientific networks using the metadata of different journals, sets of journals or a subject category. One such analysis is the identification of prominent authors (gurus).</ns0:p><ns0:p>Identifying the prominent authors of any field is one of the primary focuses for young researchers. Likewise, other researchers tend to follow research published by the gurus of the field. Defining a guru of the field is not an easy task, and the definition of a guru will be very subjective. 
To this end, we focus on the definition of a guru using the centrality measures of social network analysis. Details of different centrality measures are depicted in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> <ns0:ref type='bibr' target='#b26'>(Newman, 2010)</ns0:ref>. The following description was inspired by <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref>.</ns0:p><ns0:p>Simply put, any author with a high citation count may be considered a guru. This can be captured using degree centrality. Another way of identifying a highly cited individual is to calculate whose papers fall in the top percentile within the domain; however, we currently limit such definitions to the degree centrality of articles. It is not always the case that all highly cited authors are equally influential. Those who are cited by other influential authors may also be termed influential, even though they may or may not have a high citation count. Likewise, any author frequently collaborating with influential authors would also influence that field. This recursive definition of influence is well captured by eigenvector centrality.</ns0:p><ns0:p>Another centrality measure, namely betweenness centrality, would define an author as prominent if the author collaborates with different groups. The centrality measures of closeness and farness measure the extent to which an author is on average close to or far from other authors within the network, respectively.</ns0:p><ns0:p>In the case of analysing the citation network with a limited snapshot of data, this could be supplemented by creating the ego-centered network <ns0:ref type='bibr' target='#b27'>(Newman, 2003)</ns0:ref>. The citation index allows fetching the metadata of</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Visualising bibliographic data as a network is not new, <ns0:ref type='bibr' target='#b34'>Price (1965)</ns0:ref> introduced the work more than 50 years ago. Most recent studies are on co-authorship network <ns0:ref type='bibr' target='#b0'>(Abbasi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b23'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Lee, 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Singh and Jolad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Waheed et al., 2019)</ns0:ref>, however others have focused on citation network of authors <ns0:ref type='bibr' target='#b8'>(Ding, 2011;</ns0:ref><ns0:ref type='bibr' target='#b23'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b43'>Xu and Pekelis, 2015;</ns0:ref><ns0:ref type='bibr' target='#b39'>Waheed et al., 2019)</ns0:ref> or citation network of journals <ns0:ref type='bibr' target='#b41'>(Waltman and Yan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b19'>Leydesdorff et al., 2018)</ns0:ref>. Only a couple of studies have utilised more than one scientific network for analysis <ns0:ref type='bibr' target='#b23'>(Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b39'>Waheed et al., 2019)</ns0:ref>. Traditionally bibliometric analysis has been done using WoS and Scopus <ns0:ref type='bibr' target='#b40'>(Waltman and Larivière, 2020)</ns0:ref>. A similar case has been observed in studies on scientific network analysis where the data sources used are Scopus <ns0:ref type='bibr' target='#b0'>(Abbasi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b17'>Lee, 2019)</ns0:ref> or WoS <ns0:ref type='bibr' target='#b8'>(Ding, 2011;</ns0:ref><ns0:ref type='bibr' target='#b23'>Milojević, 2014;</ns0:ref><ns0:ref type='bibr' target='#b41'>Waltman and Yan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b19'>Leydesdorff et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b22'>Massucci and Docampo, 2019)</ns0:ref>. However, some recent studies have focused on open access data sources <ns0:ref type='bibr' target='#b36'>(Singh and Jolad, 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Van den Besselaar and Sandström, 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Waheed et al., 2019)</ns0:ref>. Other data sources such as PubMed, CiteSeerX and ACL are not discussed in this article. They are used mostly for text analysis instead of network analysis. Below we list a brief account of work done on scientific networks using centrality measures. Details are summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> in chronological order. Some earlier studies such as <ns0:ref type='bibr' target='#b28'>(Newman, 2004)</ns0:ref> are not included as we have focused on studies published in the last decade. <ns0:ref type='bibr' target='#b8'>Ding (2011)</ns0:ref> proposed to analyse the author citation network with weighted PageRank. The author proposed the strategy on predicting prize winners that outperforms the conventional h-index and related citation count measures. <ns0:ref type='bibr' target='#b0'>Abbasi et al. (2012)</ns0:ref> discussed the use of betweenness centrality as a measure of getting more collaborators compared to degree and closeness centrality. They have used temporal co-authorship network in the steel research domain. Data was manually curated and downloaded from Scopus. 
<ns0:ref type='bibr' target='#b31'>Ortega (2014)</ns0:ref> analysed 500 co-authors' ego network and conclude that bibliometric indicators and centrality measures are correlated. They have used clustering coefficient, degree and betweenness centrality as local metrics. Some global level metrics were also analysed using the ego network. It is one of the early studies using MAG.</ns0:p><ns0:p>Two book chapters provide hands-on details about centrality measures <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref> and PageRank <ns0:ref type='bibr' target='#b41'>(Waltman and Yan, 2014)</ns0:ref> using WoS data. <ns0:ref type='bibr' target='#b23'>Milojević (2014)</ns0:ref> constructed the author collaboration network and calculated degree, betweenness, eigenvector and closeness centrality. <ns0:ref type='bibr' target='#b41'>Waltman and Yan (2014)</ns0:ref> details applying PageRank on journal citation network. <ns0:ref type='bibr' target='#b43'>Xu and Pekelis (2015)</ns0:ref> used a manually curated dataset for authors of China and Taiwan in the field of Chinese Language Interpreting Studies. They have applied PageRank and degree centrality to find influential authors within different clusters identified using community detection. <ns0:ref type='bibr' target='#b19'>Leydesdorff et al. (2018)</ns0:ref> have used betweenness centrality to measure multidisciplinary journals.</ns0:p><ns0:p>Authors have limited their approach with a diversity measure and evaluated it on JCR data. Usually, a journal gets citation within its subject category but those journals cited/citing the other fields are considered a bridge between the subject categories.</ns0:p><ns0:p>A case study for young researchers performance evaluation is presented by <ns0:ref type='bibr' target='#b17'>Lee (2019)</ns0:ref>. The author analysed the collaboration network of these researchers using statistical analysis for the frequency of collaborators. The degree centrality is showed to correspond with future publication count. It is akin to Li Working on five different subject categories, they proposed a framework which is more robust than existing university rankings. It holds a high correlation with these accepted rankings. <ns0:ref type='bibr' target='#b36'>Singh and Jolad (2019)</ns0:ref> utilised data of APS journals to form collaboration network of Indian physicist. In this co-authorship network, they have applied different centrality measures and report the overlapping top authors.</ns0:p><ns0:p>Van den Besselaar and Sandström (2019) discuss the potential use of clustering coefficient and eigenvector centrality in ego network of research students and their supervisor. Both metrics are used to gauge the independence of a researcher. They have handpicked 4 pairs of researchers. The authors suggested that there are numerous ways to capture the researcher's autonomy. However, when evaluating large data sets the clustering coefficient and eigenvector centrality can be effective. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarises the studies in three aspects. First, the bibliographic data source used. Second, the scientific network created. Last, details of techniques applied for analysis. Studies show that applying centrality measures is a useful analysis in bibliometrics. However, these approaches are mostly not scalable and require considerable effort to apply the same analysis on bigger networks. 
In some cases, the tools limit the size of network analysed, whereas in other studies the data sets are manually curated. In comparison to our work, most of the studies are limited to one type of network. The way data sets are acquired limits the analysis to expand to another type of networks <ns0:ref type='bibr' target='#b45'>(Zingg et al., 2020)</ns0:ref>. We observe that very few studies have either used multiple networks or mentioned that if these can be curated with the same data source. With WoS and Scopus, it is theoretically possible to create all networks. However, with other data sources, a dump is usually uploaded with limited metadata. It restricts the authors to confine their studies.</ns0:p><ns0:p>Case studies similar to our workflow are also available on the proprietary data source of WoS <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref> and Scopus <ns0:ref type='bibr' target='#b35'>(Rose and Kitchin, 2019)</ns0:ref>. Further, a set of graphical tools are also available as discussed by <ns0:ref type='bibr' target='#b24'>Moral-Muñoz et al. (2020)</ns0:ref> in a recent survey. Most tools do not give access for Crossref apart from (Van Eck and <ns0:ref type='bibr'>Waltman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b6'>Chen, 2005)</ns0:ref>.</ns0:p><ns0:p>One of the recent studies <ns0:ref type='bibr' target='#b35'>(Rose and Kitchin, 2019)</ns0:ref> focuses on using Scopus data for network analysis.</ns0:p><ns0:p>They have provided an API for researchers to perform useful analyses. Accessing Scopus is possible with Elsevier Developer API Key. However, it requires institutional or authenticated access. Such access is not possible, especially for developing countries <ns0:ref type='bibr' target='#b14'>(Herzog et al., 2020)</ns0:ref>. Although our work is similar to <ns0:ref type='bibr' target='#b35'>Rose and Kitchin (2019)</ns0:ref> in using Python for analysing scientific network, it is different in two aspects.</ns0:p><ns0:p>Firstly, we are working with OpenCitatons data using Crossref. Secondly, we have not developed an API interface that needs maintenance and support since Crossref, NetworkX and SNAP fulfil the purpose. <ns0:ref type='bibr' target='#b6'>Chen (2005)</ns0:ref> discusses the identification of highly cited clusters of a scientific network. The pivotal points in the scientific network are captured using betweenness centrality. The author uses clinical evidence data associated with reducing risks of heart disease to illustrate the approach. They have discussed the design of the CiteSpace tool and its new feature for identifying pivotal points. They used betweenness centrality to identify pathways between thematic clusters. Nodes with high betweenness centrality are potential pivotal points in clustering the scientific network. Instead of a graphical software tool, we propose to use Python scripts. It gives more flexibility for advance analysis. For a detailed survey, we would refer the interested reader to <ns0:ref type='bibr' target='#b24'>(Moral-Muñoz et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Usage of open metadata are gaining popularity. On the other hand, publicly available metadata has its limitations with completeness and verification. <ns0:ref type='bibr' target='#b15'>Iorio et al. (2019)</ns0:ref> concludes that using OpenCitatons data for evaluation purpose is not enough due to the unavailability of complete data. However, more than half of the records are available in comparison to WoS and Scopus. 
A similar evaluation is also done by <ns0:ref type='bibr' target='#b29'>Nishioka and Färber (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b21'>Martín-Martín et al. (2020)</ns0:ref>. Further, there are different approaches to augment the current OpenCitations data <ns0:ref type='bibr' target='#b7'>(Daquino et al., 2018;</ns0:ref><ns0:ref type='bibr'>Heibi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b33'>Peroni and Shotton, 2020)</ns0:ref>. <ns0:ref type='bibr' target='#b16'>Kamińska (2018)</ns0:ref> discusses a case study of using OpenCitations data for visualising a citation network. <ns0:ref type='bibr' target='#b44'>Zhu et al. (2020)</ns0:ref> hypothesise that it would be easier for the masses to adopt OpenCitations data for bibliometric analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This section details a systematic workflow from data fetching to analysis. A series of steps are required to apply centrality analysis on the author collaboration and author citation networks. Utilising the article citation network, available as citation index, these networks get created. All scripts were executed on</ns0:p><ns0:p>Windows Server machine having Quad-Core AMD Opteron(TM) Processor 6272 with 128 GB RAM installed. The initial processing of data requires heavy computation and memory once. Later, the data are converted to a compressed binary format using libraries for processing large networks. It can run on any standard laptop machine. Below, we provide details of the workflow to create scientific networks.</ns0:p><ns0:p>Although the case study is limited to data of SCIM, we have made the process automated. This automation helps applying the same script for other journals with minimum changes.</ns0:p><ns0:p>Overview of the process is shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> and further details about each of the following steps are documented separately. Each of the steps is further distributed with three sub-steps for clarity and batch execution.</ns0:p><ns0:p>Step 1 Download the citation index available as COCI <ns0:ref type='bibr'>(Heibi et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Step 2 Download the metadata for a given ISSN through Crossref.</ns0:p><ns0:p>Step 3 Fetch the ego network from the COCI data for the DOIs of the respective ISSN.</ns0:p><ns0:p>Step 4 Merge these data to create the scientific networks.</ns0:p><ns0:p>Step 5 Apply the centrality analysis on these networks. Python scripts are uploaded as Supplemental Files and can also be accessed on GitHub <ns0:ref type='bibr' target='#b5'>(Butt and Faizi, 2020)</ns0:ref>. This allows replication and reuse of this study for other ISSNs or bibliometric analyses on different network types. Details are provided below for the understanding of this study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Load citation network</ns0:head><ns0:p>Summary of the sub-steps to load citation network is shown in Figure <ns0:ref type='figure'>3</ns0:ref>. Below we define the sub-steps to convert the COCI data to use in Python libraries for network processing. This step is computation and memory intensive but needs to be performed only once.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3.</ns0:head><ns0:p>Step 1 of the workflow with details of creating the citation network. Sub-steps are applied sequentially.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Download COCI data COCI is manually downloaded from (OpenCitations, 2020). The 15 GB Zip file extracts to 98 GB set of files. Loading this data in-memory resulted in memory-overflow even when using 128 GB RAM. Therefore, in the next step, we remove the columns other than citing and cited. These two columns are used to create the article citation network.</ns0:p><ns0:p>Convert COCI data to edge list This step is needed to convert the COCI data to an edge list format. In this format, two nodes on each row signify an edge. This format is supported by SNAP <ns0:ref type='bibr' target='#b18'>(Leskovec and Sosič, 2016)</ns0:ref> for processing large-scale network data such as COCI. After this step, the edge list file is approx 35 GB. We convert the COCI from comma-separated-values (CSV) to space-separated-values having citing and cited columns. It is the only format supported by SNAP for bulk upload. Some formatting corrections are done for removing extra CR/LF and quotes. It hampers the loading process of SNAP. We have failed to load the same files with other libraries which are relatively more intuitive but not as powerful as SNAP <ns0:ref type='bibr' target='#b18'>(Leskovec and Sosič, 2016)</ns0:ref>. However, we later discuss how this data can be used with other libraries. Details to save network in a format supported by most network processing libraries is provided in subsequent steps.</ns0:p><ns0:p>Save COCI as binary Loading 35 GB edge list in-memory using SNAP takes approx 5.5 hours. Since the edge labels are DOI in the COCI data, therefore they are saved as strings. However, this slows down further processing so strings are converted to a hash file. There are two binary files generated when loading the COCI data in SNAP. First is DOIDirected.graph file which contains the directed citation network of COCI with integer node labels. Second is DOIMapping.hash which maps the integer node label to respective DOI. We save loaded graph as binary files for further computations. Loading binary file in-memory takes a few minutes, compared to hours for loading CSV data. Downside is that additional columns of COCI are currently not being utilised. DOIMapping.hash is simply a node list where node number is mapped to its label (DOI). DOIDirected.graph is an edge list on node number. Using numeric labels makes SNAP less intuitive but more powerful since computations are much faster when integer labels are used. The mapping to string labels is possible with the node list.</ns0:p></ns0:div>
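A condensed sketch of this step is shown below. It assumes the unzipped COCI CSV files sit in a local folder and that pandas and Snap.py are installed; the chunk size is illustrative, and the LoadEdgeListStr call should be checked against the installed Snap.py release, since some releases take a TStrIntSH argument while newer ones return the hash instead.

```python
import glob
import pandas as pd
import snap

# Reduce the COCI CSVs to a space-separated "citing cited" edge list, chunk by chunk.
with open("coci_edges.txt", "w", encoding="utf-8") as out:
    for path in glob.glob("COCI/*.csv"):
        for chunk in pd.read_csv(path, usecols=["citing", "cited"], chunksize=1_000_000):
            chunk.to_csv(out, sep=" ", header=False, index=False)

# Load the edge list with string (DOI) labels; SNAP builds an integer node hash.
# NOTE: signature varies across Snap.py releases -- adjust if needed.
Mapping = snap.TStrIntSH()
G = snap.LoadEdgeListStr(snap.PNGraph, "coci_edges.txt", 0, 1, Mapping)

# Save the directed graph and the DOI hash as binary files for fast reloads.
FOutG = snap.TFOut("DOIDirected.graph")
G.Save(FOutG)
FOutG.Flush()
FOutH = snap.TFOut("DOIMapping.hash")
Mapping.Save(FOutH)
FOutH.Flush()
```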
<ns0:div><ns0:head>Fetching Crossref metadata</ns0:head><ns0:p>A summary of the sub-steps to download Crossref metadata is shown in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>. Below, we define the sub-steps to fetch the citation metadata and convert it to lists of authors and DOIs. These steps only give the API string to fetch data for a single journal. However, it is possible to fetch data with other filters, and details are available in the Crossref documentation. The metadata downloaded via different filters is in a similar format, which makes this script reusable for a variety of tasks.</ns0:p></ns0:div>
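A minimal sketch of the fetch step, using the public Crossref REST API with cursor-based paging, is shown below. The ISSN value (assumed here to be that of Scientometrics), the contact e-mail and the output file name are illustrative assumptions.

```python
import json
import requests

ISSN = "0138-9130"   # assumed ISSN of the target journal
BASE = f"https://api.crossref.org/journals/{ISSN}/works"

items, cursor = [], "*"
while True:
    resp = requests.get(BASE, params={"rows": 1000, "cursor": cursor,
                                      "mailto": "you@example.org"})
    message = resp.json()["message"]
    if not message["items"]:
        break
    items.extend(message["items"])
    cursor = message["next-cursor"]

# Keep a local dump so later steps do not need to hit the API again.
with open("crossref_scim.json", "w", encoding="utf-8") as f:
    json.dump(items, f)
```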
<ns0:div><ns0:p>Fetch author(s) list from data Once data are fetched from Crossref as JSON, we populate the list of authors. We extract authors from the previously downloaded JSON. It is important to note that we do not apply any technique for author name disambiguation and rely on Crossref for correct author names.</ns0:p><ns0:p>Although this is problematic for further analysis, corrected data from a single source is much more efficient than using local methods of cleaning. A similar approach is used by MAG <ns0:ref type='bibr' target='#b42'>(Wang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Fetch DOI list from data Once data are fetched from Crossref as JSON, we populate the list of DOIs. DOIs are extracted from the previously downloaded JSON. Although the purpose of fetching DOIs here is redundant, its replica script is created to suggest that analysis with only a given DOI list is also possible. So the previous two sub-steps can be ignored if analysing a specific journal is not needed. If the list of DOIs is fetched from an external source, then it can be easily incorporated in this workflow.</ns0:p></ns0:div>
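The extraction itself can be as small as the following sketch. Field names follow the Crossref JSON schema, and the input file name matches the dump saved in the sketch above.

```python
import json

with open("crossref_scim.json", encoding="utf-8") as f:
    items = json.load(f)

# DOI list (lower-cased, as COCI stores lower-case DOIs).
dois = [item["DOI"].lower() for item in items if "DOI" in item]

# Author name strings exactly as returned by Crossref; no disambiguation applied.
authors_per_doi = {
    item["DOI"].lower(): [f"{a.get('family', '')}, {a.get('given', '')}"
                          for a in item.get("author", [])]
    for item in items if "DOI" in item
}
```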
<ns0:div><ns0:head>Creating ego network</ns0:head><ns0:p>Summary of the sub-steps to create ego network are shown in Figure <ns0:ref type='figure'>5</ns0:ref>. Below, we define the sub-steps to create ego network. This is an optional step. Iterating this step multiple times will grow the network as desired. This step is not used in the case study, however, with publicly accessible metadata it is easier to scale our approach. Further, this step justifies our approach of using SNAP over other network processing libraries. The process of creating the ego network is not only fast but intuitive to code due to a variety of functions available in the extensive library documentation. These functions make it easier to access the nodes in both directions of an edge. Also, the hash with integer labels makes the sub-graph computation faster than using string labels.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 5.</ns0:head><ns0:p>Step 3 of the workflow with details of creating the ego network. Sub-steps are applied sequentially, and may be iterated to create the next level of ego network.</ns0:p><ns0:p>Load COCI binary to fetch subgraph After loading a binary file of COCI, a subset of the graph is fetched with nodes linked at one level apart. These nodes are either cited-by or cite the existing articles.</ns0:p><ns0:p>Processing a subgraph from 625M edges takes a few minutes on a Core i5 laptop with 16 GB RAM.</ns0:p><ns0:p>Crossref dump for egonet Crossref data is fetched for all DOIs of article ego network created in the previous step. First, all data is downloaded and then it is processed to create the network. Depending on the size of the network, the number of ego levels, and connectivity bandwidth this process may continue from hours to days. Once a local copy of data is available this delay can be reduced. Since we do not have access to complete dump of Crossref, we could not identify that whether these same scripts can be reused.</ns0:p><ns0:p>We assume that there would be few changes required to access the data locally. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science co-citation network within the list but they are ignored for two reasons. First, we did not find much evidence of centrality analysis on these networks. Secondly, the processing time for creating these networks for a very large citation network is relatively much longer than creating author collaboration or author citation network. These networks are created by making an edge list for authors who have collaborated or cited each other, respectively.</ns0:p></ns0:div>
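A compact sketch of loading the binary COCI files and building a one-level ego network around a set of seed DOIs is shown below. It assumes Snap.py is installed and that `dois` is the DOI list extracted in the previous step; the hash-reload call may need adjusting to the installed Snap.py release.

```python
import snap

# Reload the binary citation graph and DOI hash produced in Step 1.
G = snap.TNGraph.Load(snap.TFIn("DOIDirected.graph"))
# NOTE: reloading the hash differs between Snap.py releases (Load() method vs.
# a constructor taking a TFIn); adjust to the installed version.
Mapping = snap.TStrIntSH()
Mapping.Load(snap.TFIn("DOIMapping.hash"))

def ego_node_ids(seed_dois):
    """Collect seed nodes plus their one-level cited and citing neighbours."""
    node_ids = set()
    for doi in seed_dois:
        nid = Mapping.GetKeyId(doi)
        if nid == -1:               # DOI not covered by COCI
            continue
        node_ids.add(nid)
        NI = G.GetNI(nid)
        node_ids.update(NI.GetOutNId(i) for i in range(NI.GetOutDeg()))  # cited
        node_ids.update(NI.GetInNId(i) for i in range(NI.GetInDeg()))    # citing
    return node_ids

ids = ego_node_ids(dois)                    # 'dois' from the Crossref step
NIdV = snap.TIntV()
for nid in ids:
    NIdV.Add(nid)
SubG = snap.GetSubGraph(G, NIdV)            # article citation ego network
ego_dois = [Mapping.GetKey(nid) for nid in ids]   # DOIs to fetch from Crossref
```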
<ns0:div><ns0:head>Figure 6.</ns0:head><ns0:p>Step 4 of the workflow with details of creating different scientific networks. Sub-steps are applied sequentially.</ns0:p></ns0:div>
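A sketch of how the two author-level networks can be derived once the per-DOI author lists and the article citation edges are available. Variable names follow the earlier sketches and are assumptions; `citation_edges` stands for the (citing DOI, cited DOI) pairs exported from the COCI subgraph.

```python
import itertools
import networkx as nx

# Author collaboration network: one weighted edge per co-authored pair.
coauthor_net = nx.Graph()
for doi, authors in authors_per_doi.items():
    for a, b in itertools.combinations(sorted(set(authors)), 2):
        if coauthor_net.has_edge(a, b):
            coauthor_net[a][b]["weight"] += 1
        else:
            coauthor_net.add_edge(a, b, weight=1)

# Author citation network: authors of a citing article point to authors of the cited one.
author_citation_net = nx.DiGraph()
for citing_doi, cited_doi in citation_edges:
    for a in authors_per_doi.get(citing_doi, []):
        for b in authors_per_doi.get(cited_doi, []):
            if author_citation_net.has_edge(a, b):
                author_citation_net[a][b]["weight"] += 1
            else:
                author_citation_net.add_edge(a, b, weight=1)
```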
<ns0:div><ns0:head>Create article citation network</ns0:head><ns0:p>Once the list of DOI is available it is used to fetch subgraph of article citation network. Article citation network is fetched as a subgraph from COCI. Further, it saves the same graph as a binary file for further analysis. Also, the CSV file can be used with any graph processing library (such as NetworkX) while binary file can be read using SNAP.</ns0:p><ns0:p>betweenness, eigenvector and closeness centrality, respectively. Each section has two columns with the left column showing rank from <ns0:ref type='bibr' target='#b23'>Milojević (2014)</ns0:ref> and the right column shows the rank calculated for the same author using our workflow. It is pertinent to note that a very hand-on approach is given in Milojević (2014). However, due to access restriction of WoS and its unaffordability for developing countries such useful analysis is only limited to researchers of specific institutes having WoS subscription <ns0:ref type='bibr' target='#b40'>(Waltman and Larivière, 2020)</ns0:ref>. This highlights the importance of our workflow to provide access to any individual who can download the publicly available metadata. <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref> and current study. Table is divided into 4 sections for each centrality measure. The left column in each section showing the rank from <ns0:ref type='bibr' target='#b23'>(Milojević, 2014)</ns0:ref>, and the right column showing the rank calculated by our workflow.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>Scientific networks rely on completion of data <ns0:ref type='bibr' target='#b45'>(Zingg et al., 2020)</ns0:ref>. Although the field has existed for more than 50 years <ns0:ref type='bibr' target='#b34'>(Price, 1965)</ns0:ref> but the limitations on data access have not helped to reach its true potential.</ns0:p><ns0:p>We aim that with the availability of publicly available citation metadata <ns0:ref type='bibr' target='#b40'>(Waltman and Larivière, 2020)</ns0:ref> and a scripted workflow to access it <ns0:ref type='bibr' target='#b5'>(Butt and Faizi, 2020)</ns0:ref> a researcher from any field will be able to analyse the scientific networks. Its application can be vast, from identifying reviewers for a manuscript (based on article's references) to a graduate student finding a supervisor (through collaboration network).</ns0:p><ns0:p>The time it takes to completely execute the workflow scripts is well under an hour, barring the two time intensive steps. First, saving citation index as a binary file which needs to be done only once. Second, downloading Crossref DOI files for individual nodes of ego-centered network can be optimised with a local copy. The workflow provides a means for fast and interactive analysis.</ns0:p><ns0:p>Since using graphical tools is easier than executing the scripts so a future application of this study is to create a front-end tool. A web-based portal is also under construction where the user may be able to select the date range along with other filters, and the system will initiate the scripts at the back-end. This way the researchers who are not familiar with programming can also benefit. It would enhance the capability and usefulness of this workflow. Techniques for author name disambiguation and partial counting have not been included. For effective analysis these need to be incorporated in future.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A toy network showing nodes with high centrality for different measures. A. shows high farness centrality since the node (a) has the maximum average distance to other nodes. B. shows high clustering coefficient since neighbours of the node (c) are all connected as well.C. shows high betweenness centrality since the highest number of shortest paths will go from the node (i) and (k) since they are bridging two parts of the network. D. shows high degree centrality as both the nodes (e) and (n) have highest count of neighbours. E. shows high eigenvector centrality since node (e) is connected to many neighbours with a relatively higher degree. F. shows high closeness centrality as the average distance from nodes (i), (j) and (k) are minimum to other nodes.</ns0:figDesc><ns0:graphic coords='3,141.73,330.41,413.57,232.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2019) who concludes that collaboration of young scientists with top-ranked co-authors has a high probability of future success. Massucci and Docampo (2019) applied the PageRank algorithm on a university citation network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b39'>Waheed et al. (2019)</ns0:ref> discusses the use of centrality measures on multiple scientific networks to improve article recommendation. They filter the citation network to five levels in cited-by and citing directions. Evaluating a large-scale network available at AMiner they proposed a hybrid recommendations strategy. It includes different centrality measures on author collaboration network, author citation network and article citation network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Workflow for analysing scientific networks. The pyramid shows the refinement of data at every step. COCI contains approximately 625 M edges. It gets reduced as a subset of nodes fetched for respective ISSN. Finally, the top of the pyramid shows the output in the form of a few nodes identified with high centrality.</ns0:figDesc><ns0:graphic coords='8,141.73,166.17,413.59,175.53' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Step 2 of the workflow with details of fetching metadata from Crossref API. Sub-steps are applied sequentially.</ns0:figDesc><ns0:graphic coords='9,141.73,477.38,413.59,150.71' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>DOI</ns0:head><ns0:label /><ns0:figDesc>and author list extraction Processing of ego network for authors is similar to nodes of SCIM downloaded earlier. However, the connecting nodes fetched in subgraph above are added and their respective author details are downloaded. Creating scientific network(s) Summary of the sub-steps to create scientific networks are shown in Figure 6. Once all the data are pre-processed this step creates different types of network. We can also add bibliographic coupling and</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='11,141.73,137.30,413.59,183.64' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,141.73,63.78,413.59,249.31' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>has used COCI to evaluate books scholarship. With a scripted workflow, we Review of studies applying social network analysis on scientific networks.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Bibliographic</ns0:cell><ns0:cell>Scientific Network(s)</ns0:cell><ns0:cell>Social Network Analysis</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Data Source</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ding (2011)</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Author Citation</ns0:cell><ns0:cell>Weighted PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Abbasi et al.</ns0:cell><ns0:cell>Scopus</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2012)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Ortega (2014)</ns0:cell><ns0:cell>MAG</ns0:cell><ns0:cell>Co-Author Ego Network</ns0:cell><ns0:cell>Clustering Coefficient, Degree</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>and Betweenness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Milojević</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Author Collaboration and Cita-</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2014)</ns0:cell><ns0:cell /><ns0:cell>tion, Article Citation</ns0:cell><ns0:cell>ness, Eigenvector Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>Waltman and</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Journal Citation Network</ns0:cell><ns0:cell>PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Yan (2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Xu and Peke-</ns0:cell><ns0:cell>Manual</ns0:cell><ns0:cell>Author Citation</ns0:cell><ns0:cell>PageRank and Degree Central-</ns0:cell></ns0:row><ns0:row><ns0:cell>lis (2015)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ity</ns0:cell></ns0:row><ns0:row><ns0:cell>Leydesdorff</ns0:cell><ns0:cell>WoS/JCR</ns0:cell><ns0:cell>Journal Citation</ns0:cell><ns0:cell>Betweenness Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>et al. 
(2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lee (2019)</ns0:cell><ns0:cell>Scopus</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Degree and Betweenness Cen-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>trality, Clustering Coefficient</ns0:cell></ns0:row><ns0:row><ns0:cell>Massucci and</ns0:cell><ns0:cell>WoS</ns0:cell><ns0:cell>Institutional Citation</ns0:cell><ns0:cell>PageRank</ns0:cell></ns0:row><ns0:row><ns0:cell>Docampo</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Singh and Jo-</ns0:cell><ns0:cell>APS Journals</ns0:cell><ns0:cell>Author Collaboration</ns0:cell><ns0:cell>Centrality, Community Detec-</ns0:cell></ns0:row><ns0:row><ns0:cell>lad (2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tion</ns0:cell></ns0:row><ns0:row><ns0:cell>Van den Besse-</ns0:cell><ns0:cell>Manual</ns0:cell><ns0:cell>Researchers Ego Network</ns0:cell><ns0:cell>Clustering coefficient, eigenvec-</ns0:cell></ns0:row><ns0:row><ns0:cell>laar and Sand-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>tor Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell>ström (2019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Waheed et al.</ns0:cell><ns0:cell>DBLP, ACM,</ns0:cell><ns0:cell>Author Collaboration and Ci-</ns0:cell><ns0:cell>Degree, Betweenness, Close-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2019)</ns0:cell><ns0:cell>MAG</ns0:cell><ns0:cell>tation, Article Citation, Co-</ns0:cell><ns0:cell>ness, Eigenvector Centrality</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>citation and Bibliographic Cou-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>pling</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>5/14PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52217:3:1:NEW 27 Jan 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of ranks by previous study</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Collaborator</ns0:cell><ns0:cell cols='8'>Degree Rank Betweenness Rank Eigenvector Rank Closeness Rank</ns0:cell></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell cols='3'>Prev Curr Prev</ns0:cell><ns0:cell>Curr</ns0:cell><ns0:cell>Prev</ns0:cell><ns0:cell>Curr</ns0:cell><ns0:cell>Prev</ns0:cell><ns0:cell>Curr</ns0:cell></ns0:row><ns0:row><ns0:cell>Glanzel, W</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Rousseau, R</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>DeMoya-Anegon, F</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>Klingsporn, B</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>89</ns0:cell><ns0:cell>121</ns0:cell><ns0:cell>174</ns0:cell><ns0:cell>144</ns0:cell></ns0:row><ns0:row><ns0:cell>Ho, Ys</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>2096</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>613</ns0:cell><ns0:cell>575</ns0:cell></ns0:row><ns0:row><ns0:cell>Thijs, B</ns0:cell><ns0:cell>63</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>1710</ns0:cell></ns0:row><ns0:row><ns0:cell>Schubert,A</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>38</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>24</ns0:cell></ns0:row><ns0:row><ns0:cell>Debackere, K</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Schlemmer, B</ns0:cell><ns0:cell>670</ns0:cell><ns0:cell>832</ns0:cell><ns0:cell>382</ns0:cell><ns0:cell>962</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>808</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>37</ns0:cell></ns0:row><ns0:row><ns0:cell>Meyer, M</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Leydesdorff, L</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>44</ns0:cell></ns0:row><ns0:row><ns0:cell>Rafols,I</ns0:cell><ns0:cell>1058</ns0:cell><ns0:cell>387</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>239</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>49</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Editor,
PeerJ CS
21 Jan 2021
Thank you for your encouraging comments. We have revised the language of the manuscript.
The abstract and introduction have been restructured in line with the comments. The GitHub
readme has also been updated.
Please see point-wise response below.
Thanks,
Bilal, Rafi and Sabih
############################################################################
Thanks again for your revision. I think that the current form of the article is fine for being
accepted for publication in PeerJ Computer Science, pending a few additional suggestions that
can be addressed directly on the camera-ready version of the paper – and that I will personally
check before publishing the article.
1. In the new part of the introduction, you refer to a 'common researcher'. However, it is not
clear at all to what that 'common' actually refers to, in particular in comparison to what? What is
an 'uncommon researcher' then? I think it is important to clarify explicitly what kind of users you
are going to help with the implementation of the workflow you are proposing. Is your work of any
help to researchers (in what? Scientometrics?) with no expertise in programming? Or, does it
address issues that researchers with expertise in programming but no expertise in
Scientometrics may have in retrieving and analysing these data? Or, again, is that done for
helping data scientists? etc. Thus, you need to clarify in the introduction which specific users
(i.e. kinds of researchers) are you going to help with your computational workflow.
>> response: Our primary focus is to target bibliometricians; however, researchers
looking to inspect their own field may utilize the workflow as well. A basic understanding
of programming is required.
Having said that, this workflow may also be utilized by a front-end developer to create a
graphical tool. Therefore, we do not want to explicitly mention a single user base. We have
developed a prototype dashboard and are willing to enhance it in the future. A sample image
is shared below.
2. When you refer to open source software, such as NetworkX and SNAP, please mention it as
it is. In particular, 'open access' should not be used with software, 'open source' should.
>> response: Correction done.
3. The license specified in the release on GitHub
(https://github.com/bilal-dsu/Guru-workflow/tree/v1.0) is CC0, which does not apply to software
applications. Please, choose an appropriate license to use for releasing the software - e.g. see
the list at https://opensource.org/licenses.
>> response: Updated to MIT License.
4. In the GitHub readme, there should be explicitly stated how to call every single Python script
developed, since some of them actually take in input parameters, and they are not defined in
the text. Suppose to be one of your users. By reading the text it is clear what each script does,
but it is not clear how to run it properly.
>> response: A separate batch file is provided. Details have also been added to the readme.
5. You say that your focus is 'to have a systematic workflow to identify prominent authors
(gurus) using publicly available metadata for citations'. However, as far as I understood, the
point is slightly different. Indeed, the workflow you devised is for collecting data and calculating
metrics that then **can be used** to identify gurus, but the identification of gurus is not the focus
of the present work. Thus, please avoid stressing too much on this aspect. This is also reflected
by the fact you are presenting a case study to show one possible use of the workflow and not
full comparative research on gurus identification. Please, relax a bit your claim about what this
article is about.
>> response: Along with restructuring the abstract and introduction, we have also modified
the title to reflect this understanding.
6. In your answers to reviewers, you say that 'Research Question is merged with the
introduction section'. However, I cannot see any research question stated there. Actually, it
seems that you have totally removed it from the article. While this can be fine, after looking at
your answer I expected to find it in the introduction of the revision. Is the current form (i.e. no
research questions) correct, or did you miss to add it in the introduction?
>> response: Current form is correct.
7. In the conclusion you state that 'for case study, some manual work was also done to sort and
format the results, however, it can also be scripted in future as it does not hamper the workflow
and can be performed as a standalone'. I thought that the workflow was complete, and thus it
could retrieve all the data you need to perform the case study. But here you say that 'manual
work' (outside of the workflow, I presume) was needed to address the case study. I think you
should clarify this passage, otherwise, it seems that, in the end, the case study is not
reproducible despite the workflow you implemented.
>> response: The manual work was limited to entering the ranks in Table 2. The workflow
retrieves the complete data.
8. I would suggest revising the sentence in the abstract, i.e. 'Any study on co-authorship may
not need all the citation links, however, for a holistic view citation links may also be needed', in
a way which is more compliant with the reviewer's comment in the previous round of reviews. In
particular, it may be stated that while studies on co-authorship do not need, usually, information
about the citation network, having citation links would enable a more complete and holistic view
of the possible relations among authors.
>> response: We have placed the suggested sentence in the introduction and have also
made a relevant citation to it.
9. Please, check again the English of the whole article, since there are several typos, mistakes
and long and ambiguous sentences that should be rewritten to drastically improve the
readability of the text.
>> response: Text has been rewritten.
10. Typos:
- libraries of Python --> Python libraries
- line 91: the citation should be to (Heibi et al., 2019), not to (Peroni and Shotton, 2020)
- It seems that the DOI of the Guru script is not correct
- References: please, check the consistency of all the references, and add DOI URLs when
possible
>> response: Incorporated.
" | Here is a paper. Please give your review comments after reading it. |
3 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Sentiment analysis plays a key role in companies, especially stores, and increasing the accuracy of determining customers' opinions about products helps them maintain their competitive position. We intend to analyze users' opinions on the website of the largest online store in Iran, Digikala. However, the Persian language is unstructured, which makes the pre-processing stage very difficult, and this is the main problem of sentiment analysis in Persian. What exacerbates this problem is the lack of available libraries for Persian pre-processing, while most libraries focus on English. To tackle this, approximately 3 million reviews were gathered in Persian from the Digikala website using web-mining techniques, and the fastText method was used to create a word embedding. It was assumed that this would dramatically cut down the need for text pre-processing through the skip-gram method, which considers the position of the words in the sentence and the words' relations to each other. Another word embedding was created using TF-IDF in parallel with fastText to compare their performance.</ns0:p><ns0:p>In addition, the results of the CNN, BiLSTM, Logistic Regression, and Naïve Bayes models were compared. As a significant result, we obtained 0.996 AUC and 0.956 F-score using fastText and CNN. In this article, it is demonstrated not only to what extent it is possible to be independent of pre-processing but also that the accuracy obtained is better than in other research done in Persian. Avoiding complex text pre-processing is also important for other languages, since most text pre-processing algorithms have been developed for English and cannot be used for other languages. The created word embedding, due to its high accuracy and independence from pre-processing, has other applications in Persian besides sentiment analysis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>With the advancement of technology and the spread of the Internet, proper conditions have been provided for the online store's activities. Due to some advantages such as high variety, delivery speed, and time savings, the customers of this type of store are constantly increasing <ns0:ref type='bibr' target='#b23'>(Liang and Wang 2019)</ns0:ref>. When buying from online stores, due to the gap between the buyer and the product, there may be some problems such as poor quality of products, inadequate after-sales service, or inconsistency between product descriptions and performance <ns0:ref type='bibr' target='#b16'>(Ji, Zhang, and Wang 2019)</ns0:ref>. One of the viable solutions to overcome the problems is to use the opinion of users who have already purchased the product. In the past, if people needed to know the other's opinion, they would ask questions of family, friends, or relatives. Similarly, companies and stores used surveys to find out the opinions of people or customers. But today, if people require to buy or companies and stores need to know the opinions of customers to provide better services and products, they can easily refer to people's comments and discussions on online store websites or forums. Therefore, online reviews are important sources of information about the quality of goods that play a key role in customer awareness of products (X. <ns0:ref type='bibr' target='#b22'>Li, Wu, and Mai 2019)</ns0:ref>. Online reviews enable the customer to have a comprehensive view of the products and their alternatives before making a purchase, thus, it has a significant impact on the extent of product sales <ns0:ref type='bibr' target='#b14'>(Hu, Liu, and Zhang 2008)</ns0:ref>. As a matter of fact, the immediate response of stores to their customers' complaints is essential in maintaining their competitive position. But analyzing these reviews manually is quite time-consuming and costly. Also, automatic comment analysis has some obstacles, problems such as using sentences with incorrect grammar, using slang terms, and not following the correct punctuation are an integral part of making text analysis difficult <ns0:ref type='bibr' target='#b15'>(Irfan et al. 2015)</ns0:ref>. When it comes to resolving these problems, sentiment analysis techniques play an essential role. These techniques automatically estimate customer sentiment into positive, negative, and even neutral classes. Therefore, sentiment analysis for online stores is highly valued because it can extract users' sense of goods and help to make decisions to increase customer satisfaction and product sales. Sentiment analysis can be considered as a type of content analysis that specifically seeks to determine the emotional tone of the text <ns0:ref type='bibr' target='#b30'>(Oscar et al. 2017)</ns0:ref>. This is done based on the emotional evidence between words and phrases <ns0:ref type='bibr' target='#b66'>(Tausczik and Pennebaker 2010)</ns0:ref>. In this article, we are seeking to analyze the feelings of customer reviews on the website of the largest and well-known online store in Iran (Digikala). At first, lingual problems were taken into account as a significant challenge. There are several problems in Persian text pre-processing such as using slang, using letters of other languages especially Arabic, lack of a clear boundary between phrases. 
To tackle these problems, we employed fastText and skip-gram because we wanted to examine whether these methods are capable of reducing the need for data pre-processing and making language processing easier. In the following, we inspect this assumption and compare the obtained results with other algorithms and other reports. Another severe limitation was that deep learning models require an immense dataset, but most of the datasets available in Persian are so small that they cannot be employed in deep models. Thus, a rich and immense dataset had to be extracted from the Digikala website, which was done using web-mining methods. It should be noted that this article seeks to achieve the following goals:</ns0:p><ns0:p>Investigating the reduction of the need for text pre-processing by implementing methods such as fastText and skip-gram.</ns0:p><ns0:p>Gathering a comprehensive customer review dataset covering various types of digital goods to create a general word embedding for a wide range of tasks related to digital goods.</ns0:p><ns0:p>Sentiment analysis of the Digikala website's reviews with high accuracy, even compared to other studies.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Related Works</ns0:head><ns0:p>Sentiment analysis methods are divided into three general categories Lexicon based, traditional Machine Learning, and Deep Learning models <ns0:ref type='bibr' target='#b56'>(Yadav and Vishwakarma 2020)</ns0:ref>. The first category is the sentiment analysis using a sentiment lexicon and it is an unsupervised method. In this case, emotional similarities of words and phrases are used and its accuracy is highly dependent on prelearned weights <ns0:ref type='bibr' target='#b64'>(Taboada et al. 2011)</ns0:ref>. This method collects a set of pre-compiled sentiment words, terms, phrases, and idioms with a specific thematic category such as opinion finder lexicon <ns0:ref type='bibr' target='#b54'>(Wilson et al. 2005)</ns0:ref> and ontologies <ns0:ref type='bibr' target='#b43'>(Kontopoulos et al. 2013)</ns0:ref>. The second category is based on machine learning methods which are divided into supervised and unsupervised categories. The accuracy of these methods is strongly influenced by the extracted features from the text. Supervised techniques such as Naïve Bayes, SVM, Maximum Entropy, and Logistic Regression are the most common techniques in this field <ns0:ref type='bibr'>(Ye, Zhang, and Law 2009)</ns0:ref> <ns0:ref type='bibr' target='#b44'>(Montejo-Ráez et al. 2014</ns0:ref>). However, unsupervised methods are suitable for situations where labeling for the dataset is impossible <ns0:ref type='bibr' target='#b46'>(Paltoglou and Thelwall 2012)</ns0:ref>. Deep learning has grown and been used in many areas in the last decade, for example in the field of object recognition <ns0:ref type='bibr' target='#b11'>(Ghoreyshi, AkhavanPour, and Bossaghzadeh 2020)</ns0:ref> <ns0:ref type='bibr' target='#b0'>(Ali et al. 2020)</ns0:ref>, speech recognition <ns0:ref type='bibr' target='#b4'>(Deng, Hinton, and</ns0:ref><ns0:ref type='bibr'>Kingsbury 2013)(H. Li, Baucom, and</ns0:ref><ns0:ref type='bibr' target='#b20'>Georgiou 2020)</ns0:ref>, anomaly detection <ns0:ref type='bibr' target='#b72'>(Zhao et al. 2018)</ns0:ref>, feature extraction <ns0:ref type='bibr' target='#b24'>(Lin, Nie, and Ma 2017)</ns0:ref> <ns0:ref type='bibr' target='#b33'>(Rajaraman et al. 2018)</ns0:ref>, auto-encoding <ns0:ref type='bibr' target='#b32'>(Pu et al. 2016)</ns0:ref>. Also, in cases where deep learning along with machine learning has been used for text analysis and sentiment analysis, good results have been obtained <ns0:ref type='bibr' target='#b65'>(Tang, Qin, and Liu 2015)</ns0:ref> <ns0:ref type='bibr' target='#b62'>(Severyn and Moschitti 2015)</ns0:ref>. The main difference between sentiment analysis by deep learning and other methods is how to extract the features. To be specific, one of the advantages of deep learning models is that there is no need for user intervention in feature extraction, which of course requires a large amount of data to perform the feature extraction operation. Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) are the most common models of deep learning in sentiment analysis <ns0:ref type='bibr' target='#b71'>(Zhang, Wang, and Liu 2018)</ns0:ref>. The most basic and widely used CNN model for sentiment analysis at the sentence level is the one presented by Kim <ns0:ref type='bibr'>(Kim 2014)</ns0:ref>. 
Then, Zhang and Wallace proposed a special single-layer CNN architecture by examining improvements made by changing the model configuration <ns0:ref type='bibr' target='#b59'>(Zhang and Wallace 2015)</ns0:ref>. Many developments have been made to improve the performance of CNN-based sentiment analysis models. In this regard, an example of CNN combined with fuzzy logic called the Fuzzy Convolutional Neural Network (FCNN) <ns0:ref type='bibr' target='#b45'>(Nguyen, Kavuri, and Lee 2018</ns0:ref>) is noteworthy. The use of CNN in natural language processing is now a common topic and much research is being done in this area <ns0:ref type='bibr' target='#b53'>(Wehrmann et al. 2017)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Gan et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Arora and Kansal 2019)</ns0:ref>. Deep neural networks are difficult to train because they often suffer from the problem of vanishing gradients. LSTM architecture was introduced <ns0:ref type='bibr' target='#b41'>(Hochreiter and Schmidhuber 1997)</ns0:ref> to overcome this shortcoming to learn long-term dependencies. After the original work, the LSTM has experienced several improvements such as adding forget gate <ns0:ref type='bibr' target='#b38'>(Gers 1999)</ns0:ref>. A neural network architecture could not be so great adopted into practice without strong theoretical support, therefore, a widespread review considering the several LSTM variants and their performances relative to the so-called vanilla LSTM model was conducted by Greff et al. <ns0:ref type='bibr' target='#b40'>(Greff et al. 2017)</ns0:ref>. The vanilla LSTM model is interpreted as the primary LSTM block with the addition of the forget-gate and peephole connections. Also, to overcome some limitations in conventional RNN models, bidirectional RNN (BRNN) models were proposed. Using this model's structure, both future and past situations of sequential inputs in a time frame are evaluated without delay <ns0:ref type='bibr' target='#b51'>(Schuster and Paliwal 1997)</ns0:ref>. By combining the ideas of BRNN and LSTM it is possible to achieve Bidirectional LSTM (BiLSTM) which has better performance than LSTM in classification processes. With the development of LSTM in recent years, it has been used in projects such as Google Translate and Amazon Alexa <ns0:ref type='bibr' target='#b55'>(Wu et al. 2016</ns0:ref>) <ns0:ref type='bibr' target='#b52'>(Vogels 2016)</ns0:ref> in natural language processing.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Materials and methods</ns0:head><ns0:p>All of the steps, methods, code, and results presented below, along with part of the extracted dataset, are fully accessible in the repository (Yazdinejad and Shumaly 2020).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.'>Dataset</ns0:head><ns0:p>Having access to a large dataset with richness and content integrity is indispensable for training a deep model. Most datasets available for training deep models and for sentiment analysis are in English. To collect a rich dataset, web-mining methods were used and the reviews on the Digikala website, which are in Persian, were extracted. Reviews posted by buyers express their level of satisfaction with their purchase and the product features. After submitting their reviews, buyers could choose between the 'I suggest' and 'I do not suggest' options. These two options were extracted and used in the model as labels for the sentiment analysis problem. Our goal was to analyze the opinions of users of the Digikala website, so we extracted the data of the section related to digital goods using web-mining libraries such as Beautiful Soup <ns0:ref type='bibr' target='#b49'>(Richardson 2020)</ns0:ref>. Beautiful Soup is a Python package for parsing XML and HTML documents, and it is useful for web scraping <ns0:ref type='bibr' target='#b13'>(Hajba 2018)</ns0:ref>.</ns0:p></ns0:div>
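To make the collection step concrete, the following is a minimal, illustrative Beautiful Soup sketch (not taken from the paper's code); the URL argument and the CSS selectors are hypothetical placeholders that would have to be replaced after inspecting the actual page markup.

```python
# Hypothetical scraping sketch: the selector names are placeholders, not Digikala's real markup.
import requests
from bs4 import BeautifulSoup

def fetch_reviews(product_url):
    html = requests.get(product_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    reviews = []
    # 'div.review-item' and 'p.review-text' are assumed class names for illustration only
    for item in soup.select("div.review-item"):
        text_tag = item.select_one("p.review-text")
        if text_tag is not None:
            reviews.append(text_tag.get_text(strip=True))
    return reviews
```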
<ns0:div><ns0:head n='3.2.'>Pre-processing</ns0:head><ns0:p>One of the first steps in natural language processing problems has always been data pre-processing. At this stage, the texts need to be cleaned and prepared to begin the analysis. In Persian, this stage is even more difficult and important because the language has its own complexities. This field has attracted many researchers in the last decade; therefore, libraries and algorithms for pre-processing in Persian have been developed <ns0:ref type='bibr' target='#b27'>(Mohtaj et al. 2018)</ns0:ref> <ns0:ref type='bibr' target='#b29'>(Nourian 2013)</ns0:ref> which have become more complete and better over time. However, these algorithms cannot work as well as similar algorithms in English and need further development. We are seeking a way to achieve an accurate result while avoiding the complications of the data pre-processing steps in Persian. Regular expressions are used for data pre-processing in all of the following steps using the 're' library (Rachum 2020) in Python. Pre-developed libraries for the Persian language have not been used to perform the data pre-processing steps, and we assume that the use of fastText and skip-gram in creating the word embedding reduces the need for complex pre-processing.</ns0:p></ns0:div>
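As an illustration of the kind of regular-expression cleaning referred to above, a minimal sketch with the standard 're' module could look as follows; the exact patterns used in the study may differ.

```python
import re

# map Persian digits to English digits
PERSIAN_DIGITS = str.maketrans("۰۱۲۳۴۵۶۷۸۹", "0123456789")

def clean(text):
    text = re.sub(r"[\n\t\r]", " ", text)       # drop escape sequences
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = text.translate(PERSIAN_DIGITS)       # Persian digits -> English
    return re.sub(r"\s+", " ", text).strip()    # collapse extra whitespace
```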
<ns0:div><ns0:head>3.2.1.Normalization</ns0:head><ns0:p>In Persian, some letters are not unique and may have different alternatives to other languages such as Arabic. For example, the letter ‫'ﯼ'‬ is Persian, but the letter ‫'ﻱ'‬ is Arabic, and these two letters will likely be used interchangeably. This causes the created words to be considered as two different words. In this way, they may be considered separately in the calculations and a vector can be drawn for each with its characteristics. To solve this issue, it is necessary to use the standard form for all available texts.</ns0:p></ns0:div>
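A minimal sketch of this normalization step is shown below; only two letter pairs are mapped here for illustration, while a complete normalizer would cover all variant letters and diacritics.

```python
# Map Arabic letter variants to their Persian forms so identical words
# share a single representation; the mapping shown is deliberately partial.
ARABIC_TO_PERSIAN = {
    "\u064A": "\u06CC",  # Arabic Yeh -> Persian Yeh
    "\u0643": "\u06A9",  # Arabic Kaf -> Persian Keheh
}

def normalize(text):
    for arabic, persian in ARABIC_TO_PERSIAN.items():
        text = text.replace(arabic, persian)
    return text
```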
<ns0:div><ns0:head>3.2.2.Tokenization</ns0:head><ns0:p>Tokenization is a stage in which attempts are made to divide sentences into meaningful words and phrases that can be considered as a suitable input for the next steps. The main challenge of the Persian language at this stage is that sometimes there are no clear boundaries between phrases as a result of three different modes of spacing in Persian. In other words, phrases in Persian can be without space, with half-space, or with space, which is often mistakenly used instead of each other. For instance, the words ‫ﺍﻓﺰﺍﺭ'‬ ‫'ﻧﺮﻡ‬ and ‫,'ﻧﺮﻡﺍﻓﺰﺍﺭ'‬ which both mean software, are written with both space and half-space forms. If the wrong form is used, the phrase border will be mistakenly recognized as two separate words ‫'ﻧﺮﻡ'‬ and ‫.'ﺍﻓﺰﺍﺭ'‬ Vice versa, phrases that consist of several words can be considered as one word due to a mistake in using space. For example, the word ' ‫ﻣﺴﯿﺮ‬ ‫ﺍﺯ‬ ‫,'ﺩﯾﮕﺮ‬ which means 'from another path', maybe written as ‫'ﺍﺯﻣﺴﯿﺮﺩﯾﮕﺮ'‬ without any spaces. These kinds of mistakes blur the line between phrases and words and make it difficult to pre-process.</ns0:p></ns0:div>
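A simple sketch of the tokenization described here, assuming the half-space is represented by the zero-width non-joiner character (U+200C), is given below.

```python
import re

ZWNJ = "\u200C"  # zero-width non-joiner, used as the Persian half-space

def tokenize(text):
    # following the approach described above: turn half-spaces into spaces,
    # then split on whitespace to obtain word tokens
    text = text.replace(ZWNJ, " ")
    return [token for token in re.split(r"\s+", text) if token]
```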
<ns0:div><ns0:head>3.2.3.Stemming</ns0:head><ns0:p>The stemming process seeks to remove part of the word in such a way that the root of the word is determined <ns0:ref type='bibr' target='#b69'>(Willett 2006)</ns0:ref>. The root of the word does not necessarily mean the dictionary root of the word and is acceptable in cases where it can improve the performance of the model. For example, we can refer to the phrase ‫.'ﺭﻧﮓﻫﺎﯾﺸﺎﻥ'‬ In this phrase, ‫'ﺭﻧﮓ'‬ means color, and ‫'ﻫﺎ'‬ is used to represent plural and ‫'ﯾﺸﺎﻥ'‬ for determination of ownership. A significant number of stemming algorithms use the following rule <ns0:ref type='bibr' target='#b27'>(Mohtaj et al. 2018</ns0:ref>):</ns0:p><ns0:p>(possessive suffix)(plural suffix)(other suffixes)(stem)(prefixes)</ns0:p><ns0:p>Stemming is a rule-based process that is usually done by removing suffixes and prefixes. Consequently, it cannot be used in other languages and each language requires its algorithms. Stemming algorithms are being developed in Persian but due to the complexity of the Persian language, their performance needs to be improved.</ns0:p></ns0:div>
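The rule above can be illustrated with a toy suffix-stripping stemmer; the suffix list below is only an example (plural 'ها' and possessive 'یشان' from the example word) and is far from a complete Persian stemmer.

```python
# Illustrative rule-based stemmer: repeatedly strip known suffixes, longest first.
SUFFIXES = ["هایشان", "یشان", "های", "ها", "ترین", "تر"]

def stem(token):
    changed = True
    while changed:
        changed = False
        for suffix in SUFFIXES:
            if token.endswith(suffix) and len(token) > len(suffix) + 2:
                token = token[: -len(suffix)]
                changed = True
                break
    return token
```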
<ns0:div><ns0:head n='3.3.'>Pseudo labeling</ns0:head><ns0:p>In classification problems, it is a common problem that a large number of samples do not have labels and therefore cannot be used in model training. Techniques such as pseudo-labeling can be used to overcome this problem and determine the labels of some samples <ns0:ref type='bibr' target='#b19'>(Lee 2013)</ns0:ref>. The first step in pseudo-labeling is to develop a model based on labeled samples in the dataset that is in charge of determining the label of unlabeled samples. Only labels that have been estimated with high confidence are accepted. In the next step, another model is developed using training data along with the new labeled data which can be used to predict test data with higher accuracy. In this way, 104.8 thousand Negative Feedback reviews and 30.5 thousand Positive Feedback reviews were labeled and could be used in the dataset for subsequent analysis. As will be shown in the results section, this method had a significant impact on improving accuracy.</ns0:p></ns0:div>
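A minimal sketch of the pseudo-labeling idea is given below. The paper uses its fastText-plus-CNN model for this step and much stricter thresholds; the logistic-regression stand-in and the thresholds here are only illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_labeled, y_labeled, X_unlabeled, hi=0.95, lo=0.05):
    # 1) fit a model on the labeled portion of the data
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_labeled, y_labeled)
    # 2) predict class probabilities for the unlabeled portion
    proba = clf.predict_proba(X_unlabeled)[:, 1]
    # 3) keep only very confident predictions as new labels
    keep = (proba >= hi) | (proba <= lo)
    new_labels = (proba[keep] >= hi).astype(int)
    return keep, new_labels
```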
<ns0:div><ns0:head n='3.4.'>Data balancing</ns0:head><ns0:p>Unequal distribution of data across the classes of a classification problem leads to data imbalance. The class with the most data is called the majority class, and the class with the least data is called the minority class. In these cases, models tend to ignore the minority class and predict in favor of the majority class. Many machine learning models, such as Support Vector Machine, Naïve Bayes, Decision Tree, and Artificial Neural Network, cannot achieve good results in this situation (Díez-Pastor et al. 2015) <ns0:ref type='bibr' target='#b68'>(Vorraboot et al. 2015)</ns0:ref>. In general, data balancing solutions can be divided into two categories: over-sampling and under-sampling. The goal of both solutions is to approximate the number of data points distributed in the minority and majority classes. In over-sampling, this is done by increasing the amount of data in the minority class, and in under-sampling by reducing the amount of data in the majority class. In the present problem, we used the random over-sampling method to balance the dataset.</ns0:p></ns0:div>
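A short sketch of random over-sampling with the imbalanced-learn package (an assumption; the paper only states that random over-sampling was used) is shown below, applied to a toy training split.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler

# toy imbalanced training split: 90 positive vs. 10 negative samples
X_train = np.random.rand(100, 5)
y_train = np.array([1] * 90 + [0] * 10)

ros = RandomOverSampler(random_state=42)
X_bal, y_bal = ros.fit_resample(X_train, y_train)
print(np.bincount(y_bal))  # both classes now contain 90 samples
```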
<ns0:div><ns0:p>Neural network-based methods have become very popular in the field of natural language processing due to their accuracy. However, most of these methods are slow to analyze large datasets, and they need to receive a word embedding to analyze texts. For this reason, a method called fastText has been proposed <ns0:ref type='bibr' target='#b17'>(Joulin et al. 2016)</ns0:ref>. fastText is an efficient, fast, and open-source model that Facebook has recently released. In fastText, a set of tricks has been used to improve the processing speed and performance of the model, one of which is skip-gram. Data sparsity has always been one of the biggest problems in natural language analysis. In other words, the main problem of modern language processing is that language is a system of rare events, so varied and complex that we can never model all possibilities (Preethi Krishna and Sharada 2020). Therefore, skip-gram allows some words to be skipped and non-adjacent words to be examined together. Mikolov et al. <ns0:ref type='bibr' target='#b26'>(Mikolov et al. 2013)</ns0:ref> found the skip-gram model to be superior to the bag-of-words model in a semantic-syntactic analogy task. Skip-gram is popular, easy to implement, and proven and reliable (Gurunath Shivakumar and Georgiou 2019). Accordingly, in this article, word embeddings have been created using fastText and skip-gram to investigate the reduction of language processing dependence on data pre-processing.</ns0:p></ns0:div>
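Training such an embedding with the official fasttext package might look like the sketch below; the corpus file name and hyper-parameters are placeholders rather than the settings used in the study.

```python
import fasttext

# 'reviews.txt' is assumed to contain one cleaned, tokenized review per line
model = fasttext.train_unsupervised(
    "reviews.txt",
    model="skipgram",   # skip-gram objective, as discussed above
    dim=100,            # embedding dimension
    minn=2, maxn=5,     # character n-grams help with unseen word forms
)
vector = model.get_word_vector("کیفیت")  # embedding of the word 'quality'
```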
<ns0:div><ns0:head n='3.6.'>Sentiment analysis model 3.6.1.Convolution neural network</ns0:head><ns0:p>Using CNN has shown highly accurate results in studies on English texts <ns0:ref type='bibr' target='#b28'>(Nedjah, Santos, and de Macedo Mourelle 2019)</ns0:ref>. This model can receive and analyze word embeddings as input instead of images, which is also effective in this area <ns0:ref type='bibr'>(Kim 2014)</ns0:ref>. Each row of the input matrix represents a word. Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows the architecture of a CNN model used for the NLP classification problem <ns0:ref type='bibr' target='#b59'>(Zhang and Wallace 2015)</ns0:ref>. This figure shows how a CNN model treats a 6-word sentence. The matrix formed for this sentence is analyzed by 6 different convolution filters and converted into feature maps with dimensions of 1x4, 1x5, and 1x6. Finally, the pooling operation is performed on the maps and their outputs are merged to create a single vector that can be used as input for the SoftMax layer to determine the class. The CNN model used in this article is based on the mentioned model, and its architecture is shown in table 1.</ns0:p></ns0:div>
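A compact Keras sketch of such a multi-filter text CNN is shown below for orientation; the filter widths, numbers of filters, and other hyper-parameters are illustrative and do not reproduce the configuration of Table 1.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, EMB_DIM = 100, 100  # tokens per review and embedding size (placeholders)

inputs = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM))
branches = []
for width in (3, 4, 5):                       # parallel convolutions over word windows
    x = layers.Conv1D(64, width, activation="relu")(inputs)
    x = layers.GlobalMaxPooling1D()(x)        # max pooling over each feature map
    branches.append(x)
merged = layers.Dropout(0.5)(layers.Concatenate()(branches))
outputs = layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```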
<ns0:div><ns0:head>3.6.2.Bidirectional Long Short-Term Memory (BiLSTM)</ns0:head><ns0:p>Another deep model used to solve the problem is BiLSTM. The LSTM model can decide which information is useful and should be preserved and which information can be deleted based on the dataset it has trained with. The LSTM has been widely used in NLP such as long document categorization and sentiment analysis <ns0:ref type='bibr' target='#b48'>(Rao et al. 2018)</ns0:ref>. Figure <ns0:ref type='figure'>2</ns0:ref> is a demonstration of an LSTM cell used in this article, which has an input layer, an output layer and a forget layer <ns0:ref type='bibr' target='#b38'>(Gers 1999)</ns0:ref> </ns0:p></ns0:div>
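For comparison, a minimal Keras BiLSTM classifier with the same input convention could be sketched as follows; the number of units is illustrative and does not reproduce Table 2.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN, EMB_DIM = 100, 100  # placeholders

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, EMB_DIM)),
    layers.Bidirectional(layers.LSTM(64)),   # forward and backward passes over the sequence
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```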
<ns0:div><ns0:head n='3.7.'>Evaluation</ns0:head><ns0:p>Due to the imbalanced data, indicators such as accuracy are not appropriate for this study, because a model trained on this type of data tends to ignore the minority class and can still appear accurate. For this purpose, the AUC and F-score indexes are used, which are good choices for problems dealing with imbalanced data <ns0:ref type='bibr' target='#b63'>(Sokolova, Japkowicz, and Szpakowicz 2006)</ns0:ref>. AUC indicates the area under the ROC curve, and the ROC curve is a method for judging the performance of a two-class classifier <ns0:ref type='bibr' target='#b25'>(Luo et al. 2020</ns0:ref>). In the ROC curve, the vertical axis is the TPR (true positive rate) and the horizontal axis is the FPR (false positive rate). The F-score is the harmonic mean of precision and recall <ns0:ref type='bibr' target='#b67'>(Velupillai et al. 2009</ns0:ref>) and represents a weighted average of precision and recall <ns0:ref type='bibr' target='#b10'>(Gacesa, Barlow, and Long 2016)</ns0:ref>. This index has wide applications in natural language processing <ns0:ref type='bibr' target='#b7'>(Derczynski 2016)</ns0:ref>, and like the AUC, it can be used in problems involving imbalanced data. All the steps mentioned in the methodology section are summarized in figure <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
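The two indices can be computed with scikit-learn as in the small sketch below, where the true labels and predicted probabilities are toy values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 1])               # toy ground-truth labels
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.7])   # toy predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)

print("AUC:", roc_auc_score(y_true, y_prob))
print("F-score:", f1_score(y_true, y_pred))
```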
<ns0:div><ns0:head n='4.'>Results and discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1.'>Dataset</ns0:head><ns0:p>The digital goods' reviews of the Digikala website were extracted, which are a total of 2,932,747 reviews. Figure <ns0:ref type='figure'>5</ns0:ref> shows the frequency of comments per category. Examining the comments of different product categories can increase the comprehensiveness of the model. To be specific, the words, phrases, and sentences are different in reviews of the products in the different categories, and considering various types of product categories will improve the generalization of the model in different situations. Table <ns0:ref type='table'>3</ns0:ref> shows the general structure of the collected dataset. In this table, the 'Comment ID' column stores the unique values for each comment, the 'Original Comment' column is the original of the comments written in Persian, the 'Translated Comment' column is a translation of the 'Original Comment' column into English. The 'Translated Comment' column is used only to increase readability in the table and does not exist in the dataset. In the 'Negative Feedback' column, if the value is 1, means that the user is not advised to buy the product, and in the 'Positive Feedback' column, if the value is 1, it means the user is advised to buy the product, and the 'Cat. Name' column represents the product category for which the comment was written.</ns0:p><ns0:p>The positive point of this website is that the buyers after submitting their comments can choose an option that states whether they generally recommend the product to others or not. Therefore, a significant number of extracted reviews are labeled. In other words, 308,122 of the reviews in the dataset do not recommend purchased items to others and the 'Negative Feedback' column of these reviews in the dataset shows the number one. Likewise, 1,749,055 of the reviews in the dataset recommend the purchased items to others, and the 'Positive Feedback' column of these comments in the dataset shows the number one. A significant part of the reviews is without labels and the reviews with labels are also imbalanced and these problems must be addressed in some ways.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.'>Pre-processing</ns0:head><ns0:p>During the initial review of the comments, the first correction that had to be made was the removal of escape sequences. An escape sequence is a series of two or more characters starting with a backslash and when used in a string, are not displayed as they are written. In most reviews, there were some escape sequences such as '\n' and '\t' that needed to be removed. Also, sometimes users wrote some URLs to link to personal content that had to be removed. At this stage, all Persian numbers were converted to English, and letters that had different alternatives were standardized to normalize the text. Then all the phrases were tokenized by defining the word boundary and converting the half-space to space. In the stemming stage, prefixes and suffixes used were removed.</ns0:p><ns0:p>After the pre-processing steps, the number of words in the Positive Feedback class was 6.1 million and the number of words in the Negative Feedback class was 34.1 million. Using the word cloud diagram, the most repetitive words in each of the classes can be depicted. Figure <ns0:ref type='figure'>6</ns0:ref> and figure <ns0:ref type='figure'>7</ns0:ref> show the repetitive words in the Positive Feedback and Negative Feedback classes, respectively. Words like 'I gave back', 'bad' and 'I do not recommend' can be seen in the Negative Feedback figure, and words like 'I'm satisfied', 'Appropriate' and 'Speed' can be seen in the Positive Feedback figure.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.'>Sentiment analysis</ns0:head><ns0:p>Data balancing is a crucial step that can increase accuracy. The random over-sampling method was used to balance the data. In other words, some data with the label of 'Negative Feedback' were randomly selected and repeated. As a matter of fact, one of the common mistakes in this section is to apply the balancing method to the entire data which leads to errors in estimating the indicators. In these cases, the indicators are in a better position than the model capability and the results are reported incorrectly well. To avoid this, the balancing method was used only for the training data. After using the pseudo-labeling method, the number of positive feedbacks was about 1.8 million and the number of negative feedbacks was about 400 thousand. In this way, the negative feedbacks were repeated randomly about four times to balance the dataset.</ns0:p><ns0:p>The stratified K-fold cross-validation method is used to perform the evaluation. It is a method for model evaluation that determines how independent the results of statistical analysis on a dataset are from training data. In K-fold cross-validation, the dataset is subdivided into a K subset and each time one subset is used for validation and the other K-1 is used for training. This procedure is repeated K times and all data is used exactly once for validation. The average result of this K computing is selected as a final estimate. Stratified sampling is the process of dividing members of a dataset into similar subsets before sampling and this type of data sampling was selected due to imbalanced data. Using the stratified K-fold cross-validation method, we expect the values of the indicators to be more real. In all stages of measuring the accuracy of the model, K was considered equal to 5.</ns0:p><ns0:p>As stated in the methodology, TF-IDF and fastText methods were used to extract the features. The BiLSTM and CNN models used the fastText output, and the Naïve Bayes and Logistics Regression models used the TF-IDF output, and their accuracy was finally compared with each other in table 4. According to this table, the results of BiLSTM and CNN models are more accurate than others and CNN has given the best results.</ns0:p><ns0:p>As expected, due to the use of fastText and skip-gram methods, the need for data pre-processing has been reduced. In other words, stemming and normalization methods have not affected the final result. To examine this more closely, we chose the CNN model as the best model and we once performed the sentiment analysis process using the pre-processing steps and again without these steps. The AUC and F-score were 0.9943 and 0.9291 before pre-processing, and 0.9944 and 0.9288 after pre-processing. The results can be seen in table 5. In the table, the meaning of the 'before preprocessing' is just before the stemming and normalization steps. In other words, the methods used to create word embedding can depict the same words in the same range of spaces without the need to standardize letters and also without the need to identify the original root of words.</ns0:p><ns0:p>To implement pseudo-labeling, we developed a model that can estimate labels for unlabeled reviews using fastText and CNN models. After estimating all the labels, those with more than 90% probability for the Negative Feedback class and less than 1×10 -7 for the Positive Feedback class were selected. 
Therefore, 104.8 thousand Negative Feedback reviews and 30.5 thousand Positive Feedback reviews were labeled and could be used in the dataset for subsequent analysis. In using the pseudo-labeling technique, most of our focus was on Negative Feedback as the minority class, which also helps to balance the dataset as much as possible. In this way, a significant amount of unlabeled data that had been excluded from the sentiment analysis process was re-entered into the process and helped to increase the accuracy and generalizability of the model.</ns0:p><ns0:p>In contrast to pre-processing, the use of the pseudo-labeling method significantly improved the results. After using pseudo-labeling, the values of AUC and F-score improved to 0.996 and 0.956. The values for the three mentioned states can be seen per fold in table 5. Figure <ns0:ref type='figure'>8</ns0:ref> also shows the ROC curve for all three states.</ns0:p><ns0:p>The suggested model has produced better results than previous models which used pre-processing methods in Persian sentiment analysis. For instance, some researchers introduced pre-processing algorithms and succeeded in enhancing the results of machine learning algorithms <ns0:ref type='bibr' target='#b61'>(Saraee and Bagheri 2013)</ns0:ref>. In that research, the F-score of the proposed pre-processing algorithms employing Naïve Bayes as the classifier is 0.878. In another study, various alternatives for pre-processing and classifier algorithms were examined, and the best result was achieved with an SVM classifier with an F-score of 0.915 <ns0:ref type='bibr' target='#b36'>(Asgarian, Kahani, and Sharifi 2018)</ns0:ref>. Also, some studies attempted to utilize state-of-the-art deep models in such a way as to reduce the dependency on pre-processing and avoid complex steps <ns0:ref type='bibr' target='#b60'>(Roshanfekr, Khadivi, and Rahmati 2017)</ns0:ref>. The F-scores of the BiLSTM and CNN algorithms in that research are 0.532 and 0.534. All of the mentioned articles focus on two-class sentiment analysis of digital goods reviews in Persian, the same as this article. A comparison of the results in this paper with other studies and other common algorithms indicates that not only has the dependence on data pre-processing been eliminated, but the accuracy has also increased significantly.</ns0:p><ns0:p>The results reveal that it is quite possible to create models independent of the pre-processing process using fastText and skip-gram. Moreover, the BiLSTM and CNN methods can produce significant results. However, all of the mentioned methods need an immense dataset. To support this, it is noteworthy that the use of the pseudo-labeling method, because it increases the training data, has a great impact on the result.</ns0:p></ns0:div>
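The evaluation protocol described above (stratified 5-fold cross-validation with over-sampling applied only to the training folds) can be sketched as follows; the classifier and data are placeholders.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import RandomOverSampler

X = np.random.rand(500, 20)             # toy features
y = np.array([1] * 400 + [0] * 100)     # toy imbalanced labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
aucs = []
for train_idx, test_idx in skf.split(X, y):
    # balance only the training fold, never the evaluation fold
    X_tr, y_tr = RandomOverSampler(random_state=42).fit_resample(X[train_idx], y[train_idx])
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print("mean AUC:", np.mean(aucs))
```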
<ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>A dataset of approximately 3 million reviews was extracted from the digital goods section of the Digikala website, the largest online store in Iran. Basic pre-processing methods were used to modify the words and tokenize them. Due to the lack of labels for a large part of the dataset, the pseudo-labeling method was employed, which improved the results. Data balancing was also performed using random over-sampling. Persian data pre-processing was found to be difficult, so the fastText method was employed to reduce the need for data pre-processing and to develop the word embedding. The embeddings were used as the input to the BiLSTM and CNN models. Using the suggested model, not only are the obtained results very desirable and much more accurate in Persian compared to other reports, but there are also no complications related to data pre-processing. The effect of stemming and normalization on the output was evaluated and revealed that the proposed method is not dependent on data pre-processing. Eventually, besides the comparison of machine learning and deep learning methods in sentiment analysis, the TF-IDF and fastText methods were compared for creating the word embedding. The best result was associated with fastText and CNN. The main achievement of this model is the reduction of the need for data pre-processing. Data pre-processing in English is convenient and accurate due to the extensive text pre-processing libraries. However, in other languages, data pre-processing is very complicated because of the lack of proper libraries. With the suggested model, it was shown that this need is largely solvable (AUC = 0.996) and the pre-processing steps can be reduced to preliminary tokenization processes. Avoiding complex text pre-processing is also important for other languages, since most text pre-processing algorithms have been developed for English and cannot be used for other languages. Moreover, the created word embedding, due to its high accuracy, can be used in other text analysis problems, especially those related to online digital goods.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>TP: Positive samples are classified as positive - FN: Positive samples are classified as negative - TN: Negative samples are classified as negative - FP: Negative samples are classified as positive</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 A</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Based on the figure, the LSTM cell is mathematically expressed as follows: $f_t = \sigma(W_{fh} h_{t-1} + W_{fx} x_t + b_f)$; $i_t = \sigma(W_{ih} h_{t-1} + W_{ix} x_t + b_i)$; $\tilde{c}_t = \tanh(W_{ch} h_{t-1} + W_{cx} x_t + b_c)$; $c_t = f_t \cdot c_{t-1} + i_t \cdot \tilde{c}_t$; $o_t = \sigma(W_{oh} h_{t-1} + W_{ox} x_t + b_o)$; $h_t = o_t \cdot \tanh(c_t)$, where $x_t$ denotes the input; $h_{t-1}$ and $h_t$ denote the output of the last LSTM unit and the current output; $c_{t-1}$ and $c_t$ denote the memory from the last LSTM unit and the current cell state; $f_t$ denotes the forget-gate value; $W_i$, $W_c$, and $W_o$ are the weights; $b$ is the bias; and the operator '$\cdot$' denotes the pointwise multiplication of two vectors. In the LSTM, the input gate decides what new information can be stored in the cell state, and the output gate decides what information can be output based on the cell state. By combining the ideas of BRNN and LSTM it is possible to obtain the Bidirectional LSTM (BiLSTM), which has better performance than the LSTM in classification processes, especially in speech processing tasks <ns0:ref type='bibr' target='#b39'>(Graves and Schmidhuber 2005)</ns0:ref>. Therefore, this article uses the BiLSTM structure, and figure 3 shows a basic structure of the BiLSTM network <ns0:ref type='bibr' target='#b58'>(Yildirim 2018)</ns0:ref>. The architecture of the BiLSTM model used in this article is shown in table 2.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear Editors
We greatly appreciate the reviewers for their generous comments and help on this manuscript. We have
revised the manuscript to address their concerns and have answered all questions; as a result, the
manuscript has been greatly improved. In addition, all of the code, the dataset, and the word embedding
are available online, and we have included a link to the repository throughout the paper.
Thank you very much for your kind help.
Sincerely yours,
Yanhui Guo (Ph.D.)
Assistant Professor,
Department of Computer Science
University of Illinois Springfield, IL, USA
Email: yguo56@uis.edu
Phone: +1-217-206-8170
Reviewer 1
1. The main problem here: the contribution of your work is not clear. You have solved the sentiment
analysis problem with only one classifier (CNN) offered by Kim (not testing any other architectures or
hyper-parameter values). The classifier was applied on the fastText embeddings (classic neural
vectorization technique, that recently is more and more replaced with more sophisticated transformer
methods as, e.g., BERT).
➢ Thanks for your comments. To increase readability, we have explained the contributions in more
detail in the manuscript; they are summarized as follows:
o
Investigating the reduction of the need for text pre-processing by implementing methods
such as fastText and skip-gram.
o
Gathering a comprehensive customer review dataset based on various types of digital
goods to create a general word embedding.
o
Sentiment analysis of Digikala website’s reviews with high accuracy even compared to
other researches.
2. You emphasize the importance of pre-processing (Section 2.2), but provide no details on how preprocessing was performed in their experiments. Please, write what normalization, tokenization, stemming
tools have you used or implemented yourself. Please, provide details on how accurate they are and how
much impact the sentiment analysis task, etc.
➢ We appreciate this comment. In the methodology, examples of pre-processed words are given, and it is
also mentioned that these changes were made using the “re” library and regular expressions. All of the
pre-processing code is attached in the Supplemental files and on GitHub
(https://github.com/mosiomohsen/persian-sentiment-analysis-using-fastText-wordembedding-and-pseudo-labeling).
Also, we deliberately avoided complex pre-processing and sought to obtain an accurate result without it,
in order to show that the steps taken can reduce the dependency on pre-processing.
3. The authorship of Figure 1 is not yours, please, put the reference.
➢ The reference was placed in the text. Thanks for this suggestion.
4. Figure 2 is not very informative: please, provide the exact details on each block.
➢
Thanks a lot for this suggestion. Figure 2 was expanded, and the models used were carefully
described along with the outputs of each step.
5. How categories in Figure 3 make the impact on your sentiment analysis task? Please, provide the
statistics about the used sentiment dataset. How many instances were in the positive and negative classes
after balancing? And if you remove some instances from the original dataset does Figure 3 still make
sense?
➢ Thanks for this question. Figure 3 expresses the range of datasets in the field of digital goods.
The presence of different goods increases the generalization of the model and makes the created
word embedding more practical. The number of positive and negative labels was mentioned
before and after data balancing.
6. Table 1, Figure 4, Figure 5 are not clear. You either must provide translations or remove table and
figures from your paper completely.
➢ Thanks a lot for this suggestion. In the revised manuscript, the translation was added for the table
and figures.
7. The obtained very high f-score values (over 0.9) seem suspicious. Maybe your positive/negative splits
had too little diversity within each class? It even makes the impression that training/evaluation was
performed on the imbalanced dataset. Would you get similar results if you would randomly select other
instances from the major class?
➢ The training was performed on balanced data, and the model was tested using a cross-validation
procedure. This ensured that the training and evaluation data were selected correctly and
efficiently from the dataset.
8. Without having the exact details about the used dataset, I cannot interpret the results and must accept
your conclusions with caution.
➢ Thanks for this suggestion. The statistical characteristics of the dataset, including the total number
of samples, the number in each class, and the characteristics of the columns, are described in the manuscript.
We also put the repository address in the manuscript and the dataset is accessible with all its
columns. Also, the sample dataset with 10,000 samples is attached in the Supplemental files, and
on GitHub (https://github.com/mosiomohsen/persian-sentiment-analysis-using-fastText-wordembedding-and-pseudo-labeling/blob/main/sample_dataset_10000.rar?raw=true).
Reviewer 2
1. ‘Related work’ is missing. Split the ‘Introduction’ into introduction and ‘Related Work’ for
completeness.
➢ Thanks a lot for this great suggestion. We followed it and split the Introduction section into two
sections: Introduction and Related Work.
2. Line 94. Authors assume that the use of fastText would generate better results. In research, such
assumptions are not appropriate. So either use multiple algorithms for this task to evaluate their efficacy
or support your assumption with the findings from other research works.
➢ We are thankful for this suggestion. In the revised manuscript, we used TF-IDF to support this
assumption and its results were reported and discussed.
3. The fastText model uses the skip-gram model, which works in a similar fashion to CBOW, so why do the authors
prefer fastText over CBOW, TF/IDF, GloVe, etc.? I strongly recommend the authors use CBOW,
TF/IDF and GloVe and compare their performance with fastText.
➢
We used the TF-IDF method and compared the results. GloVe does not support Persian, so
we did not compare it.
4. The contributions of the study are not highlighted properly. At the end of Introduction, describe the
contribuions in a bulleted form for readability.
➢ Thanks for your comments. To increase readability, we have explained the contributions in more
detail in the manuscript; they are summarized as follows:
o
Investigating the reduction of the need for text pre-processing by implementing methods
such as fastText and skip-gram.
o
Gathering a comprehensive customer review dataset based on various types of digital
goods to create a general word embedding.
o
Sentiment analysis of Digikala website’s reviews with high accuracy even compared to
other researches.
5. The choice of using CNN is not justified. LSTM has proven to show better performance for machine
learning tasks.
➢ We are thankful for this suggestion. We compared the CNN model results with the BiLSTM model
results. The results are added below:
State        | BiLSTM AUC | BiLSTM F-score | CNN AUC    | CNN F-score
Fold 1       | 0.9934     | 0.9224         | 0.9945     | 0.9293
Fold 2       | 0.9937     | 0.9244         | 0.9945     | 0.9251
Fold 3       | 0.993      | 0.9238         | 0.9943     | 0.9306
Fold 4       | 0.993      | 0.9216         | 0.9946     | 0.9299
Fold 5       | 0.9934     | 0.9232         | 0.9945     | 0.93
Mean         | 0.9933     | 0.9230         | 0.9944     | 0.9289
Error (SEM)  | 13.4×10^-5 | 49.6×10^-5     | 4.9×10^-5  | 99.2×10^-5
6. The performance of the proposed approach is not evaluated. The best way to do that is to compare the
performance of the proposed approach with SVM, logistic regression, GBM, and at least two state-of-the-art approaches.
➢ Thanks a lot for this suggestion. Logistic regression and Naïve Bayes methods were used and the
results were compared. The results are added below:
State        | Naïve Bayes AUC | Naïve Bayes F-score | Logistic Regression AUC | Logistic Regression F-score
Fold 1       | 0.9877          | 0.8856              | 0.9888                  | 0.8894
Fold 2       | 0.9881          | 0.8856              | 0.9891                  | 0.8901
Fold 3       | 0.9878          | 0.886               | 0.9888                  | 0.8898
Fold 4       | 0.988           | 0.8863              | 0.989                   | 0.8895
Fold 5       | 0.9881          | 0.8863              | 0.9881                  | 0.8863
Mean         | 0.9879          | 0.8859              | 0.9887                  | 0.8890
Error (SEM)  | 8.12×10^-5      | 15.7×10^-5          | 17.5×10^-5              | 69.1×10^-5
7. The problem of two class classification is simple. I suspect the performance of the proposed approach
will degrade if more classes are added. So I ask the authors to perform classification on
‘positive’,’neutral’, and ‘negative’ grounds.
➢ Thanks a lot for this comment. The accuracy of the model in dealing with a two-class problem
increases in comparison with a three-class problem, but the subject of this research is a two-class
problem. The goal is to determine whether the user suggests the product to others or not.
Therefore, it is not necessary to turn this problem into a three-class problem.
8. English grammar and typos.
a. Always use sentence case for writing. Many sentences in the manuscript start with lowercase letters.
b. When using numerical values in the text, use them properly, e.g., Section 3.3 ‘’10-7’ to ’10-7’.
c. Extensive English proofreading and correction by a native speaker is required.
➢ Thanks a lot for the help. We double-checked the manuscript carefully, improving the English
language of the text. It was also revised with licensed Grammarly software.
" | Here is a paper. Please give your review comments after reading it. |
4 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Sentiment analysis plays a key role in companies, especially stores, and increasing the accuracy of determining customers' opinions about products helps them maintain their competitive position. We intend to analyze users' opinions on the website of the largest online store in Iran, Digikala. However, the Persian language is unstructured, which makes the pre-processing stage very difficult, and this is the main problem of sentiment analysis in Persian. What exacerbates this problem is the lack of available libraries for Persian pre-processing, while most libraries focus on English. To tackle this, approximately 3 million reviews were gathered in Persian from the Digikala website using web-mining techniques, and the fastText method was used to create a word embedding. It was assumed that this would dramatically cut down the need for text pre-processing through the skip-gram method, which considers the position of the words in the sentence and the words' relations to each other. Another word embedding was created using TF-IDF in parallel with fastText to compare their performance.</ns0:p><ns0:p>In addition, the results of the CNN, BiLSTM, Logistic Regression, and Naïve Bayes models were compared. As a significant result, we obtained 0.996 AUC and 0.956 F-score using fastText and CNN. In this article, it is demonstrated not only to what extent it is possible to be independent of pre-processing but also that the accuracy obtained is better than in other research done in Persian. Avoiding complex text pre-processing is also important for other languages, since most text pre-processing algorithms have been developed for English and cannot be used for other languages. The created word embedding, due to its high accuracy and independence from pre-processing, has other applications in Persian besides sentiment analysis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>With the advancement of technology and the spread of the Internet, proper conditions have been provided for the online store's activities. Due to some advantages such as high variety, delivery speed, and time savings, the customers of this type of store are constantly increasing <ns0:ref type='bibr' target='#b22'>(Liang and Wang 2019)</ns0:ref>. When buying from online stores, due to the gap between the buyer and the product, there may be some problems such as poor quality of products, inadequate after-sales service, or inconsistency between product descriptions and performance <ns0:ref type='bibr' target='#b15'>(Ji, Zhang, and Wang 2019)</ns0:ref>. One of the viable solutions to overcome the problems is to use the opinion of users who have already purchased the product. In the past, if people needed to know the other's opinion, they would ask questions of family, friends, or relatives. Similarly, companies and stores used surveys to find out the opinions of people or customers. But today, if people require to buy or companies and stores need to know the opinions of customers to provide better services and products, they can easily refer to people's comments and discussions on online store websites or forums. Therefore, online reviews are important sources of information about the quality of goods that play a key role in customer awareness of products (X. <ns0:ref type='bibr' target='#b21'>Li, Wu, and Mai 2019)</ns0:ref>. Online reviews enable the customer to have a comprehensive view of the products and their alternatives before making a purchase, thus, it has a significant impact on the extent of product sales <ns0:ref type='bibr' target='#b13'>(Hu, Liu, and Zhang 2008)</ns0:ref>. As a matter of fact, the immediate response of stores to their customers' complaints is essential in maintaining their competitive position. But analyzing these reviews manually is quite time-consuming and costly. Also, automatic comment analysis has some obstacles, problems such as using sentences with incorrect grammar, using slang terms, and not following the correct punctuation are an integral part of making text analysis difficult <ns0:ref type='bibr' target='#b14'>(Irfan et al. 2015)</ns0:ref>. When it comes to resolving these problems, sentiment analysis techniques play an essential role. These techniques automatically estimate customer sentiment into positive, negative, and even neutral classes. Therefore, sentiment analysis for online stores is highly valued because it can extract users' sense of goods and help to make decisions to increase customer satisfaction and product sales. Sentiment analysis can be considered as a type of content analysis that specifically seeks to determine the emotional tone of the text <ns0:ref type='bibr' target='#b29'>(Oscar et al. 2017)</ns0:ref>. This is done based on the emotional evidence between words and phrases <ns0:ref type='bibr' target='#b67'>(Tausczik and Pennebaker 2010)</ns0:ref>. In this article, we are seeking to analyze the feelings of customer reviews on the website of the largest and well-known online store in Iran (Digikala). At first, lingual problems were taken into account as a significant challenge. There are several problems in Persian text pre-processing such as using slang, using letters of other languages especially Arabic, lack of a clear boundary between phrases. 
To tackle these problems, we employed fastText and skip-gram because we wanted to examine whether the use of these methods is capable of reducing the need for data pre-processing and making language processing easier. In the following, we will inspect this assumption and compare the obtained results with other algorithms and other reports. Another severe limitation was that deep learning models require an immense dataset, but most of the available datasets in Persian are so small that they cannot be employed in deep models. Thus, a rich and immense dataset had to be extracted from the Digikala website, which was done using web-mining methods. It should be noted that this article seeks to achieve the following goals:</ns0:p><ns0:p>- Investigating the reduction of the need for text pre-processing by implementing methods such as fastText and skip-gram, whether in Persian language processing or in other languages.</ns0:p><ns0:p>- Gathering a comprehensive customer review dataset covering various types of digital goods to create a general word embedding for a wide range of tasks related to digital goods.</ns0:p><ns0:p>- Sentiment analysis of the Digikala website's reviews with high accuracy, even compared to other research.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Related Works</ns0:head><ns0:p>Sentiment analysis methods are divided into three general categories Lexicon based, traditional Machine Learning, and Deep Learning models <ns0:ref type='bibr' target='#b56'>(Yadav and Vishwakarma 2020)</ns0:ref>. The first category is the sentiment analysis using a sentiment lexicon and it is an unsupervised method. In this case, emotional similarities of words and phrases are used and its accuracy is highly dependent on prelearned weights <ns0:ref type='bibr' target='#b65'>(Taboada et al. 2011)</ns0:ref>. This method collects a set of pre-compiled sentiment words, terms, phrases, and idioms with a specific thematic category such as opinion finder lexicon <ns0:ref type='bibr' target='#b54'>(Wilson et al. 2005)</ns0:ref> and ontologies <ns0:ref type='bibr' target='#b43'>(Kontopoulos et al. 2013)</ns0:ref>. The second category is based on machine learning methods which are divided into supervised and unsupervised categories. The accuracy of these methods is strongly influenced by the extracted features from the text. Supervised techniques such as Naïve Bayes, SVM, Maximum Entropy, and Logistic Regression are the most common techniques in this field <ns0:ref type='bibr'>(Ye, Zhang, and Law 2009)</ns0:ref> <ns0:ref type='bibr' target='#b44'>(Montejo-Ráez et al. 2014</ns0:ref>). However, unsupervised methods are suitable for situations where labeling for the dataset is impossible <ns0:ref type='bibr' target='#b46'>(Paltoglou and Thelwall 2012)</ns0:ref>. Deep learning has grown and been used in many areas in the last decade, for example in the field of object recognition <ns0:ref type='bibr' target='#b10'>(Ghoreyshi, AkhavanPour, and Bossaghzadeh 2020)</ns0:ref> <ns0:ref type='bibr' target='#b0'>(Ali et al. 2020)</ns0:ref>, speech recognition <ns0:ref type='bibr' target='#b5'>(Deng, Hinton, and</ns0:ref><ns0:ref type='bibr'>Kingsbury 2013)(H. Li, Baucom, and</ns0:ref><ns0:ref type='bibr' target='#b20'>Georgiou 2020)</ns0:ref>, anomaly detection <ns0:ref type='bibr' target='#b73'>(Zhao et al. 2018)</ns0:ref>, feature extraction <ns0:ref type='bibr' target='#b23'>(Lin, Nie, and Ma 2017)</ns0:ref> <ns0:ref type='bibr' target='#b34'>(Rajaraman et al. 2018)</ns0:ref>, auto-encoding <ns0:ref type='bibr' target='#b31'>(Pu et al. 2016)</ns0:ref>. Also, in cases where deep learning along with machine learning has been used for text analysis and sentiment analysis, good results have been obtained <ns0:ref type='bibr' target='#b66'>(Tang, Qin, and Liu 2015)</ns0:ref> <ns0:ref type='bibr' target='#b63'>(Severyn and Moschitti 2015)</ns0:ref>. The main difference between sentiment analysis by deep learning and other methods is how to extract the features. To be specific, one of the advantages of deep learning models is that there is no need for user intervention in feature extraction, which of course requires a large amount of data to perform the feature extraction operation. Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) are the most common models of deep learning in sentiment analysis <ns0:ref type='bibr' target='#b72'>(Zhang, Wang, and Liu 2018)</ns0:ref>. The most basic and widely used CNN model for sentiment analysis at the sentence level is the one presented by Kim <ns0:ref type='bibr'>(Kim 2014)</ns0:ref>. 
Then, Zhang and Wallace proposed a special single-layer CNN architecture by examining improvements made by changing the model configuration <ns0:ref type='bibr' target='#b59'>(Zhang and Wallace 2015)</ns0:ref>. Many developments have been made to improve the performance of CNN-based sentiment analysis models. In this regard, an example of CNN combined with fuzzy logic called the Fuzzy Convolutional Neural Network (FCNN) <ns0:ref type='bibr' target='#b45'>(Nguyen, Kavuri, and Lee 2018</ns0:ref>) is noteworthy. The use of CNN in natural language processing is now a common topic and much research is being done in this area <ns0:ref type='bibr' target='#b53'>(Wehrmann et al. 2017)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Gan et al. 2020)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Arora and Kansal 2019)</ns0:ref>. Deep neural networks are difficult to train because they often suffer from the problem of vanishing gradients. LSTM architecture was introduced <ns0:ref type='bibr' target='#b41'>(Hochreiter and Schmidhuber 1997)</ns0:ref> to overcome this shortcoming to learn long-term dependencies. After the original work, the LSTM has experienced several improvements such as adding forget gate <ns0:ref type='bibr' target='#b38'>(Gers 1999)</ns0:ref>. A neural network architecture could not be so great adopted into practice without strong theoretical support, therefore, a widespread review considering the several LSTM variants and their performances relative to the so-called vanilla LSTM model was conducted by Greff et al. <ns0:ref type='bibr' target='#b40'>(Greff et al. 2017)</ns0:ref>. The vanilla LSTM model is interpreted as the primary LSTM block with the addition of the forget-gate and peephole connections. Also, to overcome some limitations in conventional RNN models, bidirectional RNN (BRNN) models were proposed. Using this model's structure, both future and past situations of sequential inputs in a time frame are evaluated without delay <ns0:ref type='bibr' target='#b51'>(Schuster and Paliwal 1997)</ns0:ref>. By combining the ideas of BRNN and LSTM it is possible to achieve Bidirectional LSTM (BiLSTM) which has better performance than LSTM in classification processes. With the development of LSTM in recent years, it has been used in projects such as Google Translate and Amazon Alexa <ns0:ref type='bibr' target='#b55'>(Wu et al. 2016</ns0:ref>) <ns0:ref type='bibr' target='#b52'>(Vogels 2016)</ns0:ref> in natural language processing.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Materials and methods</ns0:head><ns0:p>All the steps taken, methods, code, and results presented below, along with part of the extracted dataset, are fully accessible in the repository (Yazdinejad and Shumaly 2020).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.'>Dataset</ns0:head><ns0:p>Having access to a large dataset with richness and content integrity is indispensable to train a deep model. Most available datasets to train a deep model and sentiment analysis are in English. To collect a rich dataset, web-mining methods were used and the reviews on the Digikala website were extracted which were in Persian. Posted reviews by buyers express their level of satisfaction with their purchase and product features. After submitting their reviews, buyers could choose between the 'I suggest' and 'I do not suggest' options. These two options were extracted and used in the model as labels for the problem of sentiment analysis. Our goal was to analyze the opinions of users of the Digikala website, so we extracted the data of the section related to digital goods using web-mining libraries such as the Beautiful Soup <ns0:ref type='bibr' target='#b50'>(Richardson 2020)</ns0:ref>. Beautiful Soup is a Python package to parse XML and HTML documents and it is useful for web scraping <ns0:ref type='bibr' target='#b12'>(Hajba 2018)</ns0:ref>.</ns0:p></ns0:div>
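To make the collection step concrete, the following is a minimal, hypothetical sketch of how review text could be scraped with requests and Beautiful Soup. The URL pattern and the CSS class name are placeholders rather than Digikala's real markup, and pagination, rate limiting, and extraction of the 'I suggest' / 'I do not suggest' labels are omitted.

```python
# Minimal web-mining sketch with requests + Beautiful Soup.
# The URL and the "review-text" selector below are hypothetical placeholders;
# the real Digikala pages use different markup and require pagination handling.
import requests
from bs4 import BeautifulSoup

def fetch_reviews(product_url):
    html = requests.get(product_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    reviews = []
    # Assumed structure: each review body sits in a <p class="review-text"> element.
    for node in soup.find_all("p", class_="review-text"):
        reviews.append(node.get_text(strip=True))
    return reviews

if __name__ == "__main__":
    print(fetch_reviews("https://www.digikala.com/product/example"))
```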
<ns0:div><ns0:head n='3.2.'>Pre-processing</ns0:head><ns0:p>One of the first steps in natural language processing problems has always been data pre-processing. At this stage, the texts need to be cleaned and prepared to begin the analysis. In Persian, this stage is even more difficult and important because it has its complexities. This field has attracted many researchers in the last decade, therefore, libraries and algorithms in the field of pre-processing in Persian have been developed <ns0:ref type='bibr' target='#b26'>(Mohtaj et al. 2018)</ns0:ref> <ns0:ref type='bibr' target='#b28'>(Nourian 2013)</ns0:ref> which have become more complete and better over time. However, these algorithms cannot work as well as similar algorithms in English and need further development. We are seeking a way to achieve an accurate result by avoiding the complications of data pre-processing steps in Persian. Regular expressions are used for data pre-processing in all of the following steps using the 're' library (Rachum 2020) in Python. Pre-developed libraries for the Persian language have not been used to perform data pre-processing steps and we assume that the use of fastText and skip-gram in creating word embedding reduces the need for complex pre-processing.</ns0:p></ns0:div>
<ns0:div><ns0:head>3.2.1.Normalization</ns0:head><ns0:p>In Persian, some letters are not unique and may have different alternatives to other languages such as Arabic. For example, the letter ‫'ﯼ'‬ is Persian, but the letter ‫'ﻱ'‬ is Arabic, and these two letters will likely be used interchangeably. This causes the created words to be considered as two different words. In this way, they may be considered separately in the calculations and a vector can be drawn for each with its characteristics. To solve this issue, it is necessary to use the standard form for all available texts.</ns0:p></ns0:div>
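A minimal normalization sketch is shown below: it maps two common Arabic letter variants (Yeh and Kaf) to their Persian forms and converts Persian digits to ASCII using str.translate. The mapping is illustrative; a full normalizer covers many more characters.

```python
# Minimal normalization sketch: unify Arabic letter variants with their Persian
# counterparts and convert Persian digits to ASCII digits.
PERSIAN_DIGITS = "۰۱۲۳۴۵۶۷۸۹"
TRANSLATION = str.maketrans(
    {"ي": "ی",   # Arabic Yeh  -> Persian Yeh
     "ك": "ک",   # Arabic Kaf  -> Persian Keheh
     **{p: str(i) for i, p in enumerate(PERSIAN_DIGITS)}}
)

def normalize(text: str) -> str:
    return text.translate(TRANSLATION)
```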
<ns0:div><ns0:head>3.2.2.Tokenization</ns0:head><ns0:p>Tokenization is a stage in which attempts are made to divide sentences into meaningful words and phrases that can be considered as a suitable input for the next steps. The main challenge of the Persian language at this stage is that sometimes there are no clear boundaries between phrases as a result of three different modes of spacing in Persian. In other words, phrases in Persian can be without space, with half-space, or with space, which is often mistakenly used instead of each other. For instance, the words ‫ﺍﻓﺰﺍﺭ'‬ ‫'ﻧﺮﻡ‬ and ‫,'ﻧﺮﻡﺍﻓﺰﺍﺭ'‬ which both mean software, are written with both space and half-space forms. If the wrong form is used, the phrase border will be mistakenly recognized as two separate words ‫'ﻧﺮﻡ'‬ and ‫.'ﺍﻓﺰﺍﺭ'‬ Vice versa, phrases that consist of several words can be considered as one word due to a mistake in using space. For example, the word ' ‫ﻣﺴﯿﺮ‬ ‫ﺍﺯ‬ ‫,'ﺩﯾﮕﺮ‬ which means 'from another path', maybe written as ‫'ﺍﺯﻣﺴﯿﺮﺩﯾﮕﺮ'‬ without any spaces. These kinds of mistakes blur the line between phrases and words and make it difficult to pre-process.</ns0:p></ns0:div>
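As an illustration of the boundary problem, the sketch below handles the half-space (the zero-width non-joiner, U+200C) explicitly before splitting on whitespace. Whether the half-space is converted to a regular space or kept is a design choice that decides if compounds stay as one token.

```python
import re

ZWNJ = "\u200c"  # Persian half-space (zero-width non-joiner)

def tokenize(text: str, keep_compounds: bool = True) -> list:
    # Keeping ZWNJ leaves compounds such as "نرم‌افزار" (software) as one token;
    # replacing it with a space splits them into two tokens.
    if not keep_compounds:
        text = text.replace(ZWNJ, " ")
    return re.findall(r"[^\s]+", text)
```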
<ns0:div><ns0:head>3.2.3.Stemming</ns0:head><ns0:p>The stemming process seeks to remove part of the word in such a way that the root of the word is determined <ns0:ref type='bibr' target='#b70'>(Willett 2006)</ns0:ref>. The root of the word does not necessarily mean the dictionary root of the word and is acceptable in cases where it can improve the performance of the model. For example, we can refer to the phrase ‫.'ﺭﻧﮓﻫﺎﯾﺸﺎﻥ'‬ In this phrase, ‫'ﺭﻧﮓ'‬ means color, and ‫'ﻫﺎ'‬ is used to represent plural and ‫'ﯾﺸﺎﻥ'‬ for determination of ownership. A significant number of stemming algorithms use the following rule <ns0:ref type='bibr' target='#b26'>(Mohtaj et al. 2018</ns0:ref>):</ns0:p><ns0:p>(possessive suffix)(plural suffix)(other suffixes)(stem)(prefixes)</ns0:p><ns0:p>Stemming is a rule-based process that is usually done by removing suffixes and prefixes. Consequently, it cannot be used in other languages and each language requires its algorithms. Stemming algorithms are being developed in Persian but due to the complexity of the Persian language, their performance needs to be improved.</ns0:p></ns0:div>
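A toy suffix-stripping sketch of the rule above is given here; it only handles a couple of suffixes (the plural 'ها' and the possessives 'یشان' / 'شان') and is far from a complete Persian stemmer.

```python
# Toy rule-based stemmer: strip a possessive suffix, then a plural suffix.
# Real stemmers need many more affixes, exception lists, and half-space handling.
POSSESSIVE_SUFFIXES = ["یشان", "شان"]
PLURAL_SUFFIXES = ["ها"]

def stem(word: str) -> str:
    for group in (POSSESSIVE_SUFFIXES, PLURAL_SUFFIXES):
        for suffix in group:
            if word.endswith(suffix) and len(word) > len(suffix) + 1:
                word = word[: -len(suffix)]
                break
    return word

# stem("رنگهایشان") is intended to reduce toward "رنگ" ("color").
```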
<ns0:div><ns0:head n='3.3.'>Pseudo labeling</ns0:head><ns0:p>In classification problems, it is a common problem that a large number of samples do not have labels and therefore cannot be used in model training. Techniques such as pseudo-labeling can be used to overcome this problem and determine the labels of some samples <ns0:ref type='bibr' target='#b19'>(Lee 2013)</ns0:ref>. The first step in pseudo-labeling is to develop a model based on labeled samples in the dataset that is in charge of determining the label of unlabeled samples. Only labels that have been estimated with high confidence are accepted. In the next step, another model is developed using training data along with the new labeled data which can be used to predict test data with higher accuracy. In this way, 104.8 thousand Negative Feedback reviews and 30.5 thousand Positive Feedback reviews were labeled and could be used in the dataset for subsequent analysis. As will be shown in the results section, this method had a significant impact on improving accuracy.</ns0:p></ns0:div>
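The pseudo-labeling step can be sketched as follows, assuming the labeled and unlabeled reviews are already vectorized; the base classifier and the confidence thresholds below are placeholders rather than the exact configuration used here.

```python
# Sketch of pseudo-labeling: fit a model on the labeled reviews, predict
# probabilities for the unlabeled ones, and keep only very confident
# predictions as additional training samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_lab, y_lab, X_unlab, low=0.01, high=0.99):
    X_lab, y_lab, X_unlab = np.asarray(X_lab), np.asarray(y_lab), np.asarray(X_unlab)
    base = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = base.predict_proba(X_unlab)[:, 1]        # estimated P(positive feedback)
    confident = (proba <= low) | (proba >= high)      # accept only confident samples
    new_y = (proba[confident] >= high).astype(int)
    return np.vstack([X_lab, X_unlab[confident]]), np.concatenate([y_lab, new_y])
```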
<ns0:div><ns0:head n='3.4.'>Data balancing</ns0:head><ns0:p>Unequal distribution of data in different classes in a classification problem leads to data imbalance. The class with the most data is called the majority class, and the class with the least data is called the minority class. In these cases, the models tend to ignore the minority class and predict in favor of the majority class. Many machine learning models, such as Support Vector Machine, Naïve Bayes, Decision Tree, and Artificial Neural Network cannot have good results in this situation (Díez-Pastor et al. 2015) <ns0:ref type='bibr' target='#b69'>(Vorraboot et al. 2015)</ns0:ref>. In general, data balancing solutions can be divided into two categories; over-sampling and under-sampling. The goal of both solutions is to approximate the number of data distributed in the minority and majority classes. In over-sampling, this is done by increasing the amount of data in the minority class, and in under-sampling by reducing the amount of data in the majority class. In the present problem, we used the random oversampling method to balance the dataset.</ns0:p></ns0:div>
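A minimal random over-sampling sketch (applied to the training split only) could look like the following; the minority label and random seed are illustrative.

```python
# Random over-sampling sketch: repeat minority-class rows (with replacement)
# until both classes have the same count. Apply to the training split only.
import numpy as np
from sklearn.utils import resample

def random_oversample(X, y, minority_label=0):
    X, y = np.asarray(X), np.asarray(y)
    X_min, X_maj = X[y == minority_label], X[y != minority_label]
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=42)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.concatenate([y[y != minority_label],
                            np.full(len(X_min_up), minority_label)])
    return X_bal, y_bal
```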
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Neural network-based methods have become very popular in the field of natural language processing due to their accuracy. However, most of these methods are slow to analyze large datasets and they need to receive word embedding to analyze texts. For this reason, a method called fastText has been proposed <ns0:ref type='bibr' target='#b16'>(Joulin et al. 2016)</ns0:ref>. fastText is an efficient, fast, and open-source model that Facebook has recently released. In fastText, a set of tricks has been used to improve the processing speed and performance of the model, one of which is skip-gram. Data sparsity has always been one of the biggest problems in natural language analysis. In other words, the main problem of modern language processing is that language is a system of rare events, so varied and complex, that we can never model all possibilities (Preethi Krishna and Sharada 2020). Therefore, skip-gram allows some words to be skipped and non-adjacent words to be examined together. Mikolov et al <ns0:ref type='bibr' target='#b25'>(Mikolov et al. 2013)</ns0:ref> found the skip-gram model to be superior to the bag-of-word model in a semantic-syntactic analogy task. Skip-gram is popular, easy to implement, and it is proven and reliable (Gurunath Shivakumar and Georgiou 2019). Accordingly, in this article, word embeddings have been provided using fastText and skip-gram to investigate the reduction of language processing dependence on data-preprocessing.</ns0:p></ns0:div>
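As an illustration, the official fasttext Python package can train such a skip-gram embedding from a plain-text file with one tokenized review per line; the hyperparameters below (dimension, character n-gram range, epochs) are examples rather than the exact settings used.

```python
# Sketch of building a skip-gram fastText embedding from the review corpus.
import fasttext

model = fasttext.train_unsupervised(
    "reviews.txt",        # plain-text file, one tokenized review per line
    model="skipgram",
    dim=300,              # embedding size (illustrative)
    minn=3, maxn=6,       # character n-gram range, which makes fastText robust to typos
    epoch=5,
)
model.save_model("digikala_skipgram.bin")
vector = model.get_word_vector("گوشی")   # vector for the word "phone"
```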
<ns0:div><ns0:head n='3.6.'>Sentiment analysis model 3.6.1.Convolution neural network</ns0:head><ns0:p>Using CNN has shown high accurate results based on studies in English texts <ns0:ref type='bibr' target='#b27'>(Nedjah, Santos, and de Macedo Mourelle 2019)</ns0:ref>. This model can receive and analyze word embedding as input instead of images, which are also effective in this area <ns0:ref type='bibr'>(Kim 2014)</ns0:ref>. Each row of the input matrix represents a word. Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows the architecture of a CNN model used for the NLP classification problem <ns0:ref type='bibr' target='#b59'>(Zhang and Wallace 2015)</ns0:ref>. This figure shows how a CNN model treats a 6-word sentence. The matrix formed for this sentence is analyzed by 6 different convolution filters and converted to maps of attributes with dimensions of 1x4, 1x5, and 1x6. Finally, the pooling operation is performed on the maps and their outputs are merged to create a unique vector that can be used as input for the SoftMax layer and determine the class. The CNN model used in this article is based on the mentioned model and its architecture is shown in table 1.</ns0:p></ns0:div>
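A compact Keras sketch of this multi-filter text CNN is shown below; the filter widths, filter counts, and sequence length are illustrative and do not reproduce the architecture of table 1 exactly.

```python
# Kim-style text CNN sketch: parallel convolutions with several filter widths
# over the embedded sequence, max-pooled, concatenated, and classified.
from tensorflow.keras import layers, models, metrics

def build_text_cnn(seq_len=100, vocab_size=50000, embed_dim=300):
    inp = layers.Input(shape=(seq_len,))
    emb = layers.Embedding(vocab_size, embed_dim)(inp)  # could be initialized with fastText vectors
    pooled = []
    for width in (3, 4, 5):
        conv = layers.Conv1D(filters=100, kernel_size=width, activation="relu")(emb)
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    merged = layers.Concatenate()(pooled)
    out = layers.Dense(1, activation="sigmoid")(layers.Dropout(0.5)(merged))
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[metrics.AUC()])
    return model
```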
<ns0:div><ns0:head>3.6.2.Bidirectional Long Short-Term Memory (BiLSTM)</ns0:head><ns0:p>Another deep model used to solve the problem is BiLSTM. The LSTM model can decide which information is useful and should be preserved and which information can be deleted based on the dataset it has trained with. The LSTM has been widely used in NLP such as long document categorization and sentiment analysis <ns0:ref type='bibr' target='#b49'>(Rao et al. 2018)</ns0:ref>. Figure <ns0:ref type='figure'>2</ns0:ref> is a demonstration of an LSTM cell used in this article, which has an input layer, an output layer and a forget layer <ns0:ref type='bibr' target='#b38'>(Gers 1999)</ns0:ref> </ns0:p></ns0:div>
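For reference, a minimal Keras BiLSTM classifier in the same spirit could be written as follows; the layer sizes are illustrative, and the architecture actually used is the one listed in table 2.

```python
# BiLSTM sketch: a bidirectional LSTM over the embedded sequence followed by a
# sigmoid output for the two-class (recommend / not recommend) problem.
from tensorflow.keras import layers, models, metrics

def build_bilstm(seq_len=100, vocab_size=50000, embed_dim=300):
    model = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, embed_dim),
        layers.Bidirectional(layers.LSTM(128)),   # forward and backward pass over the sequence
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[metrics.AUC()])
    return model
```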
<ns0:div><ns0:head n='3.7.'>Evaluation</ns0:head><ns0:p>Due to imbalanced data, indicators such as accuracy is not appropriate for this study. Because the developed model in the face of this type of data tends to ignore the minority class and can still be accurate. For this purpose, AUC and F-score indexes will be used, which are good choices for problems dealing with imbalanced data <ns0:ref type='bibr' target='#b64'>(Sokolova, Japkowicz, and Szpakowicz 2006)</ns0:ref>. AUC indicates the area below the diagram in the ROC curve, and the ROC curve is a method for judging the performance of a two-class classifier <ns0:ref type='bibr' target='#b24'>(Luo et al. 2020</ns0:ref>). In the ROC curve, the vertical axis is the TPR (represents the true positive rate), Also, the horizontal axis is FPR (represents the false positive rate). The F-score is the harmonic mean of precision and recall <ns0:ref type='bibr' target='#b68'>(Velupillai et al. 2009</ns0:ref>) and represents a weighted average of precision and recall <ns0:ref type='bibr' target='#b9'>(Gacesa, Barlow, and Long 2016)</ns0:ref>. This index has wide applications in natural language processing <ns0:ref type='bibr' target='#b6'>(Derczynski 2016)</ns0:ref>, and like the AUC, it can be used in problems involved with imbalanced data. All the steps mentioned in the methodology section can be summarized in figure <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
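The evaluation protocol (stratified 5-fold cross-validation with AUC and F-score on each held-out fold) can be sketched as follows; the logistic-regression classifier here is only a stand-in for whichever model is being scored.

```python
# Evaluation sketch: stratified K-fold cross-validation with AUC and F-score.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(X, y, n_splits=5):
    X, y = np.asarray(X), np.asarray(y)
    aucs, f1s = [], []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    for train_idx, test_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], proba))
        f1s.append(f1_score(y[test_idx], (proba >= 0.5).astype(int)))
    return float(np.mean(aucs)), float(np.mean(f1s))
```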
<ns0:div><ns0:head n='4.'>Results and discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1.'>Dataset</ns0:head><ns0:p>The digital goods' reviews of the Digikala website were extracted, which are a total of 2,932,747 reviews. Figure <ns0:ref type='figure'>5</ns0:ref> shows the frequency of comments per category. Examining the comments of different product categories can increase the comprehensiveness of the model. To be specific, the words, phrases, and sentences are different in reviews of the products in the different categories, and considering various types of product categories will improve the generalization of the model in different situations. Table <ns0:ref type='table'>3</ns0:ref> shows the general structure of the collected dataset. In this table, the 'Comment ID' column stores the unique values for each comment, the 'Original Comment' column is the original of the comments written in Persian, the 'Translated Comment' column is a translation of the 'Original Comment' column into English. The 'Translated Comment' column is used only to increase readability in the table and does not exist in the dataset. In the 'Negative Feedback' column, if the value is 1, means that the user is not advised to buy the product, and in the 'Positive Feedback' column, if the value is 1, it means the user is advised to buy the product, and the 'Cat. Name' column represents the product category for which the comment was written.</ns0:p><ns0:p>The positive point of this website is that the buyers after submitting their comments can choose an option that states whether they generally recommend the product to others or not. Therefore, a significant number of extracted reviews are labeled. In other words, 308,122 of the reviews in the dataset do not recommend purchased items to others and the 'Negative Feedback' column of these reviews in the dataset shows the number one. Likewise, 1,749,055 of the reviews in the dataset recommend the purchased items to others, and the 'Positive Feedback' column of these comments in the dataset shows the number one. A significant part of the reviews is without labels and the reviews with labels are also imbalanced and these problems must be addressed in some ways.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.'>Pre-processing</ns0:head><ns0:p>During the initial review of the comments, the first correction that had to be made was the removal of escape sequences. An escape sequence is a series of two or more characters starting with a backslash and when used in a string, are not displayed as they are written. In most reviews, there were some escape sequences such as '\n' and '\t' that needed to be removed. Also, sometimes users wrote some URLs to link to personal content that had to be removed. At this stage, all Persian numbers were converted to English, and letters that had different alternatives were standardized to normalize the text. Then all the phrases were tokenized by defining the word boundary and converting the half-space to space. In the stemming stage, prefixes and suffixes used were removed.</ns0:p><ns0:p>After the pre-processing steps, the number of words in the Positive Feedback class was 6.1 million and the number of words in the Negative Feedback class was 34.1 million. Using the word cloud diagram, the most repetitive words in each of the classes can be depicted. Figure <ns0:ref type='figure'>6</ns0:ref> and figure <ns0:ref type='figure'>7</ns0:ref> show the repetitive words in the Positive Feedback and Negative Feedback classes, respectively. Words like 'I gave back', 'bad' and 'I do not recommend' can be seen in the Negative Feedback figure, and words like 'I'm satisfied', 'Appropriate' and 'Speed' can be seen in the Positive Feedback figure.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.'>Sentiment analysis</ns0:head><ns0:p>Data balancing is a crucial step that can increase accuracy. The random over-sampling method was used to balance the data. In other words, some data with the label of 'Negative Feedback' were randomly selected and repeated. As a matter of fact, one of the common mistakes in this section is to apply the balancing method to the entire data which leads to errors in estimating the indicators. In these cases, the indicators are in a better position than the model capability and the results are reported incorrectly well. To avoid this, the balancing method was used only for the training data. After using the pseudo-labeling method, the number of positive feedbacks was about 1.8 million and the number of negative feedbacks was about 400 thousand. In this way, the negative feedbacks were repeated randomly about four times to balance the dataset.</ns0:p><ns0:p>The stratified K-fold cross-validation method is used to perform the evaluation. It is a method for model evaluation that determines how independent the results of statistical analysis on a dataset are from training data. In K-fold cross-validation, the dataset is subdivided into a K subset and each time one subset is used for validation and the other K-1 is used for training. This procedure is repeated K times and all data is used exactly once for validation. The average result of this K computing is selected as a final estimate. Stratified sampling is the process of dividing members of a dataset into similar subsets before sampling and this type of data sampling was selected due to imbalanced data. Using the stratified K-fold cross-validation method, we expect the values of the indicators to be more real. In all stages of measuring the accuracy of the model, K was considered equal to 5.</ns0:p><ns0:p>As stated in the methodology, TF-IDF and fastText methods were used to extract the features. The BiLSTM and CNN models used the fastText output, and the Naïve Bayes and Logistics Regression models used the TF-IDF output, and their accuracy was finally compared with each other in table 4. According to this table, the results of BiLSTM and CNN models are more accurate than others and CNN has given the best results.</ns0:p><ns0:p>As expected, due to the use of fastText and skip-gram methods, the need for data pre-processing has been reduced. In other words, stemming and normalization methods have not affected the final result. To examine this more closely, we chose the CNN model as the best model and we once performed the sentiment analysis process using the pre-processing steps and again without these steps. The AUC and F-score were 0.9943 and 0.9291 before pre-processing, and 0.9944 and 0.9288 after pre-processing. The results can be seen in table 5. In the table, the meaning of the 'before preprocessing' is just before the stemming and normalization steps. In other words, the methods used to create word embedding can depict the same words in the same range of spaces without the need to standardize letters and also without the need to identify the original root of words.</ns0:p><ns0:p>To implement pseudo-labeling, we developed a model that can estimate labels for unlabeled reviews using fastText and CNN models. After estimating all the labels, those with more than 90% probability for the Negative Feedback class and less than 1×10 -7 for the Positive Feedback class were selected. 
Therefore, 104.8 thousand Negative Feedback reviews and 30.5 thousand Positive Feedback reviews were labeled and could be used in the dataset for subsequent analysis. In using the pseudo-labeling technique, most of our focus was on Negative Feedback as a minority class, which also leads to balance the dataset as much as possible. In this way, a significant amount of unlabeled data that had been excluded from the sentiment analysis process was re-entered into the process and helped to increase the accuracy and generalizability of the model.</ns0:p><ns0:p>Contrariwise of pre-processing, the use of the pseudo-labeling method significantly improved the results. After using pseudo-labeling, the values of AUC and F-score improved to 0.996 and 0.956. The values of the three mentioned states can be seen based on different folds in table 5. Figure <ns0:ref type='figure'>8</ns0:ref> also shows the ROC curve for all three states.</ns0:p><ns0:p>The suggested model has had better results than the previous models which have used preprocessing methods in Persian sentiment analysis. For instance, some researchers introduced preprocessing algorithms and succeed to enhance the results of machine learning algorithms <ns0:ref type='bibr' target='#b61'>(Saraee and Bagheri 2013)</ns0:ref>. In the research, the F-score of the proposed pre-processing algorithms employing Naïve Bayes as a classifier algorithm is 0.878. In another research, the various alternatives for pre-processing and classifier algorithms were examined and the best result was assisted with an SVM classifier by 0.915 F-score value <ns0:ref type='bibr' target='#b36'>(Asgarian, Kahani, and Sharifi 2018)</ns0:ref>. Also, some researches were attempted to utilize state-of-the-art deep models in such a way to reduce dependency on pre-processing and avoiding complex steps <ns0:ref type='bibr' target='#b60'>(Roshanfekr, Khadivi, and Rahmati 2017)</ns0:ref>. The F-score of the BiLSTM and CNN algorithms in the research is 0.532 and 0.534. All mentioned article's focus was on the digital goods reviews in Persian two-class sentiment analysis as same as this article. A comparison of the results in this paper with other researches and other common algorithms indicates that not only the dependence on data pre-processing has been eliminated but also the accuracy has increased significantly.</ns0:p><ns0:p>The result reveals that it is quite possible to create independent models from the pre-processing process using the method of fastText and skip-gram. Moreover, BiLSTM and CNN methods can have significant results. However, all of the mentioned methods need to have an immense dataset. To prove this, It is noteworthy that the use of the pseudo-labeling method because of increasing training data has a great impact on the result. Independency from pre-processing steps is not related to the Persian language. The results are reachable for other languages using sufficient labeled samples and the mentioned methods in this article.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>The dataset, which included approximately 3 million reviews, was extracted from the digital goods section of the Digikala website, the largest online store in Iran. Basic pre-processing methods were used to modify the words and tokenize them. Due to the lack of labels for a large part of the dataset, the pseudo-labeling method was employed, which improved the results. Data balancing was also performed using random over-sampling. Persian data pre-processing was found to be difficult, so the fastText method was used to reduce the need for data pre-processing and to develop the word embedding. The embeddings were employed as the input to the BiLSTM and CNN models. Using the suggested model, not only are the obtained results very desirable and much more accurate in Persian compared to other reports, but there are also no complications related to data pre-processing. The effect of stemming and normalization on the output was evaluated and revealed that the proposed method is not dependent on data pre-processing. Eventually, besides the comparison of machine learning and deep learning methods in sentiment analysis, the TF-IDF and fastText methods were compared for creating word embeddings. The best result was associated with fastText and CNN. The main achievement of this model is the reduction of the need for data pre-processing. Data pre-processing in English is convenient and accurate due to the extensive text pre-processing libraries. However, in other languages, data pre-processing is very complicated because of the lack of proper libraries. With the suggested model, it was shown that this need is largely solvable (AUC = 0.996) and the pre-processing steps can be reduced to preliminary tokenization processes. Avoiding complex text pre-processing is also important for other languages, since most text pre-processing algorithms have been developed for English and cannot be used for other languages. In other words, the taken steps can be implemented for other languages to achieve the same results independently of pre-processing steps. Moreover, the created word embedding, due to its high accuracy, can be used in other text analysis problems, especially those related to online digital goods.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Positive samples are classified as positive -FN: Positive samples are classified as negative -TN: Negative samples are classified as negative -FP: Negative samples are classified as positive</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 A</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,224.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,279.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,306.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,384.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,367.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,383.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Based on the figure, the LSTM cell is mathematically expressed as follows: $f_t = \sigma(W_{fh} h_{t-1} + W_{fx} x_t + b_f)$, $i_t = \sigma(W_{ih} h_{t-1} + W_{ix} x_t + b_i)$, $\tilde{c}_t = \tanh(W_{ch} h_{t-1} + W_{cx} x_t + b_c)$, $c_t = f_t \cdot c_{t-1} + i_t \cdot \tilde{c}_t$, $o_t = \sigma(W_{oh} h_{t-1} + W_{ox} x_t + b_o)$, $h_t = o_t \cdot \tanh(c_t)$, where $x_t$ denotes the input; $h_{t-1}$ and $h_t$ denote the output of the last LSTM unit and the current output; $c_{t-1}$ and $c_t$ denote the memory from the last LSTM unit and the current cell state; $f_t$, $i_t$, and $o_t$ denote the forget, input, and output gate values; the $W$ terms are the weights; the $b$ terms are the biases; and the operator '$\cdot$' denotes the pointwise multiplication of two vectors. In LSTM, the input gate can decide what new information can be stored in the cell state, and the output gate can decide what information can be output based on the cell state. By combining the ideas of BRNN and LSTM it is possible to achieve Bidirectional LSTM (BiLSTM), which has better performance than LSTM in classification processes, especially in speech processing tasks <ns0:ref type='bibr' target='#b39'>(Graves and Schmidhuber 2005)</ns0:ref>. Therefore, this article uses the BiLSTM structure, and figure 3 shows a basic structure of the BiLSTM network <ns0:ref type='bibr' target='#b58'>(Yildirim 2018)</ns0:ref>. The architecture of the BiLSTM model used in this article is shown in table 2.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "
Dear Editors
We sincerely appreciate the reviewers for their thoughtful comments and efforts towards improving the manuscript. In the following, we answer the remaining concerns of the reviewers.
Thanks a lot for your kind help.
Sincerely yours,
Yanhui Guo (Ph.D.)
Assistant Professor,
Department of Computer Science
University of Illinois Springfield, IL, USA
Email: yguo56@uis.edu
Phone: +1-217-206-8170
Reviewer 1
1. Thank you for considering my previous comments.
My main concern is still about a focus on one specific language. Maybe you could add some recommendations for other languages on how they can benefit from your research.
• Thank you for your compassionate efforts to enhance the article. Based on your concern, we have stated in the manuscript as:
“Independency from pre-processing steps is not related to the Persian language. The results are reachable for other languages using sufficient labeled samples and the mentioned methods in this article.”
Also, this subject is mentioned in other parts of the article to clarify the ability of the taken steps in other languages.
Reviewer 2
1. Authors resolved all the issues.
• Thank you for your compassionate efforts to enhance the article.
" | Here is a paper. Please give your review comments after reading it. |
5 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower the examination cost processes and increase the speed and quality of diagnosis.</ns0:p><ns0:p>Therefore, this paper proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNN are used as feature extractors to extract spatial features. Most of the related work based on DL approaches extracted spatial features only. However, in the following phase of Gastro-CADx, features extracted in the first stage are applied to discrete wavelet transform (DWT) and discrete cosine transform (DCT). DCT and DWT are used to extract temporal-frequency and spatial-frequency features. Additionally, a feature reduction procedure is performed in this stage. Finally, in the third stage of the Gastro-CADx, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output results of the CADx and select the best-fused feature set. Two datasets referred to as Dataset I and II are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx has achieved an accuracy of 97.3% and 99.7% for Dataset I and II respectively. The results were compared with recent related work. The comparison showed that the proposed approach is capable of classifying GI diseases with higher accuracy compared to other work. Thus, it can be used to reduce medical complications, death-rates, in addition to the cost of treatment. It can also help gastroenterologists in producing more accurate diagnosis while lowering inspection time.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Gastrointestinal (GI) disease is considered one of the supreme common diseases that usually infect people, causing complicated health conditions <ns0:ref type='bibr' target='#b25'>(Du et al., 2019)</ns0:ref>. Based on the degree of injury, GI can approximately split into the precancerous lesion, primary GI cancer and progressive GI cancer, and benign GI diseases <ns0:ref type='bibr' target='#b64'>(Sharif et al., 2019)</ns0:ref>. Among benign GI diseases are ulcers, gastritis, and bleedings which will not depreciate into cancers in short term. In contrast, precancerous GI injury could depreciate into primary GI cancer or even progressive GI cancer, in case it was not accurately diagnosed and treated in time <ns0:ref type='bibr' target='#b25'>(Du et al., 2019)</ns0:ref>. Annually almost 0.7 million patients are diagnosed with gastric cancer. Since 2017, 135,430 new GI diseases arose in America. A global survey indicated that since 2017, 765,000 deaths occurred due to stomach cancer, 525,000 deaths are due to colon cancer. The poorest situations can be detected in the developing countries (e.g. the Asian countries and the Middle East) <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b41'>(Khan et al., 2020a)</ns0:ref>. Moreover, among people diseased with GI diseases, 20% of them are from China, 18% from Brazil, 12% from Russia, 20% of EU, and 21% of the US <ns0:ref type='bibr' target='#b64'>(Sharif et al., 2019)</ns0:ref>. The early diagnosis of GI is essential to reduce medical complications, cost of treatment, and lower death rates.</ns0:p><ns0:p>The traditional clinical method used for GI diagnosis is the intestinal biopsy of the GI tract. These biopsy samples are analyzed by medical experts using microscopes to examine the possibility of any cancerous or abnormal cells' existence. The drawbacks of such a method are being invasive and the necessity of a high degree of proficiency <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>. In contrast, endoscopic imaging is a lower invasive technique for visualizing the GI tract <ns0:ref type='bibr' target='#b39'>(Kainuma et al., 2015)</ns0:ref>. The endoscopic process assists the doctor in the recognition and diagnosis of gastric anomalies in their initial stages. Timely detection and diagnosis of chronic medical conditions can be healed with appropriate treatments. Hence, the imaging procedure can be very beneficial for a considerable decrease in medical complications, the cost of treatment, and death-rates, especially, the deaths that happen due to several GI cancers, which could be treated if cancer was discovered in its premalignant phase <ns0:ref type='bibr' target='#b31'>(Hamashima et al., 2015)</ns0:ref>. Although, the numerous advantages of endoscopy, yet it brings along with particular trade-offs, for example, the huge number of video frames produced during the screening process of the GI tract. On average, the entire process can take from 45 minutes to 8 hours depending on the aimed GI region and the expertise of the gastroenterologist <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>. The number of generated frames can reach up to 60,000 images. Most of these frames are redundant and not valuable and only a few images might have some abnormal lesions <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref>. All these redundant images can be removed by examining each frame of the endoscopic video. 
Therefore, the manual examination of diseases through such a huge number of images is very challenging as it needs an extensive amount of time to observe the complete number of frames. Besides, at times the anomalous frames can be simply unnoticed by the gastroenterologist which can cause misdiagnosis. Therefore, such medical experts request automated schemes, that can automatically determine possible malignancies by analyzing the entire endoscopic images <ns0:ref type='bibr' target='#b6'>(Aoki et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Computer-aided diagnosis (CADx) are systems utilized for automatic diagnosis of several diseases within various parts of the human body like the brain <ns0:ref type='bibr' target='#b13'>(Attallah, Sharkas & Gadelkarim, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b12'>(Attallah, Sharkas & Gadelkarim, 2019)</ns0:ref>, breast <ns0:ref type='bibr' target='#b58'>(Ragab, Sharkas & Attallah, 2019)</ns0:ref>, lung <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref>, etc. Along with these diseases, CADx has been commonly used to diagnose GI disease in the intense by analyzing endoscopic images <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref>. Such CADx has several advantages from which the patients, gastroenterologists, and medical students can benefit. These include; the reduction in the examination time of the whole endoscopic frames. Besides, the decrease in the cost of treatment as the lesion will be detected in an early phase. Moreover, CADx will improve the accuracy of the diagnosis of GI diseases compared to manual examination. Also the inspection time from endoscopic images is to be decreased. Furthermore, it may be used for training medical staff and students without the necessity of an expert <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>.</ns0:p><ns0:p>In a CADx scheme, the diagnosis is carried out using each frame depending on the significant features taken out from the image. Thus, feature extraction is the key step in an accurate diagnosis of medical conditions <ns0:ref type='bibr' target='#b7'>(Attallah, 2020)</ns0:ref> like GI diseases <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref>, <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b43'>(Khan et al., 2019)</ns0:ref>. Several features are calculated using handcrafted techniques in the literature like color-based, texture-based, and some others <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref>, <ns0:ref type='bibr' target='#b3'>(Ali et al., 2019)</ns0:ref>. Karargyris et al. <ns0:ref type='bibr' target='#b40'>(Karargyris & Bourbakis, 2011)</ns0:ref> utilized geometric and texture features extracted from SUSAN edge detector and Gabor filter extraction methods to detect small bowel polyps and ulcers. On the other hand, Li et al <ns0:ref type='bibr' target='#b49'>(Li & Meng, 2012a)</ns0:ref> used the uniform local binary pattern (LBP) and discrete wavelet transform (DWT). They employed an SVM classifier to detect abnormal tissues. In the same way, the authors in <ns0:ref type='bibr' target='#b50'>(Li & Meng, 2012b)</ns0:ref> detected tumors in the intestine using DWT and LBP. Instead, Yuan et al. <ns0:ref type='bibr' target='#b75'>(Yuan & Meng, 2014)</ns0:ref> fused the saliency map with the Bag of Features (BoF) technique to identify polyps in endoscopic images. 
Initially, the authors employed the BoF method to describe the local features by using a scale-invariant feature transform (SIFT) feature vectors using k-means clustering. Next, the saliency map histogram method was utilized to extract salience features. Lastly, both features are combined and utilized to learn an SVM classifier. Later the same authors <ns0:ref type='bibr' target='#b74'>(Yuan, Li & Meng, 2015)</ns0:ref> added the complete LBP (CLBP), LBP, uniform LBP (ULBP), and histogram of oriented gradients (HoG) features along with SIFT features to extract additional distinctive texture features. Alternatively, color-based features were extracted in <ns0:ref type='bibr' target='#b30'>(Ghosh, Fattah & Wahid, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Deeba et al., 2018)</ns0:ref> for bleeding detection.</ns0:p><ns0:p>Recently, the advancement of deep learning (DL) methods has delivered new opportunities to improve the analysis of endoscopic images. CNNs are the most type of networks used in endoscopy <ns0:ref type='bibr' target='#b2'>(Alaskar et al., 2019)</ns0:ref>. These networks can be used as classifiers or/and feature extractors. Feature extraction methods based on DL techniques have been extensively utilized in the literature. <ns0:ref type='bibr'>(Ghatwary, Zolgharni & Ye, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b44'>(Kim, Cho & Cho, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b47'>(Lee et al., 2019)</ns0:ref>. The authors of <ns0:ref type='bibr' target='#b41'>(Khan et al., 2020a</ns0:ref>) proposed a CADx system to detect ulcers and bleeding GI diseases. Their system extracted deep features from two different layers of VGG-16 CNN. Afterward, these features were fused, and then significant features were selected using an evolutionary search method called PSO. These features were then used to train the SVM classifier. Igarashi et al, <ns0:ref type='bibr' target='#b35'>(Igarashi et al., 2020)</ns0:ref> proposed a CADx framework to classify several GI diseases using AlexNet. First, AlexNet extracted spatial features and then classified them into 14 different diseases. The authors of <ns0:ref type='bibr' target='#b2'>(Alaskar et al., 2019)</ns0:ref> proposed a DL-based CADx that utilized AlexNet and GoogleNet for ulcer detection from low contrast endoscopic videos (WEV). Features extracted from these networks were classified using the fully connected layer of each network separately. AlexNet was also used in <ns0:ref type='bibr' target='#b27'>(Fan et al., 2018)</ns0:ref> to detect both erosions and ulcers that are observed in the intestine. <ns0:ref type='bibr' target='#b32'>He et al. in (He et al., 2018)</ns0:ref> introduced a framework based on two cascaded CNNs. The first network is VGG-16 CNN which was used for edge detection, whereas the second is the Inception CNN which was used for classification. Similarly, Khan et al in <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref> used two CNNs, the first one is Recurrent CNN for segmentation, whereas, the second was ResNet and was used for classification. The authors in <ns0:ref type='bibr' target='#b76'>(Yuan & Meng, 2017)</ns0:ref> suggested the use of an image manifold with stacked sparse auto-encoder to recognize polyps in endoscopic images. 
Instead, the authors in <ns0:ref type='bibr' target='#b55'>(Pei et al., 2017)</ns0:ref> proposed a CADx system to recognize and assess the small bowel using features extracted from long short-term memory (LSTM).</ns0:p><ns0:p>Other research articles suggested the fusion of handcrafted features and DL features. Sharif et al. <ns0:ref type='bibr' target='#b64'>(Sharif et al., 2019</ns0:ref>) proposed a CADx system for classifying GI infections. The authors extracted deep features from VGG-16 and VGG-19 CNNs and fused these features with some geometric features. These fused features were then used as input to a K-nearest neighbors (KNN) classifier. Another system was presented in <ns0:ref type='bibr'>(Ghatwary, Ye & Zolgharni, 2019)</ns0:ref> to detect esophageal cancer. The system fused Gabor features and Faster Region-Based CNN (Faster R-CNN).On the other hand, Billah et al. <ns0:ref type='bibr' target='#b18'>(Billah, Waheed & Rahman, 2017)</ns0:ref> fused the color wavelet features and CNN features for detecting polyps. The combined features were used later to fed an SVM classifier. The authors in <ns0:ref type='bibr' target='#b53'>(Nadeem et al., 2018)</ns0:ref> combined features extracted from textural analysis methods such as Haralick and LBP along with VGG-16 CNN DL features. The authors used logistic regression for classification. The authors of <ns0:ref type='bibr' target='#b51'>(Majid et al., 2020)</ns0:ref> introduced a framework that combined the DCT, DWT, color-based statistical features, and VGG16 DL features for the recognition of several GI diseases. The authors used a genetic algorithm (GA) to select features using the KNN fitness function. Finally, selected features were used to train an ensemble classifier. A summary of recent related work along with their limitations is shown in Table <ns0:ref type='table'>1</ns0:ref> . Table <ns0:ref type='table'>1</ns0:ref> The main aim of this work is to construct a CADx called Gastro-CADx that is capable of accurately diagnosing more GI diseases than the proposed by others Though there are various approaches to GI detection and classification in the literature, there exist some weaknesses among these methods which are summarized in table . Gastro-CADx tries to overcome the limitations found in related studies discussed in table 1 through three cascaded stages. First of all, the majority of the current methods studied the detection and classification of a few types of GI anomalies, disease, or anatomical landmark. But, our proposed Gastro-CADx is an automatic highly accurate system to classify several GI diseases and anatomical landmarks. Some of the related studies are based on small dataset or used only one dataset to test the efficiency of their classification model, while Gastro-CADx is validated using two large datasets of several GI diseases. The few articles that classified several GI diseases achieved low accuracy, not reliable, or used only one type of CNN, ,whereas, Gastro-CADx is an accurate and reliable system that used more four CNNs. 
This appears in the first stage, Gastro-CADx studies several CNN based methods for feature extraction from spatial domain instead of using one or two networks to benefit from the advantages of several types of CNNs.The previous studies were either based only on an end-to-end deep learning which has very high computational cost, used only spatial features extracted from CNNs or only handcrafted feature extractions, but Gastro-CADx is not based only on spatial features, but temporal-frequency and spatial-frequency features using handcrafted feature extraction methods as well not only end-to-end based DL . This appears clearly in the second stage of Gastro-CADx. It extracts handcrafted features based on textural analysis from the temporal-frequency and spatialtemporal domains using the DL features extracted in the first stage. This reduces the high computational cost of end-to-end DL techniques. Previous related studies indicated that CNN representations have improved the performance and the abstract level for the automatic detection and classification of GI diseases <ns0:ref type='bibr' target='#b51'>(Majid et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref>, <ns0:ref type='bibr' target='#b76'>(Yuan & Meng, 2017)</ns0:ref>. Nevertheless, the fusion of CNN features with handcrafted variables could enhance diagnostic accuracy <ns0:ref type='bibr' target='#b51'>(Majid et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b65'>(Shi et al., 2018)</ns0:ref>. Therefore, in the third stage, a fusion process is introduced which combines the second stage features to benefit from the spatial, temporalfrequency, and spatial-frequency features. This stage can confirm the capacity of every feature abstraction method to mine significant information that might be disregarded from the other method. It can also reduce the computational cost compared to end-to-end DL methods.</ns0:p><ns0:p>, the previous contributions are summarized to:</ns0:p><ns0:p> Proposing an automatic and accurate CADx system called Gastro-CADx based on three stages to classify several GI diseases and anatomical landmarks.</ns0:p><ns0:p> The system is not based only on spatial features, but temporal-frequency and spatialfrequency features using handcrafted feature extraction methods as well.</ns0:p><ns0:p> In the first stage, Gastro-CADx studies several CNN based methods for feature extraction from spatial domain instead of using one or two networks to benefit from the advantages of several types of CNNs.  In the second stage, Gatro-CADx extracts handcrafted features based on textural analysis from the temporal-frequency and spatial-temporal domains using the DL features extracted in the first stage.</ns0:p><ns0:p> Also, in the second stage, Gastro-CADx tries to minimize the problem of computational time using only reduced dimensions of features.  In the third stage, a fusion process is introduced which combines the second stage features to benefit from the spatial, temporal-frequency, and spatial-frequency features.</ns0:p><ns0:p> The third stage can confirm the capacity of every feature abstraction method to mine significant information that might be disregarded from the other method.</ns0:p><ns0:p> Gastro-CADx is validated using two large datasets of several GI diseases.</ns0:p><ns0:p> Creating an accurate automatic diagnostic system that is reliable compared to related CADx systems</ns0:p></ns0:div>
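To make the three-stage design described above concrete, the following is a hypothetical Python sketch of the idea: deep spatial features from a pre-trained CNN (ResNet-50 is used here as a stand-in for the four networks), DCT and DWT applied to those feature vectors with a simple coefficient cut-off as the reduction step, and concatenation as the fusion step. The parameter choices are illustrative and do not reproduce the exact Gastro-CADx configuration.

```python
# Illustrative sketch of the three-stage pipeline: (1) CNN spatial features,
# (2) DCT and DWT applied to the feature vector, (3) fusion by concatenation.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from scipy.fftpack import dct
import pywt

cnn = models.resnet50(pretrained=True)
extractor = torch.nn.Sequential(*list(cnn.children())[:-1]).eval()  # drop the classifier head

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

def gastro_features(pil_image, n_keep=256):
    # Stage 1: deep spatial features from the pre-trained CNN.
    with torch.no_grad():
        deep = extractor(preprocess(pil_image).unsqueeze(0)).flatten().numpy()
    # Stage 2: frequency-domain views of the deep features, truncated for reduction.
    dct_feat = dct(deep, norm="ortho")[:n_keep]
    approx, _ = pywt.dwt(deep, "haar")       # DWT approximation coefficients
    dwt_feat = approx[:n_keep]
    # Stage 3: fused feature vector to be fed to a classifier.
    return np.concatenate([dct_feat, dwt_feat])
```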
<ns0:div><ns0:head n='2.'>Material and Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>2.1 Dataset Description</ns0:head><ns0:p>This paper employs two datasets to evaluate the performance of Gastro-CADx. The first dataset used in this paper is called Kvasir <ns0:ref type='bibr' target='#b56'>(Pogorelov et al., 2017)</ns0:ref>, and is denoted as Dataset I. It consists of 4000 images covering eight different GI classes: three classes demonstrating anatomical landmarks, three demonstrating pathological states, and two associated with lesion removal. The three anatomical landmark categories are pylorus, z-line, and cecum. The three diseased states are esophagitis, polyps, and ulcerative colitis. The two classes associated with lesion removal are dyed lifted polyps and dyed resection margins. The images are of different sizes, from 720x576 up to 1920x1072 pixels. Some of these images include a green region illustrating the location and shape of the endoscope within the intestine. This information may be significant for later investigations (and is thus included) but must be handled with care when detecting the endoscopic findings. Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref> shows image samples of different GI diseases. Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>.</ns0:p><ns0:p>The second dataset is called HyperKvasir <ns0:ref type='bibr' target='#b19'>(Borgli et al., 2020)</ns0:ref> and is denoted as Dataset II. The images and videos of this dataset were acquired using standard endoscopy equipment from Olympus (Olympus Europe, Germany) and Pentax (Pentax Medical Europe, Germany) at a Norwegian hospital from 2008 to 2016. The dataset consists of 10,662 labeled images and 23 classes. These classes are unbalanced; therefore, we chose only 10 balanced classes to construct Gastro-CADx: four classes demonstrating anatomical landmarks, three demonstrating pathological states, one demonstrating the quality of mucosal views, and two associated with lesion removal. The four anatomical landmark categories are pylorus, z-line, cecum, and retroflex stomach. The three pathological states are esophagitis, polyps, and ulcerative colitis. The two classes associated with lesion removal are dyed lifted polyps and dyed resection margins. The class demonstrating the quality of mucosal views is bowel quality. Figure <ns0:ref type='figure'>2</ns0:ref> shows samples of images included in the dataset. </ns0:p></ns0:div>
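To make the dataset handling above concrete, the following minimal Python sketch (not the authors' MATLAB code) loads a Kvasir-style dataset, assuming the common distribution layout of one sub-folder per class; the path "kvasir" and the 224x224 size are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' code): loading Kvasir-style endoscopy images.
# Assumes the images are organized as one sub-folder per class, e.g. kvasir/polyps/*.jpg.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # input size depends on the CNN backbone (see Table 2)
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("kvasir", transform=preprocess)  # hypothetical path
print(len(dataset), "images,", len(dataset.classes), "classes:", dataset.classes)

loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
images, labels = next(iter(loader))
print(images.shape)  # torch.Size([32, 3, 224, 224])
```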
<ns0:div><ns0:head>2.2 Deep Convolutional Neural Network Architectures</ns0:head><ns0:p>The convolutional neural network (CNN) is the type of DL approach most commonly used for image-related classification problems in the health informatics field <ns0:ref type='bibr' target='#b37'>(Jin et al., 2020)</ns0:ref>. In this paper, four CNNs are utilized: the AlexNet, ResNet-50, DarkNet-19, and DenseNet-201 architectures. As can be noticed from Table 1, most related studies used AlexNet, ResNet, and VGG CNNs. We did not use VGG because it has a very high computational cost and number of parameters, and the features extracted from this network are of very large size <ns0:ref type='bibr' target='#b17'>(Bi et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b26'>(Ertosun & Rubin, 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b68'>(Su et al., 2020)</ns0:ref>. Although AlexNet is one of the oldest architectures, it is still being used due to its acceptable performance: it is computationally efficient and performs well with color images such as those used in this paper <ns0:ref type='bibr' target='#b71'>(Wang, Xu & Han, 2019)</ns0:ref>. We also employed the more recent DarkNet and DenseNet architectures. To our knowledge, DarkNet has not been used in the literature for this task, whereas only a few articles used DenseNet for classifying GI diseases, and those had several drawbacks in their proposed methods. Therefore, we used these two newer CNN architectures to test their performance and ability to classify multiple GI diseases from endoscopic images.</ns0:p><ns0:p>The sizes of the input and output layers of the four networks employed in the proposed method are shown in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.1'>AlexNet</ns0:head><ns0:p>The structure of AlexNet CNN was presented in 2012 by <ns0:ref type='bibr' target='#b45'>(Krizhevsky, Sutskever & Hinton, 2012)</ns0:ref>. This construction won the ImageNet Large-Scale Visual Recognition Challenge in 2012. The structure of AlexNet includes 23 layers corresponding to 5 convolutional layers, 5 rectified linear unit (ReLu) layers, 2 normalization layers, 3 pooling layers, 3 fc layers, a probabilistic layer using softmax units, and a classification layer ending in 1000 neurons for 1000 categories <ns0:ref type='bibr' target='#b13'>(Attallah, Sharkas & Gadelkarim, 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.2'>DarkNet-19</ns0:head><ns0:p>DarkNet was first introduced in 2017 by <ns0:ref type='bibr' target='#b62'>Redmon and Farhadi (Redmon & Farhadi, 2017)</ns0:ref>. DarkNet-19 is a CNN that is utilized as the backbone of YOLO-v2. It mostly employs 3×3 filters and doubles the number of channels after each pooling stage. DarkNet-19 utilizes global average pooling to perform classification, in addition to 1×1 filters to compress the feature representation between 3×3 convolutions. Batch normalization is applied to regularize the model, make the training process more stable, and accelerate convergence. DarkNet-19 consists of 19 convolutional layers and 5 max-pooling layers.</ns0:p></ns0:div>
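As an illustration of the DarkNet-19 design pattern just described (3×3 convolutions whose channel count doubles after each pooling stage, 1×1 compression layers, and global average pooling), the following is a minimal PyTorch sketch; it is a toy analogue for illustration only, not the actual 19-layer network used in Gastro-CADx.

```python
# Minimal sketch of the DarkNet-19 design pattern described above, for illustration only.
import torch
import torch.nn as nn

def conv(in_ch, out_ch, k):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
                         nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1, inplace=True))

tiny_darknet_like = nn.Sequential(
    conv(3, 32, 3), nn.MaxPool2d(2),
    conv(32, 64, 3), nn.MaxPool2d(2),          # channels double after pooling
    conv(64, 128, 3), conv(128, 64, 1),        # 1x1 conv compresses the representation
    conv(64, 128, 3), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # global average pooling
    nn.Linear(128, 8),                         # e.g. 8 GI classes for Dataset I
)

print(tiny_darknet_like(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 8])
```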
<ns0:div><ns0:head n='2.2.3'>ResNet-50</ns0:head><ns0:p>The ResNet architecture was first introduced in 2016. The essential building block of ResNet is the residual block, which was suggested by He et al. <ns0:ref type='bibr' target='#b33'>(He et al., 2016)</ns0:ref>. The residual block offers shortcut connections between convolution layers, which allow the network to skip some convolution layers. In other words, the residual block offers two choices: it may apply a set of functions to the input, or it may skip this stage altogether. Therefore, the ResNet construction is considered more effective than other CNNs such as AlexNet and GoogleNet, as stated in <ns0:ref type='bibr' target='#b13'>(Attallah, Sharkas & Gadelkarim, 2020)</ns0:ref>. In this study, ResNet-50 is used, which consists of 49 convolutional layers and one fc layer.</ns0:p></ns0:div>
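The residual (skip) connection described above can be sketched as follows; this is a simplified basic block for illustration only, whereas ResNet-50 itself stacks deeper bottleneck blocks.

```python
# Minimal sketch of a residual block (identity shortcut), for illustration only.
# ResNet-50 itself uses deeper "bottleneck" blocks; this simplified version just
# shows how the skip connection adds the input back to the convolutional output.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # shortcut: the block may effectively be "skipped"

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```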
<ns0:div><ns0:head n='2.2.4'>DenseNet-201</ns0:head><ns0:p>Recent research has revealed that CNNs can be significantly deeper, more accurate, and more efficient to train when they contain shorter connections between layers close to the input and those close to the output. This finding motivated Huang et al. <ns0:ref type='bibr' target='#b34'>(Huang et al., 2017)</ns0:ref> in 2017 to propose the Dense Convolutional Network (DenseNet). DenseNet joins every layer to every other layer in a feed-forward manner. While conventional CNNs with M layers have M connections (one between every layer and its succeeding layer), DenseNet has M(M+1)/2 direct connections. For every layer, the feature maps of all previous layers are used as inputs, and its own feature maps are used as inputs to all following layers. DenseNet has numerous benefits, such as lessening the vanishing-gradient issue, reinforcing feature propagation, encouraging feature reuse, and considerably decreasing the number of parameters. In this paper, DenseNet-201 is used, which is 201 layers deep.</ns0:p></ns0:div>
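The dense connectivity pattern can be illustrated with the following toy block, in which every layer receives the concatenation of all preceding feature maps; it is a sketch of the idea only, not the DenseNet-201 used here.

```python
# Tiny sketch of dense connectivity, for illustration only (not the full DenseNet-201).
# Each layer receives the channel-wise concatenation of all preceding feature maps,
# which is what produces the M(M+1)/2 direct connections described above.
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier feature maps
            features.append(out)
        return torch.cat(features, dim=1)

block = TinyDenseBlock(in_channels=16)
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 52, 32, 32])
```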
<ns0:div><ns0:head>2.3 Proposed Gastro-CADx</ns0:head><ns0:p>An efficient hybrid CADx system called Gastro-CADx is proposed to classify several GI classes from endoscopic images. Gastro-CADx involves three steps: an image preprocessing step, followed by a feature extraction, reduction, and fusion step, and finally a classification step. Initially, several augmentation processes are utilized to raise the number of images in the datasets, and the images are resized. In the feature extraction, reduction, and fusion step, three stages are performed to construct Gastro-CADx. In the first stage, valuable deep features are extracted from four CNNs, namely AlexNet, ResNet-50, DarkNet-19, and DenseNet-201. In the second stage, two handcrafted feature extraction methods are used to extract features from the spatial DL features obtained in the first stage. These handcrafted features are textural analysis based features representing temporal-frequency and spatial-frequency information, and the dimension of the extracted features is reduced in this stage. Afterward, in the third stage of Gastro-CADx, several reduced feature sets are fused in a concatenated manner. Finally, in the classification step, machine learning classifiers are used to identify the several GI classes. Figure <ns0:ref type='figure'>3</ns0:ref> represents the block diagram of Gastro-CADx.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.3.1'>Image Preprocessing Step</ns0:head><ns0:p>The endoscopic images of both datasets are resized according to the size of the input layer of each CNN <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref>, as shown in Table 2. Subsequently, these frames are augmented. The augmentation process is essential to raise the number of images <ns0:ref type='bibr' target='#b13'>(Attallah, Sharkas & Gadelkarim, 2020)</ns0:ref>, <ns0:ref type='bibr'>(Ragab & Attallah, 2020)</ns0:ref>. This technique is performed because models trained with an insufficient quantity of frames are likely to over-fit <ns0:ref type='bibr' target='#b61'>(Ravì et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Attallah, Sharkas & Gadelkarim, 2020)</ns0:ref>. The augmentation techniques utilized in this paper to produce new endoscopic images from the training data are flipping, translation, and rotation <ns0:ref type='bibr' target='#b69'>(Talo et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref>. Each frame is flipped and translated in the x and y directions within a pixel range of (-30, 30) <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref>. Furthermore, each endoscopic image is rotated with an angle in the range of 0 to 180 degrees <ns0:ref type='bibr'>(Ragab & Attallah, 2020)</ns0:ref>.</ns0:p></ns0:div>
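A rough Python/torchvision analogue of this augmentation pipeline is sketched below; note that the authors worked in MATLAB, that torchvision expresses translation as a fraction of the image size rather than in pixels, and that the 224×224 target size is only an example (the actual size depends on the CNN, see Table 2).

```python
# Illustrative augmentation pipeline approximating the operations described above
# (flip, translate roughly +/-30 px, rotate 0-180 degrees, then resize).
from torchvision import transforms

img_size = 224                      # depends on the CNN backbone (see Table 2)
max_shift = 30 / 720                # ~30 px relative to the smallest Kvasir width

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomAffine(degrees=(0, 180), translate=(max_shift, max_shift)),
    transforms.Resize((img_size, img_size)),
    transforms.ToTensor(),
])
```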
<ns0:div><ns0:head n='2.3.2'>Feature Extraction, Reduction, and Fusion Step</ns0:head><ns0:p>Gastro-CADx is based on three stages. The first stage is the DL feature extraction stage, the second is the handcrafted feature extraction and reduction stage, and the third is the fusion stage.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.2.1'>Deep Learning Feature Extraction Stage (First Stage of Gastro-CADx)</ns0:head><ns0:p>Pre-trained CNNs fine-tuned on the endoscopic frames are used to accomplish the feature extraction or classification processes. During the feature mining process, valuable DL features are extracted from the CNNs. Instead of utilizing the CNNs for classification, DL variables are pulled out from the fully connected layer called 'fc7' as in <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref>, the global average pooling 2D layer (fifth pooling layer), and the last average pooling layer of the AlexNet, ResNet-50, DarkNet, and DenseNet constructions as in <ns0:ref type='bibr'>(Ragab & Attallah, 2020)</ns0:ref>. The DL feature sizes are 4096, 2048, 8 or 10, and 1920 for AlexNet, ResNet-50, DarkNet-19, and DenseNet-201, respectively.</ns0:p></ns0:div>
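The sketch below illustrates, in PyTorch (torchvision ≥ 0.13), how features with the dimensions quoted above can be pulled from pretrained backbones; torchvision provides no DarkNet-19, so it is omitted, and the MATLAB layer names used by the authors (e.g. 'fc7') differ from the attribute names shown here.

```python
# Illustrative sketch (PyTorch) of extracting deep features with the dimensions quoted
# above: 4096-d from AlexNet's second fc layer, 2048-d from ResNet-50's global average
# pooling, 1920-d from DenseNet-201's last pooling. DarkNet-19 is not in torchvision.
import torch
from torchvision import models

def feature_extractors():
    alexnet = models.alexnet(weights="DEFAULT")
    # keep classifier up to (and including) the ReLU after the second fc layer -> 4096-d
    alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])

    resnet = models.resnet50(weights="DEFAULT")
    resnet.fc = torch.nn.Identity()            # output of global average pooling -> 2048-d

    densenet = models.densenet201(weights="DEFAULT")
    densenet.classifier = torch.nn.Identity()  # output of final pooling -> 1920-d
    return {"alexnet": alexnet, "resnet50": resnet, "densenet201": densenet}

x = torch.randn(2, 3, 224, 224)                # a batch of preprocessed endoscopy images
with torch.no_grad():
    for name, net in feature_extractors().items():
        print(name, net.eval()(x).shape)       # (2, 4096), (2, 2048), (2, 1920)
```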
<ns0:div><ns0:head n='2.3.2.2'>Handcrafted Feature Extraction and Reduction Stage (Second Stage of Gastro-CADx)</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.3.2.2.1'>Handcrafted Feature Extraction</ns0:head><ns0:p>In this stage, time-frequency and spatial-frequency features based on textural analysis are computed from the DL features extracted in the previous stage. The textural features include coefficients of the discrete wavelet transform (DWT) and the discrete cosine transform (DCT). Each method is discussed below. We employed DWT and DCT as they are popular textural analysis based feature extraction methods. One of the main benefits of DWT is its capability to adapt spatially to the characteristics of an image, for instance discontinuities and changing frequency behaviour <ns0:ref type='bibr' target='#b16'>(Bennet, Arul Ganaprakasam & Arputharaj, 2014)</ns0:ref>; it offers a time-frequency representation of an image. DCT also has several advantages: it avoids complicated calculations and is simple to implement in practical applications. Furthermore, DCT is capable of effectively handling the phase-removal problem and demonstrates a powerful energy compaction property <ns0:ref type='bibr' target='#b36'>(Imtiaz & Fattah, 2010)</ns0:ref>, <ns0:ref type='bibr' target='#b60'>(Rashidi, Fallah & Towhidkhah, 2012)</ns0:ref>. DWT and DCT are among the most common approaches for extracting textural features in the medical image processing area <ns0:ref type='bibr' target='#b46'>(Lahmiri & Boukadoum, 2013;</ns0:ref><ns0:ref type='bibr' target='#b66'>Srivastava & Purwar, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b52'>(Mishra et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b4'>(Anthimopoulos et al., 2014)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>(Benhassine, Boukaache & Boudjehem, 2020)</ns0:ref>. Textural analysis based methods are useful for extracting texture features from images, which is akin to simulating the human visual learning procedure, and they are widely used in medical image processing <ns0:ref type='bibr'>[30]</ns0:ref><ns0:ref type='bibr'>[31]</ns0:ref><ns0:ref type='bibr'>[32]</ns0:ref><ns0:ref type='bibr'>[33]</ns0:ref>.</ns0:p><ns0:p> Discrete wavelet transform (DWT) is a widely used feature extraction method that can analyse both signals and images <ns0:ref type='bibr' target='#b46'>(Lahmiri & Boukadoum, 2013;</ns0:ref><ns0:ref type='bibr' target='#b66'>Srivastava & Purwar, 2017)</ns0:ref>. It offers a temporal-frequency representation of an image or signal by decomposing it with the help of a group of orthogonal (ortho-normal) basis functions.</ns0:p><ns0:p>For two-dimensional images, a 2-D DWT decomposes the image into four groups of coefficients: the approximation coefficients and three groups of detail coefficients (horizontal, vertical, and diagonal) <ns0:ref type='bibr' target='#b12'>(Attallah, Sharkas & Gadelkarim, 2019)</ns0:ref>. In Gastro-CADx, however, the DL features are one-dimensional vectors, so a 1-D DWT is applied to each DL feature set separately <ns0:ref type='bibr'>(Ragab & Attallah, 2020)</ns0:ref>.</ns0:p><ns0:p>The 1-D DWT produces two groups of coefficients: the approximation coefficients, CA 1 , and the detail coefficients, CD 1 , of the first decomposition level. </ns0:p></ns0:div>
<ns0:div><ns0:head>Discrete Cosine Transform (DCT)</ns0:head><ns0:p>DCT is frequently used to transform images into basic frequency components. It displays data as a sum of cosine functions oscillating at different frequencies <ns0:ref type='bibr' target='#b14'>(Aydoğdu & Ekinci, 2020)</ns0:ref>. Generally, the DCT is applied to the image features to attain the DCT coefficients. The DCT coefficients are separated into three sets, known as low frequencies (DC coefficients), middle frequencies, and high frequencies (AC coefficients). High frequencies characterize noise and small deviations (details), whereas low frequencies are associated with the brightness conditions. Middle-frequency coefficients comprise valuable information and build the basic structure of the image. The dimension of the DCT coefficient matrix is identical to that of the input DL feature <ns0:ref type='bibr' target='#b21'>(Dabbaghchian, Ghaemmaghami & Aghagolzadeh, 2010)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.2.2.2'>Feature Reduction</ns0:head><ns0:p>Feature reduction is an important procedure that is commonly used in the medical field to lower the huge dimension of the feature space. This reduction correspondingly lowers the complexity of the classification procedure <ns0:ref type='bibr'>(Ragab & Attallah, 2020)</ns0:ref> and the training time of the model, and helps avoid overfitting <ns0:ref type='bibr' target='#b9'>(Attallah et al., 2017a)</ns0:ref>, <ns0:ref type='bibr' target='#b10'>(Attallah et al., 2017b)</ns0:ref>. For this reason, DWT and DCT have been employed as feature reduction procedures as well as feature extractors, instead of directly using the large dimension of DL features generated in the previous step. A 1-level DWT is applied to each set of DL features. The generated coefficients are the approximation coefficients CA 1 and the detail coefficients CD 1 of the first decomposition level of DWT. These coefficients have half the dimension of the original DL feature vector that enters the DWT process; in this way, the dimension of the feature space is reduced. The CA and CD coefficients are used separately to train the SVM classifiers of the next step of Gastro-CADx.</ns0:p><ns0:p>The DCT, on its own, does not reduce the data dimension; however, it compacts most of the image information into a small number of coefficients <ns0:ref type='bibr' target='#b21'>(Dabbaghchian, Ghaemmaghami & Aghagolzadeh, 2010)</ns0:ref>. Another reduction stage is therefore usually executed, where some of the coefficients are selected to form the feature vectors. In this paper, 500 DCT coefficients are retained using the zigzag procedure. After this reduction procedure, these coefficients are used separately to train the SVM classifiers of the next step of Gastro-CADx.</ns0:p></ns0:div>
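Both reduction routes can be sketched for a single DL feature vector as follows; the wavelet family is not stated in the paper (Haar/'db1' is assumed here), and keeping the first 500 DCT coefficients of a 1-D vector is used as a stand-in for the 2-D zigzag scan.

```python
# Sketch of the two reduction routes described above, applied to one DL feature vector.
# Assumptions: 'db1' (Haar) wavelet, and the 500 lowest-frequency DCT coefficients of a
# 1-D vector standing in for the zigzag selection.
import numpy as np
import pywt
from scipy.fft import dct

dl_features = np.random.randn(4096)          # e.g. AlexNet 'fc7' features for one image

# Route 1: one-level 1-D DWT -> CA1 and CD1, each half the original length
ca1, cd1 = pywt.dwt(dl_features, "db1")
print(ca1.shape, cd1.shape)                  # (2048,) (2048,)

# Route 2: DCT, then keep the 500 lowest-frequency coefficients
dct_coeffs = dct(dl_features, norm="ortho")
dct_reduced = dct_coeffs[:500]
print(dct_reduced.shape)                     # (500,)
```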
<ns0:div><ns0:head n='2.3.2.3'>Feature Fusion (Third Stage of Gastro-CADx)</ns0:head><ns0:p>The feature vectors generated from the DCT and DWT coefficients are then fused in a concatenated manner to form different combinations of fused feature sets, which are then used to train SVM classifiers in the next step of Gastro-CADx. For DWT, initially, the CA coefficients extracted from the DL features of every two networks are fused. Then, the CA coefficients extracted from the DL features of every three networks are fused. Next, all CA coefficients extracted from the DL features of the four networks are merged. The same procedure is followed for the CD coefficients. For DCT, firstly the coefficients extracted from the DL features of every two networks are fused. Then, the coefficients extracted from the DL features of every three networks are fused. Finally, the DCT coefficients extracted from the DL features of the four CNNs are merged.</ns0:p></ns0:div>
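For illustration, one fused combination (the CA coefficients derived from the AlexNet, ResNet-50, and DenseNet-201 features, which later turns out to be among the best-performing sets) reduces to a simple concatenation; the arrays below are random stand-ins.

```python
# Sketch of one fused combination: concatenating the CA coefficients derived from the
# AlexNet, ResNet-50, and DenseNet-201 features (random stand-ins used for illustration).
import numpy as np

n_images = 100
ca_alexnet = np.random.randn(n_images, 2048)      # half of 4096
ca_resnet = np.random.randn(n_images, 1024)       # half of 2048
ca_densenet = np.random.randn(n_images, 960)      # half of 1920

fused = np.concatenate([ca_alexnet, ca_resnet, ca_densenet], axis=1)
print(fused.shape)                                # (100, 4032) -> fed to an SVM classifier
```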
<ns0:div><ns0:head n='2.3.3'>Classification Step</ns0:head><ns0:p>In this step, the classification procedure is performed in two scenarios: either by end-to-end DL techniques <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref> or by using the features extracted in the three stages of Gastro-CADx. The scenarios correspond to four experiments. The first scenario represents the use of the four CNNs, including AlexNet, ResNet-50, DarkNet-19, and DenseNet-201, as classifiers (end-to-end DL process). Each pre-trained CNN is created and trained separately and then used as a classifier. This first scenario represents experiment I. In the second scenario, the first stage of Gastro-CADx is executed, which corresponds to experiment II: the pre-trained CNNs are applied to the images, and the DL features are extracted from each network individually. These DL features are used to train distinct SVM classifiers. These features represent spatial information only and are of huge dimension. Therefore, in the second stage of Gastro-CADx, which corresponds to experiment III, the DWT and DCT feature extraction methods are applied to the DL features generated by each CNN of the first stage to extract temporal-frequency and spatial-frequency information. These features are utilized to train SVM classifiers individually. Dimensionality reduction is also addressed in the second stage of Gastro-CADx, where reduced sets of coefficients are generated using the DWT and DCT methods. These coefficients form feature vectors that are used separately to learn three SVM classifiers. Finally, in the third stage of Gastro-CADx, the reduced features are fused to form different combinations of fused features. These combinations are used to construct several distinct SVM classifiers. The aim of this stage is to examine the influence of feature fusion on the classification accuracy and to select the combination with the highest impact on the performance of Gastro-CADx. This stage corresponds to experiment IV. Figure <ns0:ref type='figure'>4</ns0:ref> summarizes the four experiments of Gastro-CADx.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref> Note that SVM was chosen because it is known to be a powerful classifier and is considered one of the best known methods for pattern and image classification <ns0:ref type='bibr' target='#b70'>(Thai, Hai & Thuy, 2012)</ns0:ref>. It performs well in large-dimensional, multi-class settings, since it uses a kernel function that maps the feature space into a new domain in which the classes of a dataset can be separated more easily. Therefore, it is commonly used with the huge dimension of DL features extracted from CNNs <ns0:ref type='bibr'>(Ragab et al., 2019)</ns0:ref>, (M.M. et al., 2017), <ns0:ref type='bibr' target='#b77'>(Zhang et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b22'>(Das et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b73'>(Xue et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b48'>(Leng et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b72'>(Wu et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b63'>(Sampaio et al., 2011)</ns0:ref>, achieving outstanding results. 
Also, as shown in Table 1, SVM is the classifier most commonly used in the literature, and the articles that used SVM achieved the highest performance: <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref> achieved an accuracy of 99.13%, <ns0:ref type='bibr' target='#b41'>(Khan et al., 2020a)</ns0:ref> an accuracy of 98.4%, <ns0:ref type='bibr'>(Ghatwary, Ye & Zolgharni, 2019)</ns0:ref> an accuracy of 95%, and <ns0:ref type='bibr' target='#b18'>(Billah, Waheed & Rahman, 2017)</ns0:ref> an accuracy of 98.65%.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Experimental Setup</ns0:head><ns0:p>Several parameters are tuned after fine-tuning the fc layer of the CNNs. The number of epochs and the initial learning rate for the four CNNs are 10 and 10 -4 respectively, as in <ns0:ref type='bibr' target='#b13'>(Attallah, Sharkas & Gadelkarim, 2020)</ns0:ref>. The mini-batch size and validation frequency are 10 and 3. The weight decay and momentum are set to 5×10 -4 and 0.9 respectively. The optimization algorithm used is Stochastic Gradient Descent with Momentum (SGDM). To measure the capacity of Gastro-CADx to classify several GI diseases, 5-fold cross-validation is employed. This means that the GI datasets are divided into 80%-20% splits for training and validation. The SVM classifiers are trained with 4 folds and tested on the remaining fold; thus, the models are trained five times, and the testing accuracy is calculated each time and then averaged. The kernel functions used for the SVM classifiers are linear, quadratic, and cubic.</ns0:p></ns0:div>
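The 5-fold evaluation of the three SVM kernels can be sketched with scikit-learn as follows, assuming that the "quadratic" and "cubic" SVMs correspond to polynomial kernels of degree 2 and 3; the data are random stand-ins and the CNN fine-tuning itself (SGDM, 10 epochs, learning rate 10^-4) is not reproduced here.

```python
# Sketch of the 5-fold cross-validated SVM evaluation described above (random stand-in
# features). Assumption: "quadratic"/"cubic" SVMs are treated as polynomial kernels of
# degree 2 and 3.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(400, 4032)              # fused feature matrix (stand-in)
y = np.random.randint(0, 8, size=400)       # 8 GI classes for Dataset I

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
kernels = {"linear": SVC(kernel="linear"),
           "quadratic": SVC(kernel="poly", degree=2),
           "cubic": SVC(kernel="poly", degree=3)}

for name, svm in kernels.items():
    clf = make_pipeline(StandardScaler(), svm)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"{name} SVM: mean accuracy = {scores.mean():.3f}")
```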
<ns0:div><ns0:head n='4'>Evaluation Performance</ns0:head><ns0:p>The presented Gastro-CADx framework is evaluated with several measures, namely F1-score, precision, accuracy, sensitivity, and specificity. The formulas used to calculate these metrics <ns0:ref type='bibr' target='#b11'>(Attallah, Ragab & Sharkas, 2020)</ns0:ref> are displayed below in equations (1)-(5).</ns0:p><ns0:formula xml:id='formula_0'>Accuracy = (TP + TN) / (TP + TN + FP + FN) (1) Sensitivity = TP / (TP + FN) (2) Specificity = TN / (TN + FP) (3) Precision = TP / (TP + FP) (4) F1-Score = (2 × TP) / ((2 × TP) + FP + FN) (5)</ns0:formula><ns0:p>Here, for each GI class, TP (true positives) is the number of GI images that are correctly classified as the class to which they actually belong; TN (true negatives) is the number of images that do not belong to that class and are correctly not assigned to it; FP (false positives) is the number of images that are classified as that class but do not truly belong to it; and FN (false negatives) is the number of images that belong to that class but are not classified as such.</ns0:p></ns0:div>
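The per-class metrics in equations (1)-(5) can be computed from a multi-class confusion matrix in a one-vs-rest fashion, as sketched below with random stand-in labels.

```python
# Sketch: computing the metrics in equations (1)-(5) per class (one-vs-rest) from a
# multi-class confusion matrix, then macro-averaging. Labels here are random stand-ins.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.random.randint(0, 8, size=500)
y_pred = np.random.randint(0, 8, size=500)

cm = confusion_matrix(y_true, y_pred)
tp = np.diag(cm).astype(float)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - (tp + fp + fn)

accuracy = (tp + tn) / (tp + tn + fp + fn)        # eq. (1), per class
sensitivity = tp / (tp + fn)                      # eq. (2)
specificity = tn / (tn + fp)                      # eq. (3)
precision = tp / (tp + fp)                        # eq. (4)
f1 = 2 * tp / (2 * tp + fp + fn)                  # eq. (5)

print("macro sensitivity:", sensitivity.mean(),
      "macro specificity:", specificity.mean(),
      "macro F1:", f1.mean())
```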
<ns0:div><ns0:head n='5'>Results</ns0:head></ns0:div>
<ns0:div><ns0:p>The results of the four experiments of Gastro-CADx are presented in this section. Experiment I is an end-to-end DL process where the four CNNs are employed to perform classification. In experiment II (first stage of Gastro-CADx), DL features are extracted from the four CNNs and used to train distinct SVM classifiers. Experiment III (second stage of Gastro-CADx) represents the use of the second stage of feature extraction and reduction methods, which employs DCT and DWT to extract temporal-frequency and spatial-frequency information from the images. In this experiment, reduced coefficients generated from the DWT and DCT methods are employed to train SVM classifiers. In experiment IV, different combinations of fused features are generated and utilized to inspect the effect of feature combination on Gastro-CADx performance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Experiment I Results</ns0:head><ns0:p>The results of the end-to-end DL procedure employed for classification are illustrated in tables 3 and 4 for Dataset I and Dataset II respectively. Table <ns0:ref type='table'>3</ns0:ref> shows that the highest accuracy of 91.66% is achieved by ResNet-50, followed by accuracies of 90.08%, 89.83%, and 88.32% attained by DarkNet-19, DenseNet-201, and AlexNet respectively for Dataset I. Table <ns0:ref type='table'>4</ns0:ref> demonstrates that the peak accuracy of 94.75% is achieved by ResNet-50, followed by accuracies of 93.26%, 91.93%, and 91.66% attained by DarkNet-19, DenseNet-201, and AlexNet respectively for Dataset II. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Experiment II Results</ns0:head><ns0:p>This experiment represents the first stage of Gastro-CADx. The results of this experiment are shown in figures 5 and 6 for Dataset I and Dataset II respectively. Figure <ns0:ref type='figure'>5</ns0:ref> indicates that the maximum accuracies of 94.4% and 94.3% are attained by DarkNet-19 features using linear and quadratic SVM classifiers on Dataset I. Subsequently, ResNet-50 features achieve accuracies of 93.5%, 93.4%, and 93.4% using linear, quadratic, and cubic SVM classifiers respectively. Next, AlexNet features obtain accuracies of 92.9%, 93%, and 92.7%, and DenseNet-201 features obtain accuracies of 91%, 91.5%, and 91.7%, using linear, quadratic, and cubic SVM classifiers respectively. Figure <ns0:ref type='figure'>6</ns0:ref> shows that the peak accuracies of 96.9%, 96.8%, and 96.7% are achieved by ResNet-50 features using linear, quadratic, and cubic SVM classifiers constructed with Dataset II. Next, DarkNet features attain accuracies of 96.4%, 96%, and 95.2% using linear, quadratic, and cubic SVM classifiers respectively. Following them, AlexNet features obtain accuracies of 95.5%, 95.7%, and 95.3%, and DenseNet-201 features obtain accuracies of 94.7%, 94.6%, and 94.6%, using linear, quadratic, and cubic SVM classifiers respectively. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Experiment III Results</ns0:head><ns0:p>This experiment represents the second stage of Gastro-CADx. The results of this experiment are shown in figures 7-10 for Dataset I and figures 11-14 for Dataset II. Figure <ns0:ref type='figure'>7</ns0:ref> shows the classification accuracy for the three SVM classifiers constructed with CA and CD coefficients of DWT, besides the 500 DCT coefficients extracted from the ResNet-50 CNN using Dataset I. The figure indicates that the peak accuracy of 93.6% is achieved using the 500 DCT coefficients with linear SVM. Almost the same accuracy of 93.5% is attained using the CA coefficients of DWT.</ns0:p></ns0:div>
<ns0:div><ns0:note type='other'>Figure 7</ns0:note><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> demonstrates the classification accuracy for the three SVM classifiers built with CA and CD coefficients of DWT, in addition to the 500 DCT coefficients extracted from AlexNet CNN using Dataset I. The figure specifies that the highest accuracy of 93.3% is accomplished using the CD coefficients of DWT with a quadratic SVM classifier. A slightly lower accuracy of 92.9% is attained using the CA coefficients of DWT.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> Figure <ns0:ref type='figure'>9</ns0:ref> displays the classification accuracy for the three SVM classifiers constructed with CA and CD coefficients of DWT, as well as the 500 DCT coefficients extracted from DenseNet CNN using Dataset I. The figure identifies that the highest accuracy of 91.1% is accomplished using the CA coefficients of DWT with a cubic SVM classifier. A lower accuracy of 90.6% is reached using the CA coefficients of DWT with a linear SVM classifier.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>9</ns0:ref> Figure <ns0:ref type='figure' target='#fig_5'>10</ns0:ref> shows the classification accuracy for the three SVM classifiers created with the CA and CD coefficients of DWT, besides the DCT coefficients extracted from DarkNet-19 CNN using Dataset I. Note that, since the number of DL features extracted from DarkNet-19 was only 8 (which is already a small dimension of features), all the DCT coefficients are used in this experiment without the need for the zigzag scanning procedure. The figure indicates that the highest accuracy of 94.7% is accomplished using the DCT coefficients with linear SVM.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>10</ns0:ref> Figure <ns0:ref type='figure' target='#fig_5'>11</ns0:ref> shows the classification accuracy for the three SVM classifiers constructed with CA and CD coefficients of DWT, besides the 500 DCT coefficients extracted from the ResNet-50 CNN using Dataset II. The figure indicates that the peak accuracy of 96.9% is achieved using the CA coefficients with linear, cubic, and quadratic SVM. Almost the same accuracy of 96.8% is attained using the CD coefficients of DWT with linear, cubic, and quadratic SVM and the 500 DCT coefficients with linear SVM.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 11</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>12</ns0:ref> reveals the classification accuracy for the three SVM classifiers learned with CA and CD coefficients of DWT, besides the 500 DCT coefficients extracted from AlexNet CNN using Dataset II. The figure specifies that the highest accuracy of 95.6% is accomplished using the CA coefficients of DWT with a quadratic SVM classifier. A slightly lower accuracy of 95.5% is attained using the CD coefficients of DWT with quadratic SVM.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 12</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>13</ns0:ref> indicates the classification accuracy for the three SVM classifiers built with CA and CD coefficients of DWT, besides the 500 DCT coefficients extracted from DenseNet CNN using Dataset II. The figure identifies that the highest accuracy of 94.4% is accomplished using the CA coefficients of DWT with cubic and quadratic SVM classifiers. The same accuracy is reached using the CD coefficients of DWT with a quadratic SVM classifier.</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_4'>14</ns0:ref> demonstrates the classification accuracy for the three SVM classifiers constructed with CA and CD coefficients of DWT, in addition to the DCT coefficients extracted from DarkNet-19 CNN using Dataset II. As the number of DL features mined from DarkNet-19 was only 10 in the case of Dataset II (which is already a small dimension of features), all the DCT coefficients are employed in this experiment without the necessity of the zigzag scanning process. The figure specifies that the peak accuracy of 96.4% is obtained using the DCT coefficients with linear SVM. </ns0:p></ns0:div>
<ns0:div><ns0:head>5.4 Experiment IV Results</ns0:head><ns0:p>This experiment represents the third stage of Gastro-CADx. It aims to explore the effect of combining features on the CADx's performance and, moreover, to search for the best combination of fused feature sets, i.e., the one with the highest influence on the classification accuracy. To form the fused feature sets, firstly for DWT, the CD coefficients extracted from the DL features of every two CNNs are fused. Next, the CD coefficients extracted from the DL features of every three CNNs are merged. Afterward, all CD coefficients extracted from the DL features of the four CNNs are combined. A similar fusion process is executed for the CA coefficients. For DCT, initially, the coefficients extracted from the DL features of every two CNNs are fused. Afterward, the DCT coefficients extracted from the DL features of every three CNNs are fused. Next, the coefficients extracted from the DL features of the four CNNs are merged.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> displays a comparison of the classification accuracy achieved using CA and CD features extracted from different combinations of the DL features generated by the four CNNs employed in Gastro-CADx using Dataset I. This comparison shows that the CA features have slightly higher accuracy than the CD features for all combinations of fused features except for AlexNet+ResNet and AlexNet+DenseNet. The maximum performance (written in bold) is achieved using CA and CD features extracted from the fusion of AlexNet+ResNet+DenseNet CNNs, where the highest accuracy of 97.3% is attained using CA features extracted from AlexNet+ResNet+DenseNet CNNs with both quadratic and cubic SVM. On the other hand, Table <ns0:ref type='table'>6</ns0:ref> presents a comparison of the classification accuracy accomplished using 500 DCT features extracted from different combinations of the DL variables produced by the four CNNs employed in Gastro-CADx using Dataset I. This comparison indicates that the maximum performance (written in bold) is achieved using DCT features extracted from AlexNet+ResNet+DenseNet+DarkNet CNNs, where the highest accuracy of 97.3% is attained with both quadratic and cubic SVM.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> Table <ns0:ref type='table'>6</ns0:ref> Table <ns0:ref type='table'>7</ns0:ref> demonstrates a comparison of the classification accuracy accomplished using CA and CD features extracted from different combinations of the DL variables produced by the four CNNs using Dataset II. This comparison indicates that the CA features have slightly higher accuracy than the CD features for all combinations of fused features except for AlexNet+ResNet and AlexNet+DenseNet. The peak performance (written in bold) is achieved using CA and CD features extracted from AlexNet+ResNet+DenseNet CNNs, where the maximum accuracy of 99.7% is reached using CA features extracted from AlexNet+ResNet+DenseNet CNNs with cubic SVM. In contrast, Table <ns0:ref type='table'>8</ns0:ref> shows a comparison of the classification accuracy accomplished using 500 DCT features extracted from different combinations of the DL variables generated by the four CNNs employed in Gastro-CADx using Dataset II. 
This comparison specifies that the maximum accuracy (written in bold) is achieved using 500 DCT features extracted from the AlexNet+ResNet+DenseNet CNNs and the AlexNet+ResNet+DenseNet+DarkNet CNNs, where the highest accuracy of 97.3% is attained using linear, quadratic, and cubic SVM classifiers. Table <ns0:ref type='table'>9</ns0:ref> shows the performance metrics for the cubic SVM classifiers trained with the fused CA features extracted from the AlexNet+ResNet+DenseNet CNNs using Dataset I and Dataset II. The results of Table 9 indicate that a specificity of 0.9959 and 0.9996, a sensitivity of 0.9715 and 0.9965, a precision of 0.9718 and 0.9961, and an F1 score of 0.9715 and 0.9963 are obtained for Dataset I and Dataset II respectively. Table <ns0:ref type='table'>7</ns0:ref> Table <ns0:ref type='table'>8</ns0:ref> Table <ns0:ref type='table'>9</ns0:ref></ns0:p></ns0:div><ns0:div><ns0:head n='6'>Discussion</ns0:head><ns0:p>The manual diagnosis of GI diseases from a huge number of endoscopic images is very challenging and time-consuming. Besides, at times the image containing the abnormality can simply be overlooked by the medical expert, which can lead to misdiagnosis. Therefore, there is an essential need for automatic systems that can identify possible anomalies by analyzing the entire set of endoscopic images <ns0:ref type='bibr' target='#b6'>(Aoki et al., 2019)</ns0:ref>. Nowadays, with the current development of DL and image processing technologies, CADx systems have frequently been used to help gastroenterologists automatically examine endoscopic images and recognize GI diseases <ns0:ref type='bibr' target='#b42'>(Khan et al., 2020b)</ns0:ref>. In this study, an automatic CADx system called Gastro-CADx is proposed. The proposed CADx involves three steps: the image preprocessing step, followed by the feature extraction, reduction, and fusion step, and finally the classification step. First, the endoscopic images were augmented. Next is the feature extraction, reduction, and fusion step, which comprises the three stages of Gastro-CADx. In the first stage of Gastro-CADx, valuable spatial DL features were extracted from the four CNNs and used to train SVM classifiers. Next, in the second stage of Gastro-CADx, the DCT and DWT feature extraction methods were employed to extract temporal-frequency and spatial-frequency features. These methods were used for feature reduction as well, and the extracted features were utilized to construct SVM classifiers. Finally, in the third stage of Gastro-CADx, the coefficients of the DCT and DWT were fused to form different combinations of fused feature sets. This stage examined the influence of fusing features on the performance of the CADx. Besides, the third stage of Gastro-CADx searched for the mixture of features with the greatest influence on Gastro-CADx's performance. Two datasets, namely Dataset I and Dataset II, were used to evaluate the performance of the proposed Gastro-CADx.</ns0:p><ns0:p>The first stage of Gastro-CADx is compared with the end-to-end DL CNNs of experiment I, and the results are shown in Table 10 for Datasets I and II. It can be observed from Table <ns0:ref type='table'>10</ns0:ref> that the first stage of Gastro-CADx has higher accuracies compared to the end-to-end CNNs constructed in experiment I for both datasets. The highest accuracy achieved in the first stage of Gastro-CADx is 94.4% using linear SVM trained with DarkNet-19 features for Dataset I (written in bold). 
Whereas, for Dataset II, the peak accuracy attained in the first stage of Gastro-CADx is 96.9% using linear SVM trained with ResNet-50 features.</ns0:p><ns0:p>Table <ns0:ref type='table'>10</ns0:ref> It was found that most of the previous studies directly used spatial DL features to perform the classification; in this paper, however, we extracted temporal-frequency DL features using DWT and spatial-frequency DL features using DCT to examine their influence on the classification performance of Gastro-CADx (stage two of Gastro-CADx). DWT and DCT were also used to reduce the huge dimension of the spatial DL features. Figure <ns0:ref type='figure' target='#fig_5'>15</ns0:ref> shows that, for Dataset I, the second stage enhanced the classification performance with a reduced feature set, while for Dataset II it attained the same accuracy with a lower feature dimension. In other words, the second stage of Gastro-CADx reduced the features extracted in the first stage while maintaining almost the same accuracy, with a smaller feature space, for Dataset I and Dataset II. The highest accuracy of the second stage of Gastro-CADx for Dataset I, 94.7%, was obtained using a linear SVM classifier trained with the DCT coefficients extracted from the deep learning features of the DarkNet-19 CNN. For Dataset II, the peak accuracy of 96.9% is achieved using a linear SVM classifier trained with the CA coefficients extracted from the deep learning features of the ResNet-50 CNN.</ns0:p><ns0:p>On the other hand, the third stage of Gastro-CADx further enhanced the classification accuracy of Gastro-CADx, as shown in figure <ns0:ref type='figure' target='#fig_5'>15</ns0:ref> for Dataset I and Dataset II. Figure <ns0:ref type='figure' target='#fig_5'>15</ns0:ref> shows the highest classification accuracy achieved by each stage of Gastro-CADx for Datasets I and II respectively. It can be noticed from the third stage of Gastro-CADx (experiment IV) that the fusion of the DCT and DWT features of the DarkNet and DenseNet CNNs yielded the worst accuracy of around 47-49% for both Dataset I and Dataset II, whereas the highest accuracies of 97.3% and 99.7% are achieved using a cubic SVM classifier trained with the fused CA coefficients extracted from the deep learning features of AlexNet+ResNet+DenseNet for Dataset I and Dataset II respectively. In order to make a fair comparison of computational time with other related studies, the same platform and environment (the same processor, video controller, and other specifications that affect computation time) would have to be used. Since this is very hard to accomplish, as an alternative we compared the computational cost of the proposed Gastro-CADx with that of the ResNet CNN (an end-to-end deep learning technique), which is widely used in the literature and achieved the highest accuracy on both Dataset I and Dataset II, as shown in Table 10. This comparison is shown in Table 11, which compares both the classification accuracy and the training time of the end-to-end ResNet CNN with those of Gastro-CADx. Table <ns0:ref type='table'>11</ns0:ref> proves that Gastro-CADx has a much lower computation time than ResNet (end-to-end classification) while attaining higher accuracy for both datasets: the computation time for ResNet is 80,580 sec and 100,800 sec for Datasets I and II respectively, which is much higher than the 210 sec and 780 sec achieved by Gastro-CADx. 
Also, the accuracy for ResNet is 90.08% and 94.75% for Datasets I and II respectively, which is much lower than the 97.3% and 99.7% obtained by Gastro-CADx. We also searched the related studies to see whether their authors reported the computational time of the proposed methods, but unfortunately this information was not provided. Figure <ns0:ref type='figure' target='#fig_5'>15</ns0:ref> Table <ns0:ref type='table'>11</ns0:ref></ns0:p></ns0:div>
<ns0:div><ns0:p>All experiments were done with MATLAB 2020a. The processor used is an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz, with 16 GB RAM and a 64-bit operating system. The video controller is an NVIDIA GeForce GTX 1050.</ns0:p><ns0:p>A comparison is made between the performance of Gastro-CADx and the latest relevant work that used Dataset I. The results of this assessment are displayed in Table 12. The results in Table <ns0:ref type='table' target='#tab_0'>12</ns0:ref> prove the competence of Gastro-CADx compared to previous related studies: the Gastro-CADx proposed in this paper performs well on all of the metrics provided in Table <ns0:ref type='table' target='#tab_0'>12</ns0:ref>. Gastro-CADx outperformed the systems presented by <ns0:ref type='bibr' target='#b1'>(Ahmad et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b56'>(Pogorelov et al., 2017)</ns0:ref> (first method), <ns0:ref type='bibr' target='#b54'>(Owais et al., 2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b54'>(Owais et al., 2019)</ns0:ref>, as they used only spatial information extracted from one or two CNNs. The proposed system also outperformed <ns0:ref type='bibr' target='#b56'>(Pogorelov et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>(Nadeem et al., 2018)</ns0:ref>, as they used only handcrafted global features and did not benefit from the spatial information of features extracted with DL techniques. Although <ns0:ref type='bibr' target='#b0'>(Agrawal et al., 2017)</ns0:ref> combined DL features with handcrafted global features, their performance is lower than that of Gastro-CADx. This is because Gastro-CADx considered the fusion of two types of textural features while reducing the feature space.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>12</ns0:ref> Dataset II is a new dataset for GI disease that was only released in 2020; therefore, there are still no research articles to compare with. For this reason, we only compared with the ResNet-50 CNN used in <ns0:ref type='bibr' target='#b19'>(Borgli et al., 2020)</ns0:ref> as well as the other three CNNs employed in experiment I of Gastro-CADx, as illustrated in Table <ns0:ref type='table'>13</ns0:ref>. The results of Gastro-CADx shown in Table 13 verify its competence. It outperformed the classification accuracy achieved by the ResNet-50 used in <ns0:ref type='bibr' target='#b19'>(Borgli et al., 2020)</ns0:ref>. Gastro-CADx also has better performance than the classification accuracies achieved by the AlexNet, DarkNet-19, and DenseNet-201 CNNs. This is because Gastro-CADx extracts not only spatial features but also temporal-frequency and spatial-frequency features. It also uses DCT and DWT not only as feature extractors but also as feature reduction methods. Moreover, it fuses these reduced feature sets to enhance the performance of the CADx.</ns0:p><ns0:p>Table <ns0:ref type='table'>13</ns0:ref> The three stages of Gastro-CADx based on deep CNNs, DCT, and DWT showed the best performance, with the highest accuracies of 97.3% and 99.7% for Dataset I and Dataset II respectively. It is stated in (Attallah, 2020) and <ns0:ref type='bibr' target='#b20'>(Colquhoun, 2014)</ns0:ref> that the reliability of a medical system requires that the sensitivity be greater than or equal to 80%, the specificity greater than or equal to 95%, and the precision greater than or equal to 95%. 
The specificities, sensitivities, and precisions shown in Table 9 are all larger than 95%; therefore, Gastro-CADx can be considered a reliable system. This remarkable reliability and performance of Gastro-CADx increases its usefulness for the diagnosis of several GI diseases by automatically detecting several types of GI lesions or anomalies. Our AI-based Gastro-CADx framework can help medical experts in the effective diagnosis of several complex GI diseases. Furthermore, it may assist gastroenterologists in reaching a more accurate diagnosis while reducing examination time. The proposed system can be used to decrease medical complications and death rates, in addition to the cost of treatment.</ns0:p></ns0:div>
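As a small illustrative check, the reliability thresholds quoted above can be verified directly against the Table 9 values reported for the fused CA features:

```python
# Small illustrative check of the reliability criterion stated above, using the
# Dataset I / Dataset II metrics reported for the fused CA features (Table 9).
metrics = {
    "Dataset I": {"sensitivity": 0.9715, "specificity": 0.9959, "precision": 0.9718},
    "Dataset II": {"sensitivity": 0.9965, "specificity": 0.9996, "precision": 0.9961},
}
thresholds = {"sensitivity": 0.80, "specificity": 0.95, "precision": 0.95}

for dataset, values in metrics.items():
    reliable = all(values[m] >= t for m, t in thresholds.items())
    print(dataset, "meets the reliability criterion:", reliable)   # True for both
```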
<ns0:div><ns0:head n='7'>Conclusion</ns0:head></ns0:div>
<ns0:div><ns0:p>This paper introduced a CADx system called Gastro-CADx for the automatic classification of GI diseases based on DL techniques. Gastro-CADx consists of three stages. The first stage is based on DL feature extraction techniques to extract spatial information from endoscopic images. The second stage extracts temporal-frequency and spatial-frequency features; the feature reduction procedure is also considered in this stage. The third stage is a feature fusion based process where several feature sets extracted in the second stage are fused to form numerous combinations of fused features. The results of the three stages of Gastro-CADx verified that the proposed system is capable of accurately classifying GI diseases. The first stage of Gastro-CADx achieved higher accuracy than end-to-end DL CNNs. Moreover, the results of the second stage of Gastro-CADx indicated that using the temporal-frequency and spatial-frequency features gives better performance compared to using only spatial features. Besides, the second stage of Gastro-CADx achieved performance competitive with the first stage with a lower dimension of features. Also, the third stage further improved the performance of Gastro-CADx, which indicated that feature fusion had a significant impact on the classification accuracy. The performance of Gastro-CADx is competitive with recent related work based on the same dataset. This means the proposed method can be used efficiently for the diagnosis and classification of GI diseases. Consequently, the cost of medical investigations, medical complications, and death rates will be reduced. Moreover, the quality and accuracy of diagnosis will be enhanced. Future work will focus on combining multiple datasets to form a multicenter study, as well as exploring more CNNs and more handcrafted feature extraction methods.</ns0:p><ns0:note type='other'>Figure 2</ns0:note><ns0:p>Image samples of Hyperkvasir dataset; </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 5 Figure 6</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5Figure6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 14</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure14</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>A)bowel quality,(B) Normal Cecum, (C)Dyed-Lifted-Polyp, (D) Dyed-Resection-Margin, (E) Esophagitis, (F) Polyps, (G) Polyrous, (H) Retroflex stomach, (I) Ulcerative-Colitis, and (J)Normal-z-line</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,316.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,199.12,525.00,379.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,199.12,525.00,379.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,199.12,525.00,378.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,199.12,525.00,379.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,199.12,525.00,379.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,199.12,525.00,350.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,199.12,525.00,383.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,199.12,525.00,330.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,199.12,525.00,374.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,199.12,525.00,379.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 Table 4</ns0:head><ns0:label>34</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Summary of related CADx studies for GI disease detection and classification, listing each study's purpose, number of classes, method, reported accuracy and limitations, together with the accuracy of the evaluated CNN feature combinations. Abbreviations: 1 Global Features, 2 logistic model tree, 3 Local Binary Pattern, 4 Logistic Regression, 5 Multilayer perceptron.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Arab Academy for Science, Technology, and Maritime Transport (AAST)
College of Engineering and Technology
Alexandria, 1029, Egypt.
January 12th, 2021
Dear Editors
We would like to thank the editor and the reviewers for their generous comments on the manuscript that have helped us in improving the quality of this manuscript. In addition, we have edited the manuscript to address their concerns.
We hope that the manuscript is now suitable for publication in PeerJ Computer Science.
Yours sincerely;
Omneya Attallah
Reviewer 1
Basic reporting
Comment 1: In general, this paper is well written, although part of it should be improved to ensure that readers can clearly understand the text. Some examples where the language could be improved are listed as following.
Line 74, How to get the results after 'Therefore'? The statements before 'Therefore' are not reasons for 'the early diagnosis of GI is essential …'.
Line 402, What does '… as as …' mean?
Lines 423-425, FP and FN are all the images' sum that is not well identified for each class of GI diseases. What is the difference between FP and FN?
Line 655, 'stage two' should be written as 'the second stage'.
Response: Thank you for the valuable corrections. We updated the manuscript by addressing all these comments.
Line 74, How to get the results after 'Therefore'? The statements before 'Therefore' are not reasons for 'the early diagnosis of GI is essential …'.
Response: we removed “therefore” from the first paragraph of the introduction section.
Line 402, What does '… as as …' mean?
Response: It is a typo so we removed the extra “as” from the sentence in the experimental setup section
Lines 423-425, FP and FN are all the images' sum that is not well identified for each class of GI diseases. What is the difference between FP and FN?
Response: We rewrote this paragraph to make clear of the difference. We also added it to the manuscript in the evaluation metrics section.
Where, TP is the total number of GI images that are correctly classified into the GI class to which they actually belong, and TN is the number of GI images that do not belong to the GI class under consideration and are correctly classified as not belonging to it. For each GI class, FP is the number of images that are classified as that class although they do not truly belong to it, while FN is the number of images of that class that are not classified as belonging to it.
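To make the counting concrete, here is a minimal sketch of the per-class TP/FP/FN/TN computation from a multi-class confusion matrix; the matrix values and class count are invented for illustration and are not the paper's results.

```python
# Per-class TP, FP, FN, TN from a multi-class confusion matrix (illustrative values only).
import numpy as np

cm = np.array([[50, 2, 1],
               [3, 45, 4],
               [0, 5, 48]])              # rows = actual GI class, columns = predicted GI class

for k in range(cm.shape[0]):
    tp = cm[k, k]                        # images of class k predicted as class k
    fp = cm[:, k].sum() - tp             # images of other classes predicted as class k
    fn = cm[k, :].sum() - tp             # images of class k predicted as another class
    tn = cm.sum() - tp - fp - fn         # everything else
    acc = (tp + tn) / cm.sum()
    print(f"class {k}: TP={tp} FP={fp} FN={fn} TN={tn} accuracy={acc:.3f}")
```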
Line 655, 'stage two' should be written as 'the second stage'.
Response: thank you, we changed it in the conclusion section to “the second stage”
Comment 2:Literature references and sufficient field background have been provided. Moreover, this article is well structured. Figures are relevant to the content of the article. However, the resolution of Figures 5-14 is not sufficient. The raw data are available.
Response: thanks for your valuable comment. We have updated the resolution of the figures 5-15.
Experimental design
Comment 3: The research question is well defined, relevant, and meaningful. Moreover, the rigorous investigation has been performed to a high technical & ethical standard. Methods have been described with sufficient information.
However, the simulation environment is not mentioned. Furthermore, there are some inconsistent statements for the methods, and some information is not clear. Some examples are listed below.
Response: All experiments were done with MATLAB 2020a. The processor used is an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz with 16 GB RAM and a 64-bit operating system. The video controller is an NVIDIA GeForce GTX 1050. This paragraph was added to the discussion, page 18 in the tracked manuscript version.
Line 287, in the first stage, four CNN are used as feature extractors.
Lines 312-320, in the first stage, valuable DL features are mined from the CNNs.
Lines 573-575, in the first stage, four DL features were extracted from images and used to train SVM classifiers.
The problem is which statement describes the real first stage.
Response: these sentences are changed to statement 2 and updated in the manuscript in the proposed method and discussion sections.
Response: Thank you for the comment. We updated the manuscript and we rewrote the explanation of the system in past tense.
Comment 5: Line 336, What are CD1 and CA1?
Response: CA1 and CD1 are, respectively, the approximation coefficients and detail coefficients of the first decomposition level of the discrete wavelet transform (DWT). This has been added in the feature reduction section.
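As an illustration of this decomposition, the following hedged sketch splits a 1-D feature vector into CA1 and CD1 with PyWavelets; the 'haar' wavelet and the random input vector are assumptions for illustration, not the authors' exact settings.

```python
# Single-level 1-D DWT: approximation (CA1) and detail (CD1) coefficients.
import numpy as np
import pywt

dl_features = np.random.rand(4096)           # stand-in for a deep feature vector of one image
ca1, cd1 = pywt.dwt(dl_features, 'haar')     # first decomposition level
print(ca1.shape, cd1.shape)                  # each holds roughly half of the coefficients
```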
Validity of the findings
Comment 6: Although the results and compelling, the data analysis does not include the experimental settings' meaning corresponding to the best results.
Response: Thank you for the comment.
For the first stage of Gastro-CADx, the highest accuracy achieved is 94.4% using a linear SVM trained with DarkNet-19 features for Dataset I (highlighted in red), whereas for Dataset II the peak accuracy attained is 96.9%, also using a linear SVM trained with DarkNet-19 features.
For the second stage of Gastro-CADx, for Dataset I, the highest accuracy of 94.7% was obtained using a linear SVM classifier trained with the DCT coefficients extracted from the deep learning features of the DarkNet-19 CNN, whereas for Dataset II the peak accuracy of 96.9% was achieved using a linear SVM classifier trained with the CA coefficients extracted from the deep learning features of the ResNet-50 CNN.
For the third stage of Gastro-CADx, the highest accuracies of 97.3% and 99.7% were achieved using a cubic SVM classifier trained with the fused CA coefficients extracted from the deep learning features of AlexNet+ResNet+DenseNet for Dataset I and Dataset II, respectively.
These paragraphs are highlighted in red in the discussion section just before figure 15.
Comment 7:The conclusions are well stated and linked to the research question.
Response: thanks so much.
Reviewer 2
Comment 1: Just curious to know why the Author has selected only those four pre-trained CNN for comparison and feature extraction? The significant reason or justification is missing. Did author have experimented with other CNNs available?
Response: We would first like to thank you for this useful question. We made a detailed review of the literature and summarized it in Table 1. As can be noticed from Table 1, most related studies used AlexNet, ResNet and VGG CNNs. We did not use VGG as it has a very high computational cost and number of parameters; in addition, the features extracted from this network are of a very large size [1], [2], [3]. Although AlexNet is one of the oldest architectures, it is still being used due to its acceptable performance, since it has efficient computation ability and performs well with color images like those used in this paper [4]. We employed more recent CNNs such as the DarkNet and DenseNet architectures. To our knowledge, DarkNet was not used in the literature, whereas only a few articles used DenseNet for classifying GI diseases, and these have several drawbacks in their proposed methods. Therefore, we used these two new CNN architectures to test their performance and ability to classify multiple GI diseases from endoscopic images. This paragraph was added to the deep convolutional neural network architecture section, page 7 in the tracked version.
Comment 2: Why author has selected particular methods (DWT and DCT) for feature extraction and reduction? The significant reason is missing. Does this method is superior to other available methods proposed by various researchers? Why not PCA or some other well-known feature reduction methods?
Response: Thanks for your comment. We found that most of the previous studies directly used spatial DL features to perform the classification; however, in this paper we tried extracting spatial-temporal-frequency DL features using DWT and spatial-frequency DL features using DCT to examine their influence on the classification performance of Gastro-CADx (stage two of Gastro-CADx). DWT and DCT were also performed to reduce the huge dimension of the DL spatial features. We proved in the discussion section (Figure 15) that, for Dataset I, stage two enhanced the classification performance with a reduced feature set, while for Dataset II it attained the same accuracy but with a lower feature dimension. We employed DWT and DCT as they are popular feature extraction methods based on textural analysis. One of the main benefits of DCT is its capability to adapt spatially to characteristics of an image, for instance discontinuities and changing frequency behavior [5]. It offers a time-frequency representation of an image. Also, DCT has several advantages: first of all, it avoids complicated calculation and is simple to implement in practical applications; furthermore, it is capable of effectively handling the phase removal problem and demonstrates a powerful energy compaction property [6], [7]. DWT and DCT are the most common approaches for extracting textural features in the medical image processing area [8], [9], [10], [11], [12]. Textural-analysis-based methods are useful in extracting texture features from images, which is equivalent to simulating the human visual learning procedure; they are widely used in medical image processing [30–33]. We added part of this paragraph to the discussion section, page 17, lines 729-733, and the other part to the Handcrafted Feature Extraction and Reduction stage (Second Stage of Gastro-CADx) section on page 9 in the tracked version. Examples of research papers using DWT and DCT for GI diseases are [13], [14], [15], [16].
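A minimal sketch of the DCT-based reduction idea follows; the random stand-in feature vector and the number of retained coefficients are assumptions, not the authors' exact pipeline.

```python
# Compress a deep feature vector with the DCT and keep the leading coefficients
# as a reduced spatial-frequency feature set (illustrative sizes only).
import numpy as np
from scipy.fftpack import dct

features = np.random.rand(4096)          # assumed deep feature vector for one image
coeffs = dct(features, norm='ortho')     # spatial-frequency representation
reduced = coeffs[:512]                   # keep the leading (most energetic) coefficients
print(reduced.shape)
```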
Comment 3: Why author has selected particularly SVM classifier, when many advanced multiclass classifiers available? The significant reason is missing. Does this classifier is superior to other various available classifiers? This information on the comparison with other classifiers is missing here.
Response: Thanks so much for your valuable question. SVM is known to be a powerful classifier and is considered one of the best-known methods in pattern classification and image classification [17]. It performs well with large-dimension, multi-class feature spaces, as it uses a kernel function that maps the feature space into a new domain in which the classes of a dataset can easily be separated. Therefore, it is commonly used with the huge dimension of DL features extracted from CNNs [18], [19], [20], [21], [22], [23], [24], [25], achieving outperforming results. Also, as can be seen in Table 1, SVM is commonly used in the literature, and the articles that used SVM achieved the highest performance, such as [26] with an accuracy of 99.13%, [27] with 98.4%, [28] with 95%, and [29] with 98.65%. This paragraph was added to the classification section, page 12 in the tracked version.
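The sketch below shows how pre-extracted features can be fed to linear and cubic SVMs with scikit-learn; the synthetic data, class count and cross-validation setup are illustrative assumptions only.

```python
# Train linear and cubic SVMs on pre-extracted (here synthetic) deep features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 512)                 # assumed reduced feature vectors
y = np.random.randint(0, 8, size=200)        # assumed GI class labels

for name, clf in [("linear SVM", SVC(kernel="linear")),
                  ("cubic SVM", SVC(kernel="poly", degree=3))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, scores.mean())
```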
Comment 4: What is the main novelty of this work other than improved accuracy compared to the many different available CAD systems? Does this method aid in diagnosis of more GI diseases than the proposed by others (14 types of disease by Igarashi et al., 2020)? Is this proposed system superior in terms of computational time? The main or significant novelty is missing for me. I am just curious to know this detail.
Response: Thanks for your valuable question. Yes, one of our contributions is to construct a CADx capable of accurately diagnosing more GI diseases than those proposed by others. The reference you mention (Igarashi et al., 2020) covers 14 types of GI diseases; however, the dataset used in that method is not available. It is also unbalanced, and therefore it is difficult to achieve reliable results from this kind of data, which is not normalized.
We just want to explain our novelty which tries to overcome the limitations found in related studies discussed in table 1:
• First of all, the majority of the current methods studied the detection and classification of a few types of GI anomalies, disease, or anatomical landmark. But, our proposed Gastro-CADx is an automatic highly accurate system to classify several GI diseases and anatomical landmarks.
• Some of the related studies are based on small dataset or used only one dataset to test the efficiency of their classification model, while Gastro-CADx is validated using two large datasets of several GI diseases.
• The few articles that classified several GI diseases achieved low accuracy, were not reliable, or used only one type of CNN, whereas Gastro-CADx is an accurate and reliable system that uses four CNNs. This appears in the first stage, where Gastro-CADx studies several CNN-based methods for feature extraction from the spatial domain instead of using one or two networks, to benefit from the advantages of several types of CNNs.
• The previous studies were either based only on end-to-end deep learning, which has a very high computational cost, used only spatial features extracted from CNNs, or used only handcrafted feature extraction, whereas Gastro-CADx is not based only on spatial features but also on temporal-frequency and spatial-frequency features obtained using handcrafted feature extraction methods, not only end-to-end DL. This appears clearly in the second stage of Gastro-CADx: it extracts handcrafted features based on textural analysis from the temporal-frequency and spatial-frequency domains using the DL features extracted in the first stage. This reduces the high computational cost of end-to-end DL techniques.
• Previous related studies indicated that CNN representations have improved the performance and the abstract level for the automatic detection and classification of GI diseases [13], [26], [30]. Nevertheless, the fusion of CNN features with handcrafted variables could enhance diagnostic accuracy [13],[31]. Therefore, in the third stage, a fusion process is introduced which combines the second stage features to benefit from the spatial, temporal- frequency, and spatial-frequency features. This stage can confirm the capacity of every feature abstraction method to mine significant information that might be disregarded from the other method. It can also reduce the computational cost compared to end-to-end DL methods.
This answer was added to the introduction pages 5 and 6 of the tracked version.
Comment 5:The output from DL stage – DL features are of what dimensions? Because the author is mentioning of applying 1-dimensional DWT to reduce the dimensions of DL 2D images? Little confused here. Please clarify these
Response: Thanks for your comment. The dimensions of the DL features are given in Table 2. This was a mistake: the DL features extracted from the four CNNs are 1-D, not 2-D. In other words, a DL feature vector is extracted for each image. The size of this feature vector is 4096 features from AlexNet and 2048 from ResNet; DarkNet provides 8 or 10 features according to the number of classes, while DenseNet provides 1920 features. Therefore, the DWT is applied to reduce the dimension of a one-dimensional DL feature vector. We have corrected this in the DWT section on page 10 of the tracked version of the manuscript, as well as in Figure 4.
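A small sketch of the fusion idea implied here, concatenating per-image vectors whose sizes follow the dimensions quoted above (the vectors themselves are random placeholders):

```python
# Concatenate per-image feature vectors from several CNNs before reduction/classification.
import numpy as np

alexnet_f  = np.random.rand(4096)   # AlexNet fully connected features (placeholder values)
resnet_f   = np.random.rand(2048)   # ResNet-50 pooled features
densenet_f = np.random.rand(1920)   # DenseNet-201 pooled features

fused = np.concatenate([alexnet_f, resnet_f, densenet_f])
print(fused.shape)                   # (8064,) before any DWT/DCT reduction
```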
Comment 6: Author has performed lot of comparison with in the four pre-trained CNNs and the feature extraction+ reduction and classification parts. However, information or valid reason is missing – why the author has selected these methods which are available and already experimented by other researchers.
Response: We would first like to thank you for this useful question. As mentioned in Comment 1, we made a detailed review of the literature and summarized it in Table 1. As can be noticed from Table 1, most related studies commonly used AlexNet, ResNet and VGG CNNs. We did not use VGG as it has a very high computational cost and number of parameters; in addition, the features extracted from this network are of a very large size [1], [2], [3]. Although AlexNet is one of the oldest architectures, it is still being used due to its acceptable performance, since it has efficient computation ability and performs well with color images like those used in this paper [4]. We employed more recent CNNs such as the DarkNet and DenseNet architectures, which were not used in the literature, to test their performance and ability to classify multiple GI diseases from endoscopic images.
For the feature extraction, as mentioned in Comment 2, we found that most of the previous studies directly used spatial DL features to perform the classification; however, in this paper we tried extracting spatial-temporal-frequency DL features using DWT and spatial-frequency DL features using DCT to examine their influence on the classification performance of Gastro-CADx (stage two of Gastro-CADx). DCT was also performed to reduce the huge dimension of the DL spatial features. We proved in the discussion section (Figure 15) that, for Dataset I, stage two enhanced the classification performance with a reduced feature set, while for Dataset II it attained the same accuracy but with a lower feature dimension. We employed DWT and DCT as they are popular feature extraction methods based on textural analysis. One of the main benefits of DCT is its capability to adapt spatially to characteristics of an image, for instance discontinuities and changing frequency behavior [5]. It offers a time-frequency representation of an image. Also, DCT has several advantages: first of all, it avoids complicated calculation and is simple to implement in practical applications; furthermore, it is capable of effectively handling the phase removal problem and demonstrates a powerful energy compaction property [6], [7]. DWT and DCT are the most common approaches for extracting textural features in the medical image processing area [8], [9], [10], [11], [12]. Textural-analysis-based methods are useful in extracting texture features from images, which is equivalent to simulating the human visual learning procedure; they are widely used in medical image processing [30–33].
For classification, as mentioned in Comment 3, we employed SVM as it is known to be a powerful classifier and is considered one of the best-known methods in pattern classification and image classification [17]. It performs well with large-dimension, multi-class feature spaces, as it uses a kernel function that maps the feature space into a new domain in which the classes of a dataset can easily be separated. Therefore, it is commonly used with the huge dimension of DL features extracted from CNNs [18], [19], [20], [21], [22], [23], [24], [25], achieving outperforming results. Also, as can be seen in Table 1, SVM is commonly used in the literature, and the articles that used SVM achieved the highest performance, such as [26] with an accuracy of 99.13%, [27] with 98.4%, [28] with 95%, and [29] with 98.65%.
These paragraphs are added to the manuscript as stated in previous comments.
Comment 7: If author provides the computational time (to handle this huge data or images) in the discussion table which compares your proposed system with other’s similar work, that would be one of the justification for the significant novelty?, if the computational time is reasonably arguable.
Response: Thanks for your valuable suggestion. We added Table 11 to the discussion to provide the computational time of our proposed method. However, we want to clarify that, in order to make a fair comparison of computational time with other related studies, both should use the same platform and environment (the same processor, video controller and other specifications), which otherwise can vary the computational time. Since this is very hard to accomplish, as an alternative we compared the computational cost of the proposed Gastro-CADx with the ResNet CNN (an end-to-end deep learning technique), which is widely used in the literature and achieved the highest accuracy on both Dataset I and Dataset II, as shown in Table 10. This comparison is shown in Table 11, which compares both the classification accuracy and the training time of the ResNet CNN using the end-to-end procedure with Gastro-CADx. Table 11 proves that Gastro-CADx has a much lower computation time than ResNet (end-to-end classification) while attaining higher accuracy for both datasets. This is because the computation time for ResNet is 80,580 sec and 100,800 sec for Dataset I and II, respectively, which is much higher than the 210 sec and 780 sec achieved by Gastro-CADx. Also, the accuracy of ResNet is 90.08% and 94.75% for Dataset I and II, respectively, which is much lower than the 97.3% and 99.7% obtained by Gastro-CADx. Note that we also searched the related studies to see whether the authors mentioned the computational time of their proposed methods, but unfortunately this information was missing. Note that we have added the specifications of the processor and the tool used to perform the experiments in the discussion section, page 18. We also added this paragraph to the discussion, page 18.
Comment 8. The entire manuscript needs to be revised with valid reasons and the significant contribution from the author. I personally feel the proposed manuscript is suitable for a review of currently available methods to develop CADx with maybe improved accuracy level. I do not see this manuscript as new research methodology article.
Response: Thanks for your valuable suggestion. We have revised the manuscript and provided valid reasons for every method we used. We also added a paragraph describing our contributions.
Comment 9. In view of these proposed reasons, I propose to revise the entire paper with above comments.
Response: Thanks so much for providing us the opportunity to revise the manuscript, make the revisions, and have them reviewed after addressing your precious comments.
References:
[1] Z. Bi, L. Yu, H. Gao, P. Zhou, and H. Yao, “Improved VGG model-based efficient traffic sign recognition for safe driving in 5G scenarios,” International Journal of Machine Learning and Cybernetics, pp. 1–12, 2020.
[2] M. G. Ertosun and D. L. Rubin, “Probabilistic visual search for masses within mammography images using deep learning,” in 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2015, pp. 1310–1315.
[3] D. Su, Y. Li, Y. Zhao, R. Xu, B. Yuan, and W. Wu, “A Face Recognition Algorithm Based on Dual-Channel Images and VGG-cut Model,” in Journal of Physics: Conference Series, 2020, vol. 1693, no. 1, p. 012151.
[4] R. Wang, J. Xu, and T. X. Han, “Object instance detection with pruned Alexnet and extended training data,” Signal Processing: Image Communication, vol. 70, pp. 145–156, 2019.
[5] J. Bennet, C. Arul Ganaprakasam, and K. Arputharaj, “A discrete wavelet based feature extraction and hybrid classification technique for microarray data analysis,” The Scientific world journal, vol. 2014, 2014.
[6] H. Imtiaz and S. A. Fattah, “A DCT-based feature extraction algorithm for palm-print recognition,” in 2010 INTERNATIONAL CONFERENCE ON COMMUNICATION CONTROL AND COMPUTING TECHNOLOGIES, 2010, pp. 657–660.
[7] S. Rashidi, A. Fallah, and F. Towhidkhah, “Feature extraction based DCT on dynamic signature verification,” Scientia Iranica, vol. 19, no. 6, pp. 1810–1819, 2012.
[8] S. Lahmiri and M. Boukadoum, “Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images,” J Med Eng, vol. 2013, 2013, doi: 10.1155/2013/104684.
[9] V. Srivastava and R. K. Purwar, “A Five-Level Wavelet Decomposition and Dimensional Reduction Approach for Feature Extraction and Classification of MR and CT Scan Images,” Applied Computational Intelligence and Soft Computing, 2017. https://www.hindawi.com/journals/acisc/2017/9571262/ (accessed Mar. 12, 2019).
[10] S. Mishra, L. Sharma, B. Majhi, and P. K. Sa, “Microscopic image classification using dct for the detection of acute lymphoblastic leukemia (all),” in Proceedings of International Conference on Computer Vision and Image Processing, 2017, pp. 171–180.
[11] M. Anthimopoulos, S. Christodoulidis, A. Christe, and S. Mougiakakou, “Classification of interstitial lung disease patterns using local DCT features and random forest,” in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2014, pp. 6040–6043.
[12] N. E. Benhassine, A. Boukaache, and D. Boudjehem, “Medical Image Classification Using the Discriminant Power Analysis (DPA) of Discrete Cosine Transform (DCT) Coefficients,” in Real Perspective of Fourier Transforms, IntechOpen, 2020.
[13] A. Majid, M. A. Khan, M. Yasmin, A. Rehman, A. Yousafzai, and U. Tariq, “Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection,” Microscopy research and technique, vol. 83, no. 5, pp. 562–576, 2020.
[14] B. Li and M. Q.-H. Meng, “Automatic polyp detection for wireless capsule endoscopy images,” Expert Systems with Applications, vol. 39, no. 12, pp. 10952–10958, 2012.
[15] B. Li and M. Q.-H. Meng, “Tumor recognition in wireless capsule endoscopy images using textural features and SVM-based feature selection,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 3, pp. 323–329, 2012.
[16] R. Ponnusamy and S. Sathiamoorthy, “Bleeding and Z-line classification by DWT based SIFT using KNN and SVM,” in International Conference On Computational Vision and Bio Inspired Computing, 2019, pp. 679–688.
[17] L. H. Thai, T. S. Hai, and N. T. Thuy, “Image classification using support vector machine and artificial neural network,” International Journal of Information Technology and Computer Science, vol. 4, no. 5, pp. 32–38, 2012.
[18] D. A. Ragab, M. Sharkas, S. Marshall, and J. Ren, “Breast cancer detection using deep convolutional neural networks and support vector machines,” PeerJ, vol. 7, p. e6201, 2019.
[19] J. M.M., Z. Q., H. I.U., B. S., and J. A., “Three-class mammogram classification based on descriptive CNN features,” Hindawi BioMed Research International, vol. 2017, no. Article ID 3640901, 2017, doi: 10.1155/2017/3640901.
[20] H. Zhang et al., “DE-Ada*: A novel model for breast mass classification using cross-modal pathological semantic mining and organic integration of multi-feature fusions,” Information Sciences, 2020, doi: 10.1016/j.ins.2020.05.080.
[21] D. Das, L. B. Mahanta, B. K. Baishya, and S. Ahmed, “Classification of Childhood Medulloblastoma and its subtypes using Transfer Learning features-A Comparative Study of Deep Convolutional Neural Networks,” in 2020 International Conference on Computer, Electrical & Communication Engineering (ICCECE), 2020, pp. 1–5.
[22] D.-X. Xue, R. Zhang, H. Feng, and Y.-L. Wang, “CNN-SVM for microvascular morphological type recognition with data augmentation,” Journal of medical and biological engineering, vol. 36, no. 6, pp. 755–764, 2016.
[23] J. Leng, T. Li, G. Bai, Q. Dong, and H. Dong, “Cube-CNN-SVM: a novel hyperspectral image classification method,” in 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI), 2016, pp. 1027–1034.
[24] H. Wu, Q. Huang, D. Wang, and L. Gao, “A CNN-SVM combined model for pattern recognition of knee motion using mechanomyography signals,” Journal of Electromyography and Kinesiology, vol. 42, pp. 136–142, 2018.
[25] W. B. Sampaio, E. M. Diniz, A. C. Silva, A. C. De Paiva, and M. Gattass, “Detection of masses in mammogram images using CNN, geostatistic functions and SVM,” Computers in Biology and Medicine, vol. 41, no. 8, pp. 653–664, 2011.
[26] M. A. Khan et al., “Gastrointestinal diseases segmentation and classification based on duo-deep architectures,” Pattern Recognition Letters, vol. 131, pp. 193–204, 2020.
[27] M. A. Khan et al., “Computer-Aided Gastrointestinal Diseases Analysis From Wireless Capsule Endoscopy: A Framework of Best Features Selection,” IEEE Access, vol. 8, pp. 132850–132859, 2020.
[28] N. Ghatwary, X. Ye, and M. Zolgharni, “Esophageal abnormality detection using densenet based faster r-cnn with gabor features,” IEEE Access, vol. 7, pp. 84374–84385, 2019.
[29] M. Billah, S. Waheed, and M. M. Rahman, “An automatic gastrointestinal polyp detection system in video endoscopy using fusion of color wavelet and convolutional neural network features,” International journal of biomedical imaging, vol. 2017, 2017.
[30] Y. Yuan and M. Q.-H. Meng, “Deep learning for polyp recognition in wireless capsule endoscopy images,” Medical physics, vol. 44, no. 4, pp. 1379–1389, 2017.
[31] Q. Shi, W. Li, F. Zhang, W. Hu, X. Sun, and L. Gao, “Deep CNN with multi-scale rotation invariance features for ship classification,” IEEE Access, vol. 6, pp. 38656–38668, 2018.
" | Here is a paper. Please give your review comments after reading it. |
6 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Information efficiency is gaining more importance in the development as well as application sectors of information technology. Data mining is a computer-assisted process of massive data investigation that extracts meaningful information from the datasets. The mined information is used in decision-making to understand the behavior of each attribute.</ns0:p><ns0:p>Therefore, a new classification algorithm is introduced in this paper to improve information management. The classical C4.5 decision tree approach is combined with Selfish Herd Optimization (SHO) algorithm to tune the gain of given datasets. The optimal weights for the information gain will be updated based on SHO. Further, the dataset is partitioned into two classes based on quadratic entropy calculation and information gain. Decision tree gain optimization is the main aim of our proposed C4.5-SHO method. The robustness of the proposed method is evaluated on various datasets and compared with classifiers, such as ID3 and CART. The accuracy and area under ROC (AUROC) parameters are estimated and compared with existing algorithms like ant colony optimization, particle swarm optimization and cuckoo search.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Information management is comprised of mining the information, managing data warehouses, visualizing the data, knowledge extraction from data and so on <ns0:ref type='bibr' target='#b10'>[Chen et al., 2018]</ns0:ref>. Consequently, different information management techniques are now being applied to manage the data to be analyzed. Hence, it is necessary to create repositories and consolidate data as well as warehouses. However, most of the data may be unstable; so it is essential to decide the data to be stored and discarded <ns0:ref type='bibr' target='#b3'>[Amin, Chiam & Varathan, 2019</ns0:ref>]. In addition, individual storage is required to manage real-time data to conduct research and predict trends. Data mining techniques are becoming more popular, recently getting attention towards rule mining methods, such as link analysis, clustering and association rule mining <ns0:ref type='bibr' target='#b14'>[Elmaizi et al., 2019]</ns0:ref>. Data mining discovers the substantial information, reasons and possible rules from huge datasets. It stands as an important source for information system based decision-making processes, such as classification, machine learning and so on <ns0:ref type='bibr'>[Sun et al., 2019]</ns0:ref>. Data mining is generally a specific term to define certain computational analysis and results that comply with three main properties like comprehension, accuracy and user requirements. Data mining techniques are very useful while dealing with large datasets having vast amount of data. The data mining research community has been active for many years in analyzing various techniques and different applications of data mining <ns0:ref type='bibr' target='#b20'>[Jadhav, He & Jenkins, 2018]</ns0:ref>.</ns0:p><ns0:p>A system that is combined with both data analysis and classification is suggested to create mining rules for several applications. For extracting the relevant information from systems, functional knowledge or rules automatically activates the mining process to provide rapid, real-time and significant operational basis. The classification approaches broadly used in data mining applications is efficient in processing large datasets <ns0:ref type='bibr' target='#b18'>[Gu et al., 2018]</ns0:ref>. It maps an input data object into one of the pre-defined classes. Therefore, a classification model must be established for the given classification problem <ns0:ref type='bibr' target='#b23'>[Junior & Carmo, 2019]</ns0:ref>. To perform the classification task, the dataset is converted into several target classes. The classification approach assigns a target type for each event of the data and allots the class label to a set of unclassified cases. This process is called supervised learning because all the training data are assigned as class tags. Therefore, classification is used to refer the data items as various pre-defined classes <ns0:ref type='bibr' target='#b29'>[Xie et al., 2020]</ns0:ref>. The classifier is categorized into two approaches namely logical reasoning and statistical analysis. To create a well-trained classifier, training data is are used to signify the key features of the classification problem under analysis <ns0:ref type='bibr' target='#b29'>[Meng et al., 2020]</ns0:ref>. Once the classifier is trained, then the test dataset is evaluated by the classifier. The overall performance of any classifier algorithm is comparatively estimated through the sensitivities of minority target classes. 
However, the minority target class predictions are usually found below optimal because of the initial algorithm designs that consider identical class distribution in both model and usage <ns0:ref type='bibr' target='#b13'>[Ebenuwa et al., 2019]</ns0:ref>.</ns0:p><ns0:p>The most popular and simple classification technique is decision tree. Decision trees are popular learning tool utilized in functional research, especially in results analysis to achieve a goal. As a general logical model, a decision tree repeats the given training data to create hierarchical classification <ns0:ref type='bibr' target='#b15'>[Es-sabery & Hair, 2020]</ns0:ref>. It is a simplest form of classifier that can be stored densely and effectively in order to categorize the new data. It takes inputs in the form of training data set, attribute list and attribute selection method. A tree node is created by the algorithm in which attribute selection is applied to compute optimal splitting criteria. Then the final node generated is named based on the selected attributes <ns0:ref type='bibr' target='#b11'>[Damanik et al., 2019]</ns0:ref>. The training tuples subset is formed to split the attributes. Hence, parameters (like purity, number of samples, etc.) are still needed for a decision tree. Moreover, it is capable of handling multidimensional data that offers good classification performance for common datasets <ns0:ref type='bibr' target='#b30'>[Ngoc et al., 2019]</ns0:ref>. Decision tree is also known as decision support tool which utilizes the model of tree-like or graph and the consequences are resource costs, utility and event outcomes <ns0:ref type='bibr' target='#b26'>[Lee, 2019]</ns0:ref>. In practical, the methods utilized to create decision trees normally produce trees with a low node factor and modest tests at each node. Also, the classifier contains different algorithms, such as C4.5, ID3 and CART. The C4.5 algorithm is the successor of ID3 which uses gain ratio by splitting criterion for splitting the dataset. The information gain measure used as a split criterion in ID3 is biased to experiments with multiple outcomes as it desires to select attributes with higher number of values <ns0:ref type='bibr'>[Jimnez et al., 2019]</ns0:ref>. To overcome this, the C4.5 algorithm undergoes information gain normalization using split information value which in turn avoids over fitting errors as well.</ns0:p><ns0:p>In C4.5, two criterions are used to rank the possible tests. The first criterion of information gain is to minimize the entropy of subsets and the second criterion of gain ratio is to divide the information gain with the help of test outcome information. As a result, the attributes might be nominal or numeric to determine the format of test outcomes. <ns0:ref type='bibr' target='#b24'>[Kuncheva et al., 2019]</ns0:ref>. On the other hand, the C4.5 algorithm is also a prominent algorithm for data mining employed for various purposes. The generic decision tree method is created default for balanced datasets; so it can deal with imbalanced data too <ns0:ref type='bibr' target='#b25'>[Lakshmanaprabu et al., 2019]</ns0:ref>. The traditional methods for balanced dataset when used for imbalanced datasets cause low sensitivity and bias to the majority classes <ns0:ref type='bibr' target='#b25'>[Lakshmanaprabu et al., 2019]</ns0:ref>. Some of the imbalance class problems include image annotations, anomaly detection, detecting oil spills from satellite images, spam filtering, software defect prediction, etc. 
<ns0:ref type='bibr' target='#b27'>[Li et al., 2018]</ns0:ref>. The imbalanced dataset problem is seen as a classification problem where class priorities are very unequal and unbalanced. In this imbalance issue, a majority class has larger pre-probability than the minority class <ns0:ref type='bibr' target='#b28'>[Liu, Zhou & Liu, 2019]</ns0:ref>. When this problem occurs, the classification accuracy of the minority class might be disappointing <ns0:ref type='bibr' target='#b38'>[Tang & Chen, 2019</ns0:ref>]. Thus, the aim of the proposed work is to attain high accuracy in addition to high efficiency.</ns0:p><ns0:p>In data classification, accuracy is the main challenge of all applications. Information loss in dataset is problematic during attribute evaluation and so, the probability of attribute density is estimated. For this, the information theory called entropy based gain concept is utilized to enhance the classification task. Furthermore, common uncertainties of numerical data are used to measure the decision systems. A population based algorithm is utilized to optimize the gain attributes and to enhance the classification in complex datasets. The Selfish Herd Optimization (SHO) enhances the feature learning accuracy by effectively removing redundant features thereby providing good global search capability. The main contribution of the proposed work is summarized as follows.</ns0:p><ns0:p> To solve the data classification problem using entropy based C4.5 decision tree approach and gain estimation.</ns0:p><ns0:p> Selfish Herd Optimization (SHO) algorithm is utilized to optimize the information gain attributes of decision tree.</ns0:p><ns0:p> The data are classified with high accuracy and AUROC of datasets is compared with existing techniques.</ns0:p><ns0:p>The organization of this paper is described as follows: introduction about the research paper is presented in Section 1, survey on existing methods and challenges are depicted in Section 2. The preliminaries are explained in Section 3. The working of proposed method is detailed in Section 4. Efficiency of optimization algorithm is evaluated in Section 5 and the conclusion of the proposed method is presented in Section 6.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Related Works</ns0:head><ns0:p>Multiple learning process and multi-label datasets are widely used in different fields nowadays. [Yahya, 111 2019] evaluated the efficacy of Particle Swarm Classification (PSC) in data mining. PSC was utilized to design the classification model which classifies the queries into Bloom's taxonomy six cognitive-levels. Rocchio algorithm (RA) was used to mitigate the dimensionality of adverse effects in PSC. Finally, RA-based PSC was investigated with various feature selection methods for a scheme of queries. But it is identified that the multi-label classification dealt with some problems where the classifier chain label order has a strong effect on the performance of classification. Nevertheless, it is too hard to find the proper order of chain sequences. Hence, <ns0:ref type='bibr'>[Sun et al., 2019]</ns0:ref> had proposed an ordering method based on the conditional entropy of labels where a single order was generated by this method. Reduced attributes can improve the accuracy of classification performances. The missed attribute values were typically not used in entropy or gain calculation. Information gain based algorithms tend to authenticate the attribute sets. Various measures were certainly affected from redundancy and non-monotonicity during attribute reduction. Therefore, a forward heuristic attribute reduction algorithm was proposed to solve the uncertainties in attribute selection. It simultaneously selects information attributes though unnecessary attributes were reduced in practice. <ns0:ref type='bibr' target='#b17'>[Gao et al., 2019]</ns0:ref> proposed granular maximum decision entropy (GMDE) based on the measurement of monotonic uncertainty. Extreme decision entropy was developed in which the uncertainties of entropy are integrated with granulation knowledge. This investigation was validated with various UCI datasets and found to be computationally inexpensive.</ns0:p><ns0:p>The choice of dataset selection allows the extraction of highly representative information from high-level data; so computational efforts were reduced among other tasks. A hybrid optimization based feature selection was proposed by <ns0:ref type='bibr' target='#b19'>[Ibrahim et al., 2019]</ns0:ref>. The suggested technique is combined with slap swarm algorithm (SSA) and particle swarm optimization methods to enhance the efficacy of global and local search steps. Therefore, the hybrid algorithm was examined on mixed datasets. It requires less time while the nodes quantity is reduced making it more desirable for large datasets. The SSA-PSO was employed to acquire best features from various UCI datasets. Also, redundant features were detached from the original datasets resulting in better accuracy. However, the accuracy is affected in complex datasets. To improve the classification performance of complex data, <ns0:ref type='bibr'>[Lin et al., 2019]</ns0:ref> introduced an attribute reduction method utilizing neighborhood entropy measures. The systems should have the ability to handle continuous data while maintaining its information on attribute classification. The concept of neighborhood entropy was explored to deal with uncertainty and noise of neighborhood systems. 
It fully reflects the decision-making ability by combining the degree of reliability with the coverage degree of neighborhood systems.</ns0:p><ns0:p>A clustering method based on functional value sequences has been proposed to accurately identify the functional equivalent programs with index variations. Because existing clustering programs were limited to structured metric vectors as in <ns0:ref type='bibr' target='#b39'>[Wang et al., 2020]</ns0:ref>. This strategy is implemented for automated program repair to identify the sample programs from a large set of template programs. The average accuracy and average entropy were 0.95576 and 0.15497, respectively. However, the problem turned out to uncertain as the number of predictions is higher than the number of previous results. This issue was overcome by an alternative solution of priori weights and maximum entropy principle to attain the posteriori weights. [Arellano, Bory-Reyes & Hernandez-Simon, 2018] utilized a machine learning approach with single aggregated prediction from a set of individual predictions. A new factor presents a problem departing from the well-known maximal entropy hypothetical method and taking the distance among original and estimated integrated predictions. The suggested method was applied to estimate and measure predictive capability using prediction datasets.</ns0:p><ns0:p>It is difficult to perform feature selection (FS) for multi-label dimension curse in numerous learning processes. Hence, <ns0:ref type='bibr' target='#b32'>[Paniri, Dowlatshahi & Nezamabadi-pour, 2020]</ns0:ref> proposed a multi-label relevance-redundancy FS scheme based on Ant Colony Optimization (ACO) called ML-ACO. ML-ACO seeks to find the best features with lowest redundancy and many repetitions with class labels. To speed up the convergence, the cosine similarities between features as well as class labels are used as starting pheromone for each ant, and can be classified as a filterbased method. Various parametric entropies of decision tree algorithms are investigated by <ns0:ref type='bibr'>[Bret et al., 2019]</ns0:ref>. Partial empirical evidences were provided to support the notion that parameter adjustment of different entropy activities influences the classification. Receiver operating characteristic (ROC) and Area under the ROC (AUROC) curve analysis provides an accurate criterion for evaluating decision trees based on parametric entropy. Various entropies, such as Shannon entropy, Renyi entropy, Tsallis entropy, Abe entropy and Landsberg-Vedral entropy were discussed.</ns0:p><ns0:p>A new information classification algorithm has been introduced to improve the information management of restricted properties in <ns0:ref type='bibr' target='#b40'>[Wang et al., 2019]</ns0:ref>. Information management efficiency has gained more importance for the development of information technology through its expanded use. Reduce leaf based on optimization ratio (RLBOR) algorithm was utilized to optimize the decision tree ratios. ID3 algorithm is a classical method of data mining that selects attributes with maximum information gain from the dataset at split node. However, decision tree algorithms have some drawbacks; it is not always optimal and it is biased in favor of properties that have higher values. In data classification, accuracy is the main challenge of all datasets. The resulting information loss is problematic for attribute evaluation while estimating the probability density of attributes. 
Due to the absence of classification information, it is challenging to perform potential classification. Consequently, an improved algorithm is utilized to solve the data classification issues.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Preliminaries</ns0:head><ns0:p>Entropy based measurements help to understand the knowledge of a decision system, its properties and some relations among the measurements. An optimization model is explored to enhance the performance of complex dataset classification. During prediction, the optimal weights of the information gain will be updated with the help of the SHO algorithm. The nominal attributes of the dataset were designed by the ID3 algorithm. Attributes with missing values are not permitted. Suppose the training sample set $M$ is divided into $n$ classes $K_1, K_2, \ldots, K_n$ with sizes $|K_1|, |K_2|, \ldots, |K_n|$; the sample probability of class $K_i$ is given in Equation (1).</ns0:p><ns0:formula xml:id='formula_1'>$p(K_i) = \frac{|K_i|}{|M|}$<ns0:label>1</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.1'>Quadratic Entropy</ns0:head><ns0:p>Entropy is used to measure the uncertainty of a class using the probability of a particular event or attribute. The gain is inversely proportional to entropy. The information gain normally depends on how much information was offered before knowing the attribute value and after knowing the attribute value. Different types of entropies are utilized in data classification. For a better performance, quadratic entropy is used in our work <ns0:ref type='bibr' target='#b1'>[Adewole & Udeh, 2018]</ns0:ref>. This entropy considers a finite discrete random variable $X$ with a complete probability collection, as mentioned in Equation (2).</ns0:p><ns0:formula xml:id='formula_2'>$p_i \ge 0 \; (i = 1, 2, \ldots, n), \qquad \sum_{i=1}^{n} p_i = 1$<ns0:label>2</ns0:label></ns0:formula><ns0:p>Here, the probability of event $i$ is denoted as $p_i$. The quadratic entropy of information is calculated by Equation (3).</ns0:p><ns0:formula xml:id='formula_3'>$Entropy(M) = \sum_{i=1}^{n} p_i \, (1 - p_i)$<ns0:label>3</ns0:label></ns0:formula><ns0:p>Here, $Entropy(M)$ specifies the information entropy of the training sample set $M$. For a particular attribute $H$ that partitions $M$ into subsets $M_g$, the entropy of information is determined by Equation (4).</ns0:p><ns0:formula xml:id='formula_4'>$entropy(M, H) = \sum_{g=1}^{G} \frac{|M_g|}{|M|} \, entropy(M_g)$<ns0:label>4</ns0:label></ns0:formula></ns0:div>
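For illustration, a short sketch of Equations (3) and (4) is given below; the class counts and the attribute-induced partition are invented examples, not data from the paper.

```python
# Quadratic entropy of a sample set and its attribute-conditioned entropy.
import numpy as np

def quadratic_entropy(counts):
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    return float(np.sum(p * (1.0 - p)))

M = [40, 30, 30]                       # class sizes in the whole sample set M
subsets = [[30, 5, 5], [10, 25, 25]]   # partition of M induced by an attribute H

h_M = quadratic_entropy(M)                                               # Equation (3)
h_M_H = sum(sum(g) / sum(M) * quadratic_entropy(g) for g in subsets)     # Equation (4)
print(h_M, h_M_H)
```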
<ns0:div><ns0:head n='3.2'>Information Gain</ns0:head><ns0:p>The information gain is determined by Equation (5).</ns0:p><ns0:formula xml:id='formula_5'>$gain(M, H) = Entropy(M) - entropy(M, H)$<ns0:label>5</ns0:label></ns0:formula><ns0:p>In Equation (5), information gain is employed to find the additional information that provides high information gain on classification. The C4.5 algorithm chooses the attribute that has the highest gain in the dataset and uses it as the split node attribute. Based on the attribute value, the data subgroup is subdivided and the information gain of each subgroup is recalculated. The decision tree training process is enormous and deep compared to neural networks such as KNN, ANN, etc., as it does not take into account the number of leaf nodes. Moreover, the gain ratio is different from information gain. Gain ratio measures the information related to classification obtained on the basis of the same partition. C4.5 uses the information gain and also allows measuring a gain ratio, described in Equation (6).</ns0:p><ns0:formula xml:id='formula_6'>$gain\_ratio(M, H) = \frac{gain(M, H)}{split\_info(M, H)}$<ns0:label>6</ns0:label></ns0:formula><ns0:p>Where,</ns0:p><ns0:formula xml:id='formula_7'>$split\_info(M, H) = -\sum_{g=1}^{G} \frac{|M_g|}{|M|} \log_2 \frac{|M_g|}{|M|}$<ns0:label>7</ns0:label></ns0:formula><ns0:p>The attribute with the maximum gain ratio is selected for splitting the attributes. When the split information tends to 0, the ratio becomes volatile. A constraint is added to avoid this, whereby the information gain of the selected test must be at least as great as the average gain over all tests examined.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>C4.5 decision tree</ns0:head><ns0:p><ns0:ref type='bibr' target='#b34'>[Quinlan, 2014]</ns0:ref> developed the C4.5 algorithm to generate a decision tree. Many scholars have made various improvements to the tree algorithm. However, the problem is that tree algorithms require multiple scans and deployments of the data collection during the building process of decision trees. For example, large datasets provided to the ID3 algorithm improve the performance but are not handled effectively, whereas small datasets are more effective in several fields like assessing prospective growth opportunities, demographic data, etc. This is because the processing speed is slow and a larger dataset is too large to fit into memory. Besides, the C4.5 algorithm gives the most effective performance with large amounts of data. Hence, the advantages of the C4.5 algorithm are considerable, but its performance has to be improved to meet the dramatically increasing demand for large data. The node is selected based on the extreme information gain rate. The data is numeric with only the classification as the nominal leading category of the labeled dataset. Hence, it is necessary to perform supervised data mining on the targeted dataset. This reduces the choice of classifiers to those in which a pre-defined classification could handle numerical data and classification in the decision tree application. Each attribute is evaluated to find its ratio and rank during the learning phase of decision trees. Additionally, the correlation coefficient is computed to investigate the correlation between attributes, as some datasets could not give any relevant result in data mining. In the C4.5 decision tree algorithm, the gain is optimized by the proposed SHO technique. The information gain is a rank-based approach to compute the entropy. In this algorithm, the node with the highest normalized gain value is allowed to make the decision, so there is a need to tune the gain parameter. The gain fitness is calculated based on the difference between the actual gain value and the new gain value. This is the objective function of the gain optimization technique, which is described in Equation (8).</ns0:p><ns0:formula xml:id='formula_8'>$fitness = \min\left( G_i - \hat{G}_i \right)$<ns0:label>8</ns0:label></ns0:formula><ns0:p>Here, $G_i$ and $\hat{G}_i$ denote the actual and new gain, respectively. Based on this fitness, the gain error is minimized by SHO and the gain value is computed by using Equation (5). SHO can improve the learning accuracy, remove the redundant features and update the weight function of decision trees. A key feature of SHO is its random initialization generating strategy.</ns0:p></ns0:div>
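The following sketch illustrates Equations (5)-(7) on the same invented counts used above, computing the information gain and gain ratio of a candidate split; it is an illustrative reading of the formulas, not the authors' implementation.

```python
# Information gain and gain ratio of one attribute, using quadratic entropy.
import math

def gain_and_ratio(M_counts, subsets):
    total = sum(M_counts)
    ent = lambda c: sum((x / sum(c)) * (1 - x / sum(c)) for x in c)      # quadratic entropy
    gain = ent(M_counts) - sum(sum(g) / total * ent(g) for g in subsets)  # Equation (5)
    split_info = -sum((sum(g) / total) * math.log2(sum(g) / total)
                      for g in subsets if sum(g))                         # Equation (7)
    return gain, (gain / split_info if split_info else 0.0)               # Equation (6)

print(gain_and_ratio([40, 30, 30], [[30, 5, 5], [10, 25, 25]]))
```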
<ns0:div><ns0:head n='4.'>Proposed Method: Selfish Herd Optimization (SHO)</ns0:head><ns0:p>SHO is utilized to minimize the gain error in a better way in the optimization process. It improves the balance between the exploration and exploitation phases without changing the population size <ns0:ref type='bibr' target='#b16'>[Fausto et al., 2017]</ns0:ref>. The SHO algorithm is well suited for gain optimization in decision trees. Among meta-heuristic algorithms, SHO is a new branch inspired by group dynamics, applied here for gain optimization. SHO is instigated from simulations of a herd and predators searching for food or prey. The algorithm uses search agents moving in an n-dimensional space to find a solution for the optimization problem. The populations of SHO are the herd and the predators, where the individuals are known as search agents. In optimization areas, SHO has proved to be competitive with particle swarm optimization (PSO) <ns0:ref type='bibr' target='#b16'>[Fausto et al., 2017]</ns0:ref> for many tasks. The theory of the selfish herd establishes the predation phase. Every herd member tries to enhance its survival chance by aggregating with other conspecifics in ways that could increase its chances of surviving a predator attack, without regard for how such behavior affects other individuals' chances of survival. This may increase the likelihood of escaping from a predator attack regardless of how such actions disturb the survival probabilities of other individuals. The proposed SHO algorithm consists of different kinds of search agents: a flock of prey that lives in aggregation (the selfish herd), a pack of predators, and predators within the said aggregate. These search agents are directed separately through fixed evolutionary operators which are centered on the relationship of the prey and the predator <ns0:ref type='bibr' target='#b4'>[Anand & Arora, 2020]</ns0:ref>.</ns0:p><ns0:p>The mathematical model of the SHO algorithm is given as follows.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Initialization</ns0:head><ns0:p>The first step of SHO's iterative process is to initialize the random population of animals as prey and predators, giving one set of separable locations $S = \{s_1, s_2, \ldots, s_N\}$, where $N$ denotes the population size. The positions of the animals are limited to lower and upper boundaries, and the population is classified into two groups, namely prey and predators. Equation (9) is utilized to calculate the number of members in the prey group.</ns0:p><ns0:formula xml:id='formula_11'>$n_p = \mathrm{floor}\big(n \cdot \mathrm{rand}(0.7, 0.9)\big)$<ns0:label>9</ns0:label></ns0:formula><ns0:p>Here, the quantity of prey group members is denoted as $n_p$, where $n$ denotes the total population of the prey and the predators. In SHO, the number of prey (the herd's size) is randomly selected within the range of 70% to 90% of the total population $n$, while the remaining individuals are labeled as predators; hence 0.7 and 0.9 are chosen as the bounds of the random value.</ns0:p></ns0:div>
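A minimal sketch of this initialization step, assuming a 10-dimensional search space and arbitrary bounds:

```python
# Split a random population into herd (prey) and predators per Equation (9).
import numpy as np

n, dim = 50, 10
positions = np.random.uniform(low=-5.0, high=5.0, size=(n, dim))   # S = {s_1, ..., s_N}
n_prey = int(np.floor(n * np.random.uniform(0.7, 0.9)))            # 70%-90% become prey
herd, predators = positions[:n_prey], positions[n_prey:]
print(len(herd), len(predators))
```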
<ns0:div><ns0:head n='4.2'>Assignation of survival value</ns0:head><ns0:p>A survival value SV is assigned to every animal; it relates the animal's fitness to the current best and worst values known over the whole population. During the optimization process, the present best and worst fitness values of the optimization problem are tracked, and the survival value is determined using Equation ( <ns0:ref type='formula' target='#formula_12'>10</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_12'>(10) \quad SV_i = \frac{f(x_i) - f_b}{f_w - f_b}<ns0:label>10</ns0:label></ns0:formula><ns0:p>Where the worst and best fitness values are denoted by f_w and f_b, respectively, and x_i represents the location of the prey or the predator.</ns0:p></ns0:div>
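A minimal sketch of Equation (10), assuming a minimization problem so that the best fitness is the smallest value observed; purely illustrative.

# Survival value of Equation (10): 0 for the best agent, 1 for the worst (minimization).
import numpy as np

def survival_values(fitness):
    fitness = np.asarray(fitness, dtype=float)
    f_b, f_w = fitness.min(), fitness.max()
    if f_w == f_b:                      # degenerate case: all agents equally fit
        return np.zeros_like(fitness)
    return (fitness - f_b) / (f_w - f_b)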
<ns0:div><ns0:head n='4.3'>Herd's leader movement</ns0:head><ns0:p>All herd members' movement is one of the significant steps in SHO. The location of leader of the herd is updated by Equation ( <ns0:ref type='formula'>11</ns0:ref>) as given in <ns0:ref type='bibr'>[Femando et al., 2017]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_13'>(11)                   1 ) ( 2 1 ) ( 2 , , L L best ybest l L L m m P l L L SVh if h y r h SVh if h P r h h m  </ns0:formula><ns0:p>Here, the tested selfish repulsion towards predators by current herd leader is denoted as , and r denotes the l  random number in the range (0, 1). Moreover, the location of the herd member is updated based on two selections. Equation ( <ns0:ref type='formula' target='#formula_14'>12</ns0:ref>) is utilized to a h follow the herd and Equation ( <ns0:ref type='formula'>14</ns0:ref>) is utilized to recompense the group. Also, the selection is prepared based on some random variables.</ns0:p><ns0:p>( </ns0:p><ns0:formula xml:id='formula_14'>) a a a f h h   Where, (13)                     otherwise h h SV SV h h h h f a m h h h h a b h h a L h h a m i u L b a L a ) ( 2 ) ( ) ( 2 , , ,       (14) r SV h y h h a best L h a best y h a a ) 1 ( ) ( 2 ,             <ns0:label>12</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head n='4.4'>Predator movement</ns0:head><ns0:p>For the movement of the predators, SHO takes into account the survival of the individuals in the attacked flock and the distance between each predator and the herd members. Based on the pursuit probability given in Equation ( <ns0:ref type='formula'>15</ns0:ref>), the position of a predator X_p is updated as in Equation ( <ns0:ref type='formula'>16</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_15'>(16) \quad X_p = X_p + 2 \, r \, ( h_r - X_p )</ns0:formula><ns0:p>Where h_r indicates a randomly chosen herd member. Afterwards, the survival value of each member of the predator and prey groups is recomputed by Equation (10).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Predation phase</ns0:head><ns0:p>The predation process is executed in this phase. Domain danger is defined by SHO which is signified as area of finite radius around each prey. The domain danger radius of each prey is computed by Equation ( <ns0:ref type='formula'>17</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_16'>𝑅 𝑟 (17) u j l j n j r y y R    1</ns0:formula><ns0:p>Where, upper and lower boundary members are represented by and , respectively and the dimensions are u j y l j y denoted as . After the radius calculation, a pack of targeted prey is computed by Equation (18). </ns0:p><ns0:formula xml:id='formula_17'>n (18) K h R h P SV SV H h T j r j i p h j p i j i       ,</ns0:formula><ns0:formula xml:id='formula_18'>} , { j h K K  (19) i i p m m i j i j i p j T h h p h p h p T h H     , )<ns0:label>( , , ,  </ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head n='4.6'>Restoration phase</ns0:head><ns0:p>Finally, the restoration is accomplished by making a set . Here, represents the set of herd</ns0:p><ns0:formula xml:id='formula_19'>K h M j   K</ns0:formula><ns0:p>member slayed for the duration of the predation phase. The mating probabilities are also determined by each member as in Equation ( <ns0:ref type='formula' target='#formula_20'>20</ns0:ref>), </ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 2: Pseudo code for the proposed SHO algorithm in data classification</ns0:head></ns0:div>
<ns0:div><ns0:head>Start</ns0:head><ns0:p>Initialize the parameters and locations of SHO by Eq. (9)</ns0:p><ns0:p>For each iteration: assign survival values by Eq. (10); update the herd leader and herd members by Eqs. (11)-(14); update the predators by Eq. (16); apply the predation phase by Eqs. (17)-(19) and the restoration phase by Eq. (20); record the best gain found so far</ns0:p><ns0:p>End for</ns0:p><ns0:p>Return the optimal gain</ns0:p></ns0:div>
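Because Algorithm 2 is truncated in this version, the following Python skeleton sketches only the overall control flow of Sections 4.1-4.6 under stated assumptions: the herd update below is a simplified placeholder for Equations (11)-(14), the predation and restoration phases (Eqs. 17-20) are omitted, and nothing here reproduces the authors' MATLAB code.

# Hypothetical skeleton of the SHO loop used to tune the C4.5 gain weights.
import numpy as np

rng = np.random.default_rng(0)

def predator_step(predators, prey):
    # Eq. (16): move each predator towards a randomly chosen herd member h_r.
    for i in range(len(predators)):
        h_r = prey[rng.integers(len(prey))]
        predators[i] = predators[i] + 2.0 * rng.random() * (h_r - predators[i])
    return predators

def herd_step(prey, sv, best_pos):
    # Placeholder for Eqs. (11)-(14): less "safe" members (low SV) move more
    # strongly towards the best position found so far.
    step = 2.0 * rng.random(size=prey.shape) * (best_pos - prey)
    return prey + (1.0 - sv)[:, None] * step

def sho_optimize(fitness_fn, dim, lower, upper, n=50, max_iter=500):
    pos = rng.uniform(lower, upper, size=(n, dim))
    n_prey = int(np.floor(n * rng.uniform(0.7, 0.9)))            # Eq. (9)
    best_pos, best_fit = pos[0].copy(), np.inf
    for _ in range(max_iter):
        fit = np.array([fitness_fn(p) for p in pos])
        if fit.min() < best_fit:
            best_fit, best_pos = float(fit.min()), pos[int(fit.argmin())].copy()
        sv = (fit - fit.min()) / (fit.max() - fit.min() + 1e-12)  # Eq. (10)
        pos[:n_prey] = herd_step(pos[:n_prey], sv[:n_prey], best_pos)
        pos[n_prey:] = predator_step(pos[n_prey:], pos[:n_prey])
        pos = np.clip(pos, lower, upper)
        # predation (Eqs. 17-19) and restoration (Eq. 20) phases would go here
    return best_pos, best_fit

# Example: minimize a toy gain-error surrogate over 5 weights in [-1, 1] (Table 2 settings).
best_w, best_err = sho_optimize(lambda w: float(np.sum(w ** 2)), dim=5, lower=-1.0, upper=1.0)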
<ns0:div><ns0:head n='5.'>Result and Discussion</ns0:head><ns0:p>The efficiency of our proposed method is assessed by comparing its accuracy with popular optimization-based classification methods, namely Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b9'>[Chen et al., 2014]</ns0:ref>, Ant Colony Optimization (ACO) <ns0:ref type='bibr' target='#b31'>[Otero, Freitas & Johnson, 2012]</ns0:ref>, and Cuckoo Search (CS) optimization <ns0:ref type='bibr' target='#b8'>[Cao et al., 2015]</ns0:ref>. We estimated the performance of the proposed algorithm in terms of accuracy on 10 UCI datasets, using 10-fold cross-validation and reporting the mean value. The accuracy of the proposed method is evaluated against other optimization methods and various classifiers, and it is higher than that of all existing methods taken for comparison: SHO combined with the C4.5 classifier produces greater accuracy than a standard C4.5 classifier. The proposed decision tree classifier, named C4.5-SHO, is further compared with C4.5, ID3 and CART. The description of the ten datasets is tabulated in Table 1, and the proposed method is compared with existing entropies, optimization algorithms and different classifiers.</ns0:p><ns0:p>The effectiveness is estimated based on accuracy, AUROC and the classifier comparison.</ns0:p></ns0:div>
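For readers who want to reproduce the evaluation protocol (10-fold cross-validation, mean accuracy per dataset), the sketch below uses scikit-learn's DecisionTreeClassifier as a stand-in for C4.5-SHO and two bundled datasets; it illustrates the protocol only, not the reported numbers.

# Illustration of the 10-fold cross-validation protocol of Section 5.
from sklearn.datasets import load_iris, load_wine
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

for name, loader in [("Iris", load_iris), ("Wine", load_wine)]:
    X, y = loader(return_X_y=True)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.4f}")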
<ns0:div><ns0:head>a) Accuracy</ns0:head><ns0:p>The classification accuracy is measured based on Equation (21). In Table <ns0:ref type='table' target='#tab_7'>3</ns0:ref>, the classification accuracy of the proposed C4.5-SHO decision tree is compared with other classifiers, namely C4.5, ID3 and CART. The accuracy of the proposed approach is more stable than that of the others, and the classification accuracy depends on the training dataset. The accuracy on the Iris dataset is the highest (0.9986) among all datasets, while the lowest accuracy of the proposed C4.5-SHO is 0.9437, obtained on the Scale dataset. In comparison with the existing classifiers, the proposed approach obtains good accuracy. In Table <ns0:ref type='table' target='#tab_8'>4</ns0:ref>, the classification accuracy of the proposed C4.5-SHO decision tree is compared with other algorithms, namely ACO, PSO and CS; again the highest accuracy is obtained on the Iris dataset (0.9986) and the lowest (0.9437) on the Scale dataset, and the proposed approach achieves good accuracy compared with the existing algorithms.</ns0:p></ns0:div>
<ns0:div><ns0:head>b) Area under ROC (AUROC)</ns0:head><ns0:p>The performance of the classification model is also analysed through the area under the Receiver Operating Characteristic curve (AUROC), which depends on the attributes as well as the classes. The proposed C4.5-SHO is compared with other classifiers, namely C4.5, ID3 and CART. The AUROC results are presented in Table <ns0:ref type='table' target='#tab_9'>5</ns0:ref>, which shows that the AUROC value of the proposed method is better than that of the other algorithms.</ns0:p></ns0:div>
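As a short illustration of how a multi-class AUROC of the kind reported in Tables 5 and 6 can be computed (one-vs-rest averaging), the snippet below again uses DecisionTreeClassifier only as a stand-in for the proposed classifier.

# One-vs-rest AUROC illustration with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, clf.predict_proba(X_te), multi_class="ovr")
print(f"AUROC (one-vs-rest): {auroc:.4f}")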
<ns0:div><ns0:head>c) Different entropy comparison</ns0:head><ns0:p>Based on Ray's quadratic entropy, the information gain is optimized through the SHO algorithm. C4.5-SHO with quadratic entropy is compared with C4.5-SHO using other entropies: Shannon entropy, Havrda & Charvat entropy, Renyi entropy and Taneja entropy. Quadratic entropy measures the disorder of the data in a given dataset, ranging between fully ordered and fully disordered, and it captures this disorder successfully for the datasets considered. The classification accuracy obtained with quadratic entropy is better than with the other entropies for all datasets; hence, the proposed work follows Ray's quadratic entropy to obtain a better output. Table <ns0:ref type='table' target='#tab_11'>7</ns0:ref> shows the entropy comparison for the proposed SHO. The gain parameter is optimized by the proposed C4.5-SHO algorithm in order to make a decision: an optimal gain value is selected through the fitness function of Equation ( <ns0:ref type='formula'>8</ns0:ref>). Initially, the gain is calculated for each attribute used in the decision tree; as the number of iterations increases, the gain value changes at every iteration. The fitness is the difference between the actual gain and the new gain, and the gain values of the attributes are recorded at every iteration. The proposed optimization algorithm reached the optimal gain value at the 100th iteration, as seen in the convergence plot in Figure <ns0:ref type='figure' target='#fig_9'>2</ns0:ref>. Finally, the gain error is minimized with the help of the C4.5-SHO algorithm.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the convergence of the proposed SHO and the existing algorithms, averaged over all datasets; the proposed SHO achieves good convergence compared to the existing techniques. Since execution time is also an important factor in data classification, the time taken for the analysis is compared as well: the proposed method needs lower computational time (0.49s) than the existing algorithms ACO (0.974s), PSO (0.54s) and CS (0.6s). Table <ns0:ref type='table' target='#tab_12'>8</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_10'>4</ns0:ref> illustrate the computational time comparison, averaged over all datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.'>Conclusion</ns0:head><ns0:p>Data mining is a broad area that integrates techniques from several fields, including machine learning, statistics, artificial intelligence and database systems, for the analysis of large amounts of data. This paper presented a gain optimization technique termed C4.5-SHO. The effectiveness of quadratic entropy for evaluating the attributes of different datasets is estimated and discussed, and influential classification algorithms are reviewed. The information gain used for data classification is optimized by the proposed SHO algorithm. The evaluation of the SHO-based C4.5 decision tree shows that AUROC is a suitable measure for the classification of unbalanced data. The accuracy of the proposed C4.5-SHO technique is higher than that of existing techniques such as C4.5, ID3 and CART, and the proposed approach is also compared with the ACO, PSO and CS algorithms in terms of AUROC. A better accuracy (average 0.9762), a better AUROC (average 0.9909) and a better computational time (0.49s) are obtained with the gain-optimized C4.5-SHO technique. In future work, a hybrid optimization technique will be investigated to further improve the information gain for data classification.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Algorithm 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pseudo code for C4.5 decision tree algorithm Input: Dataset Output: Decision tree // Start for all attributes in data Calculate information gain end HG= Attribute with highest information gain Tree = Create a decision node for splitting attribute HG New data= Sub datasets based on HG for all New data Tree new= C4.5(New data) Attach tree to corresponding branch of Tree end return The C4.5 algorithm builds a decision tree by learning from a training set in which every sample is built on an attribute-value pair. The current attribute node is calculated based on the information gain rate in which the root PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:1:2:CHECK 20 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>utilized to optimize the gain function in data classification operation. Figure 1 displays the flow diagram of SHO algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>through C4.5 classifier.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Convergence evaluation of SHO</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Comparison of computational time</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,70.87,201.88,672.95' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,287.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,204.37,525.00,197.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,255.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>These datasets include Monks, Car, Chess, Breast-cancer, Hayes, Abalone, Wine, Ionosphere, Iris, and Scale[Arellano, </ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Bory-Reyes & Hernandez-Simon, 2018].</ns0:head><ns0:label /><ns0:figDesc>Table 2 shows the algorithm parameters.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Description of data set</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Algorithms parameters and values</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Classification accuracy of the proposed classifier C4.5 with C4.5, ID3 and CART</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification accuracy of the proposed algorithm compared with ACO, PSO and CS</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Area under the ROC curve of proposed C4.5 with ID3 and CART</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Area under the ROC curve of the proposed algorithm compared with ACO, PSO and CS. The proposed C4.5-SHO is compared with other optimization algorithms like ACO, PSO and CS. The AUROC results are presented in Table 6, which shows that the proposed AUROC value is better than that of the existing algorithms. It is revealed that SHO not only reduces the complexity of decision trees but also enhances the accuracy.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Entropy comparison </ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Computational Time</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Algorithms parameters and values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>SHO</ns0:cell><ns0:cell /><ns0:cell>ACO</ns0:cell><ns0:cell /><ns0:cell>PSO</ns0:cell><ns0:cell /><ns0:cell>CS</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Number of</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell>populations</ns0:cell><ns0:cell /><ns0:cell>populations</ns0:cell><ns0:cell /><ns0:cell>populations</ns0:cell><ns0:cell /><ns0:cell>populations</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dimension</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>Phromone</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Inertia weight</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Dimension</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Exponential</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Weight</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lower</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Heuristic</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Inertia weight</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>Lower bound and</ns0:cell><ns0:cell>-1 &1</ns0:cell></ns0:row><ns0:row><ns0:cell>boundary</ns0:cell><ns0:cell /><ns0:cell>Exponential</ns0:cell><ns0:cell /><ns0:cell>damping ratio</ns0:cell><ns0:cell /><ns0:cell>upper bound</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Weight</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Upper</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Evaporation</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Personal</ns0:cell><ns0:cell>1.5</ns0:cell><ns0:cell>Number of nests</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>boundary</ns0:cell><ns0:cell /><ns0:cell>rate</ns0:cell><ns0:cell /><ns0:cell>learning</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Prey's rate</ns0:cell><ns0:cell>0.7,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Global learning</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Transition</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.9</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /><ns0:cell>probability</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Area under the ROC curve of proposed C4.5 with ID3 and CART</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5 -SHO</ns0:cell><ns0:cell>C4.5 -SHO</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>(Shanon</ns0:cell><ns0:cell>(Havrda &</ns0:cell><ns0:cell>(Quadratic</ns0:cell><ns0:cell>(Renyi</ns0:cell><ns0:cell>(Taneja</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>entropy)</ns0:cell><ns0:cell>charvt entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9429</ns0:cell><ns0:cell>0.9756</ns0:cell><ns0:cell>0.9859</ns0:cell><ns0:cell>0.9926</ns0:cell><ns0:cell>0.9415</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.9585</ns0:cell><ns0:cell>0.9527</ns0:cell><ns0:cell>0.9753</ns0:cell><ns0:cell>0.9895</ns0:cell><ns0:cell>0.9700</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.9510</ns0:cell><ns0:cell>0.9535</ns0:cell><ns0:cell>0.9907</ns0:cell><ns0:cell>0.9809</ns0:cell><ns0:cell>0.9401</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9852</ns0:cell><ns0:cell>0.9558</ns0:cell><ns0:cell>0.9863</ns0:cell><ns0:cell>0.9564</ns0:cell><ns0:cell>0.9672</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.9579</ns0:cell><ns0:cell>0.9460</ns0:cell><ns0:cell>0.9981</ns0:cell><ns0:cell>0.9476</ns0:cell><ns0:cell>0.9102</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9556</ns0:cell><ns0:cell>0.9618</ns0:cell><ns0:cell>0.9789</ns0:cell><ns0:cell>0.9715</ns0:cell><ns0:cell>0.9447</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9485</ns0:cell><ns0:cell>0.9731</ns0:cell><ns0:cell>0.9823</ns0:cell><ns0:cell>0.9297</ns0:cell><ns0:cell>0.9317</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9319</ns0:cell><ns0:cell>0.9415</ns0:cell><ns0:cell>0.9665</ns0:cell><ns0:cell>0.9636</ns0:cell><ns0:cell>0.9036</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9465</ns0:cell><ns0:cell>0.9807</ns0:cell><ns0:cell>0.9832</ns0:cell><ns0:cell>0.9514</ns0:cell><ns0:cell>0.9428</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9725</ns0:cell><ns0:cell>0.8936</ns0:cell><ns0:cell>0.9747</ns0:cell><ns0:cell>0.9617</ns0:cell><ns0:cell>0.9031</ns0:cell></ns0:row><ns0:row><ns0:cell>Average Value</ns0:cell><ns0:cell>0.95505</ns0:cell><ns0:cell>0.95343</ns0:cell><ns0:cell>0.98219</ns0:cell><ns0:cell>0.96449</ns0:cell><ns0:cell>0.93549</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:1:2:CHECK 20 Jan 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Entropy comparison </ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "RESPONSE TO REVIEWERS
of “ENTROPY BASED C4.5-SHO ALGORITHM WITH INFORMATION GAIN
OPTIMIZATION IN DATA MINING”
We would like to thank the reviewers for their thoughtful review of the manuscript. They raise
important issues which have been very helpful for improving it. We appreciate the positive feedback
and have revised the manuscript accordingly. We have responded in detail to each of the reviewers'
comments below and hope that the reviewers will find the responses satisfactory.
Reviewer 1 (Anonymous)
Basic reporting
No comment
Experimental design
No comment
Validity of the findings
No comment
Comments for the Author
This paper describes a new classification algorithm to improve information management. The use of the
SHO algorithm has a significant role to produce optimal values. This paper also explains that the
proposed method has a better level of accuracy than the previous methods.
This paper is well written. Gap research is well presented. The proposed method can be described
systematically. Unfortunately, there are only a few small errors but it is quite annoying, i.e., in Figure 1.
The SHO algorithm is not well illustrated in the end. If this can be corrected, it will make this paper of
higher quality.
Reply: As per your suggestion, Figure 1 has been revised and the SHO algorithm is now fully illustrated
at the end of the Results section.
Reviewer 2 (Arturo Mora)
Basic reporting
The manuscript is overall well written and documented. However, I have the following comments:
1. Lines 53-56. I suggest to re-write these sentences as a reader may understand that sensitivity is the only
important metric. Or that the models can easily predict the minority classes.
Reply: As per your suggestion, lines 53-56 have been rewritten in the revised manuscript.
2. Lines 63-64. I suggest to modify the sentence as parameters are still needed for decision trees, e.g.,
purity, minimum number of samples in a node, etc.
Reply: As per your suggestion, lines 63-64 have been modified in the revised manuscript.
3. Line 78-79. I suggest to move the sentence introducing decision tree to previous paragraph, lines 57-59.
Reply: As per your suggestion, the sentence at lines 78-79 has been moved to lines 57-59.
4. Line 166. Define the acronyms before using them. What is RLBOR?
Reply: RLBOR stands for Reduce Leaf Based on Optimization Ratio. The acronym is now defined in the
revised manuscript.
5. Paragraph 174. Sentence: 'C4.5 is one of the best...'. Add references and justification to prove this
affirmation.
Reply: The sentence in paragraph 174 has been justified and a supporting reference has been added in the revised manuscript.
6. Paragraph 179. Sentence: 'The decision tree trained process is enormous and deep.' Please explain
why, and compared to what?
Reply: The decision tree training process is large and deep compared to other learners such as KNN and
ANN. To improve the accuracy of the decision tree, the training process is evaluated in depth; this reduces
the depth, although the approach does not take the number of leaf nodes into account.
7. Lines 184-187. Please clarify that these disadvantages (e.g., large data) may only happen on some
applications. Decision trees are among the fastest models to train compared to other models (ANN, SVM,
etc.).
Reply: Feeding large datasets into the ID3 algorithm improves performance but is not effective, whereas
small datasets are handled more effectively in several fields such as assessing prospective growth
opportunities, demographic data, etc. This is because the processing speed is slow and a very large dataset
may not fit into memory. The C4.5 algorithm, by contrast, gives its most effective performance on large
datasets. Hence, the advantages of the C4.5 algorithm are considerable, although a dramatic increase in
demand for large data would require further performance improvements.
8. Algorithm 1. First loop, 'for i=1: number of data', number of data samples? If so, should this not loop
through the features? That is, calculate the information gain for each feature.
Reply: The first loop calculates the information gain for all attributes in the data. As per your comment,
the pseudo code has been revised in the manuscript.
9. Algorithm 1. Second for loop is also confusing, should this not be a recursive process? That is, function
calling itself twice with the left and right partition of the data.
Reply: Updated pseudo code is presented in the revised manuscript. The explanation step of pseudo code
is mentioned below.
Compute gain information for all attributes
Choose the best attribute according to the information gain
Create decision node based on the best attribute
Induce dataset based on newly created decision node
For all new data, call C4.5 algorithm to get sub-tree (recursive call)
Attach the tree
Return
10. Paragraph 188. I am not sure I understand the sentence 'The current attribute node is one and...'.
Reply: Thank you for the comment. Paragraph 188 has been rewritten in the revised manuscript.
11. Paragraph 188. Sentence: 'It is necessary to perform supervised data mining on the targeted dataset'.
Not sure I understand why is it necessary? For which applications? For deriving decision trees?
Reply: The current attribute node is calculated based on the information gain rate in which the root node
is selected based on the extreme information gain rate. The data is numeric with only the classification as
nominal leading category of labeled dataset. Hence, it is necessary to perform supervised data mining on
the targeted dataset. This reduces the choice of classifiers in which a pre-defined classification could
handle numerical data and classification in decision tree application.
12. Line194. Please indicate why is SHO better and why, compared to what? Add references.
Reply: As per your suggestion, the advantages of SHO and the corresponding reference have been added
in the revised manuscript.
13. Line 199. PSO abbreviation not defined. Also, add references to prove the remark.
Reply: PSO stands for Particle Swarm Optimization; the abbreviation is now defined and a reference has been added to support the remark.
14. Lines 243, 257, and 266. Fix the number of subsections. They all have 1.
Reply: The subsection numbers at lines 243, 257 and 266 have been corrected so that each subsection is numbered individually.
15. Line 267. Can you please explain why Ray's quadratic entropy is better? Compared to what and why?
Maybe this sentence can be stated after explaining the results from Table 6.
Reply: The discussion of Ray's quadratic entropy has been updated in the revised manuscript. The sentence
has been moved to follow the explanation of the results from Table 6, and in the revised manuscript Table 6
has been renumbered as Table 7.
16. Figures 2 and 3. Please increase the resolution. Text is not readable.
Reply: As per your suggestion, the resolution of Figures 2 and 3 has been increased in the revised manuscript.
The following are a few grammar suggestions.
1. Use of comma before respectively, e.g., 'x, y, and z, respectively.'
Reply: As per your suggestion, the corrections are changed in the revised manuscript.
2. Capitalize each letter after '.' punctuation mark. Example, line 133.
Reply: As per your suggestion, each sentence has been checked and line 133 has been corrected.
3. Avoid starting sentences with the 'It' pronoun. Although the contexts for most sentences are clear,
others may be confusing for a reader. Lines 115, 124, 131,
140, 144, etc. For example, the sentence
in paragraph 179: 'It measures the information related to classification obtained on the basis of the same
prediction'. Does this sentence refer to gain ratio or information gain? Also, in that same sentence,
change 'form' to 'from'.
Reply: As per your suggestion, the sentences are modified in the revised manuscript.
4. Add a comma before 'such as'. Lines 21, 32, etc.
Reply: As per your suggestion, a comma is added before “such as” in lines 21, 32, 34 and 290 in the
revised manuscript.
5. Use a comma before which when it introduces a nonrestrictive phrase. Don't use a comma before which
when it's part of a prepositional phrase, such as 'in which.'
Reply: Line 75: As per your suggestion, the manuscript has been modified and the corrections highlighted.
6. Please use ID3 abbreviation consistently. The manuscript has Id3 in different paragraphs.
Reply: The revised manuscript now uses ID3 consistently instead of Id3.
7. Data is plural, please replace 'data is' by 'data are' in the manuscript. Finally, please ensure references
follow the PeerJ reference style.
Reply: Lines 51, 108: As per this comment, "data is" has been replaced by "data are". The references now
follow the PeerJ reference style.
Experimental design
The experimental design is well performed. However, I have the next suggestions:
1. Equation 9. Why rand(0.7,0.9)?
Reply: Lines 227-229. Random numbers are needed for the SHO algorithm. In SHO, the number of prey
(the herd's size) is randomly selected within the range of 70% to 90% of the total population n, while the
remaining individuals are labeled as predators. Therefore, 0.7 and 0.9 were chosen as the bounds of the
random value.
2. Equation 11. Define all variables used, e.g., hl and hm. Equation 12. Define hb.
Reply: Lines 235-236, 239-240. As per your suggestion, all variables are now defined in the revised
manuscript. In Equation (11), h_l, h_m, h_b and p_m denote the herd leader, the herd's center of mass, the
present herd leader's position and the predators' center of mass, respectively.
3. Equation 16. Define hr.
Reply: Line 242. h_r denotes a randomly chosen herd member.
4. Equation 17. Define Rr, i.e., the radius is calculated for each pray or pack of prays?
Reply: Line 244. R_r is the domain-danger radius of each prey; the domain of danger is defined in SHO as
an area of finite radius around each prey.
5. Equation 18. Define K and H.
Reply: Line 244. K is the set of herd members killed during the predation phase, and H is the herd population.
6. I would suggest adding an additional paragraph and possibly a figure explaining how the SHO is
implemented for the optimization of the gain. Moreover, fitness defined in equation 8 considers only the
gain at a specific node. How is SHO used to optimize all gain for splitting non-terminal nodes? Is it
required to run one SHO per non-terminal node?
Reply: Lines 298-304, 203. As per your suggestion, an additional paragraph with an explanatory figure has
been added to the Results section; Figure 3 shows the convergence of the SHO algorithm. Equation (8) is
not limited to a specific node: the number of nodes is taken into account in the gain fitness, and the fitness
equation has been altered accordingly. SHO performs gain optimization during the formation of each
sub-tree, i.e. it is run at each split. We also ran the experiment 100 times and present the best results.
7. Line 235. Although the manuscript mentions that cross-validation was used, there are no details about
the tuning of the parameters for the different models. Please define if default parameters were used or if
these are tuned. If tuned, which technique did you consider to tune them (grid search, random search, etc.)
and which data used (did you use validation set?). Finally, mention that 10-fold cross-validation was
used.
Reply: In this manuscript, we are using default parameters. We have used the 10 UCI datasets for the
experimental validation.
8. Can you please describe the parameters used for SHO, ACO, lower/upper bounds, etc.? That is, how
many iterations, trees, etc. were used, and why? Where these set randomly or optimized?
Reply: The parameter values of SHO, ACO, PSO and CS are tabulated in Table 2, and the parameter table
has been updated in the revised manuscript. Random values were used for settings such as the number of
iterations and the population size in the proposed optimization work.
Validity of the findings
1. Results are well described. I would only suggest adding an average row in Tables 2-6 to better observe
the overall performance improvement. Additionally, highlighting in bold the best performance obtained
for each dataset would increase readability.
Reply: As per your suggestion, an average row has been added to Tables 3-7 in the revised manuscript, and
the best results are highlighted in bold for readability.
2. Fig. 2 and Table 7. Are convergence and computational time shown for a single dataset or an average?
Reply: Lines 305, 310. The convergence and computational time plots show results averaged over all
datasets.
3. Can you please explain why RUN.m uses k=100 for the number of trees? Are you reporting the best
results obtained from these 100 SHO runs?
Reply: k=100 is the number of times the tree is generated; the RUN.m file is run 100 times. After the 100
SHO runs, we obtained the best result, which is presented in the article.
4. While the code is very well documented, it only includes the SHO implementation. For reproducibility
purposes, I would suggest providing the other optimization algorithms (ACO and PSO) if implemented.
Otherwise, refer to the code/source used.
Reply: We have referred to the source code and the corresponding reference papers for the ACO, PSO and
CS algorithms. For comparison, we implemented C4.5 with ACO, PSO and CS in place of SHO. The source
code links and reference citations are given below.
Code ACO- https://in.mathworks.com/matlabcentral/fileexchange/52859-ant-colony-optimization-aco
Otero FE, Freitas AA, Johnson CG. (2012). Inducing decision trees with an ant colony optimization
algorithm. Applied Soft Computing. 12(11): 3615-3626.
Code PSO- https://in.mathworks.com/matlabcentral/fileexchange/52857-particle-swarm-optimization-pso
Chen KH, Wan KJ, Wang KM, Angelia MA. (2014). Applying particle swarm optimization-based
decision tree classifier for cancer classification on gene expression data. Applied Soft Computing. 24: 773780.
Code CS- https://in.mathworks.com/matlabcentral/fileexchange/35635-cuckoo-optimization-algorithm
Cao M, Tang GA, Shen Q, Wang Y. (2015). A new discovery of transition rules for cellular automata
by using cuckoo search algorithm. International Journal of Geographical Information Science. 29(5):
806-824.
" | Here is a paper. Please give your review comments after reading it. |
7 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Information efficiency is gaining more importance in the development as well as application sectors of information technology. Data mining is a computer-assisted process of massive data investigation that extracts meaningful information from the datasets. The mined information is used in decision-making to understand the behavior of each attribute.</ns0:p><ns0:p>Therefore, a new classification algorithm is introduced in this paper to improve information management. The classical C4.5 decision tree approach is combined with Selfish Herd Optimization (SHO) algorithm to tune the gain of given datasets. The optimal weights for the information gain will be updated based on SHO. Further, the dataset is partitioned into two classes based on quadratic entropy calculation and information gain. Decision tree gain optimization is the main aim of our proposed C4.5-SHO method. The robustness of the proposed method is evaluated on various datasets and compared with classifiers, such as ID3 and CART. The accuracy and area under ROC (AUROC) parameters are estimated and compared with existing algorithms like ant colony optimization, particle swarm optimization and cuckoo search.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>attributes. Hence, parameters (like purity, number of samples, etc.) are still needed for a decision tree. Moreover, it is capable of handling multidimensional data that offers good classification performance for common datasets <ns0:ref type='bibr' target='#b30'>[Ngoc et al., 2019]</ns0:ref>. Decision tree is also known as decision support tool which utilizes the model of tree-like or graph and the consequences are resource costs, utility and event outcomes <ns0:ref type='bibr' target='#b25'>[Lee, 2019]</ns0:ref>. In practical, the methods utilized to create decision trees normally produce trees with a low node factor and modest tests at each node. Also, the classifier contains different algorithms, such as C4.5, ID3 and CART. The C4.5 algorithm is the successor of ID3 which uses gain ratio by splitting criterion for splitting the dataset. The information gain measure used as a split criterion in ID3 is biased to experiments with multiple outcomes as it desires to select attributes with higher number of values <ns0:ref type='bibr'>[Jimnez et al., 2019]</ns0:ref>. To overcome this, the C4.5 algorithm undergoes information gain normalization using split information value which in turn avoids over fitting errors as well.</ns0:p><ns0:p>In C4.5, two criterions are used to rank the possible tests. The first criterion of information gain is to minimize the entropy of subsets and the second criterion of gain ratio is to divide the information gain with the help of test outcome information. As a result, the attributes might be nominal or numeric to determine the format of test outcomes. <ns0:ref type='bibr' target='#b23'>[Kuncheva et al., 2019]</ns0:ref>. On the other hand, the C4.5 algorithm is also a prominent algorithm for data mining employed for various purposes. The generic decision tree method is created default for balanced datasets; so it can deal with imbalanced data too <ns0:ref type='bibr' target='#b24'>[Lakshmanaprabu et al., 2019]</ns0:ref>. The traditional methods for balanced dataset when used for imbalanced datasets cause low sensitivity and bias to the majority classes <ns0:ref type='bibr' target='#b24'>[Lakshmanaprabu et al., 2019]</ns0:ref>. Some of the imbalance class problems include image annotations, anomaly detection, detecting oil spills from satellite images, spam filtering, software defect prediction, etc. <ns0:ref type='bibr' target='#b26'>[Li et al., 2018]</ns0:ref>. The imbalanced dataset problem is seen as a classification problem where class priorities are very unequal and unbalanced. In this imbalance issue, a majority class has larger pre-probability than the minority class <ns0:ref type='bibr' target='#b27'>[Liu, Zhou & Liu, 2019]</ns0:ref>. When this problem occurs, the classification accuracy of the minority class might be disappointing <ns0:ref type='bibr' target='#b38'>[Tang & Chen, 2019</ns0:ref>]. Thus, the aim of the proposed work is to attain high accuracy in addition to high efficiency.</ns0:p><ns0:p>In data classification, accuracy is the main challenge of all applications. Information loss in dataset is problematic during attribute evaluation and so, the probability of attribute density is estimated. For this, the information theory called entropy based gain concept is utilized to enhance the classification task. Furthermore, common uncertainties of numerical data are used to measure the decision systems. A population based algorithm is utilized to optimize the gain attributes and to enhance the classification in complex datasets. 
The Selfish Herd Optimization (SHO) enhances the feature learning accuracy by effectively removing redundant features thereby providing good global search capability. The main contribution of the proposed work is summarized as follows.</ns0:p><ns0:p> To solve the data classification problem using entropy based C4.5 decision tree approach and gain estimation.</ns0:p><ns0:p> Selfish Herd Optimization (SHO) algorithm is utilized to optimize the information gain attributes of decision tree.</ns0:p><ns0:p> The data are classified with high accuracy and AUROC of datasets is compared with existing techniques.</ns0:p><ns0:p>The organization of this paper is described as follows: introduction about the research paper is presented in Section 1, survey on existing methods and challenges are depicted in Section 2. The preliminaries are explained in Section 3. The working of proposed method is detailed in Section 4. Efficiency of optimization algorithm is evaluated in Section 5 and the conclusion of the proposed method is presented in Section 6.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Related Works</ns0:head><ns0:p>Multiple learning process and multi-label datasets are widely used in different fields nowadays. <ns0:ref type='bibr'>[Yahya, 111 2019</ns0:ref>] evaluated the efficacy of Particle Swarm Classification (PSC) in data mining. PSC was utilized to design the classification model which classifies the queries into Bloom's taxonomy six cognitive-levels. Rocchio algorithm (RA) was used to mitigate the dimensionality of adverse effects in PSC. Finally, RA-based PSC was investigated with various feature selection methods for a scheme of queries. But it is identified that the multi-label classification dealt with some problems where the classifier chain label order has a strong effect on the performance of classification. Nevertheless, it is too hard to find the proper order of chain sequences. Hence, <ns0:ref type='bibr'>[Sun et al., 2019]</ns0:ref> had proposed an ordering method based on the conditional entropy of labels where a single order was generated by this method. Reduced attributes can improve the accuracy of classification performances. The missed attribute values were typically not used in entropy or gain calculation. Information gain based algorithms tend to authenticate the attribute sets. Various measures were certainly affected from redundancy and non-monotonicity during attribute reduction. Therefore, a forward heuristic attribute reduction algorithm was proposed to solve the uncertainties in attribute selection. It simultaneously selects information attributes though unnecessary attributes were reduced in practice. <ns0:ref type='bibr' target='#b17'>[Gao et al., 2019]</ns0:ref> proposed granular maximum decision entropy (GMDE) based on the measurement of monotonic uncertainty. Extreme decision entropy was developed in which the uncertainties of entropy are integrated with granulation knowledge. This investigation was validated with various UCI datasets and found to be computationally inexpensive.</ns0:p><ns0:p>The choice of dataset selection allows the extraction of highly representative information from high-level data; so computational efforts were reduced among other tasks. A hybrid optimization based feature selection was proposed by <ns0:ref type='bibr' target='#b19'>[Ibrahim et al., 2019]</ns0:ref>. The suggested technique is combined with slap swarm algorithm (SSA) and particle swarm optimization methods to enhance the efficacy of global and local search steps. Therefore, the hybrid algorithm was examined on mixed datasets. It requires less time while the nodes quantity is reduced making it more desirable for large datasets. The SSA-PSO was employed to acquire best features from various UCI datasets. Also, redundant features were detached from the original datasets resulting in better accuracy. However, the accuracy is affected in complex datasets. To improve the classification performance of complex data, <ns0:ref type='bibr'>[Lin et al., 2019]</ns0:ref> introduced an attribute reduction method utilizing neighborhood entropy measures. The systems should have the ability to handle continuous data while maintaining its information on attribute classification. The concept of neighborhood entropy was explored to deal with uncertainty and noise of neighborhood systems. 
It fully reflects the decision-making ability by combining the degree of reliability with the coverage degree of neighborhood systems.</ns0:p><ns0:p>A clustering method based on functional value sequences has been proposed to accurately identify the functional equivalent programs with index variations. Because existing clustering programs were limited to structured metric vectors as in <ns0:ref type='bibr' target='#b39'>[Wang et al., 2020]</ns0:ref>. This strategy is implemented for automated program repair to identify the sample programs from a large set of template programs. The average accuracy and average entropy were 0.95576 and 0.15497, respectively. However, the problem turned out to uncertain as the number of predictions is higher than the number of previous results. This issue was overcome by an alternative solution of priori weights and maximum entropy principle to attain the posteriori weights. [Arellano, Bory-Reyes & Hernandez-Simon, 2018] utilized a machine learning approach with single aggregated prediction from a set of individual predictions. A new factor presents a problem departing from the well-known maximal entropy hypothetical method and taking the distance among original and estimated integrated predictions. The suggested method was applied to estimate and measure predictive capability using prediction datasets.</ns0:p><ns0:p>It is difficult to perform feature selection (FS) for multi-label dimension curse in numerous learning processes. Hence, <ns0:ref type='bibr' target='#b32'>[Paniri, Dowlatshahi & Nezamabadi-pour, 2020]</ns0:ref> proposed a multi-label relevance-redundancy FS scheme based on Ant Colony Optimization (ACO) called ML-ACO. ML-ACO seeks to find the best features with lowest redundancy and many repetitions with class labels. To speed up the convergence, the cosine similarities between features as well as class labels are used as starting pheromone for each ant, and can be classified as a filterbased method. Various parametric entropies of decision tree algorithms are investigated by <ns0:ref type='bibr'>[Bret et al., 2019]</ns0:ref>. Partial empirical evidences were provided to support the notion that parameter adjustment of different entropy activities influences the classification. Receiver operating characteristic (ROC) and Area under the ROC (AUROC) curve analysis provides an accurate criterion for evaluating decision trees based on parametric entropy. Various entropies, such as Shannon entropy, Renyi entropy, Tsallis entropy, Abe entropy and Landsberg-Vedral entropy were discussed.</ns0:p><ns0:p>A new information classification algorithm has been introduced to improve the information management of restricted properties in <ns0:ref type='bibr' target='#b40'>[Wang et al., 2019]</ns0:ref>. Information management efficiency has gained more importance for the development of information technology through its expanded use. Reduce leaf based on optimization ratio (RLBOR) algorithm was utilized to optimize the decision tree ratios. ID3 algorithm is a classical method of data mining that selects attributes with maximum information gain from the dataset at split node. However, decision tree algorithms have some drawbacks; it is not always optimal and it is biased in favor of properties that have higher values. In data classification, accuracy is the main challenge of all datasets. The resulting information loss is problematic for attribute evaluation while estimating the probability density of attributes. 
Due to the absence of classification information, it is challenging to perform potential classification. Consequently, an improved algorithm is utilized to solve the data classification issues.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Preliminaries</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Entropy based measurements understands the decision system knowledge, its properties and some relations about the measurements. An optimization model is explored to enhance the performance of complex dataset classification. During prediction, the information gain optimal weights will be updated with the help of SHO algorithm. The nominal attributes of the dataset were designed by the ID3 algorithm. The attributes with missing values are not permitted. C4.5 algorithm, an extension of ID3 can handle datasets with unknown-values, numeric and nominal attributes <ns0:ref type='bibr' target='#b1'>[Agrawal & Gupta, 2013]</ns0:ref>. C4.5 is one of the best learning based decision tree algorithm in data mining because of its distinctive features like classifying continuous attributes, deriving rules, handling missing values and so on <ns0:ref type='bibr' target='#b41'>[Wu et al., 2008]</ns0:ref> where the class sizes are labeled into . A set of training sample is denoted as</ns0:p><ns0:formula xml:id='formula_0'>n K K K ,.... , 2 1 | | |,... | |, | 2 1 n K K K</ns0:formula><ns0:p>, and the sample probability formula of class is given in Equation ( <ns0:ref type='formula'>1</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_1'>M i K (1) M K M p i i  ) (</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.1'>Quadratic Entropy</ns0:head><ns0:p>Entropy is used to measure the uncertainty of a class using the probability of a particular event or attribute. The gain is inversely proportional to the entropy. The information gain depends on how much information was available before knowing the attribute value and after knowing the attribute value. Different types of entropies are utilized in data classification; for better performance, quadratic entropy is used in our work <ns0:ref type='bibr' target='#b0'>[Adewole & Udeh, 2018]</ns0:ref>. This entropy considers a finite discrete random variable X with a complete probability collection, as stated in Equation ( <ns0:ref type='formula'>2</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_2'>(2) \quad p_i \ge 0 \; (i = 1, 2, \ldots, n), \qquad \sum_{i=1}^{n} p_i = 1</ns0:formula><ns0:p>Here, the probability of event i is denoted as p_i. The quadratic entropy of the information is calculated by Equation (3).</ns0:p><ns0:formula xml:id='formula_3'>(3) \quad Entropy(M) = \sum_{i=1}^{n} p_i (1 - p_i)</ns0:formula><ns0:p>Here, Entropy(M) specifies the information entropy of M (the training sample set). For a particular attribute H, the entropy of the information is determined by Equation ( <ns0:ref type='formula'>4</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_4'>(4) \quad entropy(M, H) = \sum_{g \in G} \frac{|M_g|}{|M|} \cdot entropy(M_g)</ns0:formula><ns0:p>Here, M_g denotes the subset of M taking value g of attribute H, |M_g| the number of elements in M_g, and |M| the number of elements in M.</ns0:p></ns0:div>
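A minimal Python sketch of Equations (3) and (4) for categorical labels and attributes; the helper names are illustrative and not part of the authors' implementation.

# Quadratic entropy (Eq. 3) and attribute-conditioned entropy (Eq. 4).
from collections import Counter

def quadratic_entropy(labels):
    n = len(labels)
    return sum((c / n) * (1 - c / n) for c in Counter(labels).values())

def quadratic_entropy_given(rows, labels, attr):
    # entropy(M, H) = sum over attribute values g of |M_g|/|M| * Entropy(M_g)
    n = len(labels)
    total = 0.0
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        total += len(sub) / n * quadratic_entropy(sub)
    return total

print(quadratic_entropy(["yes", "yes", "no", "no", "yes"]))  # ~0.48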
<ns0:div><ns0:head n='3.2'>Information Gain</ns0:head><ns0:p>The information gain is determined by Equation ( <ns0:ref type='formula' target='#formula_7'>5</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_7'>(5) \quad gain(M, H) = entropy(M) - entropy(M, H)<ns0:label>5</ns0:label></ns0:formula><ns0:p>Using Equation ( <ns0:ref type='formula' target='#formula_7'>5</ns0:ref>), the information gain is employed to find the additional information that an attribute provides for classification. The C4.5 algorithm chooses the attribute that has the highest gain in the dataset and uses it as the split-node attribute. Based on the attribute value, the data are subdivided into subgroups and the information gain of each subgroup is recalculated. The training process of the decision tree is large and deep compared to other learners such as KNN and ANN, as it does not take into account the number of leaf nodes. Moreover, the gain ratio is different from the information gain: the gain ratio measures the information related to classification obtained on the basis of the same partition. C4.5 uses the information gain and also allows a gain ratio to be measured; the gain ratio is described in Equation ( <ns0:ref type='formula' target='#formula_8'>6</ns0:ref>).</ns0:p></ns0:div>
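Building on the helpers sketched after Section 3.1, the following shows Equation (5) together with the usual C4.5 gain ratio of Equations (6)-(7); it assumes the quadratic_entropy and quadratic_entropy_given functions from the previous sketch are in scope, and the names are illustrative only.

# Information gain (Eq. 5) and gain ratio (Eqs. 6-7) using the quadratic-entropy helpers above.
import math

def information_gain(rows, labels, attr):
    return quadratic_entropy(labels) - quadratic_entropy_given(rows, labels, attr)

def gain_ratio(rows, labels, attr):
    n = len(labels)
    split_info = 0.0
    for v in set(r[attr] for r in rows):
        frac = sum(1 for r in rows if r[attr] == v) / n
        split_info -= frac * math.log2(frac)
    # Guard against the unstable case split_info = 0 mentioned in the text.
    return information_gain(rows, labels, attr) / split_info if split_info else 0.0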
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>(</ns0:p><ns0:formula xml:id='formula_8'>) ) , ( inf _ ) , ( ) , ( _ H M o split H M gain H M ratio gain <ns0:label>6</ns0:label></ns0:formula><ns0:p>Where, ( <ns0:ref type='formula'>7</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_9'>M M M M H M o t spli g n g g 2 1 log ) , ( inf _    </ns0:formula><ns0:p>The attribute with a maximum gain rate is selected for splitting the attributes. When the split information tactics is 0, the ratio becomes volatile. A constraint is added to avoid this, whereby the information gain of the test selected must be large at least as great as the average gain over all tests examined. <ns0:ref type='bibr' target='#b34'>[Quinlan, 2014]</ns0:ref> developed the C4.5 algorithm to generate a decision tree. Many scholars have made various improvements in the tree algorithm. However, the problem is that tree algorithms require multiple scanning and deployment of data collection during the building process of decision trees. For example, large datasets provided into the ID3 algorithm improves the performance but not effective whereas small datasets are more effective in several fields like assessing prospective growth opportunities, demographic data, etc. This is because the processing speed is slow and the larger dataset is too large to fit into the memory. Besides, C4.5 algorithm gives most effective performance with large amount of datasets. Hence, the advantages of C4.5 algorithm are considerable but a dramatic increase in demand for large data would be improved to meet its performance. Manuscript to be reviewed Computer Science node is selected based on the extreme information gain rate. The data is numeric with only the classification as nominal leading category of labeled dataset. Hence, it is necessary to perform supervised data mining on the targeted dataset. This reduces the choice of classifiers in which a pre-defined classification could handle numerical data and classification in decision tree application. Each attribute is evaluated to find its ratio and rank during the learning phase of decision trees. Additionally, correlation coefficient is found to investigate the correlation between attributes as some dataset could not give any relevant result in data mining. In C4.5 decision tree algorithm, the gain is optimized by proposed SHO technique. The information gain is a rank based approach to compute the entropy. In this algorithm, the node with a highest normalized gain value is allowed to make decision, so there is a need to tune the gain parameter. The gain fitness is calculated based on the difference between actual gain value and new gain value. This is the objective function of the gain optimization technique which is described in Equation ( <ns0:ref type='formula'>8</ns0:ref>). ( <ns0:ref type='formula'>8</ns0:ref>)</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>C4.5 decision tree</ns0:head><ns0:formula xml:id='formula_10'>  i i G G fitness min  </ns0:formula><ns0:p>Here, and denotes actual and new gain, respectively. Based on this fitness, the gain error is minimized by</ns0:p><ns0:formula xml:id='formula_11'>i G i G</ns0:formula><ns0:p>SHO and the gain value will be computed by using Equation ( <ns0:ref type='formula' target='#formula_7'>5</ns0:ref>). SHO can improve the learning accuracy, remove the redundant features and update the weight function of decision trees. The feature of SHO is random initialization generating strategy.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Proposed Method: Selfish Herd Optimization (SHO)</ns0:head><ns0:p>SHO is utilized to minimize the gain error in a better way in optimization process. It improves the balancing between exploration and exploitation phase without changing the population size <ns0:ref type='bibr' target='#b16'>[Fausto et al., 2017]</ns0:ref>. SHO algorithm is mainly suitable for gain optimization in decision trees. In meta-heuristic algorithms, SHO is a new branch inspired from group dynamics for gain optimization. SHO is instigated from the simulations of herd and predators searching their food or prey. The algorithm uses search agents moving in n-dimensional space to find solution for optimization problem. The populations of SHO are herd and predators where the individuals are known as search agents. In optimization areas, SHO is proved to be competitive with particle swarm optimization (PSO) <ns0:ref type='bibr' target='#b16'>[Fausto et al., 2017]</ns0:ref> for many tasks. The theory of Selfish Herd has been establishing the predation phase. Every herd hunts a possible prey to enhance the survival chance by accumulating with other conspecifics in ways that could increase their chances of surviving a predator attack without regard for how such behavior affects other individuals' chances of survival. This may increase the likelihood of a predator escaping from attacks regardless of how such activities disturb the survival probabilities of other individuals. The proposed SHO algorithm consists of different kinds of search agents like a flock of prey that lives in aggregation (mean of selfish herd), package of predators and predators within the said aggregate. This type of search agents is directed separately through fixed evolutionary operators which are centered on the relationship of the prey and the predator <ns0:ref type='bibr' target='#b3'>[Anand & Arora, 2020</ns0:ref>].</ns0:p><ns0:p>The mathematical model of SHO algorithm is given as follows.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Initialization</ns0:head><ns0:p>The iterative process of SHO's first step is to initialize the random populations of animals as prey and predators thereby having one set of separable locations . Here, the population size is denoted by</ns0:p><ns0:formula xml:id='formula_12'>  sN s s S ,... 2 , 1 </ns0:formula><ns0:p>. The position of animals is limited into lower and upper boundaries and the groups are classified into two, like N prey and predator. Equation ( <ns0:ref type='formula'>9</ns0:ref>) is utilized to calculate the number of members in prey group. predators. In SHO, the number of prey (herd's size) is randomly selected within range 70% and 90% of the total population , while the remainder individuals are labeled as predators. Therefore, 0.7 and 0.9 were the selected n random values.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Assignation of survival value</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The survival value of every animal is assigned and it is associated with the current best and worst</ns0:p></ns0:div>
<ns0:div><ns0:head> </ns0:head><ns0:p>SV positions of a known of whole population members. By optimization process, the present best and worst values SV are mentioned in the optimization problem. Then, the survival value will be determined by using Equation ( <ns0:ref type='formula' target='#formula_13'>10</ns0:ref>).</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_13'>) b w b i f f f x f SV    ) (<ns0:label>10</ns0:label></ns0:formula><ns0:p>Where, worst and best fitness values are denoted by , respectively. Here, represents the location of b w f and f i x the prey or the predator.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Herd's leader movement</ns0:head><ns0:p>All herd members' movement is one of the significant steps in SHO. The location of leader of the herd is updated by Equation ( <ns0:ref type='formula'>11</ns0:ref>) as given in <ns0:ref type='bibr'>[Femando et al., 2017]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_14'>(11)                   1 ) ( 2 1 ) ( 2 , , L L best ybest l L L m m P l L L SVh if h y r h SVh if h P r h h m  </ns0:formula><ns0:p>Here, the tested selfish repulsion towards predators by current herd leader is denoted as , and r denotes the l  random number in the range (0, 1). Moreover, the location of the herd member is updated based on two selections. Equation ( <ns0:ref type='formula' target='#formula_15'>12</ns0:ref>) is utilized to a h follow the herd and Equation ( <ns0:ref type='formula'>14</ns0:ref>) is utilized to recompense the group. Also, the selection is prepared based on some random variables.</ns0:p><ns0:p>( </ns0:p><ns0:formula xml:id='formula_15'>) a a a f h h   Where, (13)                     otherwise h h SV SV h h h h f a m h h h h a b h h a L h h a m i u L b a L a ) ( 2 ) ( ) ( 2 , , ,       (14) r SV h y h h a best L h a best y h a a ) 1 ( ) ( 2 ,             <ns0:label>12</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head n='4.4'>Predator movement</ns0:head><ns0:p>The movement of every separable set of predators, the endurance of entities in the attacked flock and the distance between the predators from assault predators are taken into account in SHO. Based on the pursuit probability, the predator movement is determined as given in Equation ( <ns0:ref type='formula'>15</ns0:ref>).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='formula'>16</ns0:ref>). ( <ns0:ref type='formula'>16</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_16'>) ( 2 p r P p X h r X X     </ns0:formula><ns0:p>Where, indicates randomly chosen herd member. In advance, each member of the predator and the prey group r h survival rate is recomputed by Equation (9).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Predation phase</ns0:head><ns0:p>The predation process is executed in this phase. Domain danger is defined by SHO which is signified as area of finite radius around each prey. The domain danger radius of each prey is computed by Equation ( <ns0:ref type='formula'>17</ns0:ref>). </ns0:p><ns0:formula xml:id='formula_17'>𝑅 𝑟 (17) u j l j n j r y y R    1</ns0:formula><ns0:formula xml:id='formula_18'>} , { j h K K  (19) i i p m m i j i j i p j T h h p h p h p T h H     , ) ( , , ,  </ns0:formula></ns0:div>
<ns0:div><ns0:head n='4.6'>Restoration phase</ns0:head><ns0:p>Finally, the restoration is accomplished by making a set . Here, represents the set of herd</ns0:p><ns0:formula xml:id='formula_19'>K h M j   K</ns0:formula><ns0:p>member slayed for the duration of the predation phase. The mating probabilities are also determined by each member as in Equation ( <ns0:ref type='formula'>20</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 2: Peseudo code for the proposed SHO algorithm in data classification</ns0:head></ns0:div>
<ns0:div><ns0:head>Start</ns0:head><ns0:p>Initialize the parametrs and locations of SHO by eq ( <ns0:ref type='formula'>9</ns0:ref>)</ns0:p><ns0:p>For <ns0:ref type='formula'>16</ns0:ref>)</ns0:p></ns0:div>
<ns0:div><ns0:head>End for</ns0:head><ns0:p>Re-compute survival value using eq (10)</ns0:p><ns0:p>Compute dangerous radius by ( <ns0:ref type='formula'>17</ns0:ref>) Predation performance by ( <ns0:ref type='formula'>18</ns0:ref>) & ( <ns0:ref type='formula'>19</ns0:ref>) Restoration performance by eqn ( <ns0:ref type='formula'>20</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_20'>1   K K</ns0:formula></ns0:div>
<ns0:div><ns0:head>End while</ns0:head><ns0:p>Global optimal output Fitness for global optimal output End</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Result and Discussion</ns0:head><ns0:p>The efficiency of our proposed method is assessed by comparing its accuracy with other popular classification methods like Particle Swarm Optimization (PSO) <ns0:ref type='bibr' target='#b9'>[Chen et al., 2014]</ns0:ref>, Ant Colony Optimization (ACO) <ns0:ref type='bibr' target='#b31'>[Otero, Freitas & Johnson, 2012]</ns0:ref>, and Cuckoo Search (CS) Optimization <ns0:ref type='bibr' target='#b7'>[Cao et al., 2015]</ns0:ref>. We estimated the performance of proposed algorithm based on the accuracy as tested in 10 UCI datasets. The accuracy of our proposed method is comparable to other optimization methods and various classifiers. But the cross validation is not performed in the proposed approach. The proposed method is greater than all existing methods taken for comparison. SHO is combined with C4.5 classifier to produce greater accuracy than a standard C4.5 classifier. The proposed decision tree classifier named C4.5-SHO is further compared with C4.5, ID3 and CART. The description of ten data sets is tabulated in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>. These datasets include Monks, Car, Chess, Breast-cancer, Hayes, Abalone, Wine, Ionosphere, Iris, and Scale [Arellano, Bory-Reyes & Hernandez-Simon, 2018]. Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> shows the algorithm parameters. Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> shows the algorithm parameters for decision tree.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data set</ns0:head><ns0:p>No The proposed method is compared with existing entropies, optimization algorithms and different classifiers.</ns0:p><ns0:p>The effectiveness is estimated based on the accuracy, AUROC and classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>a) Accuracy</ns0:head><ns0:p>The classification accuracy is measured based on Equation ( <ns0:ref type='formula'>21</ns0:ref>) <ns0:ref type='bibr'>[Polat & Gne, 2009]</ns0:ref>, In Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, the proposed C4.5-SHO decision tree classification accuracy is compared with other classifiers like C4.5, ID3 and CART. The accuracy of our proposed work is more stable compared to the accuracy achieved by the other considered algorithms. The accuracy of classification is depended on the training dataset. The dataset is split up into a training set and test set. The classifier model is trained with training set. Then to evaluate the accuracy of the classifier, we use test set to predict the labels (which we know) in the test set. The accuracy of Iris data set is high (0.9986) compared to other data sets. The lowest accuracy of the proposed C4.5-SHO is 0.9437 in Scale data set. In comparison with existing classifiers, it is observed that the proposed approach has obtained a good accuracy. The proposed C4.5-SHO is compared with other optimization algorithms like ACO, PSO and CS. The AUROC results are presented in Table <ns0:ref type='table' target='#tab_9'>7</ns0:ref> which shows that the proposed AUROC value is better than existing algorithms. It is revealed that SHO not only reduces the complexity of decision trees but also enhances the accuracy.</ns0:p><ns0:formula xml:id='formula_21'>(21) A a A a assess A accuracy i A i i     , | | ) ( ) ( | | 1 (22)      otherwise c a a classify if a assess ,<ns0:label>0 . ) ( , 1 ) ( Here</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Data</ns0:head></ns0:div>
<ns0:div><ns0:head>c) Different entropy comparison</ns0:head><ns0:p>Based on the Ray's quadratic entropy, the information gain is optimized through SHO algorithm. The entropy with SHO is compared to traditional SHO in terms of other entropies, such as C4.5-SHO (Shanon entropy), C4.5-SHO (Havrda & charvt entropy), C4.5-SHO (Renyi entropy) and C4.5-SHO (Taneja entropy). The quadratic entropy is the measure of disorder in the range between entire arranged (ordered) and unarranged (disordered) data in the given dataset. The Quadratic entropy is successfully measured for the disorders in the datasets. The classification accuracy is improved by the quadratic entropy than other entropies. Hence, the proposed work follows Ray's quadratic entropy to get a better output. Compared to other entropies, the Quadratic entropy achieved better accuracy in data classification for all data sets. The gain parameter is optimized by proposed C4.5-SHO algorithm in order to make a decision. An optimal gain value is selected through the fitness function mentioned in Equation ( <ns0:ref type='formula'>8</ns0:ref>). Initially, gain is calculated for each attribute used in the decision tree. If the number of iteration increases, the gain value will be changed on every iteration. Further, the fitness is nothing but the difference between actual gain and new gain. Therefore, the gain values of the attributes are noted for every iteration. The proposed optimization algorithm provided the optimal best gain value at 100 th iteration as seen in the convergence plot in Figure <ns0:ref type='figure'>2</ns0:ref>. Finally, the gain error was minimized with the help of C4.5-SHO algorithm.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the convergence plot of proposed SHO and similar existing algorithms for average of all datasets. The proposed SHO achieved good convergence compared to existing techniques. The proposed work is based on gain optimization with SHO algorithm whereas the execution time is also the most important factor in data classification approach. On comparing the time-taken for analysis, the proposed method needs low computational time than the existing algorithms like ACO (0.974s), PSO (0.54s) and CS (0.6s). Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>.</ns0:head><ns0:label /><ns0:figDesc>In decision tree based classification, the training set is assumed as and the M number of training samples is mentioned as . Here, the samples are divided into for various kinds of | | M N</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Algorithm 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pseudo code for C4.5 decision tree algorithm Input: Dataset Output: Decision tree // Start for all attributes in data Calculate information gain end HG= Attribute with highest information gain Tree = Create a decision node for splitting attribute HG New data= Sub datasets based on HG for all New data Tree new= C4.5(New data) Attach tree to corresponding branch of Tree end return The C4.5 algorithm builds a decision tree by learning from a training set in which every sample is built on an attribute-value pair. The current attribute node is calculated based on the information gain rate in which the root PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>quantity of prey group members is denoted as where denotes the population of the prey and the p n n</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>utilized to optimize the gain function in data classification operation. Figure 1 displays the flow diagram of SHO algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 2 :Figure 3 :</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2: Convergence evaluation of SHO</ns0:figDesc><ns0:graphic coords='15,98.96,527.47,436.59,156.10' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Comparison of computational time</ns0:figDesc><ns0:graphic coords='17,74.03,72.00,468.28,212.24' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,82.88,72.00,446.25,247.81' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,70.87,201.88,672.95' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,287.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,204.37,525.00,197.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,255.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Description of data set</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>of attributes</ns0:cell><ns0:cell cols='2'>No of samples</ns0:cell><ns0:cell>Classes</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Monks</ns0:cell><ns0:cell /><ns0:cell>7</ns0:cell><ns0:cell>432</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Car</ns0:cell><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>1728</ns0:cell><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Chess</ns0:cell><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>28056</ns0:cell><ns0:cell /><ns0:cell>36</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Breast-cancer</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>699</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Hayes</ns0:cell><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>160</ns0:cell><ns0:cell /><ns0:cell>3</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Abalone</ns0:cell><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell>4177</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wine</ns0:cell><ns0:cell /><ns0:cell>13</ns0:cell><ns0:cell>178</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Ionosphere</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>351</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Iris</ns0:cell><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Scale</ns0:cell><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>625</ns0:cell><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>SHO</ns0:cell><ns0:cell /><ns0:cell>ACO</ns0:cell><ns0:cell /><ns0:cell>PSO</ns0:cell><ns0:cell /><ns0:cell>CS</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Number of</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell>populations</ns0:cell><ns0:cell /><ns0:cell>populations</ns0:cell><ns0:cell /><ns0:cell>populations</ns0:cell><ns0:cell /><ns0:cell>populations</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dimension</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>Phromone</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Inertia weight</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Dimension</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Exponential</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Weight</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>Lower</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Heuristic</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Inertia weight</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>Lower bound and</ns0:cell><ns0:cell>-1 &1</ns0:cell></ns0:row><ns0:row><ns0:cell>boundary</ns0:cell><ns0:cell /><ns0:cell>Exponential</ns0:cell><ns0:cell /><ns0:cell>damping ratio</ns0:cell><ns0:cell /><ns0:cell>upper bound</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Weight</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Upper</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Evaporation</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Personal and</ns0:cell><ns0:cell>1.5 &</ns0:cell><ns0:cell>Number of nests</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>boundary</ns0:cell><ns0:cell /><ns0:cell>rate</ns0:cell><ns0:cell /><ns0:cell>global learning</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Prey's rate</ns0:cell><ns0:cell>0.7,</ns0:cell><ns0:cell>Lower bound</ns0:cell><ns0:cell>-1 &1</ns0:cell><ns0:cell>Lower bound</ns0:cell><ns0:cell>-10</ns0:cell><ns0:cell>Transition</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.9</ns0:cell><ns0:cell>and upper</ns0:cell><ns0:cell /><ns0:cell>and upper</ns0:cell><ns0:cell>&10</ns0:cell><ns0:cell>probability</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>bound</ns0:cell><ns0:cell /><ns0:cell>bound</ns0:cell><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Number of</ns0:cell><ns0:cell cols='2'>100 Number of runs</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Number of runs</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>runs</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>runs</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Algorithms parameters and values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Algorithms parameters for decision tree</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification accuracy of the proposed classifier C4.5 with C4.5, ID3 and CART</ns0:figDesc><ns0:table><ns0:row><ns0:cell>set</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5</ns0:cell><ns0:cell>ID3</ns0:cell><ns0:cell>CART</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9832</ns0:cell><ns0:cell>0.966</ns0:cell><ns0:cell>0.951</ns0:cell><ns0:cell>0.954</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.9725</ns0:cell><ns0:cell>0.923</ns0:cell><ns0:cell>0.9547</ns0:cell><ns0:cell>0.8415</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.9959</ns0:cell><ns0:cell>0.9944</ns0:cell><ns0:cell>0.9715</ns0:cell><ns0:cell>0.8954</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9796</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.9621</ns0:cell><ns0:cell>0.9531</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.9553</ns0:cell><ns0:cell>0.8094</ns0:cell><ns0:cell>0.9014</ns0:cell><ns0:cell>0.7452</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9667</ns0:cell><ns0:cell>0.9235</ns0:cell><ns0:cell>0.9111</ns0:cell><ns0:cell>0.9111</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9769</ns0:cell><ns0:cell>0.963</ns0:cell><ns0:cell>0.9443</ns0:cell><ns0:cell>0.9145</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9899</ns0:cell><ns0:cell>0.9421</ns0:cell><ns0:cell>0.9364</ns0:cell><ns0:cell>0.9087</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9986</ns0:cell><ns0:cell>0.9712</ns0:cell><ns0:cell>0.7543</ns0:cell><ns0:cell>0.8924</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9437</ns0:cell><ns0:cell>0.7782</ns0:cell><ns0:cell>0.7932</ns0:cell><ns0:cell>0.7725</ns0:cell></ns0:row><ns0:row><ns0:cell>Average value</ns0:cell><ns0:cell>0.97623</ns0:cell><ns0:cell>0.92208</ns0:cell><ns0:cell>0.908</ns0:cell><ns0:cell>0.87884</ns0:cell></ns0:row><ns0:row><ns0:cell>Data set</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>ACO</ns0:cell><ns0:cell>PSO</ns0:cell><ns0:cell>CS</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Classification accuracy of the proposed Algorithm with ACO, PSO and CS In Table5, the proposed C4.5-SHO decision tree classification accuracy is compared with other algorithms like ACO, PSO and CS. The accuracy of our proposed work is more stable compared to the accuracy achieved by the other considered algorithms. The accuracy of Iris data set is high (0.9986) compared to other data sets. The lowest accuracy of the proposed C4.5-SHO is 0.9437 in Scale data set. In comparison with existing algorithms, the proposed approach achieved good accuracy.The performance of classification model is shown through graph analysis of area under the Receiver Operating Characteristic curve (AUROC). This is dependent upon the attributes as well as classes. The proposed C4.5-SHO is compared with other classifiers like C4.5, ID3 and CART. The AUROC results presented in Table6which shows that the AUROC value of proposed method is better than other algorithms.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>b) Area under ROC (AUROC)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5</ns0:cell><ns0:cell>ID3</ns0:cell><ns0:cell>CART</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9619</ns0:cell><ns0:cell>0.95713</ns0:cell><ns0:cell>0.9636</ns0:cell><ns0:cell>0.9791</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.9819</ns0:cell><ns0:cell>0.9393</ns0:cell><ns0:cell>0.9891</ns0:cell><ns0:cell>0.8933</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.9673</ns0:cell><ns0:cell>0.9252</ns0:cell><ns0:cell>0.9090</ns0:cell><ns0:cell>0.9049</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9793</ns0:cell><ns0:cell>0.9171</ns0:cell><ns0:cell>0.9730</ns0:cell><ns0:cell>0.9218</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.9874</ns0:cell><ns0:cell>0.9069</ns0:cell><ns0:cell>0.9108</ns0:cell><ns0:cell>0.8360</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9647</ns0:cell><ns0:cell>0.9224</ns0:cell><ns0:cell>0.9573</ns0:cell><ns0:cell>0.9082</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9914</ns0:cell><ns0:cell>0.9772</ns0:cell><ns0:cell>0.9497</ns0:cell><ns0:cell>0.9739</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9943</ns0:cell><ns0:cell>0.9680</ns0:cell><ns0:cell>0.9059</ns0:cell><ns0:cell>0.9560</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9890</ns0:cell><ns0:cell>0.9048</ns0:cell><ns0:cell>0.7945</ns0:cell><ns0:cell>0.9481</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9850</ns0:cell><ns0:cell>0.8562</ns0:cell><ns0:cell>0.7845</ns0:cell><ns0:cell>0.8007</ns0:cell></ns0:row><ns0:row><ns0:cell>Average value</ns0:cell><ns0:cell>0.98022</ns0:cell><ns0:cell>0.92742</ns0:cell><ns0:cell>0.91374</ns0:cell><ns0:cell>0.9122</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Area under the ROC curve of proposed C4.5 with ID3 and CART</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>ACO</ns0:cell><ns0:cell>PSO</ns0:cell><ns0:cell>CS</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9935</ns0:cell><ns0:cell>0.9874</ns0:cell><ns0:cell>0.97668</ns0:cell><ns0:cell>0.9733</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.98452</ns0:cell><ns0:cell>0.97908</ns0:cell><ns0:cell>0.97583</ns0:cell><ns0:cell>0.9659</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.99931</ns0:cell><ns0:cell>0.98612</ns0:cell><ns0:cell>0.9815</ns0:cell><ns0:cell>0.9503</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9854</ns0:cell><ns0:cell>0.9795</ns0:cell><ns0:cell>0.9695</ns0:cell><ns0:cell>0.9581</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.99616</ns0:cell><ns0:cell>0.92611</ns0:cell><ns0:cell>0.9442</ns0:cell><ns0:cell>0.9571</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9885</ns0:cell><ns0:cell>0.9828</ns0:cell><ns0:cell>0.9694</ns0:cell><ns0:cell>0.9566</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9932</ns0:cell><ns0:cell>0.9830</ns0:cell><ns0:cell>0.8977</ns0:cell><ns0:cell>0.8964</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9954</ns0:cell><ns0:cell>0.9741</ns0:cell><ns0:cell>0.9630</ns0:cell><ns0:cell>0.9569</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9873</ns0:cell><ns0:cell>0.9687</ns0:cell><ns0:cell>0.9656</ns0:cell><ns0:cell>0.9578</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9858</ns0:cell><ns0:cell>0.9266</ns0:cell><ns0:cell>0.9165</ns0:cell><ns0:cell>0.8968</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Area under ROC curve of the proposed Algorithm with ALO, PSO and CS</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Table 8 shows the entropy comparisons with proposed SHO. Entropy comparison</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5 -SHO</ns0:cell><ns0:cell>C4.5 -SHO</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>(Shanon</ns0:cell><ns0:cell>(Havrda &</ns0:cell><ns0:cell>(Quadratic</ns0:cell><ns0:cell>(Renyi</ns0:cell><ns0:cell>(Taneja</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>entropy)</ns0:cell><ns0:cell>charvt entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9429</ns0:cell><ns0:cell>0.9756</ns0:cell><ns0:cell>0.9859</ns0:cell><ns0:cell>0.9926</ns0:cell><ns0:cell>0.9415</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.9585</ns0:cell><ns0:cell>0.9527</ns0:cell><ns0:cell>0.9753</ns0:cell><ns0:cell>0.9895</ns0:cell><ns0:cell>0.9700</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.9510</ns0:cell><ns0:cell>0.9535</ns0:cell><ns0:cell>0.9907</ns0:cell><ns0:cell>0.9809</ns0:cell><ns0:cell>0.9401</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9852</ns0:cell><ns0:cell>0.9558</ns0:cell><ns0:cell>0.9863</ns0:cell><ns0:cell>0.9564</ns0:cell><ns0:cell>0.9672</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.9579</ns0:cell><ns0:cell>0.9460</ns0:cell><ns0:cell>0.9981</ns0:cell><ns0:cell>0.9476</ns0:cell><ns0:cell>0.9102</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9556</ns0:cell><ns0:cell>0.9618</ns0:cell><ns0:cell>0.9789</ns0:cell><ns0:cell>0.9715</ns0:cell><ns0:cell>0.9447</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9485</ns0:cell><ns0:cell>0.9731</ns0:cell><ns0:cell>0.9823</ns0:cell><ns0:cell>0.9297</ns0:cell><ns0:cell>0.9317</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9319</ns0:cell><ns0:cell>0.9415</ns0:cell><ns0:cell>0.9665</ns0:cell><ns0:cell>0.9636</ns0:cell><ns0:cell>0.9036</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9465</ns0:cell><ns0:cell>0.9807</ns0:cell><ns0:cell>0.9832</ns0:cell><ns0:cell>0.9514</ns0:cell><ns0:cell>0.9428</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9725</ns0:cell><ns0:cell>0.8936</ns0:cell><ns0:cell>0.9747</ns0:cell><ns0:cell>0.9617</ns0:cell><ns0:cell>0.9031</ns0:cell></ns0:row><ns0:row><ns0:cell>Average Value</ns0:cell><ns0:cell>0.95505</ns0:cell><ns0:cell>0.95343</ns0:cell><ns0:cell>0.98219</ns0:cell><ns0:cell>0.96449</ns0:cell><ns0:cell>0.93549</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Table9and Figure4illustrate the computational time comparison for average of all datasets. Computational Time</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>Time(sec)</ns0:cell></ns0:row><ns0:row><ns0:cell>ACO</ns0:cell><ns0:cell>0.974</ns0:cell></ns0:row><ns0:row><ns0:cell>PSO</ns0:cell><ns0:cell>0.54</ns0:cell></ns0:row><ns0:row><ns0:cell>CS</ns0:cell><ns0:cell>0.6</ns0:cell></ns0:row><ns0:row><ns0:cell>SHO</ns0:cell><ns0:cell>0.49</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Algorithms parameters and values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>Maximum</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /><ns0:cell>iterations</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dimension</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>Phromone</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Inertia weight</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Dimension</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Exponential</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Weight</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lower</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>Heuristic</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Inertia weight</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>Lower bound and</ns0:cell><ns0:cell>-1 &1</ns0:cell></ns0:row><ns0:row><ns0:cell>boundary</ns0:cell><ns0:cell /><ns0:cell>Exponential</ns0:cell><ns0:cell /><ns0:cell>damping ratio</ns0:cell><ns0:cell /><ns0:cell>upper bound</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Weight</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Upper</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Evaporation</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>Personal and</ns0:cell><ns0:cell>1.5 &</ns0:cell><ns0:cell>Number of nests</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>boundary</ns0:cell><ns0:cell /><ns0:cell>rate</ns0:cell><ns0:cell /><ns0:cell>global learning</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Prey's rate</ns0:cell><ns0:cell>0.7,</ns0:cell><ns0:cell>Lower bound</ns0:cell><ns0:cell>-1 &1</ns0:cell><ns0:cell>Lower bound</ns0:cell><ns0:cell>-10</ns0:cell><ns0:cell>Transition</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.9</ns0:cell><ns0:cell>and upper</ns0:cell><ns0:cell /><ns0:cell>and upper</ns0:cell><ns0:cell>&10</ns0:cell><ns0:cell>probability</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>bound</ns0:cell><ns0:cell /><ns0:cell>bound</ns0:cell><ns0:cell /><ns0:cell>coefficient</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Number of</ns0:cell><ns0:cell cols='2'>100 Number of runs</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Number of runs</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>runs</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>runs</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Algorithms parameters for decision tree</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5</ns0:cell><ns0:cell>ID3</ns0:cell><ns0:cell>CART</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9619</ns0:cell><ns0:cell>0.95713</ns0:cell><ns0:cell>0.9636</ns0:cell><ns0:cell>0.9791</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.9819</ns0:cell><ns0:cell>0.9393</ns0:cell><ns0:cell>0.9891</ns0:cell><ns0:cell>0.8933</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.9673</ns0:cell><ns0:cell>0.9252</ns0:cell><ns0:cell>0.9090</ns0:cell><ns0:cell>0.9049</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9793</ns0:cell><ns0:cell>0.9171</ns0:cell><ns0:cell>0.9730</ns0:cell><ns0:cell>0.9218</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.9874</ns0:cell><ns0:cell>0.9069</ns0:cell><ns0:cell>0.9108</ns0:cell><ns0:cell>0.8360</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9647</ns0:cell><ns0:cell>0.9224</ns0:cell><ns0:cell>0.9573</ns0:cell><ns0:cell>0.9082</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9914</ns0:cell><ns0:cell>0.9772</ns0:cell><ns0:cell>0.9497</ns0:cell><ns0:cell>0.9739</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9943</ns0:cell><ns0:cell>0.9680</ns0:cell><ns0:cell>0.9059</ns0:cell><ns0:cell>0.9560</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9890</ns0:cell><ns0:cell>0.9048</ns0:cell><ns0:cell>0.7945</ns0:cell><ns0:cell>0.9481</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9850</ns0:cell><ns0:cell>0.8562</ns0:cell><ns0:cell>0.7845</ns0:cell><ns0:cell>0.8007</ns0:cell></ns0:row><ns0:row><ns0:cell>Average value</ns0:cell><ns0:cell>0.98022</ns0:cell><ns0:cell>0.92742</ns0:cell><ns0:cell>0.91374</ns0:cell><ns0:cell>0.9122</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Area under the ROC curve of proposed C4.5 with ID3 and CART</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5 -SHO</ns0:cell><ns0:cell>C4.5 -SHO</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell><ns0:cell>C4.5-SHO</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>(Shanon</ns0:cell><ns0:cell>(Havrda &</ns0:cell><ns0:cell>(Quadratic</ns0:cell><ns0:cell>(Renyi</ns0:cell><ns0:cell>(Taneja</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>entropy)</ns0:cell><ns0:cell>charvt entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell><ns0:cell>entropy)</ns0:cell></ns0:row><ns0:row><ns0:cell>Monks</ns0:cell><ns0:cell>0.9429</ns0:cell><ns0:cell>0.9756</ns0:cell><ns0:cell>0.9859</ns0:cell><ns0:cell>0.9926</ns0:cell><ns0:cell>0.9415</ns0:cell></ns0:row><ns0:row><ns0:cell>Car</ns0:cell><ns0:cell>0.9585</ns0:cell><ns0:cell>0.9527</ns0:cell><ns0:cell>0.9753</ns0:cell><ns0:cell>0.9895</ns0:cell><ns0:cell>0.9700</ns0:cell></ns0:row><ns0:row><ns0:cell>Chess</ns0:cell><ns0:cell>0.9510</ns0:cell><ns0:cell>0.9535</ns0:cell><ns0:cell>0.9907</ns0:cell><ns0:cell>0.9809</ns0:cell><ns0:cell>0.9401</ns0:cell></ns0:row><ns0:row><ns0:cell>Breast-cancer</ns0:cell><ns0:cell>0.9852</ns0:cell><ns0:cell>0.9558</ns0:cell><ns0:cell>0.9863</ns0:cell><ns0:cell>0.9564</ns0:cell><ns0:cell>0.9672</ns0:cell></ns0:row><ns0:row><ns0:cell>Hayes</ns0:cell><ns0:cell>0.9579</ns0:cell><ns0:cell>0.9460</ns0:cell><ns0:cell>0.9981</ns0:cell><ns0:cell>0.9476</ns0:cell><ns0:cell>0.9102</ns0:cell></ns0:row><ns0:row><ns0:cell>Abalone</ns0:cell><ns0:cell>0.9556</ns0:cell><ns0:cell>0.9618</ns0:cell><ns0:cell>0.9789</ns0:cell><ns0:cell>0.9715</ns0:cell><ns0:cell>0.9447</ns0:cell></ns0:row><ns0:row><ns0:cell>Wine</ns0:cell><ns0:cell>0.9485</ns0:cell><ns0:cell>0.9731</ns0:cell><ns0:cell>0.9823</ns0:cell><ns0:cell>0.9297</ns0:cell><ns0:cell>0.9317</ns0:cell></ns0:row><ns0:row><ns0:cell>Ionosphere</ns0:cell><ns0:cell>0.9319</ns0:cell><ns0:cell>0.9415</ns0:cell><ns0:cell>0.9665</ns0:cell><ns0:cell>0.9636</ns0:cell><ns0:cell>0.9036</ns0:cell></ns0:row><ns0:row><ns0:cell>Iris</ns0:cell><ns0:cell>0.9465</ns0:cell><ns0:cell>0.9807</ns0:cell><ns0:cell>0.9832</ns0:cell><ns0:cell>0.9514</ns0:cell><ns0:cell>0.9428</ns0:cell></ns0:row><ns0:row><ns0:cell>Scale</ns0:cell><ns0:cell>0.9725</ns0:cell><ns0:cell>0.8936</ns0:cell><ns0:cell>0.9747</ns0:cell><ns0:cell>0.9617</ns0:cell><ns0:cell>0.9031</ns0:cell></ns0:row><ns0:row><ns0:cell>Average Value</ns0:cell><ns0:cell>0.95505</ns0:cell><ns0:cell>0.95343</ns0:cell><ns0:cell>0.98219</ns0:cell><ns0:cell>0.96449</ns0:cell><ns0:cell>0.93549</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Entropy comparison </ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='2'>1 rn r r</ns0:note>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54852:2:0:NEW 8 Feb 2021)Manuscript to be reviewed</ns0:note>
</ns0:body>
" | "RESPONSE TO REVIEWERS
of “ENTROPY BASED C4.5-SHO ALGORITHM WITH INFORMATION GAIN OPTIMIZATION IN DATA MINING”
I would like to thank the reviewers for their thoughtful review of the manuscript. They raise important issues which are very helpful for improving the manuscript. We appreciate the positive feedback from the reviewer and we have revised the manuscript accordingly. I have responded in detail to each of the reviewer’s comments. I hope that the reviewers will find the responses to their comments satisfactory, and I wish to finish the revised version of the manuscript.
Reviewer: Arturo Magana-Mora
Basic reporting
Caption Table 4. Replace “ALO” by “ACO”
Reply: In the revised manuscript, ACO is replaced by ALO.
Few grammar suggestions:
Line 228. Replace “So, chose 0.7 and 0.9 values as the random values.” By “Therefore, 0.7 and 0.9 were the selected random values.”
Reply: Line 254. As per your suggestion, the sentence is replaced in the revised manuscript.
Line 236. Use comma before respectively.
Reply: Line 267. As per your suggestion, the comma is used in the revised manuscript.
Line 274, and 268. Replace “The accuracy of our proposed work is almost stable than the other.” By “The accuracy of our proposed work is more stable compared to the accuracy achieved by the other considered algorithms.”
Reply: Line 323 and 332. As per your suggestion, the sentences are replaced in revised manuscript.
Line 269. Can you please explain what do you mean by: “The accuracy of classification is depended on the training dataset.”. This phrase may be interpreted as the reported results are those obtained from the training data.
Reply: Line 324. The dataset is split up into a training set and test set. The classifier model is trained with training set. Then to evaluate the accuracy of the classifier, we use test set to predict the labels (which we know) in the test set.
Validity of the findings
Below I repeat my original comment, Authors reply, and my new comment:
Original reviewer’s comment: Can you please explain why RUN.m uses k=100 for the number of trees? Are you reporting the best results obtained from these 100 SHO runs?
Author’s reply: k=100 is for the number of times the tree is generated. RUN.m file is run 100 times. After 100 SHO runs, we have attain the best result that result is presented in the article.
Second revision – reviewer’s comment: Ideally, data would need to be split into training, validation, and test sets. A validation set is used to tune the parameters. In this case, if you ran 100 executions, the best performing model (based on validation set) is then used to test the performance of the final model by using the test set. What I am trying to say here, the execution of those 100 different trees is considered to be part of the training phase.
Some studies reserve the validation from the training set, while other use nested cross-validation (see https://stats.stackexchange.com/questions/103828/use-of-nested-cross-validation).
My concern is that other algorithms (ACO, PSO, and CS) were not given the same opportunities to achieve better results if these algorithms were not run 100 times too. Arguably, you would be able to continue to improve results on test data just by expanding the number of executions (that is why these iterations should be considered as the training phase). These models would overfit the test data.
With respect to the other Author’s response: “The parameters of SHO, ACO, PSO, and CS values are tabulated in Table 2. The parameter table is updated in the revised manuscript. We have taken random values for our proposed optimization work like iterations and number of populations”
These number of iterations and populations should be part of the training phase which is optimized by using the validation set. Final results would be those obtained by testing the final model on the test data that has not been seen during any training phase.
If validation set was not used to compare C4.5-SHO against C4.5, ID3, and CART. Then authors need to carefully indicate the parameters used for each of these algorithms to demonstrate that all were given the same opportunities and indicate so in the manuscript. The authors should indicate that a validation set was not used.
Similarly, I suggest authors describe the search space for the other C4.5 implementations (C4.5 ACO, C4.5 PSO, and C4.5 CS), i.e., where they also ran 100 times? Was a random search also performed?
Reply: We have not used the validation test. This detail is mentioned in the revised manuscript. The search space as upper and lower boundaries of C4.5 ACO, C4.5 PSO, and C4.5 CS) are given in Table 2. These algorithms are also run 100 number of times. Random search is not performed. We have used metaheuristic optimization algorithm for gain parameter tuning. So, random search technique is not needed.
Comments for the Author
I thank the authors for theS modifications.
While the Authors answered most of my comments. I still have a concern related to the optimization of parameters (optimization algorithms and decision tree algorithms).
Reply: In the revised manuscript, the optimization parameters of decision tree are tabulated in Table 3.
" | Here is a paper. Please give your review comments after reading it. |
8 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The popularity of the internet, smartphones, and social networks has contributed to the proliferation of misleading information such as fake news and fake reviews on news blogs, online newspapers, and e-commerce applications. Fake news has a worldwide impact and the potential to change political scenarios, deceive people into increasing product sales, defame politicians or celebrities, and misguide visitors into avoiding a place or country. Therefore, it is vital to find automatic methods to detect fake news online. Several past studies focused on the English language, while resource-poor languages have been completely ignored because of the scarcity of labeled corpora. In this study, we investigate this issue in the Urdu language. Our contribution is threefold. First, we design an annotated corpus of Urdu news articles for the fake news detection task. Second, we explore three individual machine learning models to detect fake news. Third, we use five ensemble learning methods that combine the base-predictors' predictions to improve the overall performance of the fake news detection system. Our experimental results on two Urdu news corpora show the superiority of ensemble models over individual machine learning models. Four performance metrics (balanced accuracy, area under the curve, time, and mean absolute error) are used to show that the Ensemble Selection and Vote models outperform the other machine learning and ensemble learning models.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Fake news is also known as deceptive news or misinformation. A news story is a piece of fake news if its authenticity is verifiably false and it intends to mislead the reader. In contrast, the authenticity of legitimate news is verifiably real, and it aims to convey authentic information to its readers <ns0:ref type='bibr' target='#b0'>(Abonizio et al., 2020)</ns0:ref>. Fake news can take numerous forms, including edited text stories, photoshopped pictures, and unordered video clips. Fake news is similar in appearance to legitimate news, but the aims are different. The aims of spreading fake news are manifold, including deceiving readers to benefit the author, spreading propaganda about a politician to win an election, increasing the sales of a product by posting fake positive reviews to benefit a businessman, and defaming a showbiz star <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018)</ns0:ref>. The proliferation of fake news has numerous hazardous impacts on our society. Fake news changes the way individuals interpret and respond to legitimate news. Besides, fake news makes individuals skeptical by destroying consumers' trust in the media through fabricated and biased news stories <ns0:ref type='bibr' target='#b1'>(Agarwal & Dixit, 2020)</ns0:ref>. Spreading fake news is not a new problem of our time. Before the advent of the internet, fake news was transmitted face-to-face (orally) and through radio, newspapers, and television. In recent years, computers, the internet, smartphones, websites, news blogs, and social media applications have contributed to transmitting fake news. There are several reasons for spreading fake news through the internet and social media: it requires less cost and time than traditional news media, and it is very easy to manipulate legitimate digital news and share the fabricated news story rapidly. Since 2017, there has been a 13% global increase in social media users <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref>. Fake news affects different groups of people, products, companies, politicians, showbiz stars, news agencies, and businessmen. It requires considerable energy, cost, and time to manually identify and remove fake news or fake reviews from social media, and some previous studies conclude that humans perform worse than automated systems at separating legitimate news from fake news <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018)</ns0:ref>. For the last few years, machine learning methods have focused on differentiating between fake and legitimate news automatically. After the U.S. presidential election in 2016, a few popular platforms such as Twitter, Facebook, and Google started to pay attention to designing machine learning and natural language processing (NLP) based mechanisms to detect and combat fake news. The remarkable development of supervised machine learning models paved the way for designing expert systems to identify fake news for English, Portuguese <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Silva et al., 2020)</ns0:ref>, Spanish <ns0:ref type='bibr' target='#b33'>(Posadas-Durán et al., 2019)</ns0:ref>, Indonesian <ns0:ref type='bibr' target='#b4'>(Al-Ash et al., 2019)</ns0:ref>, German, Latin, and Slavic languages <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref>.
A major problem of machine learning models is that different models perform differently on the same corpus. Their performance is sensitive to corpus properties such as corpus size and the distribution of instances across classes <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>. For example, the performance of K-nearest neighbor (KNN) depends on the number of nearest points (k) in the dataset, SVM suffers from numerical instability when solving its optimization problem <ns0:ref type='bibr' target='#b40'>(Xiao, 2019)</ns0:ref>, and the performance of an artificial neural network (ANN) depends on finding an optimal architecture and tuning its parameters <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>. Ensemble learning is considered an efficient technique that can boost the performance of individual machine learning models, also called base-models, base-predictors, or base-learners, by aggregating the predictions of these models in some way <ns0:ref type='bibr' target='#b26'>(Lee et al., 2020)</ns0:ref>. Ensemble learning aims to exploit the diversity of base-predictors to handle multiple types of errors and thereby increase overall performance. This diversity can be achieved in numerous ways, including homogeneous models with diverse parameters, heterogeneous models, resampling the training corpus, or using different methods to combine the predictions of base-predictors <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>. Ensemble learning techniques have shown superior performance in several recent studies on fake news detection. In a recent study, an ensemble learning technique outperformed four deep learning models, including the deep structured semantic model (DSSM) with RNN, intentCapsNet, an LSTM model, and a capsule neural network <ns0:ref type='bibr' target='#b18'>(Hakak et al., 2021)</ns0:ref>. In another recent study, <ns0:ref type='bibr' target='#b27'>(Mahabub, 2020)</ns0:ref> applied eleven machine learning classifiers, including the neural network-based model MLP, to a fake news detection corpus; three of the eleven models were then selected to build a voting ensemble, and ensemble voting with soft voting outperformed the other models. <ns0:ref type='bibr' target='#b17'>(Gutierrez-Espinoza et al., 2020)</ns0:ref> applied two ensemble methods, bagging and boosting, with SVM and MLP base-predictors to detect fake reviews; experiments show that boosting with MLP outperforms the others. Ensemble learning can be of two types: parallel and sequential. In a parallel ensemble, base-predictors are trained independently in parallel, whereas in a sequential ensemble, base-predictors are trained sequentially, with each model attempting to correct its predecessor <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>. Ensemble learning methods have shown good performance in various applications, including solar irradiance prediction <ns0:ref type='bibr' target='#b26'>(Lee et al., 2020)</ns0:ref>, slope stability analysis <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>, natural language processing <ns0:ref type='bibr' target='#b35'>(Sangamnerkar et al., 2020)</ns0:ref>, malware detection <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>, and traffic incident detection <ns0:ref type='bibr' target='#b40'>(Xiao, 2019)</ns0:ref>.
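To make the idea of a parallel ensemble concrete, the following minimal Python sketch (an illustration only, not the implementation evaluated in this study) combines three heterogeneous base-predictors by majority (hard) voting using scikit-learn; the toy texts, labels, and hyperparameters are placeholders, and soft voting can be used instead when every base-predictor exposes class probabilities.

# Illustrative sketch: a parallel ensemble of heterogeneous base-predictors.
# Assumes scikit-learn; texts, labels, and hyperparameters are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier

texts = ["toy legitimate news story", "toy fabricated news story",
         "another legitimate story", "another fabricated story",
         "one more legitimate story", "one more fabricated story"]
labels = [0, 1, 0, 1, 0, 1]  # 0 = legitimate, 1 = fake

base_predictors = [
    ("svm", SVC(kernel="linear")),
    ("knn", KNeighborsClassifier(n_neighbors=3)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(50,), max_iter=500)),
]
# Each base-predictor is trained independently; the ensemble returns the majority vote.
model = make_pipeline(TfidfVectorizer(), VotingClassifier(base_predictors, voting="hard"))
model.fit(texts, labels)
print(model.predict(["an unseen news story"]))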
In the past, several studies explored machine learning models for the fake news detection task in a few languages such as Portuguese <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Silva et al., 2020</ns0:ref><ns0:ref type='bibr'>), Spanish (Posadas-Durán et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abonizio et al., 2020)</ns0:ref>, Urdu <ns0:ref type='bibr'>(Amjad et al., 2020;</ns0:ref><ns0:ref type='bibr'>Amjad, Sidorov & Zhila, 2020)</ns0:ref>, Arabic <ns0:ref type='bibr' target='#b6'>(Alkhair et al., 2019)</ns0:ref>, Slavic <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Kapusta & Obonya, 2020)</ns0:ref>, and English <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020;</ns0:ref><ns0:ref type='bibr' target='#b31'>Ozbay & Alatas, 2020)</ns0:ref>. Compared to machine learning, only a few efforts have been made to explore ensemble learning for fake news detection, for example in Indonesian <ns0:ref type='bibr' target='#b5'>(Al-Ash & Wibowo, 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Al-Ash et al., 2019)</ns0:ref> and English <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Sangamnerkar et al., 2020)</ns0:ref>. Therefore, this study aims to investigate ensemble learning methods for the fake news detection task. Urdu is the national language of Pakistan and the 8th most spoken language globally, with more than 100 million speakers <ns0:ref type='bibr' target='#b2'>(Akhter et al., 2020a)</ns0:ref>. Urdu is a severely resource-poor South Asian language. Compared to resource-rich languages like English, only a few annotated corpora from very few domains are available for research purposes. Besides, insufficient linguistic resources, such as stemmers and annotated corpora, make the research more challenging. In particular, studying fake news detection in Urdu poses several challenges, the first being the unavailability of a sufficiently large annotated corpus. A recent study <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref> proposed an annotated corpus for this task. Because fake news can spread quickly around the world, it is vital to design accurate, efficient, and reliable automated systems to detect fake news in large corpora. In the past, numerous machine learning methods have been used to combat fake news. <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018)</ns0:ref> showed that the multi-layer perceptron (MLP) model outperforms naïve Bayes (NB) and random forest in identifying fake news in a large news corpus. The study of <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref> concludes that SVM with bag-of-words (BoW) features outperformed the others on five corpora from three language groups: Germanic, Latin, and Slavic. A benchmarking study for fake news detection concludes that SVM with linguistic-based word embedding features enables fake news to be classified with high accuracy <ns0:ref type='bibr' target='#b15'>(Gravanis et al., 2019)</ns0:ref>. A study on Portuguese fake news detection reveals that random forest outperforms the other five machine learning models <ns0:ref type='bibr' target='#b37'>(Silva et al., 2020)</ns0:ref>. AdaBoost achieved better performance than the other six models on a small corpus for separating fake news from legitimate news <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref>.
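As a point of reference for the individual models discussed above, the following is a minimal, purely illustrative sketch of a bag-of-words plus linear SVM baseline of the kind reported in these studies; it assumes scikit-learn, uses toy placeholder texts rather than a real corpus, and omits the language-specific preprocessing (for example, Urdu stop-word removal and stemming) that a realistic system would need.

# Illustrative sketch: bag-of-words features with a linear SVM baseline.
# Assumes scikit-learn; the texts and labels below are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

news_texts = ["sample legitimate article one", "sample fake article one",
              "sample legitimate article two", "sample fake article two"]
news_labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = fake

bow_svm = make_pipeline(CountVectorizer(), LinearSVC())
# Balanced accuracy is reported because fake news corpora are often class-imbalanced.
scores = cross_val_score(bow_svm, news_texts, news_labels, cv=2,
                         scoring="balanced_accuracy")
print("balanced accuracy per fold:", scores)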
A recent study of fake news detection using supervised artificial intelligence methods shows that the DT is the best model out of twenty-three models <ns0:ref type='bibr' target='#b31'>(Ozbay & Alatas, 2020)</ns0:ref>. After analyzing the above studies, we can conclude that designing an effective and high-performing system requires a careful selection of the machine learning model and the feature selection method. To overcome individual machine learning models' issues and increase classification performance, an ensemble of several diverse machine learning models has shown superior performance to individual machine learning in several applications. Xiao applied ensemble techniques with SVM and KNN base learners to detect traffic incidents <ns0:ref type='bibr' target='#b40'>(Xiao, 2019)</ns0:ref>. Experiments show the superiority of the ensemble model over individual machine learning models. A recent study about detecting fraud in credit cards concludes that an ensemble approach based on cost-sensitive meta-learning outperforms traditional machine learning models. <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref> used four ensemble models, boosted trees, bagged trees, random forest, and generalized random forest, for slope stability analysis. Experimental results prove the best performance of extreme gradient boosting over other ensemble models and machine learning models. For malware detection, <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref> used five base-predictors, and the output of each base-predictor was ranked by calculating and aggregating the output weights. Two ensemble techniques, Voting and Stacking, were then used to rank the output, and the higher value ranked by the ensemble technique was the final prediction of the model. After extensive experiments, the study reveals that weighted Voting performs better than Stacking. Compared to other applications, fake news detection using ensemble learning techniques has been addressed in very few studies in the past. <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref> proposed a multi-level Voting model for the fake news detection task. The study concludes that the proposed model outperforms the other eleven individual machine learning and ensemble learning models. For multiclass fake news detection, <ns0:ref type='bibr' target='#b21'>(Kaliyar, Goswami & Narang, 2019;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abonizio et al., 2020)</ns0:ref> used Gradient Boosting ensemble techniques and compared their performance with several individual machine learning models. Results on multiple corpora show that Gradient Boosting achieves better performance than any individual model. A recent study <ns0:ref type='bibr' target='#b20'>(Huang & Chen, 2020)</ns0:ref> proposed a self-adaptive harmony search algorithm to obtain optimized weights of ensemble models. The proposed algorithm achieved outstanding performance with 99.4% accuracy. A Bagging approach to detect fake news showed superior performance to SVM, Multinomial Naïve Bayes, and Random Forest <ns0:ref type='bibr' target='#b4'>(Al-Ash et al., 2019)</ns0:ref>. English is a resource-rich language, and many linguistic resources are publicly available for research purposes. Therefore, several research studies have been performed for the fake news detection task in English, while there is a severe lack of research studies for fake news detection in other languages.
For the Indonesian language, a study by <ns0:ref type='bibr' target='#b4'>(Al-Ash et al., 2019)</ns0:ref> shows that the bagging model outperforms three individual models: SVM, Random Forest, and Multinomial Naïve Bayes. <ns0:ref type='bibr' target='#b0'>(Abonizio et al., 2020)</ns0:ref> applied three machine learning models (KNN, SVM, and Random Forest) and extreme gradient boosting (an ensemble model) on five datasets of three languages (English, Portuguese, and Spanish); extreme gradient boosting showed better performance than the individual models. In another study about fake news detection for Portuguese, random forest shows high accuracy in most of the experiments <ns0:ref type='bibr' target='#b37'>(Silva et al., 2020)</ns0:ref>. For the Urdu language <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref>, AdaBoost outperforms the other seven machine learning models on a very small corpus. DT improves the classification accuracy of fake news detection for the Slovak language <ns0:ref type='bibr' target='#b22'>(Kapusta & Obonya, 2020)</ns0:ref>. The lack of availability of benchmarked annotated corpora for resource-poor languages is the major obstacle to investigating and comparing the performance of numerous automated methods for fake news detection. Therefore, in several non-English studies, authors designed their own corpus by collecting news articles from the internet and other web resources and manually annotating these articles into fake and legitimate news. A list of corpora for several resource-poor languages is given in Table <ns0:ref type='table'>1</ns0:ref>. It can be noticed that all the corpora except 'Fake.br' are very small in size, because corpus design is a costly and time-consuming task and the annotation process requires several experts from various fields to decide about each news article <ns0:ref type='bibr'>(Amjad, Sidorov & Zhila, 2020)</ns0:ref>. To date, our proposed Urdu fake news (UFN) corpus is the largest among these corpora.</ns0:p></ns0:div>
<ns0:div><ns0:head>Machine Learning and Ensemble Learning Models Machine Learning Models</ns0:head><ns0:p>This section gives a brief overview of three traditional machine learning models: Naïve Bayes, Decision Tree, and Support Vector Machine. We also describe their significant drawbacks, which limit their performance on various tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head>Naïve Bayes (NB)</ns0:head><ns0:p>Naïve Bayes uses a probabilistic approach based on the Bayes theorem with two assumptions: 1) all the features are independent of each other in the dataset, and 2) all the features have equal effects. It is simple, popular, and useful for the classification of a large corpus, even when the independence assumption does not hold in the corpus. NB is challenging to interpret for a large corpus, and its feature-independence assumption can make its performance poor, especially when the data distribution is very skewed <ns0:ref type='bibr' target='#b25'>(Komiya et al., 2011)</ns0:ref>. Several studies have used NB for fake news detection tasks, for example for Portuguese <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018)</ns0:ref> and English <ns0:ref type='bibr' target='#b15'>(Gravanis et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Decision Tree (DT)</ns0:head><ns0:p>The decision tree algorithm learns decision rules inferred from the training data to build a decision tree model. Nodes of the tree represent tests on the feature values taken from the text of the news article. Leaf nodes represent the set of possible labels or classes in the corpus; in our case, there are two possible labels: fake and legitimate. The DT predicts an article's label by following feature tests from the tree's root to one of the leaf nodes. It is simple and easy to interpret because all the information about the model behavior and influential variables is available; therefore, the DT is also known as a white-box model. Drawbacks of the DT include overfitting, instability, and, for a high-dimensional dataset, a complex tree that is not easy to interpret <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>. For the fake news detection task, DT has shown good performance for the Slovak <ns0:ref type='bibr' target='#b22'>(Kapusta & Obonya, 2020)</ns0:ref>, Portuguese <ns0:ref type='bibr' target='#b37'>(Silva et al., 2020)</ns0:ref>, English <ns0:ref type='bibr' target='#b15'>(Gravanis et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Ozbay & Alatas, 2020)</ns0:ref>, and Urdu <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref> languages. Support Vector Machine (SVM) The support vector machine is a non-parametric machine learning model that is usually considered suitable for binary classification tasks with high-dimensional data. SVM maps all the features obtained from the news articles' text into an n-dimensional space where each feature represents a particular coordinate. During training, SVM learns a hyperplane that best discriminates the features of one class from another; therefore, SVM is also known as a discriminative classifier. Although SVM performs well with high-dimensional data, it is difficult to interpret, requires significant computing resources, and faces numerical instability in its optimization problem <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>. SVM shows excellent performance for the fake news detection task in several studies of various languages like English <ns0:ref type='bibr' target='#b28'>(Monteiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Gravanis et al., 2019)</ns0:ref>, Urdu <ns0:ref type='bibr'>(Amjad, Sidorov & Zhila, 2020)</ns0:ref>, Portuguese <ns0:ref type='bibr' target='#b37'>(Silva et al., 2020)</ns0:ref>, Dutch <ns0:ref type='bibr' target='#b39'>(Verhoeven & Daelemans, 2014)</ns0:ref>, and Germanic, Latin, and Slavic languages <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref>.</ns0:p></ns0:div>
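As a concrete illustration of the three heterogeneous base models described above, the sketch below trains NB, DT, and a linear SVM on TF-IDF features of a toy text set. The English snippets, labels, and scikit-learn classes (MultinomialNB, DecisionTreeClassifier, and LinearSVC as a stand-in for LibLINEAR) are illustrative assumptions, not the authors' Urdu data or WEKA setup.

# Sketch: three heterogeneous base models on TF-IDF features of toy text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

docs = ["shocking miracle cure revealed", "parliament passes budget bill",
        "celebrity secretly owns the moon", "court announces verdict today"] * 25
labels = [1, 0, 1, 0] * 25               # 1 = fake, 0 = legitimate (invented)

models = {"NB": MultinomialNB(),
          "DT": DecisionTreeClassifier(random_state=0),
          "SVM": LinearSVC()}            # linear SVM, analogous in spirit to LibLINEAR
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    print(name, round(cross_val_score(pipe, docs, labels, cv=5).mean(), 3))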
<ns0:div><ns0:head>Ensemble Learning Models</ns0:head><ns0:p>Ensemble learning aggregates individual machine learning models (base-predictors) to design a superior model that increases overall accuracy by handling the shortcomings of the base-predictors. It is known as one of the most efficient techniques for improving the performance of machine learning models. Nowadays, ensemble learning methods are gaining popularity over traditional individual machine learning models in numerous classification tasks like fake news detection <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref> and malware detection <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>. Ensemble learning methods fall into two categories: parallel ensembles and sequential ensembles, shown in Figure <ns0:ref type='figure'>1(a)</ns0:ref> and Figure <ns0:ref type='figure'>1(b</ns0:ref>). In the parallel ensemble, the base-predictors are trained in parallel on the input corpus. The parallel ensemble has the advantages of simultaneous predictions, utilizing different CPU cores to execute the models simultaneously, and exploiting the independence among the base-predictors. In the sequential ensemble, the base-predictors are trained sequentially, where the output of one base-predictor plus the input corpus is the input to the next base-predictor; in other words, the base-predictors are dependent on each other. The challenge for the next base-predictor is to correct the errors of the previous base-predictor to improve the overall prediction accuracy <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>. Base-predictors can be homogeneous or heterogeneous. In a homogeneous ensemble, a single machine learning model (like DT or NB) is trained in parallel or sequentially, while in a heterogeneous ensemble different machine learning models (like DT and NB) are trained in parallel or sequentially. The ensemble learning method is most advantageous if heterogeneous machine learning models are used as base-predictors <ns0:ref type='bibr' target='#b24'>(Kittler, Hater & Duin, 1996)</ns0:ref>. Heterogeneous ensemble learning can be performed using different classifiers with different feature sets, training sets, and evaluation methods. In this section, we provide a brief description of the five ensemble models used in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Stacking</ns0:head><ns0:p>The Stacking model combines several base-predictor machine learning models using the stacking method. It was initially proposed by <ns0:ref type='bibr' target='#b38'>(Ting & Witten, 1997)</ns0:ref> and has been used in several classification studies, such as malware detection <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>, credit card fraud detection <ns0:ref type='bibr' target='#b30'>(Olowookere & Adewale, 2020)</ns0:ref>, and spam detection <ns0:ref type='bibr' target='#b34'>(Saeed, Rady & Gharib, 2019)</ns0:ref>. It can perform both classification and regression. Base-predictors are trained on the input data, and their outputs are given to a meta-classifier, which makes the final prediction about the class of an input sample. The meta-classifier can be any classifier, such as AdaBoost or a regression model. The meta-classifier's primary aim is to learn the optimal weights for combining the predictions of the base-predictors and to produce better prediction results than the individual base-predictors. Therefore, Stacking reduces the variance and improves the quality of the classification. For an unseen article from the test set, the article is first passed to the base-predictors; their predictions are then passed to the stacking-based ensemble learner, which makes the final classification of the article as either fake or legitimate.</ns0:p></ns0:div>
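A minimal stacking sketch, assuming scikit-learn's StackingClassifier and synthetic data; the paper's own experiments use WEKA with AdaBoost as the meta-classifier, so the choice of AdaBoost here mirrors that description, but every other name, parameter, and the data are illustrative only.

# Sketch: stacking three base-predictors with AdaBoost as the meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, random_state=1)

stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=1)),
                ("svm", SVC(probability=True))],
    final_estimator=AdaBoostClassifier(random_state=1),  # meta-classifier
    cv=5)   # out-of-fold base predictions feed the meta-level

print("stacking CV accuracy:", round(cross_val_score(stack, X, y, cv=5).mean(), 3))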
<ns0:div><ns0:head>Voting</ns0:head><ns0:p>Voting is a meta-classifier that combines several base-predictors using different combination rules. The base-predictors can be any machine learning models and are trained individually on the training data. Their outputs are combined using a combination rule such as majority voting, minimum or maximum probabilities, or the product of probabilities <ns0:ref type='bibr' target='#b24'>(Kittler, Hater & Duin, 1996)</ns0:ref>. Majority voting is also known as hard voting, as the class with the majority of votes is taken as the input article's final class. In soft voting, the final class is the class with the highest probability averaged over the individual predictors <ns0:ref type='bibr' target='#b14'>(González et al., 2020)</ns0:ref>. The Voting method has been used in several classification tasks like fake news detection <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref>, spam detection <ns0:ref type='bibr' target='#b34'>(Saeed, Rady & Gharib, 2019)</ns0:ref>, and slope stability analysis <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
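The short sketch below illustrates, on synthetic data, how base-predictor outputs can be combined with the rules mentioned above (majority vote, average, product, minimum, and maximum of probabilities). It is a hand-rolled illustration of the combination rules, not WEKA's Vote implementation, and all models and data are assumptions.

# Sketch: combining base-predictor class probabilities with different rules.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=30, random_state=2)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=2)

bases = [GaussianNB(), DecisionTreeClassifier(random_state=2), SVC(probability=True)]
probas = np.stack([b.fit(Xtr, ytr).predict_proba(Xte) for b in bases])  # shape (3, n, 2)

rules = {"average": probas.mean(axis=0), "product": probas.prod(axis=0),
         "minimum": probas.min(axis=0), "maximum": probas.max(axis=0)}
for name, combined in rules.items():
    print(name, round(accuracy_score(yte, combined.argmax(axis=1)), 3))

# Majority (hard) voting over the predicted labels (3 voters, so no ties).
votes = np.stack([b.predict(Xte) for b in bases])
print("majority", round(accuracy_score(yte, (votes.mean(axis=0) > 0.5).astype(int)), 3))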
<ns0:div><ns0:head>Grading</ns0:head><ns0:p>Grading is an ensemble model originally presented by <ns0:ref type='bibr' target='#b36'>(Seewald & Fürnkranz, 2001)</ns0:ref>. Grading is a type of meta-classification that corrects the predictions of the base-level base-predictors, assuming that different base-predictors make different errors. Graded predictions are predictions that are marked as correct or incorrect for each base-predictor. A meta-classifier is learned for each base-predictor, and its job is to predict when that base-predictor will err. These meta-classifiers are trained on a training set constructed from the graded predictions of the corresponding base-predictors, used as new class labels. Grading differs from Stacking in that it uses the graded (correct or incorrect) predictions of the base-predictors as the class value for the meta-classifier, rather than feeding the base-predictors' predictions to the meta-classifier as attribute values. Several studies show that Grading can outperform the Voting and Stacking ensemble models on classification tasks <ns0:ref type='bibr' target='#b36'>(Seewald & Fürnkranz, 2001;</ns0:ref><ns0:ref type='bibr' target='#b14'>González et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
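Since standard libraries do not ship a grading ensemble, the following is a simplified, assumption-laden sketch of the grading idea: each base-predictor gets a grader trained to predict when that base-predictor is wrong, and only trusted predictions are combined. It is one interpretation of the method for illustration, not the reference implementation; the class name and all parameters are invented.

# Simplified sketch of grading: a grader per base-predictor learns to predict
# whether that base-predictor's output is correct; only trusted outputs are combined.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict

class SimpleGrading:
    def __init__(self, base_estimators, grader):
        self.base_estimators = base_estimators
        self.grader = grader   # e.g. DecisionTreeClassifier(), which tolerates one-class targets

    def fit(self, X, y):
        y = np.asarray(y)
        self.bases_, self.graders_ = [], []
        for est in self.base_estimators:
            oof = cross_val_predict(clone(est), X, y, cv=5)   # out-of-fold predictions
            graded = (oof == y).astype(int)                   # 1 = prediction was correct
            self.graders_.append(clone(self.grader).fit(X, graded))
            self.bases_.append(clone(est).fit(X, y))
        return self

    def predict(self, X):
        preds = np.stack([b.predict(X) for b in self.bases_])      # (n_models, n_samples)
        trusted = np.stack([g.predict(X) for g in self.graders_])  # 1 = grader trusts it
        out = []
        for i in range(preds.shape[1]):
            votes = preds[trusted[:, i] == 1, i]
            if votes.size == 0:        # no base-predictor is trusted: fall back to all
                votes = preds[:, i]
            vals, counts = np.unique(votes, return_counts=True)
            out.append(vals[counts.argmax()])
        return np.array(out)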
<ns0:div><ns0:head>Cascade Generalization</ns0:head><ns0:p>Cascade Generalization is a special form of stacked generalization that uses a sequentially layered architecture for combining multiple classifiers. The predictions of several base-predictors are used at the next stage for the final prediction <ns0:ref type='bibr' target='#b13'>(Gama & Brazdil, 2000)</ns0:ref>. At each level, an extension of the original data is obtained by inserting new attributes derived from the probability class distribution given by the base-predictors. Cascade Generalization differs from stacked generalization in that Stacking is parallel in nature, while Cascade is sequential in nature.</ns0:p><ns0:p>Because of its sequential nature, intermediate models have access to both the original attributes and the lower-level models' predictions. Cascade provides the rules to choose the high-level and low-level models for classification. A major problem of the Cascade approach is that it transforms the feature space into a new, higher-dimensional feature space (the curse of dimensionality), which sometimes leads to a more difficult learning problem <ns0:ref type='bibr' target='#b13'>(Gama & Brazdil, 2000)</ns0:ref>.</ns0:p></ns0:div>
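A minimal sketch of the cascade idea, assuming two levels and synthetic data: the level-0 class probabilities are appended to the original attributes before training the level-1 model. The model choices and the data are illustrative only.

# Sketch: cascade generalization with two levels on synthetic data.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, random_state=3)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=3)

level0 = GaussianNB()
# Out-of-fold probabilities avoid leaking the training labels into level 1.
p_tr = cross_val_predict(clone(level0), Xtr, ytr, cv=5, method="predict_proba")
level0.fit(Xtr, ytr)
p_te = level0.predict_proba(Xte)

# Level 1 sees the original attributes plus the level-0 probability attributes.
level1 = SVC().fit(np.hstack([Xtr, p_tr]), ytr)
print("cascade accuracy:",
      round(accuracy_score(yte, level1.predict(np.hstack([Xte, p_te]))), 3))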
<ns0:div><ns0:head>Ensemble Selection</ns0:head><ns0:p>Ensemble Selection is a method for constructing an ensemble from a library of machine learning models. It starts with an empty ensemble and adds a model to the ensemble if doing so increases the ensemble's performance. This process is repeated up to a specified number of iterations or until all the models have been used <ns0:ref type='bibr' target='#b10'>(Caruana, Ksikes & Crew, 2014)</ns0:ref>. Models are added to the ensemble in numerous ways, such as forward selection, backward elimination, or selecting the best model, and the procedure stops adding models when the ensemble's performance starts to decrease after reaching its best value. Ensemble Selection allows ensembles to be optimized for performance metrics such as accuracy, cross-entropy, mean precision, or ROC area <ns0:ref type='bibr' target='#b9'>(Ballard & Wang, 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Nguyen et al., 2020)</ns0:ref>. In a recent study, Ensemble Selection outperformed the other ensemble models in classifying 62 datasets <ns0:ref type='bibr' target='#b29'>(Nguyen et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
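The following sketch shows one common greedy variant of ensemble selection (forward selection with replacement, optimizing validation accuracy); it assumes already-fitted models that expose predict_proba and class labels encoded as 0..K-1, and it is not the exact procedure of the cited works.

# Sketch: greedy ensemble selection over a library of already-fitted models.
import numpy as np
from sklearn.metrics import accuracy_score

def greedy_ensemble_selection(models, X_val, y_val, max_rounds=20):
    probas = [m.predict_proba(X_val) for m in models]
    chosen, best_score, current_sum = [], 0.0, None
    for _ in range(max_rounds):
        round_best, round_idx = best_score, None
        for i, p in enumerate(probas):            # try adding each model (with replacement)
            trial = p if current_sum is None else current_sum + p
            score = accuracy_score(y_val, trial.argmax(axis=1))
            if score > round_best:
                round_best, round_idx = score, i
        if round_idx is None:                     # no addition improves the ensemble: stop
            break
        current_sum = probas[round_idx] if current_sum is None else current_sum + probas[round_idx]
        chosen.append(round_idx)
        best_score = round_best
    return chosen, best_score

# Usage idea: pass a list of fitted classifiers plus a held-out validation split.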
<ns0:div><ns0:head>Methodology and corpus construction</ns0:head><ns0:p>The proposed framework for fake news detection consists of five phases. The first phase describes the procedure to design a corpus of Urdu news articles. The second phase explains the preprocessing operations performed on the text of the news articles. The third phase covers feature selection or dimensionality reduction approaches. The fourth phase provides the description of the individual machine learning models used as base-predictors for the ensemble models. At last, the fifth phase describes the numerous ensemble learning models used in this study. The architecture with five layers is shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Corpus Design</ns0:head><ns0:p>In this study, we chose two corpora of Urdu-language text news articles for the experiments. As Urdu is a resource-poor language, to the best of our knowledge there is no standard corpus available for the fake news detection task. Because of the lack of linguistic resources, the collection of news articles from multiple sources is a tough task. Besides, the annotation process of these news articles based on the articles' contents needs expert knowledge, a lot of time, and budget. Therefore, augmented corpus design is often the only practical way to perform research on fake news detection for resource-poor languages. Our first corpus is the Urdu Fake News (UFN) augmented corpus. It contains two thousand news articles randomly selected and translated from an English-language fake news corpus using the online Google Translator. The original English corpus contains nearly 8000 news articles. We picked a subset of two thousand articles because 1) manual translation of all the articles in the original corpus is time-consuming and difficult to perform, 2) English-Urdu translation using Google Translator is not a hundred percent accurate and, to the best of our knowledge, no study investigates this problem; this is an open research question and can be investigated in future studies, and 3) we aim to explore the detection performance of ensemble learning techniques. Several recent studies about fake news detection in Slovak <ns0:ref type='bibr' target='#b22'>(Kapusta & Obonya, 2020)</ns0:ref>, Italian <ns0:ref type='bibr' target='#b12'>(Fornaciari & Poesio, 2013)</ns0:ref>, and Spanish (Posadas-Durán et al., 2019) used corpora with even fewer than two thousand news articles (see Table <ns0:ref type='table'>1</ns0:ref>). Our second corpus is the small-sized Bend the Truth (BET) corpus designed and annotated by <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref>. This corpus contains only 900 original news articles in Urdu. A sample of the Urdu news articles is shown in Table <ns0:ref type='table'>2</ns0:ref>. After translation, each Urdu article's label was kept the same as that of the corresponding article in the English corpus. The final corpus is available online on GitHub in CSV file format. The statistics of both corpora are shown in Table <ns0:ref type='table'>3</ns0:ref>. It can be noticed that our designed corpus UFN is larger than the BET corpus in terms of the total number of articles, the size of the vocabulary, and the length of the articles.</ns0:p></ns0:div>
<ns0:div><ns0:head>Corpus Preparation and Preprocessing</ns0:head><ns0:p>Articles in the corpus are in an unstructured format and cannot be processed directly by the machine learning models. We have to perform a series of operations on the corpus to convert the unstructured corpus into a structured one. We cleaned and processed both corpora's news articles before generating the feature vectors for feature selection. We tokenized the text using space characters. Special characters, email addresses, and website URLs were removed from the text. After cleaning the text, we removed the most frequent and the rarest words of the Urdu language (treated as stopwords) from the text. The cleaned and preprocessed articles were converted into numeric feature vectors using the term frequency-inverse document frequency (TF-IDF) method, as used in a recent study <ns0:ref type='bibr' target='#b31'>(Ozbay & Alatas, 2020)</ns0:ref>. Both corpora were passed through the same preprocessing steps.</ns0:p></ns0:div>
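The sketch below mirrors the described preprocessing steps (cleaning URLs, e-mail addresses and special characters, space tokenization, stopword removal, and TF-IDF vectorization). The tiny stopword list and the example sentences are placeholders, not the authors' actual resources.

# Sketch: cleaning, space tokenization, stopword removal, and TF-IDF vectorization.
import re
from sklearn.feature_extraction.text import TfidfVectorizer

URDU_STOPWORDS = {"کے", "کی", "اور", "سے", "میں"}   # illustrative subset only

def clean(text):
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)      # website URLs
    text = re.sub(r"\S+@\S+", " ", text)                     # e-mail addresses
    text = re.sub(r"[^\w\s]", " ", text)                     # special characters
    tokens = text.split()                                    # space tokenization
    return " ".join(t for t in tokens if t not in URDU_STOPWORDS)

docs = ["خبر کی تفصیل http://example.com پر دیکھیں", "یہ ایک اور مثال ہے"]
X = TfidfVectorizer().fit_transform([clean(d) for d in docs])  # TF-IDF feature vectors
print(X.shape)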
<ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>In our experiments, we used three feature selection methods: character tri-grams, bag-of-words (BoW), and information gain (IG). A recent study shows the superiority of the character n-gram method over word-level n-grams in short-text classification tasks (i.e., offensive language detection) on Urdu text comments <ns0:ref type='bibr' target='#b3'>(Akhter et al., 2020b)</ns0:ref>. A character n-gram is a contiguous sequence of characters in the text. In our character n-grams, the value of n is taken as three, which means a combination of three characters makes a tri-gram feature. From the UFN corpus 1084 character tri-grams, and from the BET corpus 1091 tri-grams, were extracted.</ns0:p><ns0:p>BoW is a content-based feature representation in which a news article is represented as the set of words that occur in it at least once. IG measures the goodness of the features in the text. A comparative study concludes that IG is the best feature selection method for document-level text classification of Urdu. In our experiments, we selected the top one thousand IG features from both corpora, and 1225 and 1214 BoW features from BET and UFN, respectively.</ns0:p></ns0:div>
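For illustration, the three feature sets could be produced as in the sketch below; mutual information is used here as a stand-in for information gain, and the toy documents, labels, and the value of k are assumptions.

# Sketch: character tri-grams, bag-of-words, and an information-gain-style selection.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

docs = ["fake miracle cure story", "official budget report",
        "aliens endorse candidate", "weather forecast for monday"] * 10
labels = [1, 0, 1, 0] * 10

tri_grams = TfidfVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform(docs)
bow = CountVectorizer().fit_transform(docs)

# Keep the top-k features ranked by mutual information (a proxy for information gain).
k = min(1000, bow.shape[1])
ig_like = SelectKBest(mutual_info_classif, k=k).fit_transform(bow, labels)
print(tri_grams.shape, bow.shape, ig_like.shape)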
<ns0:div><ns0:head>Heterogeneous Machine Learning Models</ns0:head><ns0:p>In our experiments, for machine learning classification, we use three individual machine learning models, NB, SVM, and DT, to detect fake news. All three models are heterogeneous: the working of these models is entirely different from each other. Using character-level n-grams from the text articles, these models analyze an article's text and classify it into one of the categories, legitimate or fake. A detailed description of these machine learning models is given in Section 3.1.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ensemble Learning Models</ns0:head><ns0:p>Ensemble classification is usually based on two levels: the base level and the ensemble level. We use three diverse machine learning models, SVM, DT, and NB, as base-predictors at the base level. The input to these base-predictors is the character-level n-grams extracted from the news articles, and the output predictions of these base-predictors are the input to the ensemble-level models. The basic aim of using an ensemble model is to overcome the base-predictors' shortcomings and improve the overall prediction accuracy. We use five ensemble models for ensemble classification, known as Voting, Grading, Stacking, Cascade Generalization, and Ensemble Selection. A brief description of our ensemble models is given in Section 3.2.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance Measures</ns0:head><ns0:p>To compare the performance of the individual machine learning models and the ensemble learning models, we employed four well-known performance measures: mean absolute error (MAE), balanced accuracy (BA), area under the curve (AUC), and time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Balanced Accuracy (BA)</ns0:head><ns0:p>Separation of fake news from legitimate news is a binary classification task: a model has to decide whether an article is a legitimate article or a fake article. As the focus of this study is to detect both classes correctly, we used the balanced accuracy performance measure to compare the performance of our models. Balanced accuracy is the average of the proportion of correct predictions in each class individually, and can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_0'>\text{Balanced Accuracy (BA)} = \frac{1}{2}\left[\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right]</ns0:formula><ns0:p>(1)</ns0:p></ns0:div>
<ns0:div><ns0:head>Area Under the receiver operating characteristic Curve (AUC)</ns0:head><ns0:p>The area under the receiver operating characteristic curve, also known as the area under the curve (AUC), is used to estimate the performance of a machine learning model with a single value. AUC gives the probability that the model will rank a randomly chosen positive sample higher than a randomly chosen negative sample. AUC is calculated by Equation 2, where TP_rate is the ratio of fake articles that are correctly predicted as fake (also known as recall), and FP_rate, given in Equation 3, is the proportion of legitimate news articles that are misclassified, i.e., incorrectly predicted as fake news articles.</ns0:p><ns0:formula xml:id='formula_1'>AUC = \frac{1 + TP_{rate} - FP_{rate}}{2}<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>FP_{rate} = \frac{FP}{FP + TN}<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>General rules for categorizing the performance of a machine learning model using AUC are given in Table <ns0:ref type='table'>4</ns0:ref>. These rules are used and discussed in <ns0:ref type='bibr' target='#b32'>(Pham et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mean Absolute Error (MAE)</ns0:head><ns0:p>The error refers to the absolute difference between the actual values and the predicted values. MAE measures the average magnitude of the error over a set of predictions made by the model. MAE can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_3'>\text{Mean Absolute Error (MAE)} = \frac{1}{N}\sum_{j=1}^{N}\left|y_j - \hat{y}_j\right|<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where y_j is the actual label and \hat{y}_j is the predicted label of the j-th article.</ns0:p></ns0:div>
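A small worked example of the three measures on toy predictions (1 = fake, 0 = legitimate); for hard label predictions, roc_auc_score reduces to the single-threshold formula (1 + TP_rate - FP_rate) / 2 of Equation 2. The labels below are invented for illustration.

# Toy example: balanced accuracy, single-threshold AUC, and MAE.
from sklearn.metrics import balanced_accuracy_score, roc_auc_score, mean_absolute_error

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

print("Balanced accuracy:", round(balanced_accuracy_score(y_true, y_pred), 3))
print("AUC (hard labels):", round(roc_auc_score(y_true, y_pred), 3))
print("MAE:", round(mean_absolute_error(y_true, y_pred), 3))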
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Experiment Setup</ns0:head><ns0:p>As mentioned earlier, in this study three diverse machine learning models, NB, DT, and SVM, have been used to classify news articles into legitimate and fake classes. We use a well-known data mining tool, WEKA, for the experiments. WEKA provides a list of supervised and unsupervised machine learning models, data preprocessing techniques, and various performance evaluation methods. Machine learning models have a few parameters, called hyper-parameters, that can be tuned to minimize the difference between training error and testing error. In our experiments, we do not fine-tune the hyper-parameters of these models; we use the default parameters given in WEKA, as the default parameters already give good values in most cases. We use the J48 algorithm for the DT implementation and the LibLINEAR algorithm for the SVM implementation. We use the same DT, SVM, and NB models as base-predictors for all the ensemble models. For Voting and Stacking, along with the three base-predictors, we use AdaBoost as a meta-classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Model training and testing</ns0:head><ns0:p>For training and validation of the individual machine learning models and ensemble models, we use k-fold cross-validation because, as mentioned in Section 4.1, neither corpus is divided into separate training and testing subsets. k-fold cross-validation is a popular choice and has been used in many past research studies. In our experiments, we use 10-fold cross-validation, where k-1 folds are used for training and one fold is used to test the model's prediction performance. This process is repeated ten times to obtain the final performance score.</ns0:p></ns0:div>
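An illustrative 10-fold cross-validation loop in scikit-learn; the paper's evaluation is done in WEKA, and the stratified splitting, the model, and the data below are assumptions made only for demonstration.

# Sketch: 10-fold cross-validation of a single model.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=50, random_state=4)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=4)
scores = cross_val_score(LinearSVC(), X, y, cv=cv, scoring="balanced_accuracy")
print("mean balanced accuracy over 10 folds:", round(scores.mean(), 3))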
<ns0:div><ns0:head>Results and discussion of machine learning models</ns0:head><ns0:p>The experiment results achieved using 10-fold cross-validation with the individual machine learning models are shown in Table <ns0:ref type='table'>5</ns0:ref>. We compare the performance using BA, AUC, MAE, and time. A close observation of the results reveals that SVM outperforms the others on all performance metrics for both corpora. A model is considered a more accurate classifier if its balanced accuracy is higher than that of the other models. The BA metric shows that SVM outperforms the others on the UFN corpus, achieving BA scores of 81.6%, 86.7%, and 87.3% using tri-gram, BoW, and IG features, respectively. SVM also outperforms the others on the BET corpus, achieving 76.3%, 62.7%, and 62.4% using tri-gram, BoW, and IG features, respectively. The IG feature outperforms the others and achieves the maximum BA score of 87.3% on the large UFN corpus, while the tri-gram approach achieves the maximum BA score of 76.3% on the BET corpus. With the lowest balanced accuracy scores, NB shows the worst performance. It is noticed that SVM has higher accuracy on UFN than on BET. The size of the UFN corpus, in terms of the number of articles and vocabulary size, is almost double that of BET, and SVM is considered a good model for the classification of high-dimensional feature spaces <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref>.</ns0:p><ns0:p>Similarly, the AUC scores of the SVM model are higher than those of DT and NB on both corpora. SVM achieves 87.3% and 76.3% AUC values on the UFN and BET corpora, respectively. Here again, IG proves the best feature selection method for UFN and tri-grams for BET, as SVM achieves its maximum AUC scores on the IG and tri-gram features, respectively. Further, as per the rules of Table <ns0:ref type='table'>4</ns0:ref>, a comparison of the AUC scores of all the models shows that the performance of SVM on UFN is excellent (0.8 ≤ AUC < 0.9) for all the features. On the BET corpus, SVM performance is only acceptable (0.7 ≤ AUC < 0.8) with tri-gram features. The performance of DT and NB is just acceptable.</ns0:p><ns0:p>From Table <ns0:ref type='table'>5</ns0:ref>, it can be seen that, in terms of MAE, the prediction error of SVM is the lowest. SVM achieves the lowest MAE score of 12.7% with IG on UFN and 23.5% with tri-grams on the BET corpus. The highest MAE values of NB confirm its worst performance in detecting Urdu fake news. A model is considered efficient if it takes less time than the other models to build the model on a corpus. Again, SVM takes a minimum time of 0.15 seconds (BoW) and 0.17 seconds (IG) to build the model on UFN and BET, respectively. DT takes the longest time on all features to build the model for both corpora. Further, it is notable that all the models perform better on our machine-translated UFN corpus than on BET, the corpus of original news articles. It shows that the Google-API-translated text, the preprocessing methods, and the feature selection methods together improve the classification accuracy of our models in detecting fake news. Therefore, after analyzing the results, we conclude that SVM is the more accurate, reliable, efficient, and robust classification model among the three models to detect fake news in Urdu text.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and discussion of ensemble models</ns0:head><ns0:p>The values of the four evaluation metrics, balanced accuracy, area under the curve, mean absolute error, and time, achieved by the five ensemble models on both corpora are given in Table <ns0:ref type='table'>6</ns0:ref>. Time and MAE are the two metrics for which lower values are better, while for balanced accuracy and AUC higher values are better. For the time metric, it is visible that Voting takes less time than the other models to build the model on the input corpus: Voting takes 11.52 seconds and 3.12 seconds to build the model on the UFN and BET corpora, respectively. As the size of the BET corpus is very small, Voting takes the minimum time among all the models to build the model on it. It can also be noticed that the minimum time taken by Voting to build a model on both corpora is with tri-grams, which shows the efficiency of the tri-gram method over IG and BoW for building a model. For the MAE metric, again the Voting model achieves lower values than the others on both corpora, which shows that the magnitude of the error in predicting the labels of both types of articles is significantly smaller. The average magnitude of the error of the Voting model is 18.41% with tri-grams on BET and 10.7% with IG on UFN. This means that IG is a good feature selection method on the large UFN corpus, while tri-grams are good for the small BET corpus.</ns0:p><ns0:p>To estimate an ensemble model's performance and decide whether it is acceptable or not, we use the performance ranking metric AUC. On the BET corpus, AUC ≥ 90% is achieved only by the Ensemble Selection model with the tri-gram feature method. With IG and BoW features, the AUC scores of all the other models are below 75%, which means the performance of these models is only acceptable. On the UFN corpus, Cascade Generalization achieves outstanding performance (AUC ≥ 90%) in detecting fake news with BoW and IG (see Table <ns0:ref type='table'>4</ns0:ref>), achieving 92.0% and 92.7% AUC scores for the BoW and IG methods, respectively; its remaining AUC score of 86.8% still ranks its performance as excellent. Overall, Cascade Generalization achieves the best AUC score using IG on UFN, while Ensemble Selection achieves the best AUC using tri-gram features on BET. As we are interested in the performance of a model in predicting both labels ('fake', 'legitimate') correctly, we use balanced accuracy. The maximum BA achieved by a model means that the model is more accurate than the others in distinguishing fake articles from legitimate articles. The experiment results reveal that Ensemble Selection and Voting outperform the other models on the BET and UFN corpora, respectively. Ensemble Selection achieves the maximum of 83.3% BA on the BET corpus using the tri-gram feature. On the UFN corpus, the Voting model significantly outperforms the other four ensemble models and achieves an 89.3% BA score using the IG feature. Again, it is noticed that IG outperforms the other methods on UFN while tri-grams outperform the other feature selection methods on the BET corpus. The Voting model combines the numerous base-predictors using an ensemble rule; the four popular rules are majority voting, product of probabilities, minimum probabilities, and maximum probabilities.
Given the impressive performance of Voting on both corpora in terms of the balanced accuracy, MAE, and time metrics, as given in Table <ns0:ref type='table'>6</ns0:ref> and discussed above, we further investigate its performance using different ensemble rules. The mean absolute error values achieved by each ensemble rule using tri-grams are shown in Figure <ns0:ref type='figure'>3</ns0:ref>. We conclude that the minimum-probabilities rule is impressive for combining the predictions of the base-predictors, as it achieves the lowest error values on both corpora: the Vote model achieves 16.74% and 18.41% MAE scores on the UFN and BET corpora, respectively. Moreover, in our experiments, the minimum-probabilities and product-of-probabilities rules perform the same on both corpora.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance comparison of machine learning and ensemble learning models</ns0:head><ns0:p>It is important to know the difference in performance between the ensemble models and the individual machine learning models. A summary of the results achieved by the best ML and EL models with the best feature selection method is given in Table <ns0:ref type='table'>7</ns0:ref>. A comparative analysis of the results shows that the machine learning models are more efficient than the EL models and take less time to build a model on both corpora. SVM takes a minimum time of less than a second to build the model on both corpora. Among the EL models, Voting is the most efficient and takes 11.52 and 3.12 seconds on UFN and BET, respectively, but Voting is much more costly than SVM. This is because of the multiple base-predictors in the EL model: the EL model combines the three heterogeneous ML models, and then the final Voting model predicts the final label based on the predictions of the base-models. For error analysis, the MAE values show that the EL models have lower MAE than the individual ML models. Again, SVM outperforms NB and DT by achieving the minimum MAE scores on both corpora, while Voting outperforms the other EL models on both corpora. The lower MAE scores of the EL models mean that these models are more accurate in fake news detection. EL models reduce MAE at two levels: at the base-predictor level and at the ensemble level. Voting takes advantage of the MAEs of its base-predictors: it reduces the MAE scores of its three base-predictors by using the minimum-probability rule to predict the final class. SVM achieves maximum AUC scores of 87.3% and 76.3% on UFN and BET, which rank SVM predictions as excellent on UFN and acceptable on BET. Cascade Generalization and Ensemble Selection achieve 92.7% and 91.0% AUC scores on UFN and BET, respectively, which categorizes the detection performance of both models as outstanding. SVM outperforms the other ML models by achieving the maximum BA scores, 87.3% BA on UFN and 76.3% on BET. Among the EL models, Voting achieves 89.3% BA and outperforms the other EL and ML models on the UFN corpus. On the BET corpus, the Ensemble Selection model produces 83.3% BA, which is the maximum BA among all the models. The comparison of the EL and ML methods across the three feature selection methods is also valuable. SVM shows the best performance among the three ML models on both the small and the large corpus and achieves the best scores on all the performance measures. Character tri-grams work well on the small corpus BET, while IG works well on the large corpus UFN, to boost SVM performance. Voting gives the best performance among the EL models on the time and MAE performance measures on both corpora, while Ensemble Selection is good on the small corpus BET on two performance measures. The IG feature works well with Voting to predict the class of a news article on UFN, while tri-grams work best with Voting and Ensemble Selection. Further, it can be seen that the IG feature works well on the large corpus while character tri-grams are good on the small corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Fake news detection through ensemble models is a prominent topic in machine learning. When traditional machine learning models are used for the fake news detection task, the performance is often not encouraging because it is sensitive to corpus characteristics. In this study, we deliberately chose ensemble methods to classify fake and legitimate news articles of the Urdu language. First, we used three machine learning models to classify two fake news corpora. Our experiments on the two Urdu news corpora conclude that the individual machine learning model SVM outperforms DT and NB on both corpora: SVM achieves the best scores of balanced accuracy and AUC and the minimum scores of MAE and time. Second, we used five ensemble models for the same task. We find that, among the ensemble models with the three base-predictors DT, NB, and SVM, the Ensemble Selection and Vote models outperform the others on the BET and UFN corpora, respectively. After the analysis of the MAE, AUC, time, and BA values, we conclude that Voting with the minimum-probabilities rule is the best EL model for the fake news detection task. The IG feature works well with the large corpus while character tri-grams work well on the small corpus. This study has several limitations that need to be addressed in future studies. The proposed corpus UFN still needs to grow by adding more news articles to it. We used the online Google translation API for English-to-Urdu translation, and we believe that the translation accuracy is not a hundred percent. A future study is needed to explore the translation accuracy and quality of various translation APIs like Baidu, Google, etc. The potential of deep learning models can also be explored to detect fake news in Urdu. Further, we also hope to design a multilingual fake news corpus of English and Urdu news articles.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4: Rules for classifying the discrimination using AUC</ns0:head><ns0:p>AUC values | Classifier categories</ns0:p><ns0:p>AUC < 0.5 | No discrimination</ns0:p><ns0:p>0.7 ≤ AUC < 0.8 | Acceptable</ns0:p><ns0:p>0.8 ≤ AUC < 0.9 | Excellent</ns0:p><ns0:p>0.9 ≤ AUC | Outstanding</ns0:p></ns0:div>
</ns0:body>
" | "Original Article Title: “Supervised ensemble learning methods towards automatically filtering Urdu fake news within social media”
To: PeerJ Computer Science Editor
Re: Response to reviewers
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments.
We are also very thankful to all the reviewers and the editor for their valuable feedback, which helped and guided us in improving the quality of our manuscript. We have tried our best to address all the identified gaps and technical issues raised by the reviewers and the editor. To make the changes easy to follow, we have highlighted the revised text in the manuscript.
We are uploading (a) our point-by-point response to the comments (below) (response to reviewers), (b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated manuscript without highlights (PDF main document).
Best regards,
Akhter et al.
Editor Comments:
In particular, more results or explanations are required to:
Comment 1: (1) Address the accuracy of the English-to-Urdu translation, and (2) validate the effectiveness of ensemble learning methods compared to the single classifiers.
Response: (1) We have explained this in the Corpus Design section (lines 356-366) and in the Conclusion section (lines 592-598). (2) We have revised our results by adding more experiments using multiple feature selection methods to show the effectiveness of ensemble learning methods. Please see the Results section.
Reviewer 1 (Anonymous)
Basic Reporting
Comment 1: This paper utilizes ensemble methods to classify fake and legitimate news articles of the Urdu language. Authors built an annotated news corpus composed of Urdu news articles using Google Translate. Moreover, they perform experiments with traditional machine learning classifiers and five ensemble models to detect fake news and comparatively analyze the performance of their models. The language used in this paper is clear and unambiguous. Their work, innovations, contributions, and results are clearly presented in the introduction section, and the whole structure meets the requirements.
Response 1: Thank you for your positive review and encouraging evaluation of our study.
Comment 2: However, the resolution of the figures needs to be improved, especially Figure1&2. In addition, all of the captions for the figures and tables are written as 'Figure 1' and 'Table 1'.
Response 2: Thank you for the identification of these typos. We have carefully reviewed the complete manuscript and corrected these issues.
Experimental Design
Comment 3: This paper presents a meaningful research topic and fills the gap in the current knowledge gap with high relevance for real-life situations. This paper innovatively builds a corpus using translated labelled English corpus, which addresses the problem that the scarcity of Urdu language corpus.
Response 3: Thank you so much for endorsing the innovative contribution of this study.
However, there exist two problems:
Comment 4: First, the accuracy of the English-to-Urdu translation is not verified. As mentioned in lines 461-462, the idea that all models perform better on UFN than BET is not a sufficient reason to illustrate the accuracy of the Google Translate API.
Response 4: Thank you for your concern about designing the corpus using the translation method of the Google API. We are aware that English-to-Urdu translation is not a hundred percent accurate and, to the best of our knowledge, no study investigates this problem; this is an open research question and can be investigated in future studies. Urdu is a resource-poor language. Because of the lack of linguistic resources, the collection of news articles from multiple sources is a tough task. Besides, the annotation process of these news articles based on the articles' contents needs expert knowledge, a lot of time, and budget. Therefore, augmented corpus design is often the only practical way to perform research on fake news detection for resource-poor languages. Further, the aim of this study is to explore ensemble learning techniques for fake news detection. The performance of various online translation APIs like Google, Baidu, or Bing for Urdu-to-English or English-to-Urdu can be investigated in future research studies and is beyond the scope of this study. This explanation is embedded in the article on lines 346-356.
Comment 5: Second, line 351 mentions that they randomly selected news articles. Why were they randomly selected instead of using all the data?
Response 5: Thank you for raising this question. The original English corpus contains nearly 8000 news articles. We picked a subset of two thousand articles because 1) manual translation of all the articles in the original corpus is time-consuming and difficult to perform, 2) English-Urdu translation using Google Translator is not a hundred percent accurate and, to the best of our knowledge, no study investigates this problem; this is an open research question and can be investigated in future studies, and 3) our aim is to explore the detection performance of ensemble learning techniques. Several recent studies about fake news detection in Slovak (Kapusta & Obonya, 2020), Italian (Fornaciari & Poesio, 2013), and Spanish (Posadas-Durán et al., 2019) used corpora with even fewer than two thousand news articles (see Table 1). In the future, instead of using an augmented corpus, a real Urdu corpus needs to be designed so that the performance of ensemble learning and machine learning models can be compared on both real and augmented corpora. We have made changes in the article to address your question on lines 346-356.
Validity of the Findings:
Comment 6: All underlying data have been provided and the conclusions are well presented. The final experiments are able to demonstrate that they design a classifier that performed well on their dataset.
Response 6: Thank you for your encouraging feedback.
Comments to the Author:
Comment 7: This paper presents a meaningful research topic and fills the gap in the current knowledge gap. Authors built an annotated news corpus composed of Urdu news articles using Google Translate, perform experiments with traditional machine learning classifiers and ensemble models to detect fake news.
Response 7: Thank you so much for your positive and encouraging comments about our effort made in this study. We are happy after reading your comments.
Comment 8: the section about corpus building needs to be supplemented to illustrate the accuracy of the methodology
Response 8: Thank you. We have supplemented the corpus building section to make the corpus construction clearer.
Comment 9: In addition, there are some typos:
line 38: is a fake news: a piece of fake news.
line 441: accruacy: accuracy
line 499: shown: shown in.
Response 9: We have put considerable efforts into improving the quality of our writing. In so doing, we have carefully reviewed the entire manuscript and all such spelling issues have been fixed.
Comment 10: the resolution of the figures needs to be improved and all captions of the tables and figures need to be revised.
Response 10: Thank you for your suggestion and identifying the typos. We have redesigned the figure and their quality is improved now. Further, all the captions of figure and tables have been carefully revised.
Reviewer 2 (XinYi Zhou)
Basic Reporting:
Comment 1: In general, this paper is easy-to-follow. However, one concern is on the use of 'Feature Selection'; I think the authors meant extracting the features (feature extraction) rather than feature selection (for dimensionality reduction).
Response 1: Thank you. We have corrected it in the text.
Comment 2: It has a comparatively comprehensive reference list, while more advanced relevant work with the topic of fake news detection could be considered, e.g., neural-network-based studies.
Response 2: We have updated the manuscript with several recent neural-network-based studies on the fake news detection topic. Thank you for your concern about it.
Comment 3: Most figures and tables are clear. It would be more professional if (1) accurately numbering figures and (2) just like other figures, note '%' in Figure 3.
Response 3: Thank you for highlighting these two typing issues. We have carefully revised the whole article and such issues have been resolved.
Comment 4: I didn't find access to the raw data.
Response 4: We apologize for that. The corpus used in this study is available on our GitHub repository. The link will be given in the accepted manuscript; it is also provided in the online system during the article submission process.
Experimental Design:
Comment 5: This work is within the Aims and Scope of the journal. It contains a detailed experimental setup and some of the necessary experiments.
Response 5: Thank you so much for your positive and encouraging comments
Comment 6: However, the experiments are insufficient. If the authors aim to validate the effectiveness of ensemble learning methods compared to the single classifiers. Multiple groups of features are requested for validation. For example, separating the combined n-grams or trying other common feature sets, like LIWC.
Response 6: Thank you for the suggestion. We have revised our results by adding more experimental results using three feature sets selected from both corpora. Our results show the effectiveness of ensemble models over machine learning classifiers. Please see the Results section.
Validating of the findings:
Comment 7: The work's limitations have not been discussed yet. For example, the Urdu news articles were obtained not directly but by translating the English news articles. The translation quality is hard to be perfect; the data might need to be further improved from this perspective.
Response 7: Thank you for addressing the limitations of this study. We have discussed the limitations in the last paragraph of the Conclusion section.
Comments to the Authors:
Comment 8: In general, this work contains most of the necessary information, components, and analyses. However, my major concern is about its innovation since rather than proposing news methods, the authors mainly use some existing methods to detect fake news. I suggest the authors explicitly claim their innovation in the paper.
Response 8: We have revised the text on lines 133-144, and our contribution is now stated explicitly in the paper.
" | Here is a paper. Please give your review comments after reading it. |
9 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The popularity of the internet, smartphones, and social networks has contributed to the proliferation of misleading information like fake news and fake reviews on news blogs, online newspapers, and e-commerce applications. Fake news has a worldwide impact and the potential to change political scenarios, deceive people in order to increase product sales, defame politicians or celebrities, and misguide visitors into avoiding a place or country. Therefore, it is vital to find automatic methods to detect fake news online. Several past studies focused on the English language, while resource-poor languages have been largely ignored because of the scarcity of labeled corpora. In this study, we investigate this issue in the Urdu language. Our contribution is threefold. First, we design an annotated corpus of Urdu news articles for the fake news detection task. Second, we explore three individual machine learning models to detect fake news. Third, we use five ensemble learning methods to combine the base-predictors' predictions to improve the fake news detection system's overall performance. Our experiment results on two Urdu news corpora show the superiority of ensemble models over individual machine learning models. Three performance metrics, balanced accuracy, the area under the curve, and mean absolute error, are used to find that the Ensemble Selection and Vote models outperform the other machine learning and ensemble learning models.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Fake news is also known as deceptive news or misinformation. A news story is a piece of fake news if its authenticity is verifiably false and it intends to mislead the reader. In contrast, the authenticity of legitimate news is verifiably real, and it is meant to convey authentic information to the users <ns0:ref type='bibr' target='#b0'>(Abonizio et al., 2020)</ns0:ref>. Fake news can take on numerous forms, including edited text stories, photoshopped pictures, and unordered video clips. Fake news is similar in appearance to legitimate news, but the aims are different. The aims of spreading fake news are multipurpose, including deceiving readers to benefit the author, propaganda about a politician to win an election, increasing the sales of a product by posting fake positive reviews to benefit a businessman, and defaming a showbiz star <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018)</ns0:ref>. The proliferation of fake news has numerous hazardous impacts on our society. Fake news changes the way individuals interpret and respond to legitimate news. Besides, fake news makes individuals skeptical by destroying consumers' trust in the media through fabricated and biased news stories <ns0:ref type='bibr' target='#b1'>(Agarwal & Dixit, 2020)</ns0:ref>. Spreading fake news is not a new problem of our time. Before the advent of the internet, fake news was transmitted face-to-face (orally) and through radio, newspapers, and television. In recent years, the computer, the internet, smartphones, websites, news blogs, and social media applications have contributed to transmitting fake news. There are several reasons for spreading fake news through the internet and social media: it requires less cost and time than traditional news media, and it is very easy to manipulate legitimate digital news and share the fabricated news story rapidly. Since 2017, there has been a 13% global increase in social media users <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref>. Fake news influences different groups of people, products, companies, politicians, showbiz figures, news agencies, and businessmen. It requires considerable energy, cost, and time to manually identify and remove fake news or fake reviews from social media, and some previous studies conclude that humans perform worse than automated systems at separating legitimate news from fake news <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018)</ns0:ref>. For the last few years, machine learning methods have focused on differentiating between fake and legitimate news automatically. After the U.S. presidential elections in 2015, a few popular social media applications like Twitter, Facebook, and Google started to pay attention to designing machine learning and natural language processing (NLP) based mechanisms to detect and combat fake news. The remarkable development of supervised machine learning models paved the way for designing expert systems to identify fake news for English, Portuguese <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b39'>Silva et al., 2020)</ns0:ref>, Spanish <ns0:ref type='bibr' target='#b34'>(Posadas-Durán et al., 2019)</ns0:ref>, Indonesian <ns0:ref type='bibr' target='#b4'>(Al-Ash et al., 2019)</ns0:ref>, German, Latin, and Slavic languages <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref>.
A major problem with machine learning models is that different models perform differently on the same corpus. Their performance is sensitive to corpus properties such as corpus size and the distribution of instances into classes <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>. For example, the performance of K-nearest neighbor (KNN) depends on the number of nearest points (k) in the dataset. SVM suffers from numerical instability when solving optimization problems <ns0:ref type='bibr'>(Xiao, 2019)</ns0:ref>. Similarly, the performance of an artificial neural network (ANN) depends on choosing an optimal architecture and tuning its parameters <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Ensemble learning is considered an efficient technique that can boost the performance of individual machine learning models, also called base-models, base-predictors, or base-learners, by aggregating the predictions of these models in some way <ns0:ref type='bibr' target='#b27'>(Lee et al., 2020)</ns0:ref>. Ensemble learning aims to exploit the diversity of base-predictors to handle multiple types of errors and thereby increase overall performance. Ensemble learning techniques have shown superior performance in several recent studies on fake news detection. In one recent study, an ensemble learning technique outperformed four deep learning models, including the deep structured semantic model (DSSM) with RNN, intentCapsNet, an LSTM model, and a capsule neural network <ns0:ref type='bibr' target='#b18'>(Hakak et al., 2021)</ns0:ref>. In another recent study, <ns0:ref type='bibr' target='#b28'>(Mahabub, 2020)</ns0:ref> applied eleven machine learning classifiers, including the neural network-based model MLP, to a fake news detection corpus; three of the eleven models were then selected to form a voting ensemble, and ensemble voting with soft voting outperformed the other models. <ns0:ref type='bibr' target='#b17'>(Gutierrez-Espinoza et al., 2020)</ns0:ref> applied two ensemble methods, bagging and boosting, with SVM and MLP base-predictors to detect fake reviews; experiments show that boosting with MLP outperforms the other combinations. The diversity of base-predictors can be achieved in numerous ways, including homogeneous models with diverse parameters, heterogeneous models, resampling the training corpus, or using different methods to combine the predictions of base-predictors <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>. Ensemble learning can be of two types: parallel and sequential. In a parallel ensemble, base-predictors are trained independently in parallel. In a sequential ensemble, base-predictors are trained sequentially, where each model attempts to correct its predecessor <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>. Ensemble learning methods have shown good performance in various applications, including solar irradiance prediction <ns0:ref type='bibr' target='#b27'>(Lee et al., 2020)</ns0:ref>, slope stability analysis <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>, natural language processing <ns0:ref type='bibr' target='#b36'>(Sangamnerkar et al., 2020)</ns0:ref>, malware detection <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>, and traffic incident detection (Xiao, 2019).</ns0:p>
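To make the parallel versus sequential distinction concrete, the following is a minimal illustrative sketch in Python with scikit-learn (an assumption for illustration only; the experiments reported in this paper use WEKA). It combines heterogeneous NB, DT, and SVM base-predictors in parallel through voting, and contrasts this with a sequential AdaBoost ensemble; X and y stand for the feature vectors and labels of a corpus.

# Sketch only: a parallel heterogeneous ensemble (voting) versus a
# sequential ensemble (AdaBoost), assuming feature vectors X and labels y.
from sklearn.ensemble import VotingClassifier, AdaBoostClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC

# Parallel: NB, DT, and SVM are trained independently and their class
# predictions are aggregated by majority (hard) voting.
parallel_ensemble = VotingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("dt", DecisionTreeClassifier()),
                ("svm", LinearSVC())],
    voting="hard")

# Sequential: decision stumps are trained one after another, each
# re-weighting the training samples its predecessor misclassified.
sequential_ensemble = AdaBoostClassifier(n_estimators=50)

# parallel_ensemble.fit(X, y); sequential_ensemble.fit(X, y)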
<ns0:p>In the past, several studies have explored machine learning models for the fake news detection task in a few languages, such as Portuguese <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b39'>Silva et al., 2020)</ns0:ref>, Spanish <ns0:ref type='bibr' target='#b34'>(Posadas-Durán et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abonizio et al., 2020)</ns0:ref>, Urdu <ns0:ref type='bibr'>(Amjad et al., 2020;</ns0:ref><ns0:ref type='bibr'>Amjad, Sidorov & Zhila, 2020)</ns0:ref>, Arabic <ns0:ref type='bibr' target='#b6'>(Alkhair et al., 2019)</ns0:ref>, Slavic <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Kapusta & Obonya, 2020)</ns0:ref>, and English <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Ozbay & Alatas, 2020)</ns0:ref>. Compared to machine learning, few efforts have been made to explore ensemble learning for fake news detection, for example in Indonesian <ns0:ref type='bibr' target='#b5'>(Al-Ash & Wibowo, 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Al-Ash et al., 2019)</ns0:ref> and English <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020;</ns0:ref><ns0:ref type='bibr' target='#b36'>Sangamnerkar et al., 2020)</ns0:ref>. Therefore, this study aims to investigate ensemble learning methods for the fake news detection task. Urdu is the national language of Pakistan and the 8th most spoken language globally, with more than 100 million speakers <ns0:ref type='bibr' target='#b2'>(Akhter et al., 2020a)</ns0:ref>. Urdu is a severely resource-poor South Asian language. Compared to resource-rich languages such as English, only a few annotated corpora from very few domains are available for research purposes. Moreover, insufficient linguistic resources, such as stemmers and annotated corpora, make the research more challenging. Studying fake news detection in Urdu in particular poses several challenges. First, no sufficiently large annotated corpus is available; a recent study <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref> proposed an annotated fake news corpus with only a few hundred news articles, and experiments on this corpus reveal the poor performance of machine learning models. Second, labeling a news article as 'fake' or 'legitimate' needs experts' opinions, which is time-consuming. Last, hiring experts in the relevant domains is expensive. Therefore, in this study, we design a machine-translated corpus of Urdu news articles translated from English news articles using Google Translate, following the same procedure as <ns0:ref type='bibr'>(Amjad, Sidorov & Zhila, 2020)</ns0:ref>. Earlier experiments reveal that machine learning models do not perform as well on a machine-translated corpus as on a real dataset <ns0:ref type='bibr'>(Amjad, Sidorov & Zhila, 2020)</ns0:ref>; however, because of its small size, that corpus is not sufficient to draw any conclusion about machine learning models' performance. Further, to the best of our knowledge, no study explores ensemble learning models for the Urdu fake news detection task. Inspired by the work done in other languages, we investigate the issue of fake news detection in the Urdu language. The major aim of this study is to explore the capability of ensemble learning models to improve fake news predictions in the resource-poor language Urdu.</ns0:p>
<ns0:p>Our significant contributions in this study are summarized below:</ns0:p><ns0:p>• We manually built an annotated news corpus composed of Urdu news articles distributed into legitimate and fake categories.</ns0:p><ns0:p>• We perform several experiments using three diverse traditional machine learning classifiers, Naïve Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM), and five ensemble models, including Stacking, Voting, Grading, Cascade Generalization, and Ensemble Selection, to achieve improved prediction quality relative to conventional individual machine learning models.</ns0:p><ns0:p>• We investigate the performance of our models using three feature sets generated through character-level, word-level, and statistical feature selection methods.</ns0:p><ns0:p>• We report experiments with both machine learning and ensemble learning models on two fake news corpora of the Urdu language.</ns0:p><ns0:p>• We comparatively analyze the performance of our models using four performance measures: balanced accuracy, area under the curve, time, and mean absolute error.</ns0:p><ns0:p>The rest of this article is organized as follows: Section 2 presents the essential related work. Section 3 provides a brief overview of the machine learning and ensemble learning models used in this study. Section 4 shows the architecture of the adopted framework and the corpus characteristics. The results of the experiments are comparatively discussed in Section 5. Finally, Section 6 ends the article with conclusions and future directions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Online social media and instant messaging applications such as Facebook, Google, and Twitter are popular ways to talk to loved ones, express opinions, share professional information, or post news about topics of interest. Moreover, it is easy to find information on the internet quickly. Unfortunately, not all the information available on social media sites is accurate and reliable, since it is straightforward to manipulate digital information and spread it quickly around the world. Therefore, it is vital to design accurate, efficient, and reliable automated systems to detect fake news in a large corpus.</ns0:p><ns0:p>In the past, numerous machine learning methods have been used to combat fake news. <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018)</ns0:ref> showed that the multi-layer perceptron (MLP) model outperforms NB and random forest at identifying fake news in a large news corpus. The study of <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref> concludes that SVM with bag-of-words (BoW) features outperformed the other models on five corpora covering the Germanic, Latin, and Slavic language groups. A benchmarking study for fake news detection concludes that SVM with linguistic word embedding features enables fake news to be classified with high accuracy <ns0:ref type='bibr' target='#b15'>(Gravanis et al., 2019)</ns0:ref>. A study on Portuguese fake news detection reveals that random forest outperforms five other machine learning models <ns0:ref type='bibr' target='#b39'>(Silva et al., 2020)</ns0:ref>. AdaBoost achieved the best performance among seven models on a small corpus for separating fake news from legitimate news <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref>. A recent study of fake news detection using supervised artificial intelligence methods shows that DT is the best model out of twenty-three models <ns0:ref type='bibr' target='#b32'>(Ozbay & Alatas, 2020)</ns0:ref>. From the above studies, we can conclude that designing an effective, high-performing system requires careful selection of both the machine learning model and the feature selection method. To overcome the issues of individual machine learning models and increase classification performance, ensembles of several diverse machine learning models have shown performance superior to individual machine learning models in several applications. Xiao applied ensemble techniques with SVM and KNN base learners to detect traffic incidents <ns0:ref type='bibr'>(Xiao, 2019)</ns0:ref>; experiments show the superiority of the ensemble model over individual machine learning models. A recent study on detecting credit card fraud concludes that an ensemble approach based on cost-sensitive meta-learning outperforms traditional machine learning models. <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref> used four ensemble models, boosted trees, bagged trees, random forest, and generalized random forest, for slope stability analysis; experimental results show the best performance for extreme gradient boosting over the other ensemble models and machine learning models. For malware detection, <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref> used five base-predictors, ranked the output of each base-predictor by calculating and aggregating output weights, and then used two ensemble techniques, Voting and Stacking, to rank the outputs.</ns0:p>
<ns0:p>The higher-ranked value produced by the ensemble technique was taken as the model's final prediction. After extensive experiments, the study reveals that weighted Voting performs better than Stacking. Compared to other applications, fake news detection using ensemble learning techniques has received relatively little attention in the past. <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref> proposed a multi-level Voting model for the fake news detection task and concluded that the proposed model outperforms eleven other individual machine learning and ensemble learning models. For multiclass fake news detection, <ns0:ref type='bibr' target='#b21'>(Kaliyar, Goswami & Narang, 2019;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abonizio et al., 2020)</ns0:ref> used Gradient Boosting ensemble techniques and compared their performance with several individual machine learning models; results on multiple corpora show that Gradient Boosting performs better than any individual model. A recent study <ns0:ref type='bibr' target='#b20'>(Huang & Chen, 2020)</ns0:ref> proposed a self-adaptive harmony search algorithm to obtain optimized weights for ensemble models; the proposed algorithm achieved outstanding performance with 99.4% accuracy. A Bagging approach to detecting fake news showed performance superior to SVM, Multinomial Naïve Bayes, and Random Forest <ns0:ref type='bibr' target='#b4'>(Al-Ash et al., 2019)</ns0:ref>. English is a resource-rich language, and many linguistic resources are publicly available for research purposes; therefore, several research studies have addressed the fake news detection task. One study compares twenty-three publicly available English datasets <ns0:ref type='bibr' target='#b38'>(Sharma et al., 2019)</ns0:ref>. A recent survey compares different techniques for identifying fake news and detecting its credibility, and discusses fundamental theories and opportunities (Zhou & Zafarani, 2020). In contrast, there is a severe lack of research on fake news detection in languages other than English. For the Indonesian language, a study by <ns0:ref type='bibr' target='#b4'>(Al-Ash et al., 2019)</ns0:ref> shows that the bagging model outperforms three individual models: SVM, Random Forest, and Multinomial Naïve Bayes. <ns0:ref type='bibr' target='#b0'>(Abonizio et al., 2020)</ns0:ref> applied three machine learning models (KNN, SVM, and random forest) and extreme gradient boosting (an ensemble model) on five datasets in three languages (English, Portuguese, and Spanish); extreme gradient boosting showed the best performance among the models. In another study on fake news detection for Portuguese, random forest showed high accuracy in most of the experiments <ns0:ref type='bibr' target='#b39'>(Silva et al., 2020)</ns0:ref>. For the Urdu language, AdaBoost outperformed seven other machine learning models on a very small corpus <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref>. DT improved the classification accuracy of fake news detection for the Slovak language <ns0:ref type='bibr' target='#b22'>(Kapusta & Obonya, 2020)</ns0:ref>. The lack of benchmark annotated corpora for resource-poor languages is the major obstacle to investigating and comparing the performance of automated methods for fake news detection.</ns0:p>
<ns0:p>Therefore, in several non-English studies, the authors designed their own corpora by collecting news articles from the internet and other web resources and manually annotating the articles as fake or legitimate. A list of corpora for several resource-poor languages is given in Table <ns0:ref type='table'>1</ns0:ref>. It can be noticed that all the corpora except 'Fake.Br' are very small in size, because corpus design is a costly and time-consuming task and the annotation process requires several experts from various fields to decide on each news article <ns0:ref type='bibr'>(Amjad, Sidorov & Zhila, 2020)</ns0:ref>. To date, our proposed Urdu Fake News (UFN) corpus is the largest among them.</ns0:p></ns0:div>
<ns0:div><ns0:head>Machine Learning and Ensemble Learning Models</ns0:head></ns0:div> <ns0:div><ns0:head>Machine Learning Models</ns0:head><ns0:p>This section gives a brief overview of three traditional machine learning models: Naïve Bayes, Decision Tree, and Support Vector Machine. We also describe their significant drawbacks, which limit their performance on various tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head>Naïve Bayes (NB)</ns0:head><ns0:p>Naïve Bayes uses a probabilistic approach based on Bayes' theorem with two assumptions: 1) all the features in the dataset are independent of each other, and 2) all the features have equal effects. It is simple, popular, and useful for classifying a large corpus, even when the corpus does not satisfy the independence assumption. NB is challenging to interpret for a large corpus, and its feature-independence assumption degrades its performance, especially when the data distribution is very skewed <ns0:ref type='bibr' target='#b26'>(Komiya et al., 2011)</ns0:ref>. Several studies have used NB for fake news detection tasks, for example in Portuguese <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018)</ns0:ref> and English <ns0:ref type='bibr' target='#b15'>(Gravanis et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
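For concreteness, under the independence assumption described above, the standard Naïve Bayes decision rule (a textbook formulation, not quoted from this paper) assigns an article with features x_1, ..., x_n (for example, its character tri-grams or TF-IDF terms) to the class

\hat{c} = \arg\max_{c \in \{\mathrm{fake},\, \mathrm{legitimate}\}} \; P(c) \prod_{i=1}^{n} P(x_i \mid c)

where the class priors P(c) and the conditional feature probabilities P(x_i | c) are estimated from the training corpus.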
<ns0:div><ns0:head>Decision Tree (DT)</ns0:head><ns0:p>The decision tree algorithm learns decision rules inferred from the training data to build a decision tree model. Nodes of the tree represent feature values taken from the text of the news article, and leaf nodes represent the set of possible labels or classes in the corpus; in our case, there are two possible labels, fake and legitimate. The DT predicts an article's label by following feature tests from the tree's root down to one of the leaf nodes. It is simple and easy to interpret because all the information about the model's behavior and influential variables is available; therefore, the DT is also known as a white-box model. Drawbacks of the DT include overfitting, instability, and, for a high-dimensional dataset, a complex tree that is not easy to interpret <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>. For the fake news detection task, DT has shown good performance for the Slovak <ns0:ref type='bibr' target='#b22'>(Kapusta & Obonya, 2020)</ns0:ref>, Portuguese <ns0:ref type='bibr' target='#b39'>(Silva et al., 2020)</ns0:ref>, English <ns0:ref type='bibr' target='#b15'>(Gravanis et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b32'>Ozbay & Alatas, 2020)</ns0:ref>, and Urdu <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref> languages.</ns0:p></ns0:div> <ns0:div><ns0:head>Support Vector Machine (SVM)</ns0:head><ns0:p>The support vector machine is a non-parametric machine learning model. SVM is usually considered well suited to binary classification tasks with high-dimensional data. SVM maps all the features obtained from the text of news articles into an n-dimensional space where each feature represents a particular coordinate. During training, SVM learns a hyperplane that best separates the features of one class from those of the other; therefore, SVM is also known as a discriminative classifier. Although SVM performs well with high-dimensional data, it is difficult to interpret, requires significant computing resources, and faces numerical instability in its optimization problem <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>. SVM shows excellent performance for the fake news detection task in several studies of various languages, such as English <ns0:ref type='bibr' target='#b29'>(Monteiro et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Gravanis et al., 2019)</ns0:ref>, Urdu <ns0:ref type='bibr'>(Amjad, Sidorov & Zhila, 2020)</ns0:ref>, Portuguese <ns0:ref type='bibr' target='#b39'>(Silva et al., 2020)</ns0:ref>, Dutch (Verhoeven & Daelemans, 2014), and Germanic, Latin, and Slavic languages <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ensemble Learning Models</ns0:head><ns0:p>Ensemble learning aggregates individual machine learning models (base-predictors) to build a superior model that increases overall accuracy by compensating for the shortcomings of the base-predictors. It is regarded as one of the most effective techniques for improving the performance of machine learning models. Nowadays, ensemble learning methods are gaining more popularity than traditional individual machine learning models in numerous classification tasks, such as fake news detection <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref> and malware detection <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>. Ensemble learning methods fall into two categories: parallel ensembles and sequential ensembles, shown in Figure <ns0:ref type='figure'>1</ns0:ref>(a) and Figure <ns0:ref type='figure'>1(b</ns0:ref>). In a parallel ensemble, the base-predictors are trained in parallel on the input corpus. The parallel ensemble has the advantages of simultaneous predictions, the ability to use different CPU cores to execute the models at the same time, and exploiting the independence among the base-predictors. In a sequential ensemble, the base-predictors are trained sequentially, where the output of one base-predictor, together with the input corpus, is the input to the next base-predictor; in other words, the base-predictors depend on each other. Each subsequent base-predictor tries to correct the errors of the previous one to improve the overall prediction accuracy <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>. Base-predictors can be homogeneous or heterogeneous. In a homogeneous ensemble, instances of a single machine learning model (such as DT or NB) are trained in parallel or sequentially, while in a heterogeneous ensemble, different machine learning models (such as DT and NB) are trained in parallel or sequentially. Ensemble learning is particularly advantageous when heterogeneous machine learning models are used as base-predictors <ns0:ref type='bibr' target='#b24'>(Kittler, Hater & Duin, 1996)</ns0:ref>. Heterogeneous ensemble learning can be performed using different classifiers with different feature sets, training sets, and evaluation methods. In this section, we provide a brief description of the five ensemble models used in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Stacking</ns0:head><ns0:p>The Stacking model combines several base-predictor machine learning models using stacked generalization. It was initially proposed by <ns0:ref type='bibr' target='#b40'>(Ting & Witten, 1997)</ns0:ref> and has been used in several classification studies, such as malware detection <ns0:ref type='bibr' target='#b16'>(Gupta & Rani, 2020)</ns0:ref>, credit card fraud detection <ns0:ref type='bibr' target='#b31'>(Olowookere & Adewale, 2020)</ns0:ref>, and spam detection <ns0:ref type='bibr' target='#b35'>(Saeed, Rady & Gharib, 2019)</ns0:ref>. It can perform both classification and regression. The base-predictors are trained on the input data, and their outputs are given to a meta-classifier that makes the final prediction about the class of an input sample. The meta-classifier can be any classifier, such as AdaBoost or a regression model. The primary aim of the meta-classifier is to learn the optimal weights for combining the predictions of the base-predictors and so produce better results than any individual base-predictor. In this way, Stacking reduces variance and improves classification quality. For an unseen article from the test set, the base-predictors first classify the article, and their predictions are then passed to the stacking-based ensemble learner, which makes the final classification of the article as either fake or legitimate.</ns0:p></ns0:div>
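As a concrete illustration, a minimal sketch in Python with scikit-learn (an assumption for illustration only; the experiments in this paper use WEKA) of a stacked ensemble over the three base-predictors with an AdaBoost meta-classifier could look as follows.

# Sketch only: base-predictor outputs become the meta-classifier's inputs.
from sklearn.ensemble import StackingClassifier, AdaBoostClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC

stacking = StackingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("dt", DecisionTreeClassifier()),
                ("svm", LinearSVC())],
    final_estimator=AdaBoostClassifier(),  # meta-classifier
    cv=5)                                  # out-of-fold base predictions feed the meta-level

# stacking.fit(X_train, y_train)
# labels = stacking.predict(X_test)        # final fake/legitimate decision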
<ns0:div><ns0:head>Voting</ns0:head><ns0:p>Voting is a meta-classifier that combines several base-predictors using different combination rules; a base-predictor can be any machine learning model. The individual base-predictors are trained on the training data, and their outputs are combined using a combination rule such as majority voting, minimum or maximum probability, or product of probabilities <ns0:ref type='bibr' target='#b24'>(Kittler, Hater & Duin, 1996)</ns0:ref>. Majority voting is also known as hard voting, as the class with the majority of votes is taken as the input article's final class. In soft voting, the final class is the class with the highest probability averaged over the individual predictors <ns0:ref type='bibr' target='#b14'>(González et al., 2020)</ns0:ref>. The Voting method has been used in several classification tasks such as fake news detection <ns0:ref type='bibr' target='#b23'>(Kaur, Kumar & Kumaraguru, 2020)</ns0:ref>, spam detection <ns0:ref type='bibr' target='#b35'>(Saeed, Rady & Gharib, 2019)</ns0:ref>, and slope stability analysis <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
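The combination rules above can be made explicit with a small helper function (an illustrative sketch, not part of the paper's WEKA pipeline; the function name and its inputs are assumptions). It merges the per-class probabilities produced by the base-predictors according to the chosen rule.

import numpy as np

def combine(probas, rule="majority"):
    # probas: list of (n_samples, n_classes) probability arrays, one per base-predictor
    stacked = np.stack(probas)                      # shape (n_models, n_samples, n_classes)
    if rule == "majority":                          # hard voting over predicted classes
        votes = stacked.argmax(axis=2)              # (n_models, n_samples)
        n_classes = stacked.shape[2]
        return np.array([np.bincount(col, minlength=n_classes).argmax() for col in votes.T])
    if rule == "average":                           # soft voting
        return stacked.mean(axis=0).argmax(axis=1)
    if rule == "product":                           # product of probabilities
        return stacked.prod(axis=0).argmax(axis=1)
    if rule == "minimum":                           # minimum probability rule
        return stacked.min(axis=0).argmax(axis=1)
    if rule == "maximum":                           # maximum probability rule
        return stacked.max(axis=0).argmax(axis=1)
    raise ValueError("unknown rule: " + rule)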
<ns0:div><ns0:head>Grading</ns0:head><ns0:p>Grading is an ensemble model originally presented by <ns0:ref type='bibr' target='#b37'>(Seewald & Fürnkranz, 2001)</ns0:ref>. Grading is a type of meta-classification that corrects the graded predictions of the base-predictors at the base level, assuming that different base-predictors make different errors. Graded predictions are predictions that have been marked as correct or incorrect for the corresponding base-predictor. A meta-classifier is learned for each base-predictor, and its job is to predict when that base-predictor will err; these meta-classifiers are trained on a training set constructed from the graded predictions of the corresponding base-predictors, used as new class labels. Grading differs from Stacking in that the meta-classifiers are trained on whether each base-predictor's prediction is correct or incorrect, rather than on the predictions themselves. Several studies show that Grading outperforms the Voting and Stacking ensemble models on classification tasks <ns0:ref type='bibr' target='#b37'>(Seewald & Fürnkranz, 2001;</ns0:ref><ns0:ref type='bibr' target='#b14'>González et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cascade Generalization</ns0:head><ns0:p>Cascade Generalization is a special form of stacked generalization that uses a sequentially layered architecture for combining multiple classifiers; the predictions of several base-predictors are used in the next stage for the final prediction <ns0:ref type='bibr' target='#b13'>(Gama & Brazdil, 2000)</ns0:ref>. At each level, the original data is extended by inserting new attributes derived from the class probability distributions given by the base-predictors. Cascade Generalization differs from Stacking in that Stacking is parallel while Cascade is sequential in nature. Because of this sequential nature, intermediate models have access to both the original attributes and the predictions of the lower-level models. Cascade provides rules for choosing the high-level and low-level models for classification. A major problem of Cascade is that it transforms the feature space into a new, higher-dimensional feature space (the curse of dimensionality), which sometimes leads to a more difficult learning problem <ns0:ref type='bibr' target='#b13'>(Gama & Brazdil, 2000)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ensemble Selection</ns0:head><ns0:p>Ensemble Selection is a method for constructing an ensemble from several machine learning models. It starts with an empty ensemble and adds a model to the ensemble only if doing so increases the ensemble's performance; this process is repeated up to a specified number of iterations or until all the models have been used <ns0:ref type='bibr' target='#b10'>(Caruana, Ksikes & Crew, 2014)</ns0:ref>. Models can be added to the ensemble in several ways, such as forward selection, backward elimination, or selecting the best model, and the procedure stops adding models when the ensemble's performance starts to decrease after reaching its best value. Ensemble Selection allows ensembles to be optimized for performance metrics such as accuracy, cross-entropy, mean precision, or ROC area <ns0:ref type='bibr' target='#b9'>(Ballard & Wang, 2016;</ns0:ref><ns0:ref type='bibr' target='#b30'>Nguyen et al., 2020)</ns0:ref>. In a recent study, Ensemble Selection outperformed other ensemble models in classifying 62 datasets <ns0:ref type='bibr' target='#b30'>(Nguyen et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
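A rough sketch of the forward-selection variant described above is given below (an assumed, simplified Python implementation for illustration; it is not the WEKA EnsembleSelection code used in the experiments, and the function name and arguments are hypothetical).

import numpy as np

def forward_ensemble_selection(models, X_val, y_val, score_fn, max_iter=10):
    # models: already-fitted classifiers with predict_proba; score_fn: e.g. balanced accuracy
    ensemble, best_score = [], -np.inf
    for _ in range(max_iter):
        best_candidate, best_candidate_score = None, best_score
        for m in models:                      # a model may be added more than once
            trial = ensemble + [m]
            # average the member probabilities and take the most likely class
            proba = np.mean([member.predict_proba(X_val) for member in trial], axis=0)
            s = score_fn(y_val, proba.argmax(axis=1))
            if s > best_candidate_score:
                best_candidate, best_candidate_score = m, s
        if best_candidate is None:            # stop once no addition improves the ensemble
            break
        ensemble.append(best_candidate)
        best_score = best_candidate_score
    return ensemble, best_score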
<ns0:div><ns0:head>Methodology and corpus construction</ns0:head><ns0:p>The proposed framework for fake news detection consists of five phases. The first phase describes the procedure used to design a corpus of Urdu news articles. The second phase explains the preprocessing operations performed on the text of the news articles. The third phase covers feature selection, or dimensionality reduction. The fourth phase describes the individual machine learning models that serve as base-predictors for the ensemble models. Finally, the fifth phase describes the ensemble learning models used in this study. The architecture, with its five layers, is shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Corpus Design</ns0:head><ns0:p>In this study, we chose two corpora of Urdu-language text news articles for our experiments. As Urdu is a resource-poor language, to the best of our knowledge there is no standard corpus available for the fake news detection task. Because of the lack of linguistic resources, collecting news articles from multiple sources is a tough task; in addition, annotating these news articles based on their contents requires expert knowledge, considerable time, and budget. Therefore, designing an augmented corpus is a practical way to perform research on fake news detection for resource-poor languages. Our first corpus is the Urdu Fake News (UFN) augmented corpus. It contains two thousand news articles randomly selected and translated from an English-language fake news corpus using the online Google Translate service. The original English corpus contains nearly 8000 news articles. We picked a subset of two thousand articles because (1) manually translating all the articles in the original corpus is time-consuming and difficult, (2) English-to-Urdu translation using Google Translate is not one hundred percent accurate and, to the best of our knowledge, no study has investigated this problem, which remains an open research question for future studies, and (3) our aim is to explore the detection performance of ensemble learning techniques. Several recent studies on fake news detection in Slovak <ns0:ref type='bibr' target='#b22'>(Kapusta & Obonya, 2020)</ns0:ref>, Italian <ns0:ref type='bibr' target='#b12'>(Fornaciari &</ns0:ref><ns0:ref type='bibr' target='#b12'>Poesio, 2013), and</ns0:ref><ns0:ref type='bibr'>Spanish (Posadas-Durán et al., 2019)</ns0:ref> used corpora with even fewer than two thousand news articles (see Table <ns0:ref type='table'>1</ns0:ref>). Our second corpus is the small Bend the Truth (BET) corpus designed and annotated by <ns0:ref type='bibr'>(Amjad et al., 2020)</ns0:ref>, which contains only 900 original Urdu news articles. A sample of the Urdu news articles is shown in Table <ns0:ref type='table'>2</ns0:ref>. After translation, each Urdu article kept the label of the corresponding article in the English corpus. The final corpus is available online on GitHub in CSV file format. The statistics of both corpora are shown in Table <ns0:ref type='table'>3</ns0:ref>. It can be noticed that our designed UFN corpus is larger than the BET corpus in terms of the total number of articles, vocabulary size, and article length.</ns0:p></ns0:div>
<ns0:div><ns0:head>Corpus Preparation and Preprocessing</ns0:head><ns0:p>The articles in the corpus are in an unstructured format and cannot be processed directly by the machine learning models; we must perform a series of operations to convert the unstructured corpus into a structured one. We cleaned and processed the news articles of both corpora before generating the feature vectors for feature selection. We tokenized the text on space characters, and removed special characters, email addresses, and website URLs from the text. After cleaning the text, we removed the most frequent and the rare words of the Urdu language (also known as stopwords). The cleaned and preprocessed articles were converted into numeric feature vectors using the term frequency-inverse document frequency (TF-IDF) method, as used in a recent study <ns0:ref type='bibr' target='#b32'>(Ozbay & Alatas, 2020)</ns0:ref>. Both corpora were passed through the same preprocessing steps.</ns0:p></ns0:div>
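A minimal sketch of this preprocessing pipeline in Python with scikit-learn follows (an illustration only; the stopword set below contains just a few common Urdu function words and stands in for the full list, which is an assumption, and "articles" is a hypothetical list of raw article strings).

import re
from sklearn.feature_extraction.text import TfidfVectorizer

URDU_STOPWORDS = {"کا", "کی", "کے", "اور", "میں", "سے"}  # placeholder subset, not the full list used

def clean(text):
    text = re.sub(r"\S+@\S+|https?://\S+|www\.\S+", " ", text)     # drop e-mail addresses and URLs
    text = re.sub(r"[^\w\s]", " ", text)                           # drop special characters
    tokens = [t for t in text.split() if t not in URDU_STOPWORDS]  # space tokenization + stopword removal
    return " ".join(tokens)

# TF-IDF feature vectors over the cleaned articles
vectorizer = TfidfVectorizer(preprocessor=clean)
# X = vectorizer.fit_transform(articles)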
<ns0:div><ns0:head>Feature Selection</ns0:head><ns0:p>In our experiments, we used three feature selection methods: character tri-grams, bag-of-words (BoW), and information gain (IG). A recent study shows the superiority of character n-grams over word-level n-grams in short-text classification tasks (e.g., offensive language detection) on Urdu text comments <ns0:ref type='bibr' target='#b3'>(Akhter et al., 2020b)</ns0:ref>. A character n-gram is a contiguous sequence of characters in the text; here the value of n is set to three, so each combination of three characters forms a tri-gram feature. We extracted 1084 character tri-grams from the UFN corpus and 1091 from the BET corpus. BoW is a content-based feature representation in which a news article is represented as the set of words that occur in it at least once; we obtained 1225 and 1214 BoW features from BET and UFN, respectively. IG measures the goodness of the features in the text, and a comparative study concludes that IG is the best feature selection method for document-level text classification of Urdu. In our experiments, we selected the top one thousand IG features from both corpora.</ns0:p></ns0:div>
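For illustration, the three feature sets could be produced as follows (a sketch in Python with scikit-learn, which is an assumption since the experiments use WEKA; mutual information is used here as a stand-in for the information gain criterion, and "articles" and "labels" are hypothetical names).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

char_trigrams = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))  # character tri-gram features
bow = TfidfVectorizer(analyzer="word")                                # bag-of-words features

# Information gain: keep the top 1000 word features ranked by mutual information
# X_bow = bow.fit_transform(articles)
# ig = SelectKBest(mutual_info_classif, k=1000)
# X_ig = ig.fit_transform(X_bow, labels)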
<ns0:div><ns0:head>Heterogeneous Machine Learning Models</ns0:head><ns0:p>For machine learning classification, we use three individual machine learning models, NB, SVM, and DT, to detect fake news. The three models are heterogeneous, and their working principles differ entirely from each other. Using character-level n-grams extracted from the text, these models analyze an article and classify it into one of the two categories, legitimate or fake. A detailed description of these machine learning models is given in Section 3.1.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ensemble Learning Models</ns0:head><ns0:p>Ensemble classification is usually based on two levels: the base level and the ensemble level. We use the three diverse machine learning models, SVM, DT, and NB, as base-predictors at the base level. The input to these base-predictors is the character-level n-grams extracted from the news articles, and their output predictions are the input to the ensemble-level models. The basic aim of using an ensemble model is to overcome the base-predictors' shortcomings and improve overall prediction accuracy. We use five ensemble models for ensemble classification: Voting, Grading, Stacking, Cascade Generalization, and Ensemble Selection. A brief description of these ensemble models is given in Section 3.2.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance Measures</ns0:head><ns0:p>To compare the performance of the individual machine learning models and the ensemble learning models, we employ three well-known performance measures: mean absolute error (MAE), balanced accuracy (BA), and area under the curve (AUC).</ns0:p></ns0:div> <ns0:div><ns0:head>Balanced Accuracy (BA)</ns0:head><ns0:p>Separating fake news from legitimate news is a binary classification task: a model has to decide whether an article is legitimate or fake. As the focus of this study is to detect both classes correctly, we use the balanced accuracy measure to compare the performance of our models. Balanced accuracy averages the proportion of correct predictions of each class individually, i.e., the recall obtained on each class, and can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_0'>Balanced Accuracy (BA) = [TP/(TP + FN) + TN/(TN + FP)] / 2</ns0:formula><ns0:p>(1)</ns0:p></ns0:div>
<ns0:div><ns0:head>Area Under the receiver operating characteristic Curve (AUC)</ns0:head><ns0:p>The area under the receiver operating characteristic curve, also known as the area under the curve (AUC), is used to estimate the performance of a machine learning model with a single value. The AUC gives the probability that the model will rank a randomly chosen positive sample higher than a randomly chosen negative sample. The AUC can be calculated by Equation 2, where TP rate is the proportion of fake articles that are correctly predicted as fake (also known as recall) and FP rate, given by Equation 3, is the proportion of legitimate news articles that are misclassified as fake.</ns0:p><ns0:formula xml:id='formula_1'>AUC = (1 + TP rate - FP rate) / 2</ns0:formula><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_2'>FP rate = FP / (FP + TN)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>General rules for categorizing the performance of a machine learning model using AUC are given in Table <ns0:ref type='table'>4</ns0:ref>. These rules are used and discussed in <ns0:ref type='bibr' target='#b33'>(Pham et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mean Absolute Error (MAE)</ns0:head><ns0:p>The error refers to the absolute difference between the actual and the predicted values. MAE measures the average magnitude of the error over a set of predictions made by the model, where y_j is the actual label of article j, ŷ_j is the predicted label, and N is the number of predictions. MAE can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_3'>Mean Absolute Error (MAE) = (1/N) Σ_{j=1}^{N} |y_j - ŷ_j|<ns0:label>(4)</ns0:label></ns0:formula></ns0:div>
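These three measures have direct counterparts in common libraries; the following is a brief illustrative sketch in Python with scikit-learn (an assumption, since the paper's experiments use WEKA), where y_true are the gold labels, y_pred the predicted labels, and y_score the predicted probabilities of the fake class.

from sklearn.metrics import balanced_accuracy_score, roc_auc_score, mean_absolute_error

# ba  = balanced_accuracy_score(y_true, y_pred)   # Equation (1): average recall per class
# auc = roc_auc_score(y_true, y_score)            # area under the ROC curve
# mae = mean_absolute_error(y_true, y_pred)       # Equation (4), with labels coded as 0/1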
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Experiment Setup</ns0:head><ns0:p>As mentioned earlier, three diverse machine learning models, NB, DT, and SVM, are used in this study to classify news articles into the legitimate and fake classes. We use the well-known data mining tool WEKA for the experiments; WEKA provides a list of supervised and unsupervised machine learning models, data preprocessing techniques, and various performance evaluation methods. Machine learning models have a few parameters, called hyper-parameters, that are tuned to minimize the difference between training error and testing error. In our experiments, we do not fine-tune the hyper-parameters of these models; we use the default parameters given in WEKA, as the defaults already provide good values in most cases. We use the J48 algorithm for the DT implementation and the LibLINEAR algorithm for the SVM implementation. The same DT, SVM, and NB models are used as base-predictors for all the ensemble models. For Voting and Stacking, along with the three base-predictors, we use AdaBoost as the meta-classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Model training and testing</ns0:head><ns0:p>For training and validation of the individual machine learning models and the ensemble models, we use k-fold cross-validation because, as mentioned in Section 4.1, neither corpus is divided into separate training and testing subsets. k-fold cross-validation is a popular choice and has been used in many past research studies. In our experiments, we use 10-fold cross-validation, where k-1 folds are used for training and one fold is used to test the model's prediction performance; this process is repeated ten times, and the results are combined into the final performance score.</ns0:p></ns0:div>
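As an illustration, the same protocol can be expressed as follows (a sketch in Python with scikit-learn, an assumption for illustration; it is broadly equivalent to WEKA's default stratified cross-validation, and "model", "X", and "y" are hypothetical names).

from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# scores = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
# print(scores.mean())   # final performance score averaged over the ten folds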
<ns0:div><ns0:head>Results and discussion of machine learning models</ns0:head><ns0:p>The results achieved by the individual machine learning models using 10-fold cross-validation are shown in Table <ns0:ref type='table'>5</ns0:ref>. We compare the performance using BA, AUC, MAE, and time. A close observation of the results reveals that SVM outperforms the other models on all performance metrics for both corpora. A model is considered an accurate classifier if its balanced accuracy is higher than that of the other models. The BA metric shows that SVM outperforms the others on the UFN corpus, achieving BA scores of 81.6%, 86.7%, and 87.3% using tri-gram, BoW, and IG features, respectively. SVM also outperforms the others on the BET corpus, achieving 76.3%, 62.7%, and 62.4% using tri-gram, BoW, and IG features, respectively. The IG features outperform the others and achieve the maximum BA score of 87.3% on the large UFN corpus, while the tri-gram approach achieves the maximum BA score of 76.3% on the BET corpus. With the lowest balanced accuracy scores, NB shows the worst performance. It is also noticeable that SVM has higher accuracy on UFN than on BET: the size of the UFN corpus, in terms of the number of articles and vocabulary size, is almost double that of BET, and SVM is considered a good model for classification in high-dimensional feature spaces <ns0:ref type='bibr' target='#b11'>(Faustini & Covões, 2020)</ns0:ref>.</ns0:p><ns0:p>Similarly, the AUC scores of the SVM model are higher than those of DT and NB on both corpora; SVM achieves AUC values of 87.3% and 76.3% on the UFN and BET corpora, respectively. Here again, IG proves to be the best feature selection method for UFN and tri-grams for BET, as SVM achieves its maximum AUC scores with the IG and tri-gram features, respectively. Further, as per the rules of Table <ns0:ref type='table'>4</ns0:ref>, a comparison of the AUC scores of all the models shows that the performance of SVM on UFN is excellent (0.8 ≤ AUC < 0.9) with all the feature sets, whereas on the BET corpus, SVM performance is only acceptable (0.7 ≤ AUC < 0.8) with tri-gram features, and the performance of DT and NB is just acceptable. From Table <ns0:ref type='table'>5</ns0:ref>, it can be seen that, in terms of MAE, the prediction error of SVM is the lowest: SVM achieves the lowest MAE of 12.7% with IG on UFN and 23.5% with tri-grams on BET. The highest MAE values of NB confirm its poor performance at detecting Urdu fake news. A model is considered efficient if it takes less time than other models to build on a given corpus. Again, SVM takes the minimum time to build the model: 0.15 seconds with BoW on UFN and 0.17 seconds with IG on BET. DT takes the longest time with all feature sets to build the model on both corpora. Further, it is notable that all the models perform better on our machine-translated UFN corpus than on BET, the corpus of original news articles; this indicates that the Google-translated text, the preprocessing methods, and the feature selection methods together improve the classification accuracy of our models for detecting fake news. Therefore, after analyzing the results, we conclude that SVM is the most accurate, reliable, efficient, and robust of the three classification models for detecting fake news articles in Urdu text.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and discussion of ensemble models</ns0:head><ns0:p>The values of the four evaluation metrics (balanced accuracy, area under the curve, mean absolute error, and time) achieved by the five ensemble models on both corpora are given in Table <ns0:ref type='table'>6</ns0:ref>. Time and MAE are metrics for which lower values are better, whereas higher values of balanced accuracy and AUC are desired. For the time metric, Voting clearly takes less time than the other models to build a model on the input corpus: 11.52 seconds on UFN and 3.12 seconds on BET. Because the BET corpus is very small, Voting builds its model on it in the least time of all the models. It can also be noticed that the minimum time taken by Voting on both corpora is obtained with tri-grams, which shows the efficiency of the tri-gram method over IG and BoW for building a model. For the MAE metric, the Voting model again achieves the minimum values on both corpora, which shows that the magnitude of the error in predicting the labels of both types of articles is considerably smaller. The average error magnitude of the Voting model is 18.41% with tri-grams on BET and 10.7% with IG on UFN, which again indicates that IG is a good feature selection method for the large UFN corpus while tri-grams suit the small BET corpus.</ns0:p><ns0:p>To estimate an ensemble model's performance and decide whether it is acceptable, we use the AUC performance-ranking metric. On the BET corpus, only the Ensemble Selection model with the tri-gram feature method reaches AUC ≥ 90%; with the IG and BoW features, the AUC scores of all the other models are below 75%, which means their performance is at best acceptable. On the UFN corpus, Cascade Generalization achieves outstanding performance (AUC ≥ 90%) in detecting fake news with BoW and IG (see Table <ns0:ref type='table'>4</ns0:ref>), reaching AUC scores of 92.0% and 92.7% for the BoW and IG methods, while its 86.8% AUC score with tri-grams places its performance in the excellent category. Thus, Cascade Generalization achieves the best AUC score using IG on UFN, while Ensemble Selection achieves the best AUC using tri-gram features on BET. As we are interested in how well a model predicts both labels ('fake' and 'legitimate') correctly, we use balanced accuracy; the higher the BA achieved by a model, the more accurately it distinguishes fake articles from legitimate articles. The experimental results reveal that Ensemble Selection and Voting outperform the other models on the BET and UFN corpora, respectively. Ensemble Selection achieves the maximum BA of 83.3% on the BET corpus using the tri-gram features, while on the UFN corpus the Voting model significantly outperforms the other four ensemble models and achieves an 89.3% BA score using the IG features. Again, IG outperforms the other feature selection methods on UFN, while tri-grams outperform them on the BET corpus.</ns0:p><ns0:p>The Voting model combines the base-predictors using an ensemble rule; four popular rules are majority voting, product of probabilities, minimum probability, and maximum probability.</ns0:p>
<ns0:p>Given Voting's impressive performance on both corpora in terms of balanced accuracy, MAE, and time, as shown in Table <ns0:ref type='table'>6</ns0:ref> and discussed above, we further investigate its performance under different ensemble rules. The mean absolute error values achieved with each ensemble rule are shown in Figure <ns0:ref type='figure'>3</ns0:ref>. We conclude that the minimum probability rule is the most effective way to combine the predictions of the base-predictors, as it achieves the lowest error values on both corpora: the Voting model achieves MAE scores of 16.74% and 18.41% on the UFN and BET corpora, respectively. Moreover, in our experiments the minimum probability and product of probabilities rules perform identically on both corpora.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance comparison of machine learning and ensemble learning models</ns0:head><ns0:p>It is important to know how the performance of the ensemble models differs from that of the individual machine learning models. A summary of the results achieved by the best ML and EL models with the best feature selection method is given in Table <ns0:ref type='table'>7</ns0:ref>. A comparative analysis of the results shows that the machine learning models are more efficient than the EL models, taking less time to build a model on both corpora; SVM takes less than a second on each corpus. Among the EL models, Voting is the most efficient, taking 11.52 and 3.12 seconds on UFN and BET, but it is still much more costly than SVM because of its multiple base-predictors: the EL model combines the three heterogeneous ML models, and the Voting model then predicts the final label based on the predictions of the base-models. For error analysis, the MAE values show that the EL models have lower MAE than the individual ML models. Among the ML models, SVM again outperforms NB and DT by achieving the minimum MAE scores on both corpora, while Voting outperforms the other EL models on both corpora. The lower MAE scores of the EL models mean that these models are more accurate in fake news detection. EL models reduce MAE at two levels, the base-predictor level and the ensemble level: Voting takes advantage of the MAEs of its three base-predictors and reduces them by using the minimum probability rule to predict the final class. SVM achieves maximum AUC scores of 87.3% and 76.3% on UFN and BET, which rank its predictions as excellent on UFN and acceptable on BET. Cascade Generalization and Ensemble Selection achieve AUC scores of 92.7% and 91.0% on UFN and BET, respectively, which categorizes the detection performance of both models as outstanding. SVM outperforms the other ML models by achieving the maximum BA scores, 87.3% on UFN and 76.3% on BET. Among the EL models, Voting achieves 89.3% BA and outperforms the other EL and ML models on the UFN corpus, while on the BET corpus the Ensemble Selection model produces 83.3% BA, the maximum among all models. The comparison of the EL and ML methods across the three feature selection methods is also valuable. SVM shows the best performance among the three ML models on both the small and the large corpus and achieves the best scores on all the performance measures; character tri-grams boost SVM performance on the small BET corpus, while IG does so on the large UFN corpus. Voting shows the best performance among the EL models on the time and MAE measures on both corpora, and Ensemble Selection performs well on the small BET corpus on two performance measures. The IG features work well with Voting to predict the class of a news article on UFN, while tri-grams work best with Voting and Ensemble Selection on BET. Overall, the IG features work well on a large corpus, while character tri-grams are well suited to a small corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Fake news detection through ensemble models is a prominent topic in machine learning. When traditional machine learning models are used for the fake news detection task, the results are not encouraging, because their performance is limited by corpus characteristics. In this study, we deliberately chose ensemble methods to classify fake and legitimate news articles in the Urdu language. First, we used three machine learning models to classify two fake news corpora; our experiments on the two Urdu news corpora show that the individual machine learning model SVM outperforms DT and NB on both corpora, achieving the best balanced accuracy and AUC scores and the lowest MAE. Second, we used five ensemble models for the same task and found that, among the ensembles built on the three base-predictors DT, NB, and SVM, the Ensemble Selection and Voting models outperform the others on the BET and UFN corpora, respectively. After analyzing the MAE, AUC, time, and BA values, we conclude that Voting with the minimum probability rule is the best EL model for the fake news detection task. The IG features work well on the large corpus, while character tri-grams work well on the small corpus. This study has several limitations that need to be addressed in future studies. The proposed UFN corpus still needs to grow through the addition of more news articles. We used the online Google Translate API for English-to-Urdu translation, and we acknowledge that the translation accuracy is not one hundred percent; a future study is needed to explore the translation accuracy and quality of various translation APIs such as Baidu and Google. The potential of deep learning models can also be explored for detecting fake news in Urdu. Further, we hope to design a multilingual fake news corpus of English and Urdu news articles.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4: Rules for classifying the discrimination using AUC</ns0:head><ns0:p>AUC < 0.5: No discrimination</ns0:p><ns0:p>0.7 ≤ AUC < 0.8: Acceptable</ns0:p><ns0:p>0.8 ≤ AUC < 0.9: Excellent</ns0:p><ns0:p>AUC ≥ 0.9: Outstanding</ns0:p></ns0:div>
</ns0:body>
" | "Original Article Title: “Supervised ensemble learning methods towards automatically filtering Urdu fake news within social media”
To: PeerJ Computer Science Editor
Re: Response to reviewers
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments.
We are very thankful to all the reviewers and the editor for their valuable feedback, which helped and guided us in improving the quality of our manuscript. We have tried our best to fix all the identified gaps and technical issues raised by the honorable reviewers and the editor. To make the changes easy to follow, we have highlighted the modified text in the manuscript.
We are uploading (a) our point-by-point response to the comments (below) (Rebuttal File), (b) an updated manuscript with tracked changes, and (c) a clean updated manuscript without tracked changes.
Best regards,
Akhter et al.
Editor Comments:
There are a few minor comments from the reviewers which need to be addressed before the manuscript can be accepted. I hope that you can quickly deal with these comments and return your manuscript to us.
Response 1: Thank you. All the changes have been completed.
Reviewer 1 (Anonymous)
Basic Reporting
In general, this version of the paper has been improved both in terms of content and writing. Several parts have been rewritten and now the quality of the paper is higher than before.
Response : Thank you
Experimental Design
The authors address my comments satisfactorily, both the accuracy of the English-to-Urdu translation and the news articles selection.
Response : Thank you
Validity of the Findings:
The authors have added the limitations of this study to the manuscript, which makes it more reasonable.
Response : Thank you
Comments to the Author:
The authors did some progress with the new changes, and I believe that it would be accepted.
Response : Thank you. We hope so…
Reviewer 2 (XinYi Zhou)
Basic Reporting:
Please see my general comments for the author.
Experimental Design:
No comment.
Validity of the findings:
No comment.
Comments to the Author:
I appreciate that the authors have addressed most of my previous concerns. Nevertheless:
Comment 1: the writing can be further improved before being accepted.
Response 1: Thank you for your suggestion. We have further reviewed the writing of our article and corrected the spelling and grammar issues.
Comment 2: the review for low-resource language datasets is insufficient, where several newly released ones for fake news research are suggested to include, such as CHECKED and MM-COVID. Meanwhile, I hope the following surveys could help the authors with the review.
Response 2: Thank you for your suggestion. We have added a few recent articles about fake news detection in resource-poor languages, and Table 1 has been updated with these new articles. Further, the suggested articles have been added to the discussion of resource-rich languages at lines 200 to 203.
" | Here is a paper. Please give your review comments after reading it. |
10 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Stream data is data that is generated continuously from different data sources and is ideally defined as data with no discrete beginning or end. Processing stream data is a part of big data analytics that aims at querying the continuously arriving data and extracting meaningful information from the stream. Although such streams were earlier processed using batch analytics, there are now applications such as the stock market, patient monitoring, and traffic analysis for which generating the output only at the level of hours or minutes can make a drastic difference. The primary goal of any real-time stream processing system is to process the stream data as soon as it arrives.</ns0:p><ns0:p>Correspondingly, analytics of stream data also needs to consider surrounding dependent data. For example, stock market analytics results are often useless if we do not consider the associated or dependent parameters which affect the result. In a real-world application, these dependent stream data usually arrive from a distributed environment.</ns0:p><ns0:p>Hence, the stream processing system has to be designed to deal with delays in the arrival of such data from distributed sources. We have designed a stream processing model that deals with all possible latencies and provides an end-to-end low-latency system. We have performed stock market prediction by considering affecting parameters, such as the USD rate, oil price, and gold price, with an equal arrival rate. We have calculated the Normalized Root Mean Square Error (NRMSE), which simplifies the comparison among models with different scales. A comparative analysis of the experiments presented in the report shows a significant improvement in the result when the affecting parameters are considered. In this work, we have used a statistical approach to forecast the probability of data latency arising from distributed sources. Moreover, we have performed preprocessing of the stream data to ensure at-least-once delivery semantics.</ns0:p><ns0:p>In the direction of providing low latency in processing, we have also implemented exactly-once processing semantics. Extensive experiments have been performed with varying window sizes and data arrival rates. We have concluded that system latency</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Due to the growth of social media and its applications, which demand a massive amount of data, the data generation rate is high and continuous. They are known as stream data. It is not feasible to store such data substantially, and usual data mining methods cannot handle it. So we need a system or model to analyze and manage such stream data, which is called a stream management system <ns0:ref type='bibr' target='#b6'>(Bhatt & Thakkar, 2019a)</ns0:ref>. Commonly used terms which are related to stream data are,  Unbounded data: The data which is not fixed like batch data, and the data which is continuous and endless is called unbounded data. There is a need to have a specific type of execution engine for processing streaming data ('O'Reilly Media -Technology and Business Training'; <ns0:ref type='bibr' target='#b1'>Akidau, 2015)</ns0:ref>  Unbounded data processing: A continuous processing of unbounded data <ns0:ref type='bibr' target='#b1'>(Akidau, 2015)</ns0:ref>.</ns0:p><ns0:p>Batch processing becomes a subset of stream processing <ns0:ref type='bibr' target='#b6'>(Bhatt & Thakkar, 2019a)</ns0:ref>.</ns0:p><ns0:p> Low-latency, approximate, and abstract results: Unlike batch processing systems, streaming engines are related to low-latency or hypothetical ('O'Reilly Media -Technology and Business Training'; <ns0:ref type='bibr' target='#b1'>Akidau, 2015)</ns0:ref>. As stream management models cannot observe and compute the entire data exactly, some kind of approximation is required <ns0:ref type='bibr'>(Masseglia et al.)</ns0:ref>. Therefore, Processing of Stream data is also crucial at the same time complex, and the causes are <ns0:ref type='bibr' target='#b6'>(Bhatt & Thakkar, 2019a)</ns0:ref>:</ns0:p><ns0:p> Businesses process the data having a particular timestamp and need to process in time order. Optimizing stream processing is a better way to deal with the issue of latency.</ns0:p><ns0:p> Large and infinite data that are in recent trade are efficiently controlled with a specific system that is intended for such continuously arriving data.</ns0:p><ns0:p> Processing data in such a system is a challenging task and requires an effective processing mechanism that is well defined by considering the processing of continuously arriving data. Stream data are often assumed to arrive from a variety of heterogeneous data sources, at high velocities, and in large volumes. How to retrieve valuable information based on the high-speed stream data and massive historical data has become a new challenge in the field of data mining to perform analysis immediately on incoming data <ns0:ref type='bibr' target='#b23'>(Wu, 2014;</ns0:ref><ns0:ref type='bibr' target='#b2'>Akidau et al., 2015)</ns0:ref>. The essential task of any streaming system is, processing arriving data from scattered sources and generate an output promptly. The critical deliberations for that desired task are Latency and Throughput. Hence Dealing with stream imperfections such as late data, lost data and out-of-order data becomes significant research in big data stream processing <ns0:ref type='bibr' target='#b7'>(Bhatt & Thakkar, 2019b)</ns0:ref>. There are many real-world stream data based applications where the prediction of such an application depends on the value of different distributed data sources as well. 
For example, to perform the prediction on the stock market data, we also need to consider the value of the other relevant parameters which affects the stock market, as there exist dependency between the price of Gold, Oil, USD, and stock market <ns0:ref type='bibr' target='#b4'>(Arfaoui & Ben Rejeb, 2017;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bedoui et al., 2018)</ns0:ref>. Hence any delay in incoming data from different distributed sources will affect the prediction. So there is a need that one should forecast the probability of delay in the incoming streams of data through the appropriate statistical method. Many challenges associated with stream data processing: 1) Since the source of continuously arriving stream data are distributed and their processing is dependent on other distributed parameters too, delay in any parameter introduces different types of latency such as data latency, system latency, and processing latency. 2) Statistical forecasting of the latency in the stream processing system to define the allowed lateness for a more accurate stream processing result. 3) To handle the repetitive processing of similar incoming data which will eventually increase the latency in overall stream processing. Henceforth there is a need to identify all the different types of latency present in the stream processing system. This paper presents the work to perform effective stream data processing with possible low latency. The main contributions of our work are summarized as follow:</ns0:p><ns0:p> We discover the strong relationship between the movement of the stock market and their dependent parameter. Accordingly, we design and implement the stream processing pipeline, which can process the distributed and continuously arriving stream data along with their dependent parameters with an equal arrival rate.</ns0:p><ns0:p> Normalized Root Mean Square Error (NRMSE) is calculated to measure the effect after considering the co-movement of related parameters for stock market prediction.</ns0:p><ns0:p> We explore the different types of latency, such as data latency and system latency in the stream processing system. Accordingly, we design and implement the proposed, end-toend low latency, stream processing model which deals with data latency and system latency.</ns0:p><ns0:p> The Statistical forecasting of the data latency in the stream processing system: o Compare and identify the appropriate latency distribution model.</ns0:p><ns0:p>o Forecast the data latency through appropriate probability density function and hazard rate.  The proposed system has been implemented by considering the 'Exactly-Once-Delivery' and 'Exactly-Once-Processing' semantics to ensure the low latency in stream processing.</ns0:p><ns0:p>Finally, an Experimental comparison between different window sizes and data arrival rate is presented.</ns0:p></ns0:div>
<ns0:div><ns0:head>Literature survey</ns0:head><ns0:p>A variety of approaches have been proposed to provide effective stream processing. Earlier stream data processing is used to be done by using the series of batch processing internally. Processing the stream data in small batches is still a time-consuming process which adds many types of delay in a system. Although, researchers have started working on batched stream processing systems that provide higher throughput with considerably lower latency. The work presented in <ns0:ref type='bibr' target='#b27'>(Zaharia et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b25'>Wu et al., 2017)</ns0:ref> defines the working of the batched stream processing system. However, results show such systems do not fit for stream fluctuation and uneven workload distribution. <ns0:ref type='bibr' target='#b21'>(Semeniuta & Falkman, 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Gabriel et al., 2020)</ns0:ref> state that periodic sampling became a dominant strategy for real-time stream processing systems. Although the periodic approach works well, in the distributed environment, one faces such challenges as latency. <ns0:ref type='bibr' target='#b9'>(Carbone et al., 2015)</ns0:ref> has presented processing of batch and stream in a single-engine where they analyzed, processing that is based on event-time may exhibit latency due to event-time processing-time delay. <ns0:ref type='bibr' target='#b0'>(Affetti et al., 2017)</ns0:ref> have analyzed the different stream processing window systems such as time window and count window to deal with latency. The choice of an appropriate window mechanism for stream processing is based on application. Although, under processing time semantics, the processing speed of the machines, as well as temporary overloads, might impact the output produced. <ns0:ref type='bibr' target='#b18'>(Miao et al., 2017)</ns0:ref> have presented the work on stream processing in a multi-core machine. They have presented a novel stream processing engine called 'StreamBox', where the primary design goal is to minimize latency with the help of epoch and pipeline parallelism. As the consideration of stream processing semantics can also affect the latency, the choice of appropriate semantic may lead to the lowest latency. Event-time latency and processing-time latency have been explored in the recent work presented in <ns0:ref type='bibr' target='#b16'>(Karimov et al., 2018)</ns0:ref> to provide possibly low latency stream processing. On the other hand, there is another possibility of latency into the stream processing system due to delay in arriving stream data, known as data latency. The state in the stream processing system, where data might have arrived into the system, but awaiting the processing also introduces the latency, known as the system latency. <ns0:ref type='bibr' target='#b2'>Akidau et al., 2015</ns0:ref> have defined the data flow model for balancing correctness and latency in processing out-of-order data. The author has used the concept of a watermark, which is a belief of input completeness concerning event times, to distinguish the data delay from the system delay. As the possibilities of drifts in stream data is a common issue, and to address the duration-based failure, the work presented in <ns0:ref type='bibr' target='#b11'>(Dong & Cui, 2021)</ns0:ref> has considered two-stage degradation in a dynamic environment specifically when the threshold is a random variable. 
Similar work to handle the delay through the threshold, <ns0:ref type='bibr' target='#b15'>(Huang et al., 2021)</ns0:ref>, has proposed a time-varying threshold for event-triggered mechanism using backstepping design. Authors in <ns0:ref type='bibr' target='#b10'>(Chen et al., 2020)</ns0:ref> have presented fuzzy adaptive two-bits-triggered control specifically for the uncertain system. They have concluded that addressing the input saturation for continuously arriving data and studying the inter-execution interval is important for an effective triggering mechanism. To provide low-latency message forwarding in sparse setting, a new opportunistic network framework called WON has been proposed by <ns0:ref type='bibr' target='#b13'>(Fu et al., 2019)</ns0:ref> which consider all the message-forwarding activities such as mobile-to-mobile, mobile-to-stationary, stationary-to-mobile, and stationary-to-stationary. Although authors have explored that, automatically get the recommended settings according to the input requirements can be an effective future work. To handle the time-varying delay related issues in stream processing, initial work presented in <ns0:ref type='bibr' target='#b28'>(Zhang et al., 2011)</ns0:ref> has analyzed Markovian jump systems. The improvement in the same was proposed by <ns0:ref type='bibr' target='#b26'>(Xiong et al., 2016)</ns0:ref>, where they have studied the delay-dependent stability and suggested the Lyapunov functional with the delay decomposition technique using Jensen inequality. Although authors have concluded there is scope to explore new integral inequalities to further improve the delay-dependent stabilization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Interdependencies between stock market data with dependent parameters</ns0:head><ns0:p>In the real world, an event-time based stream processing system, more specifically, the one where the prediction depends on multiple independent parameters suffer from different types of latency and low throughput. Existing windowing and triggering approach unable to handle different types of latency and to provide higher throughput for the unbounded data. As Arfaoui & Ben Rejeb, 2017 have analyzed, there is a dependency between the value of gold, USD, OIL, and the stock market. The purpose is to examine, from a global perspective, the oil, gold, USD, and stock prices interdependencies and to identify instantaneously direct and indirect linkages among them <ns0:ref type='bibr' target='#b4'>(Arfaoui & Ben Rejeb, 2017)</ns0:ref>. For instance, to perform stock market analysis, it is advisable to consider the co-movement, which has a high impact on the stock market analysis. Likewise, a recent study carried out by <ns0:ref type='bibr' target='#b5'>(Bedoui et al., 2018)</ns0:ref> proposes a Nested copula-based GARCH model to explore the dependence structure between oil, gold, and USD exchange rate. More importantly, a comparative framework based on three sub-periods is implemented to capture the co-movement during the regular and crisis period. Empirical results suggest that for both crisis periods the dependence between oil, gold, and USD exchange rate along with stock market value is stronger compared with the dependence during the untroubled period <ns0:ref type='bibr' target='#b5'>(Bedoui et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental analysis of stock market data</ns0:head><ns0:p>We have performed experiments on Apache Beam, a platform for processing streaming data that incorporates the different functionalities provided by Google and Apache into a single platform. We have used the 'Dataflow Runner' as a Beam runner ('Apache Beam Documentation'), which runs on Google Cloud Platform. The experiments have been performed on the stock market prediction application using the following Algorithm 1, a linear regression with multiple variables. The dataset considered, consisting of BSE stock market values together with the USD, gold price, and oil price, is described in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 1: Linear regression with multiple variables</ns0:head><ns0:p>Step 1: Prediction $\hat{y}(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 = \theta x^T$, where $x_1$ = USD, $x_2$ = Gold_Price, $x_3$ = Oil_Price.</ns0:p><ns0:p>Step 2: Cost function</ns0:p><ns0:formula xml:id='formula_0'>J(\theta) = \frac{1}{2m} \sum_j \left( y^{(j)} - \hat{y}^{(j)} \right)^2 = \frac{1}{2m} \left( y^T - \theta x^T \right) \left( y^T - \theta x^T \right)^T</ns0:formula><ns0:p>Step 3: Optimization of the parameters by the stochastic gradient descent algorithm</ns0:p><ns0:formula xml:id='formula_1'>\nabla J^{(j)}(\theta) = -\frac{2}{m} \sum_j \left( y^{(j)} - \theta x^{(j)T} \right) \cdot \left[ x^{(j)}_0 \; x^{(j)}_1 \; \dots \right]</ns0:formula><ns0:p>Step 1 of Algorithm 1 specifies the prediction function $\hat{y}(x)$. We have considered the additional parameters USD, Gold Price, and Oil Price. Step 2 shows the calculation of the cost function by considering the difference between the actual ($y$) and predicted ($\hat{y}$) values. We have optimized the parameters using stochastic gradient descent, as shown in Step 3. Figure <ns0:ref type='figure'>1</ns0:ref> shows the comparative analysis of the actual close value of the BSE stock market data against the prediction that considers the dependent parameters. We have also provided comparative results between the predictions with and without the dependent parameters in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. We have calculated the Normalized Root Mean Square Error (NRMSE), as shown in Eq. (1), which is the RMSE normalized by the mean observation so as to facilitate the comparison between models with different scales:</ns0:p><ns0:formula xml:id='formula_2'>NRMSE = \frac{RMSE}{\bar{O}} \qquad (1)</ns0:formula><ns0:p>where $\bar{O}$ in Eq. (1) is the average of the observed values, and RMSE can be calculated as shown in Eq. (2),</ns0:p><ns0:formula xml:id='formula_3'>RMSE = 100 \cdot \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( X_{obs,i} - X_{model,i} \right)^2} \qquad (2)</ns0:formula><ns0:p>where $X_{obs,i}$ in Eq. (2) is the observed value and $X_{model,i}$ is the forecast value. Generally, RMSE is preferred when comparing different models on the same data, while NRMSE is the statistical measure suitable for comparing different data scales. Hence, the NRMSE can be interpreted as a fraction of the overall range that is typically resolved by the model. As shown in Figure <ns0:ref type='figure'>3</ns0:ref>, we have observed that the NRMSE for the prediction considering the dependent parameters is 0.03 (3%), which is significantly better than the NRMSE of 0.09 (9%) for the prediction without the dependent parameters.</ns0:p></ns0:div>
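To make Algorithm 1 concrete, the following is a minimal Python sketch of a multivariate linear regression fitted by stochastic gradient descent, together with the RMSE/NRMSE computation of Eqs. (1)-(2). This is an editor-added illustration, not the authors' Beam implementation: the synthetic data, learning rate, and number of epochs are assumptions, and the RMSE here omits the factor of 100 used in Eq. (2).

```python
import numpy as np

def sgd_linear_regression(X, y, lr=1e-5, epochs=200):
    """Fit y ~ theta0 + theta1*USD + theta2*Gold + theta3*Oil by stochastic gradient descent."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend bias term x0 = 1
    theta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xj, yj in zip(Xb, y):                    # one sample at a time (stochastic)
            error = yj - xj @ theta                  # (y^(j) - theta x^(j)T)
            theta += lr * error * xj                 # gradient step; constant factors folded into lr
    return theta

def nrmse(y_obs, y_pred):
    """Eq. (2) without the x100 factor, normalized by mean(y_obs) as in Eq. (1)."""
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    return rmse / np.mean(y_obs)

# Toy usage with synthetic data standing in for USD, gold and oil prices
rng = np.random.default_rng(0)
X = rng.uniform(50, 100, size=(200, 3))             # columns: USD, Gold_Price, Oil_Price
y = 2.0 + X @ np.array([0.5, 1.2, 0.8]) + rng.normal(0, 1, 200)
theta = sgd_linear_regression(X, y)
y_hat = np.hstack([np.ones((X.shape[0], 1)), X]) @ theta
print('NRMSE:', round(nrmse(y, y_hat), 4))
```

In practice, the fitted coefficients would be learned offline on historical data and then applied inside the streaming pipeline described in the next sections.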
<ns0:div><ns0:p>The finding from the experimental survey is that, in a distributed environment, a missing parameter or a delay/latency in processing any parameter affects the prediction. As the prediction in such stream-data-based applications depends on the co-movement of different related parameters, there is a need to design a model that captures all the possible latencies and also handles late data with an appropriate statistical method to obtain an accurate result.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Proposed system</ns0:head><ns0:p>In this section, we present the proposed pipeline, as shown in Figure <ns0:ref type='figure'>4</ns0:ref>, followed by the proposed model for stream processing, as shown in Figure <ns0:ref type='figure'>5</ns0:ref>. The proposed pipeline presented in Figure <ns0:ref type='figure'>4</ns0:ref> has been implemented with Read, Predict, and Write transforms and executed on Google Cloud Dataflow as a runner in Apache Beam. The pipeline is built from two concepts, PCollection and PTransform. A PCollection accepts the prices of the stock, gold, oil, and USD as input from distributed sources, which are converted into distinct PCollection objects on which PTransforms can operate. A PTransform represents a processing function that can be applied to each element of the PCollection provided. The proposed model shown in Figure <ns0:ref type='figure'>5</ns0:ref> accepts the continuously arriving stream data from the distributed environment and passes it on to Google Cloud Pub/Sub. Pub/Sub is a publish/subscribe model for distributed systems on Google Cloud; its service is based on the publish/subscribe method and exchanges messages through push and pull. It can take data from any source in any part of the world and provide it for further processing. The proposed pipeline, incorporated into the proposed model, is executed on a Dataflow runner with different transformations such as read, extract, predict, format, and write. The output of the Dataflow runner is stored back in the bucket created on Cloud Storage.</ns0:p></ns0:div>
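As an illustration of the pipeline in Figure 4, the following is a minimal Apache Beam (Python SDK) sketch with Read, Extract, Predict, Format, and Write transforms. It is only a sketch of the idea, not the authors' Dataflow pipeline: the topic name, bucket path, pre-fitted coefficients, and the predict_close helper are hypothetical, and writing the streaming output directly to text files is a simplification (a real streaming pipeline would typically window the output or use a streaming-friendly sink).

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def predict_close(record, theta):
    """Apply the linear model of Algorithm 1 to one parsed record (placeholder helper)."""
    x = [1.0, record['usd'], record['gold'], record['oil']]
    record['predicted_close'] = sum(t * xi for t, xi in zip(theta, x))
    return record

theta = [2.0, 0.5, 1.2, 0.8]              # assumed, pre-fitted coefficients

options = PipelineOptions(streaming=True)  # DataflowRunner/project options would be added here

with beam.Pipeline(options=options) as p:
    (p
     | 'Read'    >> beam.io.ReadFromPubSub(topic='projects/my-project/topics/market-ticks')
     | 'Extract' >> beam.Map(lambda msg: json.loads(msg.decode('utf-8')))
     | 'Predict' >> beam.Map(predict_close, theta)
     | 'Format'  >> beam.Map(json.dumps)
     | 'Write'   >> beam.io.WriteToText('gs://my-bucket/predictions/output'))
```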
<ns0:div><ns0:head>The mathematical formulation of the proposed model</ns0:head><ns0:p>The proposed mathematical model for low-latency stream processing is defined as follows.</ns0:p></ns0:div>
<ns0:div><ns0:head>Begin</ns0:head><ns0:p>For each si ∈ S do Get K records si+1 … sk If any of the above k records are similar to si then Flag = 'Y' Skip the input stream.</ns0:p></ns0:div>
<ns0:div><ns0:head>Else</ns0:head><ns0:p>Flag = 'N' Consider the input stream.</ns0:p></ns0:div>
<ns0:div><ns0:head>End For End</ns0:head><ns0:p>Algorithm 3: Event History Analysis Input: A sequence of input data latency L= l1, l2… ln.</ns0:p><ns0:p>Distribution model M = {Gamma, LogNormal, Weibull}. Output: Probability density function for distribution model, Hazard rate.</ns0:p></ns0:div>
<ns0:div><ns0:head> H(•|x</ns0:head><ns0:p>) is a hazard rate that models the probability of an event to occur at time t.</ns0:p><ns0:p> F(t) is the Probability Density Function for input data.</ns0:p><ns0:formula xml:id='formula_4'> = Scale, = Shape. μ σ</ns0:formula><ns0:p>Step-1: Identify the suitable distribution model m ∈ M. // find the suitable distribution model</ns0:p><ns0:p>Step-2: Calculate the probability distribution function for model m. //Fit the model</ns0:p><ns0:formula xml:id='formula_5'>F(x) = e -((ln ( x m)) 2 (2σ 2 )) xσ 2π</ns0:formula><ns0:p>Step-3: Calculate hazard rate H // Forecast the probability of delay (x, σ) = Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>We have implemented the duplicate instance rule defined in Algorithm 2, which is the appropriate Data Quality Rule (DQR) for the input stream, to ensure the exactly-once delivery semantic in stream processing. Furthermore, to handle possible data latency, we have followed the process defined in Figure <ns0:ref type='figure'>6</ns0:ref>. We have performed the event history analysis defined in Algorithm 3, which is the process of analysing the behaviour of the continuously arriving incoming stream data. As defined in Step 1 of Algorithm 3, to find the best-suited distribution model for our input data, we have identified different data distribution models that are applicable to continuously arriving stream data, such as the gamma distribution, the Weibull distribution, and the lognormal distribution. Figure <ns0:ref type='figure'>7</ns0:ref> shows the goodness of fit by maximum likelihood estimation for the above-mentioned statistical data distribution models.</ns0:p><ns0:p>From the statistical experiments, we have found the lognormal distribution to be the best-fitting data distribution model for our input, as shown in Figure <ns0:ref type='figure'>8</ns0:ref>. As shown in Step 2 of Algorithm 3, we have calculated the probability density function (PDF) for the lognormal distribution as follows:</ns0:p><ns0:formula xml:id='formula_6'>F(x) = \frac{e^{-\left( \ln(x/m) \right)^2 / (2\sigma^2)}}{x \sigma \sqrt{2\pi}} \qquad (3)</ns0:formula><ns0:p>where the values of \mu, m, and \sigma in Eq. (3) have been calculated using the statistical computing tool R for the lognormal distribution, giving the fitted density</ns0:p><ns0:formula>F(x) = \frac{e^{-\left( \ln(x/m) \right)^2 / (2 (1.89)^2)}}{x (1.89) \sqrt{2\pi}} \qquad (4)</ns0:formula><ns0:p>Eq. (4) has been used as the probability density function of the lognormal distribution to forecast future latency. To forecast and handle the data latency, we have performed a survival analysis using the hazard rate H(·|x), which models the propensity of the occurrence of an event, that is, the probability of an event occurring at time t, as shown in Eq. (5). We have defined the hazard rate to forecast the data latency as shown in Eq. (5),</ns0:p></ns0:div>
<ns0:div><ns0:head>Hazard rate</ns0:head><ns0:formula xml:id='formula_7'>H(x, \sigma) = \frac{\left( \frac{1}{x\sigma} \right) \phi\!\left( \frac{\ln x}{\sigma} \right)}{\Phi\!\left( \frac{\ln x}{\sigma} \right)} \qquad (5)</ns0:formula><ns0:p>where ϕ is the probability density function (PDF) of the lognormal distribution and Φ is the cumulative distribution function of the lognormal distribution. Figure <ns0:ref type='figure'>9</ns0:ref> shows the probability of future data latency based on the hazard rate. From the experimental results, our proposed system has discovered an average data latency of 8 seconds; hence, after reaching the end of the window, the system waits for the next 8 seconds to consider probable late data before processing.</ns0:p></ns0:div>
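A minimal Python sketch of the event history analysis of Algorithm 3, using SciPy to compare candidate distributions by maximum likelihood, fit a lognormal model to observed latencies, and evaluate a hazard rate. This is an illustration under stated assumptions: the latency sample is synthetic, the location parameter is fixed at zero, and the hazard is computed as pdf/(1 − cdf), the standard survival-analysis definition (the paper fits its parameters with R).

```python
import numpy as np
from scipy import stats

# Synthetic latency observations (seconds) standing in for the recorded data latencies
rng = np.random.default_rng(1)
latencies = rng.lognormal(mean=2.0, sigma=0.6, size=500)

# Step 1: compare candidate models by maximum likelihood (higher log-likelihood fits better)
candidates = {'lognormal': stats.lognorm, 'gamma': stats.gamma, 'weibull': stats.weibull_min}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(latencies, floc=0)              # fix location at 0 for comparability
    loglik = np.sum(dist.logpdf(latencies, *params))
    fits[name] = (params, loglik)
    print(f'{name}: log-likelihood = {loglik:.1f}')

# Step 2: probability density function of the best-fitting model (lognormal here)
shape, loc, scale = fits['lognormal'][0]              # shape is sigma, scale is exp(mu)
pdf = lambda x: stats.lognorm.pdf(x, shape, loc, scale)

# Step 3: hazard rate h(x) = f(x) / (1 - F(x)), used to forecast the chance of late data
def hazard(x):
    return pdf(x) / stats.lognorm.sf(x, shape, loc, scale)

print('hazard at 8 s:', hazard(8.0))
```

The discovered average latency (8 seconds in the paper's experiments) is then used as the allowed lateness before the window trigger fires.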
<ns0:div><ns0:head>Exactly-once stream processing semantic</ns0:head><ns0:p>There is various semantics such as at-least-once, at-most-once, and exactly-once exist for stream processing. To ensure end-to-end low latency stream processing, we have implemented exactlyonce stream processing semantics at our publisher-subscriber mechanism of cloud dataflow. Figure <ns0:ref type='figure'>10</ns0:ref>(A) shows the publisher-subscriber mechanism. The publisher will publish the message in topic whereas the subscriber will pull the messages from the topic. A unique data_id has been assigned to every input data; however, they are uniquely identified in the publisher-subscriber mechanism by a combination of (publisher_id + data_id), as shown in Figure <ns0:ref type='figure'>10(B)</ns0:ref>. The Publisher-subscriber mechanism work based on acknowledgment. If the publisher does not receive an acknowledgment from the subscriber, the publisher may publish the same data again. To achieve exactly-once processing in our stream processing system, redundant data can be identified and discarded from the topic based on the unique (publisher_id + data_id), as shown in Figure <ns0:ref type='figure'>10(C</ns0:ref>). To discover the low system latency, we have performed the experiments with different data arrival rate (data freshness) and different window sizes. Figure <ns0:ref type='figure'>11(A)</ns0:ref> shows the latency values (in seconds) for different window sizes at a data arrival rate of 15 seconds and figure <ns0:ref type='figure'>11(B)</ns0:ref> shows the graphical representation of the same. Likewise, Figure <ns0:ref type='figure' target='#fig_0'>12</ns0:ref>, Figure <ns0:ref type='figure'>13</ns0:ref>, and Figure <ns0:ref type='figure'>14</ns0:ref> show the latency values for different window sizes at a data arrival rate of 30 seconds, 45 seconds, and 60 seconds, respectively. We have calculated and observed the latency values up to 10 minutes at an interval of every 1 min. We have concluded from the experiments: 1) when the size of the window is larger than the rate of arrival of data, the window will wait for input data before the trigger fires which may increase the delay in the result. 2) When the size of the window is smaller than the rate of arrival of data, processing of the data may be delayed as data might have arrived into the system but waiting for processing.</ns0:p><ns0:p>3) The lowest latency can be achieved when the window size is equal to the data arrival rate.</ns0:p></ns0:div>
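The deduplication step described above can be sketched as follows: each message carries a (publisher_id, data_id) pair, and the subscriber keeps a bounded set of recently seen keys so that redelivered messages (for example, after a lost acknowledgment) are processed exactly once. The class and field names below are illustrative assumptions, not the authors' implementation.

```python
from collections import OrderedDict

class ExactlyOnceConsumer:
    """Drop redeliveries by remembering recently seen (publisher_id, data_id) keys."""

    def __init__(self, max_keys=100_000):
        self.seen = OrderedDict()            # acts as an LRU set with bounded memory
        self.max_keys = max_keys

    def process(self, message, handler):
        key = (message['publisher_id'], message['data_id'])
        if key in self.seen:                 # duplicate publish caused by a missing ack
            return False                     # discard: message was already processed
        if len(self.seen) >= self.max_keys:
            self.seen.popitem(last=False)    # forget the oldest key
        self.seen[key] = True
        handler(message)                     # exactly-once side effect happens here
        return True

# Usage: the second, redelivered message is ignored
consumer = ExactlyOnceConsumer()
msg = {'publisher_id': 'pub-1', 'data_id': 42, 'close': 101.3}
consumer.process(msg, handler=print)         # processed
consumer.process(msg, handler=print)         # duplicate, skipped
```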
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>To provide low latency in stream processing, there should be a systematic flow design that can process the continuously arriving stream. We have designed the proposed pipeline for our stock market prediction application. To implement effective stream processing, one should be able to capture all the different types of delay in the system and deal with them through an effective statistical approach. We have designed a proposed model that provides end-to-end low-latency processing. However, to provide a better prediction for continuously arriving stream data, there is a need to consider the dependent parameters from the distributed environment with an equal data arrival rate. We have performed experiments for stock market prediction by considering the dependent parameters (the prices of oil, gold, and USD), and shown that the delay or absence of dependent parameters affects the performance. Hence, we have forecast the probability of late data through a statistical approach and implemented a system which deals with such data latency accordingly. System latency can also be reduced by choosing appropriate stream processing semantics; we have implemented exactly-once stream processing semantics. We have concluded from the experiments performed on Google Cloud Dataflow that, in stream data processing, there is a dependency between the size of the window and data freshness. To reduce the overall system latency, we need to choose the window size according to the data arrival rate. The future scope of this work is to compare the result of stream processing with other stream processing semantics, such as at-least-once and at-most-once stream processing. The work can further be extended by exploring more effective data quality rules for stream data, which can further reduce the delay of the overall processing.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Algorithm 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Data Stream Preprocessing -Duplicate Instance Rule. Input: A sequence of input stream S = s1s2…sn. Output: Preprocessed stream. Apply the Data Quality Rule (DQR) on the input data // Duplicate Instance Rule o Find Duplicate: Given an input stream S = s1s2…sn, where si∈[m] and n > m, find a∈[m] which appears more than once.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,389.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,404.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,404.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,400.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,400.50' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Charotar University of Science And Technology,
139, CHARUSAT Campus, Highway, Off, Nadiad - Petlad Rd,
Changa, Gujarat 388421, INDIA,
Tel: +91 999 8582812,
niravbhatt.it@charusat.ac.in
February 10th, 2021
Dear Editors,
We thank the editor, reviewers and PeerJ staff for their generous comments on the manuscript and have edited the manuscript to address their concerns.
In particular, I have made all the technical changes suggested by the PeerJ staff regarding the figures and the table presented in the work, and the citation related to the table. I have also corrected common grammatical mistakes as part of the reviewers' comments.
We believe that the manuscript is now suitable for publication in PeerJ.
Nirav Bhatt,
Assistant Professor, IT, CSPIT, CHARUSAT.
On behalf of all authors.
1. Figure/Table Citation
The submission appears to be missing a citation for Table 1 (the table in line 201-202) in the text. Please can you add a citation for Table 1 in your manuscript and re-upload the document.
I have added the citation for Table 1 (line number 195-196) and uploaded the revised manuscript.
2. Text-only manuscript
The manuscript you provided has tables embedded in the document. Please remove them and upload a text-only manuscript with a separate file for each figure and table here. Our system will then merge them into a single PDF for review.
I have removed Table 1 from the manuscript and uploaded it as a separate file.
3. Figures
• Figures 7 and 8 have multiple parts. Each figure with multiple parts should have alphabetical (e.g. A, B, C) labels on each part and all parts of each single figure should be submitted together in one file.
I have corrected the Figures 7 and 8.
◦ The 4 parts of Figure 7 have been labeled A-D.
◦ The 4 parts of Figure 8 have been labeled A-D.
Please replace Figures 3, 4, 5, 7, 8, 9, and 10 (which are bitmap PDFs) with either PNG, EPS or vector PDF.
I have replaced Figures 3, 4, 5, 7, 8, 9 and 10 with EPS.
" | Here is a paper. Please give your review comments after reading it. |
11 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>With the increase in the use of private transportation, developing more efficient ways to distribute routes in a traffic network has become more and more important. Several attempts to address this issue have already been proposed, either by using a central authority to assign routes to the vehicles, or by means of a learning process where drivers select their best routes based on their previous experiences. The present work addresses a way to connect reinforcement learning to new technologies such as car-to-infrastructure communication in order to augment the drivers' knowledge in an attempt to accelerate the learning process. Our method was compared both to a classical, iterative approach and to standard reinforcement learning without communication. Results show that our method outperforms both of them. Further, we have performed robustness tests, by allowing messages to be lost, and by reducing the storage capacity of the communication devices. We were able to show that our method is not only tolerant to information loss, but also points to improved performance when not all agents get the same information.</ns0:p><ns0:p>Hence, we stress the fact that, before deploying communication in urban scenarios, it is necessary to take into consideration that the quality and diversity of the information shared are key aspects.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>With the COVID-19 related pandemic, there has been several reports that the use of private transportation means (e.g., individual vehicles) is increasing as people try to avoid public transit as much as possible.</ns0:p><ns0:p>This leads to even more congestion and hence makes the question of selecting a route to go from A to B more and more prominent. This is especially the case for commuters, who make a given trip nearly every day and, hence, have the opportunity to learn and/or adapt to the traffic patterns faced daily. To address the challenges posed by an ever increasing demand, transportation authorities and traffic experts try to distribute the flow among existing routes in order to minimize the overall travel time. Often, this task involves some form of communication with the drivers. Traditional approaches such as variable message panels or radio broadcast are now being replaced by directed (and potentially personalized) communication, via new kinds of communication devices.</ns0:p><ns0:p>While the current pattern is that each individual driver selects a route based on his/her own experience, this is changing as new technologies allow all sorts of information exchange. Examples of these technologies are not only based on broadcast (e.g., GPS or cellphone information) but also a two-way communication channel, where drivers not only receive traffic information but also provide them. Hence, currently, many traffic-related applications for cellphones deal with the idea of a central authority in charge of somehow assigning routes for drivers. Examples are Waze, Google apps, etc. Since their specific algorithms are not published, one can only guess that they try to find a feasible solution, given a set of constraints that they are able to infer from the current data they collect. What seems certain is that these platforms work in a centralized way, based on data they collect when their customers or users use their specific apps. Also, they do not handle locally collected and processed data. This leads to them being ineffective when the penetration of their services is low as, e.g., during the initial stages of the 2020 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54963:1:1:CHECK 15 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science pandemics, when few drivers were using the system. A way to mitigate this could be to decentralize the processing of information, as proposed here, and passing it to drivers to make their route choices.</ns0:p><ns0:p>Our method has some resemblance with the notion of traffic assignment (see next section), since it is based on the fact that drivers collect experience by trying out several routes until they settle on those that lead to the least travel time.</ns0:p><ns0:p>Traffic assignment approaches work (and indeed were developed for this purpose) well for planning tasks, i.e., how to plan a traffic network (or change an existing one) in order to minimize travel costs.</ns0:p><ns0:p>However, route choice is not related to planning tasks but, rather, is an operational aspect, especially in commuting situations, where drivers repeatedly travel from the same origin to the same destination.</ns0:p><ns0:p>Besides, traffic assignment is a centralized approach, in which the drivers do not actively select routes.</ns0:p><ns0:p>Rather, routes are assigned to them. 
Thus, it is important to investigate how drivers do select routes in their daily commuting tasks.</ns0:p><ns0:p>Multi-agent reinforcement learning (MARL) can be used for such purpose, as it fits the task of letting agents decide, autonomously, how to select routes to go from A to B. This is realized by letting agents iteratively choose their least costly route based on their own learning experiences. Such approach has been tried before, as described in the section on related works. In fact, it has been shown that reinforcement learning is a good technique to investigate route choice. However, the learning process can be inefficient, as for instance, it may take time, since the agents have to collect experiences by themselves. As this happens to be a very noisy environment, the signal an agent gets can be little discriminatory (e.g., due to the presence of other learning agents, an agent may get the same signal for very different actions, or, conversely, different signals for the same action). Thus, our long term aim is to investigate forms of accelerating the learning process. One of these forms is by giving more information to the agents.</ns0:p><ns0:p>There are only few works that consider new technologies to this experience, as for instance those tied to vehicular communication in general.</ns0:p><ns0:p>In the present paper, we extend a method that connects MARL to new technologies such as carto-infrastructure communication (C2I). These were formulated with the goal of investigating how C2I communication could act to augment the information drivers use in their learning processes associated with choices of routes. In such approach, whole routes are not imposed or recommended to drivers, but rather, these receive local information about the most updated state of the links that happen to be near their current location. This way, drivers can change their route on-the-fly (the so-called en route trip building).</ns0:p><ns0:p>Further, that approach assumes that the infrastructure is able to communicate with the vehicles, both collecting information about their most recent travel times (on given links), as well as providing them with information that was collected from other vehicles. However, another assumption is that messages are never lost, which is not realistic. Thus, in the present paper, we relax this assumption and admit loses of messages, as well as investigate the impact of them on the overall performance.</ns0:p><ns0:p>As a result of such extension, we are able to confirm that the MARL technique combined with a C2I model can accelerate the learning process. Moreover, our approach is tolerant to information loses.</ns0:p><ns0:p>In short, the contribution of the present work is manifold. First, we employ MARL to the task of learning how to go from A to B. Second, we do this using a non trivial scenario (as it is the case in most of the literature), in which there are more than one origin-destination pair. Third, we depart from most of the literature where the learning task considers that the driver agents already know a set of (pre-computed) routes to select among. Rather, we let these agents build their trips en route. This in turn requires the use of a microscopic, agent-based approach, where agents can potentially use different pieces of information in order to perform en route choice. This again contrasts to most of the literature, which uses macroscopic modeling (e.g., by means of abstract cost functions to compute travel times). 
Fourth, we connect MARL with the aforementioned communication technologies, in order to investigate whether the learning process can be accelerated by exchange of local information only. Lastly, we extend a previous approach by investigating its robustness to losses of messages. This paper is organized as follows. The next section briefly presents some background concepts on traffic assignment and reinforcement learning, as well as an overview of the related work. Next, our methods and experimental results are presented and discussed. We review the general conclusions and outline the future work in the last section.</ns0:p></ns0:div>
<ns0:div><ns0:head>BACKGROUND AND RELATED WORK The Traffic Assignment Problem</ns0:head><ns0:p>In transportation, the traffic assignment problem (TAP) refers to how to connect a supply (traffic infrastructure) to its demand, so that the travel time of vehicles driving within a network is reduced. This network can be seen as a graph G = (N, E), where N is the set of nodes that operate as junctions/intersections, and E is a set of directed links (or edges, as both terms are used interchangeably) that connect the nodes.</ns0:p><ns0:p>Hence the goal is then to assign vehicles to routes so that the travel time is minimized.</ns0:p><ns0:p>For more details, the reader is referred to Chapter 10 in <ns0:ref type='bibr' target='#b10'>Ortúzar and Willumsen (2011)</ns0:ref>. For our purposes it suffices to mention that classical approaches aim at planning tasks, are centralized (i.e., trips are assigned by a central authority, not selected by individual drivers). Also, the main approaches are based on iterative methods that seeks convergence to the user equilibrium (see next).</ns0:p></ns0:div>
<ns0:div><ns0:head>User Equilibrium</ns0:head><ns0:p>When it comes to reaching a solution to the TAP, one can take into account two perspectives: one that considers the system as a whole, and one that considers each user's point of view. In the system perspective, the best solution refers to the system reaching the best average travel time possible; this is the so called system optimum (SO), or Wardrop's second principle <ns0:ref type='bibr' target='#b22'>(Wardrop, 1952)</ns0:ref>. We stress that the SO is a desirable property, but hardly achievable given that it comes at the cost of some users, who are not able to select a route leading to their personal best travel times.</ns0:p><ns0:p>On the other hand, and most relevant for our current work, at the user's perspective, the system reaches the user (or Nash) equilibrium (UE) when there is no advantage for any individual to change its routes in order to minimize their travel time, as stated in the first Wardrop's principle <ns0:ref type='bibr' target='#b22'>(Wardrop, 1952)</ns0:ref>. The UE can be achieved by means of reinforcement learning, as discussed next.</ns0:p></ns0:div>
<ns0:div><ns0:head>Reinforcement Learning</ns0:head><ns0:p>Reinforcement learning (RL) is a machine learning method whose main objective is to make agents learn a policy, i.e., how to map a given state to a given action, by means of a value function. RL can be modeled as a Markov decision process (MDP), where there is a set of states S, a set of actions A, a reward function R : S × A → R, and a probabilistic state transition function T (s, a, s′) → [0, 1], where s ∈ S is a state the agent is currently in, a ∈ A is the action the agent takes, and s′ ∈ S is a state the agent might end up in after taking action a in state s; thus, the tuple (s, a, s′, r) states that an agent was in state s, took action a, ended up in state s′, and received a reward r. The key idea of RL is to find an optimal policy π*, which maps states to actions in a way that maximizes future reward.</ns0:p><ns0:p>RL methods fall within two main categories: model-based and model-free. While in the model-based approaches the reward function and the state transition are known, in the model-free case, the agents learn R and T by interacting with an environment. One method that is frequently used in many applications is Q-learning <ns0:ref type='bibr' target='#b23'>(Watkins and Dayan, 1992)</ns0:ref>, which is a model-free approach.</ns0:p><ns0:p>In Q-learning, the agent keeps a table of Q-values that estimate how good it is to take an action a in state s; in other words, a Q-value Q(s, a) holds the maximum discounted value of starting in state s, taking action a, and then following an optimal policy. In each learning episode, the agents update their Q-values using Equation <ns0:ref type='formula'>1</ns0:ref>, where α and γ are, respectively, the learning rate and the discounting factor for future values.</ns0:p><ns0:formula xml:id='formula_0'>Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right] \qquad (1)</ns0:formula><ns0:p>In an RL task, it is also important to define how the agent selects actions, while also exploring the environment. A common action selection strategy is ε-greedy, in which the agent chooses to follow the optimal values with probability 1 − ε, and takes a random action with probability ε.</ns0:p><ns0:p>While this basic approach also works in MARL, it is important to stress some challenging issues that arise in an environment where multiple agents are learning simultaneously. Complicating issues arise firstly due to the fact that while one agent is trying to model the environment (other agents included), the others are doing the same and potentially changing the environment they share. Hence the environment is inherently non-stationary. In this case, convergence guarantees, as previously known from single agent reinforcement learning (e.g., <ns0:ref type='bibr' target='#b23'>Watkins and Dayan (1992)</ns0:ref> regarding Q-learning), no longer hold.</ns0:p></ns0:div>
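As a concrete illustration of Eq. (1) and the ε-greedy rule, the sketch below implements a tabular Q-learning update and action selection in Python. It is generic, editor-added code, not the authors' route-choice implementation; states and actions are plain hashable identifiers and the parameter values are arbitrary.

```python
import random
from collections import defaultdict

class QLearner:
    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.05):
        self.q = defaultdict(float)          # Q(s, a), default 0
        self.actions = actions               # dict: state -> list of available actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, s):
        """Epsilon-greedy: random action with probability epsilon, otherwise the greedy one."""
        if random.random() < self.epsilon:
            return random.choice(self.actions[s])
        return max(self.actions[s], key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s_next):
        """Eq. (1): Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max((self.q[(s_next, a2)] for a2 in self.actions.get(s_next, [])),
                        default=0.0)         # terminal (or unknown) states contribute 0
        td_error = r + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td_error

# Minimal usage on a toy graph: at node 'A' choose between edges 'A->B' and 'A->C'
learner = QLearner(actions={'A': ['A->B', 'A->C'], 'B': [], 'C': []})
a = learner.choose('A')
learner.update('A', a, r=-30.0, s_next='B')  # reward = negative travel time
```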
<ns0:div><ns0:p>A further issue in multi-agent reinforcement learning is the fact that aligning the optimum of the system (from the perspective of a central authority) and the optimum of each agent in a multi-agent system is even more complicated when there is a high number of agents interacting.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Solving the TAP is not a new problem; there have been several works that aim at solving it. In one front, there are have classical methods (see Chapter 10 in Ortúzar and Willumsen ( <ns0:ref type='formula'>2011</ns0:ref>)), which, as aforementioned, mostly deal with planning tasks. Further, the TAP can also be solved by imposing tolls on drivers (e.g., <ns0:ref type='bibr' target='#b13'>Sharon et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b4'>Buriol et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Tavares and Bazzan (2014)</ns0:ref>). The latter specifically connects road pricing with RL. However, the focus is on learning which prices to charge. Besides these two fronts, RL for route choice is turning popular.</ns0:p><ns0:p>When we refer to RL methods to solve the TAP, these usually fall into two categories: a traditional RL method, and a stateless one. Contrarily to the traditional approach, in the stateless case, the agents actually have only one state that is associated with its origin-destination pair, and they choose which actions to take. Actions here correspond to the selection of one among k pre-computed routes. Works in this category are <ns0:ref type='bibr' target='#b11'>Ramos and Grunitzki (2015)</ns0:ref> (using a learning automata approach), and Grunitzki and Bazzan (2017) (using Q-learning). In <ns0:ref type='bibr' target='#b25'>Zhou et al. (2020)</ns0:ref> the authors used a learning automata approach combined with a congestion game to reach the UE. <ns0:ref type='bibr' target='#b20'>Tumer et al. (2008)</ns0:ref> adds a reward shaping component (difference utilities) to Q-learning, aiming at aligning the UE to a socially efficient solution.</ns0:p><ns0:p>Apart from the stateless formulation, in the traditional case, agents may found themselves in multiple states, which are normally the nodes (intersections) of the network. Actions then correspond to the selection of one particular link (edge) that leaves that node. In <ns0:ref type='bibr' target='#b2'>Bazzan and Grunitzki (2016)</ns0:ref> this is used to allow agents to learn how to build routes. However, they use a macroscopic perspective by means of cost functions that compute the abstract travel time. In the present paper, the actual travel time is computed by means of a microscopic simulator (details ahead). A microscopic approach is required to handle communication issues.</ns0:p><ns0:p>As aforementioned, our approach also includes C2I communication, as these kinds of new technologies may lead agents to benefit from sharing their experiences (in terms of travel times), thus reducing the time needed to explore, as stated in <ns0:ref type='bibr' target='#b16'>Tan (1993)</ns0:ref>. The use of communication in transportation systems, as proposed in the present paper, has also been studied previously <ns0:ref type='bibr' target='#b6'>(Grunitzki and Bazzan, 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bazzan et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b8'>Koster et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b0'>Auld et al., 2019)</ns0:ref>. However, these works handle communication at abstract levels, using macroscopic approaches. In some cases, the information is manipulated to bias the agents to reach an expected outcome. 
Moreover, most of these works deal with vehicular communication (i.e., messages are shared among the vehicles), or are based on broadcast of messages by one or few entities.</ns0:p><ns0:p>This scheme approaches either systems such as traffic apps we see nowadays (Waze, etc.), or messages distributed by the traffic authority (as it used to be the case some time ago, using radio or variable message panels on main roads as in <ns0:ref type='bibr' target='#b21'>Wahle et al. (2000)</ns0:ref>). Neither vehicular communication nor broadcast are appropriate to investigate the impact of sharing local information, as we do here. A previous work by us <ns0:ref type='bibr' target='#b12'>(Santos and Bazzan, 2020)</ns0:ref> has presented preliminary results about the performance of combining RL with C2I against RL without communication. However, in this work, it is assumed that messages exchanged among the various actors do not get lost, which is irrealistic. Therefore, in the present paper we focus on the impact of communication failure and also on what type of information yields better results.</ns0:p><ns0:p>In a different perspective, works such as <ns0:ref type='bibr' target='#b24'>Yu et al. (2020)</ns0:ref> evaluate the impact of incomplete information sharing in the TAP. They do not employ a RL-based but rather a classical approach, namely multinomial Logit model.</ns0:p><ns0:p>More recently, <ns0:ref type='bibr' target='#b3'>Bazzan and Klügl (2020)</ns0:ref> discuss the effects of a travel app, in which driver agents share their experiences. The idea is to 'mimic' what happens in an office where colleagues chat about their habits and route choice experiences. In the present paper, driver agents do not directly share their experiences since the work in <ns0:ref type='bibr' target='#b3'>Bazzan and Klügl (2020)</ns0:ref> has shown that this process may lead to suboptimal results, due to agents not taking local issues into account. This is hardly possible in that work since <ns0:ref type='bibr' target='#b3'>Bazzan and Klügl (2020)</ns0:ref> use a macroscopic simulator, where location is an abstract concept. Rather, the present paper proposes -as shown in the next section -that the information is exchanged via an intersection manager, i.e., a manager of a portion of the network.</ns0:p><ns0:p>In any case, this sharing of knowledge was proposed in other scenarios <ns0:ref type='bibr' target='#b16'>(Tan, 1993)</ns0:ref> <ns0:ref type='formula'>2014</ns0:ref>)). It is important to note though, that virtually all these works deal with cooperative environments, where it makes sense to transfer knowledge. In non-cooperative learning tasks, as it is the case of route choice, naive transfer of learned policies may lead to every agent behaving the same, which runs against the notion of efficient distribution of agents in the road network.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>Our approach is based on using communication to augment the information each agent 1 has and, hence, the learning performance. The next three subsections discuss, respectively: how the infrastructure is represented; how communication occurs; and the details of the RL algorithm. We then formalize the details as an algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Representing the Infrastructure</ns0:head><ns0:p>We assume that every node n ∈ N present in the network G is equipped with a communication device (henceforth, CommDev) that is able to send and receive messages in a short range signal (e.g., with vehicles around the intersection). Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows an scheme that represents G and CommDevs within G.</ns0:p><ns0:p>Using the short-range signal, the CommDevs are able to communicate with vehicles that are close enough, and are able to exchange information related to local traffic data (refer to next section for details).</ns0:p><ns0:p>Moreover, these CommDevs are able to store the data exchanged with the agents in order to propagate this information to other agents that may use nearby intersections in the near future.</ns0:p><ns0:p>The arrows that connect CommDevs in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> represent a planar graph, meaning that every</ns0:p><ns0:p>CommDev is connected and can communicate to its neighboring devices. This permits that CommDevs get information about the traffic situation in neighboring edges, which is then passed to the agents. Conversely, the infrastructure communicates to the agent information about the state of the nearby edges, in terms of which rewards an agent can expect if it selects to use that particular link. This information can be of various forms. In all cases, the expected reward is computed by taking into account the rewards informed by other agents, when they have used nearby links. In the experiments, we show results where CommDevs communicate expected rewards that are either an aggregation (over a time window) or just a single value.</ns0:p></ns0:div>
<ns0:div><ns0:head>How Communication Works</ns0:head><ns0:p>1 Henceforth, the term agent is used to refer to a vehicle and/or driver agent.</ns0:p><ns0:p>2 This figure was designed using assets from https://www.vectorportal.com/, and https://www.freepik.com. All assets used fall under license CC BY 4.0.</ns0:p></ns0:div>
<ns0:div><ns0:p>In any of these cases, an agent receiving such information will then take it into account when selecting an action (choice of a link) in that particular state (a node). Next, details about how the information is processed, by both the CommDevs and the vehicle agents, are given.</ns0:p></ns0:div>
<ns0:div><ns0:head>Information Hold by Infrastructure</ns0:head><ns0:p>Each CommDev uses queue based data structures to hold the rewards informed by each agent that passes through it. Specifically, each edge is associated with one data queue. These queues have a maximum size, and when new information arrives after the queue is full, the oldest reward stored is discarded to make room to the most recent one.</ns0:p><ns0:p>When an agent requests information, the CommDev retrieves the rewards collected for the agent's possible actions and passes it to that agent. Recall that an action corresponds to a link to be traveled next, in order to form a route to the agent's destination.</ns0:p></ns0:div>
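A minimal Python sketch of the data structure described above: one bounded FIFO queue of reported rewards per outgoing edge, with an aggregation (here the mean over the stored window) returned to vehicles on request. The class name, queue size, and choice of aggregation are illustrative assumptions, not the authors' exact implementation.

```python
from collections import defaultdict, deque

class CommDev:
    """Communication device installed at one intersection (node)."""

    def __init__(self, queue_size=30):
        # one bounded queue per outgoing edge; a full queue drops its oldest entry
        self.rewards = defaultdict(lambda: deque(maxlen=queue_size))

    def update_queue(self, reward, edge):
        """Store the reward a vehicle just experienced on `edge`."""
        self.rewards[edge].append(reward)

    def expected_reward(self, edge):
        """Aggregate stored rewards (mean over the window); None if nothing is known yet."""
        q = self.rewards[edge]
        return sum(q) / len(q) if q else None

    def report(self, outgoing_edges):
        """Information handed to a vehicle standing at this node."""
        return {e: self.expected_reward(e) for e in outgoing_edges}

# Usage: a vehicle reports its travel time (as a negative reward) on edge 'n1->n2'
dev = CommDev(queue_size=5)
dev.update_queue(-42.0, 'n1->n2')
print(dev.report(['n1->n2', 'n1->n3']))   # {'n1->n2': -42.0, 'n1->n3': None}
```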
<ns0:div><ns0:head>Information Used by the Agent</ns0:head><ns0:p>In a standard Q-learning algorithm, the agents update their Q-values based on the feedback from the action they have just taken. However, in our case agents also update their Q-values based on the expected rewards received from the infrastructure. This means that every time they reach an intersection, they update their Q-values with the information provided by the CommDevs. We do this in order to accelerate the learning process. Instead of each agent considering only its own past experience, the information provided by the CommDevs augments the knowledge it has.</ns0:p><ns0:p>It is worth noting that a distinguishing characteristic of our approach is that it deals with local information; thus, the information received from the CommDev only concerns actions that can be selected from that particular node.</ns0:p></ns0:div>
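The paper does not spell out the exact rule used to merge the communicated values into the Q-table; one plausible reading, sketched below with illustrative names, is to treat the expected reward advertised for each available edge as a target and blend it in with the usual learning rate:

def update_q_values(agent, comm_info, alpha=0.5):
    """comm_info maps each outgoing edge of the current node to an expected reward."""
    for edge, expected_reward in comm_info.items():
        old = agent.q_table.get((agent.curr_node, edge), 0.0)
        agent.q_table[(agent.curr_node, edge)] = old + alpha * (expected_reward - old)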
<ns0:div><ns0:head>Algorithm</ns0:head><ns0:p>Algorithm 1 Q-learning with C2I</ns0:p><ns0:formula xml:id='formula_1'>1: Input: G, D, P, M, α, γ, ε, B
2: s ← 0
3: while s < M do
4:   for v in V do
5:     if v.finished_trip() then
6:       v.update_Q_table(B − v.last_edge_travel_time)
7:       G.commDev[v.curr_node].update_queue(v.last_reward, v.last_edge)
8:       reinsert v at its origin node
9:     else if v reached an intermediary node then
10:      v.update_Q_table(v.last_reward)
11:      G.commDev[v.curr_node].update_queue(v.last_reward, v.last_edge)
12:      v.update_Q_values(G.commDev[v.curr_node].info)
13:      v.choose_action()
14:    end if
15:  end for
16:  s ← s + 1
17: end while</ns0:formula><ns0:p>Given a network G, every agent (vehicle) v ∈ V has a pair (o, d) ∈ N × N that defines its origin-destination pair (OD pair). Nodes n ∈ N are seen as states the agents might be in, and the outgoing edges of a node n are the possible actions for that given state. Hence, the agents build their routes on-the-fly by visiting nodes and edges.</ns0:p><ns0:p>Upon choosing an action (edge) e, v perceives its reward. We recall that, being a microscopic model, this reward is actually computed by the simulator, rather than by an abstract cost function, as would be the case in a macroscopic model.</ns0:p><ns0:p>Assuming that the simulator reports a travel time of t_e^v for agent v traveling on edge e, the reward is −t_e^v, as we want to make sure the agents prefer edges that minimize travel times. This alone does not guarantee that the agents will reach their destinations quickly, as they might end up running in loops throughout the network. Hence, a positive bonus B is given to each agent that reaches its destination, giving agents an incentive to end their trips as fast as possible.</ns0:p><ns0:note type='other'>Figure 2. The 5x5 grid network used in the experiments; boundary nodes are labeled (e.g., Top1-Top4).</ns0:note><ns0:p>We deal with a commuting scenario, where each agent performs day-to-day experiments in order to reach an equilibrium situation, in which no agent can reduce its travel time by changing routes. Because agents belong to different OD pairs and/or select different routes, their trips take different numbers of simulation steps. These steps represent elapsed seconds in simulation time. Hence, not all agents finish their trips simultaneously and, therefore, the standard notion of a learning episode cannot be used here. Rather, each agent has its own learning episode, which takes as many simulation steps as necessary to reach its destination.</ns0:p><ns0:p>Next, we explain the main parts of our approach, which can be seen in Algorithm 1.</ns0:p><ns0:p>Line 1 lists the inputs of Algorithm 1: G is the topology of the network, D is the demand (flow rate) that is inserted in the network, P is the set of OD pairs, and M is the maximum number of steps to simulate. It is also necessary to set α and γ (both relating to Eq. 1), ε for controlling the action selection and the exploration-exploitation strategy, and the bonus B.</ns0:p><ns0:p>The main loop is presented between lines 3-17, where the learning and the communication actually take place. The first if statement, at line 5, takes care of all agents that finished their trips in the current step: these agents perceive their reward plus the bonus for finishing the trip.
At line 7, each such agent informs the corresponding CommDev of its reward, and since its trip has ended, it gets reinserted at the origin node (line 8) to start a new learning episode (as this is a commuting scenario).</ns0:p><ns0:p>The if statement at line 9 handles the intermediary nodes, where each agent also perceives its reward and informs the CommDev (line 11) about the reward just experienced, so that the CommDev can update its queue structure. In line 10, each agent updates its Q-value for the last action based on its own experience, i.e., with the actual reward received for traveling through the last link.</ns0:p><ns0:p>The CommDev then informs the agent about the rewards that can be expected from the actions the agent might take next (line 12). Each agent then updates its Q-table and chooses an action (line 13).</ns0:p></ns0:div>
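The sketch below shows how this loop can be driven through SUMO's TraCI interface. The traci calls (start, simulationStep, close) are real API functions; the agent and CommDev helpers are the illustrative ones from the earlier sketches, and the configuration file name is an assumption:

import traci

def run(max_steps, agents, comm_devs, bonus=1000):
    traci.start(["sumo", "-c", "grid.sumocfg"])    # assumed configuration file
    step = 0
    while step < max_steps:                        # Algorithm 1, lines 3-17
        traci.simulationStep()
        for v in agents.values():                  # line 4
            if v.finished_trip():                  # lines 5-8
                v.update_q_table(bonus - v.last_edge_travel_time)
                comm_devs[v.curr_node].update_queue(v.last_reward, v.last_edge)
                v.reinsert_at_origin()
            elif v.at_intersection():              # lines 9-14
                v.update_q_table(v.last_reward)
                comm_devs[v.curr_node].update_queue(v.last_reward, v.last_edge)
                v.update_q_values(comm_devs[v.curr_node].info())
                v.choose_action()                  # e.g., via traci.vehicle.setRoute
        step += 1                                  # line 16
    traci.close()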
<ns0:div><ns0:head>EXPERIMENTS, RESULTS, AND ANALYSIS Scenario: Network and Demand</ns0:head><ns0:p>Simulations were performed using a microscopic tool called Simulation of Urban Mobility (SUMO, <ns0:ref type='bibr' target='#b9'>Lopez et al. (2018)</ns0:ref>). SUMO's API was used to allow vehicle agents to interact with the simulator en route, i.e., during simulation time.</ns0:p><ns0:p>The scenario chosen is a 5x5 grid depicted in Figure <ns0:ref type='figure'>2</ns0:ref>; each line in the figure represents a bi-directed edge with two lanes, one for each traffic direction. Each directed edge is 200 m long.</ns0:p><ns0:p>The demand was set to maintain the network populated at around 20-30% of its maximum capacity, which is considered a medium to high density. Recall that no real-world network is fully occupied at all times; this density level does not mean that no edge is ever fully occupied (which happens from time to time), as the percentage is just the average over all 50 edges.</ns0:p></ns0:div>
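Such a grid can be generated with SUMO's netgenerate tool. The invocation below is our assumption about how an equivalent network could be produced (the paper does not describe this step); the netgenerate options themselves are real:

import subprocess

subprocess.run([
    "netgenerate", "--grid",
    "--grid.number", "5",        # 5x5 intersections
    "--grid.length", "200",      # 200 m per edge
    "--output-file", "grid.net.xml",
], check=True)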
<ns0:div><ns0:p>This demand was then distributed among the OD pairs as shown in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. The last column represents the volume of vehicles per OD pair. These values were selected so that the shorter the path, the smaller the demand, which seems more realistic than a uniform distribution of the demand.</ns0:p><ns0:p>Two points are worth reinforcing here. First, vehicles get reinserted at their corresponding origin nodes, so that we are able to keep a roughly constant insertion rate of vehicles in the network per OD pair. However, this does not mean that the flow per link is constant, since the choice of which link to take varies considerably from vehicle to vehicle and from time to time. Second, despite being a synthetic grid network, the scenario is not trivial: it has 8 OD pairs, which makes the problem complex because routes from each OD pair are coupled with the others. As seen in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, we have also increased such coupling by designing the OD pairs so that all routes traverse the network, thus increasing the demand for the central links.</ns0:p></ns0:div>
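One way to encode this demand as SUMO flows is sketched below. The edge ids are hypothetical, and reading the last column of Table 1 as an hourly insertion rate is our assumption, since the paper does not state the unit:

od_demand = {("bottom0", "top4"): 102, ("bottom1", "top3"): 86,
             ("bottom3", "top1"): 86,  ("bottom4", "top0"): 102,
             ("left0", "right4"): 102, ("left1", "right3"): 86,
             ("left3", "right1"): 86,  ("left4", "right0"): 102}

with open("grid.rou.xml", "w") as f:
    f.write("<routes>\n")
    for i, ((origin, dest), rate) in enumerate(od_demand.items()):
        f.write(f'  <flow id="od{i}" begin="0" end="50000" '
                f'from="{origin}" to="{dest}" vehsPerHour="{rate}"/>\n')
    f.write("</routes>\n")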
<ns0:div><ns0:head>Q-learning Parameters</ns0:head><ns0:p>A study conducted by <ns0:ref type='bibr' target='#b2'>Bazzan and Grunitzki (2016)</ns0:ref> shows that, in an en route trip building approach, the learning rate α does not play a major role, while the discount factor γ usually needs to be high when future rewards are discounted, as is the case here. Thus, a value of α = 0.5 suits our needs. We remark, however, that we also experimented with this parameter.</ns0:p><ns0:p>As for the discount factor γ, we have performed extensive tests and found that a value of γ = 0.9 performs best.</ns0:p><ns0:p>For the epsilon-greedy action selection, empirical analysis pointed to a fixed value of ε = 0.05.</ns0:p><ns0:p>This guarantees that the agents mostly take greedy actions (they have only a 5% chance of making a non-greedy choice); at the same time, future rewards have a considerable influence on the agent's current choice, since γ has a high value.</ns0:p><ns0:p>For the bonus at the end of each trip, a value of B = 1000 was used after testing. Recall that this bonus aims at compensating an agent for selecting a jammed link when it is close to its destination, rather than trying detours via links that locally seem less congested but would lead the agent to wander around instead of going directly to its destination. We remark that trips take roughly 450 time steps on average, so this value of B fits the magnitude of the rewards.</ns0:p></ns0:div>
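For concreteness, the sketch below shows an epsilon-greedy action choice and the standard Q-learning update (Eq. 1) with the reported parameter values; the dictionary-based Q-table and helper names are illustrative:

import random

ALPHA, GAMMA, EPSILON, BONUS = 0.5, 0.9, 0.05, 1000

def choose_action(q_table, state, actions):
    if random.random() < EPSILON:               # explore with probability 5%
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, s, a, reward, s_next, next_actions):
    best_next = max((q_table.get((s_next, a2), 0.0) for a2 in next_actions),
                    default=0.0)
    q_table[(s, a)] = q_table.get((s, a), 0.0) + ALPHA * (
        reward + GAMMA * best_next - q_table.get((s, a), 0.0))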
<ns0:div><ns0:head>Performance Metric and Results</ns0:head><ns0:p>While each agent perceives its own travel time, both after traversing each link and after finishing its trip, we need an overall performance measure to assess the quality of the proposed method. For this, we use a moving average (over 100 time steps) of the complete route travel time of each agent that has finished its trip.</ns0:p><ns0:p>Given the probabilistic nature of the process, it is necessary to run repeated simulations; thus, 30 runs were performed. The plots shown ahead depict the average and the standard deviation. In order to evaluate how communication affects the learning process, different policies and comparisons were evaluated; these methods are described in the following sections.</ns0:p></ns0:div>
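One way to compute this metric is sketched below (our illustration): completed trips are recorded together with the step at which they finished, and the reported value is the average travel time over the last 100 time steps:

def moving_average(completions, current_step, window=100):
    """completions: list of (finish_step, route_travel_time) pairs."""
    recent = [t for (s, t) in completions if current_step - window < s <= current_step]
    return sum(recent) / len(recent) if recent else float("nan")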
<ns0:div><ns0:head>QL with C2I versus Dynamic User Assignment</ns0:head><ns0:p>For the sake of contrasting with a classical approach, Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> shows a comparison between our QL with C2I approach and a method called Dynamic User Assignment (DUA), an iterative method implemented by the SUMO developers. We remark that DUA is a centralized approach, not based on RL.</ns0:p><ns0:p>DUA works as follows: it performs iterative assignment of pre-computed, full routes to the given OD pairs in order to find the UE 3 . In our tests, DUA was run for 100 iterations. Note that a DUA iteration corresponds to a trip, and a new iteration only starts when all trips have reached their respective destinations. 3 For details on how the DUA method works, the reader may refer to https://sumo.dlr.de/docs/Demand/Dynamic_User_Assignment.html</ns0:p></ns0:div>
<ns0:div><ns0:p>The output of DUA is a route that is then followed by each vehicle, without en route changes.</ns0:p><ns0:p>Since DUA also has a stochastic nature, our results correspond to 30 repetitions of DUA as well.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> shows that, at the beginning, the performance of our approach reflects the fact that the agents are still exploring, whereas DUA performs better since a central authority determines which route each agent should take. This is possible because this central authority holds all the information, which is not the case in the MARL-based approach, where each agent has to explore in order to gain information.</ns0:p><ns0:p>In our approach, after a certain time, the agents have learned a policy mapping states to actions and, by using it, they are able to reduce their travel times.</ns0:p><ns0:p>Before discussing the actual results, we remark that a SUMO time step corresponds roughly to one second. Our experiments were run for about 50,000 time steps. A learning episode comprises hundreds of time steps, as the agent has to travel from its origin to its destination. In short, a learning episode is not the same as a simulation time step. Given that the agents restart their trips immediately, different agents have different lengths for their respective learning episodes; thus, the learning process is non-synchronous.</ns0:p><ns0:p>Using our approach, an episode takes roughly 500 time steps on average, so agents reach the user equilibrium in about 100 episodes. By RL standards, this is a fast learning process, especially considering that we deal with a highly non-stationary environment where agents get noisy signals. We also remark that, for practical purposes, the policy can be learned off-line and later embedded in the vehicle.</ns0:p><ns0:p>To give a specific picture, Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the actual travel times after time step 50,000. We remark that we could have measured roughly the same around step 30,000. It can be seen that our approach outperforms DUA shortly after time step 10,000. Also noteworthy is the fact that, at any time step, agents still explore with probability ε = 5%; thus, there is room for improvement if other forms of action selection are used.</ns0:p></ns0:div>
<ns0:div><ns0:head>QL with C2I versus QL Without Communication</ns0:head><ns0:p>Our approach is also compared to standard Q-learning, i.e., without communication, which means that the agents learn their routes only from their own previous experiences, without any augmented knowledge regarding the traffic situation and the experiences of other agents.</ns0:p><ns0:p>The learning process in both cases shown in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> can be divided into two distinct phases: the exploration phase, in which the agents have no information about the network yet and explore it to find their destinations (this is when the spikes in the learning curves can be seen), and the exploitation phase, in which the agents mostly follow what they have learned.</ns0:p><ns0:p>Both approaches converge to the same average travel times in the exploitation phase. However, the advantage of our approach comes in the exploration phase. As we see in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, the exploration phase in the QL with C2I algorithm is considerably shorter than in the traditional QL algorithm, meaning that in our case the user equilibrium is reached earlier.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> compares the travel times measured in both cases at time step 20,000, when our approach has already converged but standard Q-learning has not.</ns0:p></ns0:div>
<ns0:div><ns0:head>Communication Success Rate</ns0:head><ns0:p>In the real world, it might be the case that some information gets lost due to failures in the communication devices. In order to test what happens when not all messages reach the recipient, a success rate was implemented to assess how our approach performs if communication does not work as designed.</ns0:p><ns0:p>Specifically, every time an agent needs to communicate with the infrastructure, the message reaches its destination with a given success rate. This was implemented by means of a randomly generated value, which is compared to the success rate to determine whether or not the simulator should ignore the message, thereby modeling a non-delivered message. Such a scheme is applied to any kind of communication between the infrastructure and the agent, i.e., regardless of whether it is from an agent to a CommDev or vice versa.</ns0:p><ns0:p>If a message is lost, then: (i) a CommDev does not get to update its data structure, and (ii) an agent does not get to update its Q-table. Other than that, the method behaves exactly as described by Algorithm 1.</ns0:p><ns0:p>Experiments were performed varying the target success rate. For clarity, we show the results in two plots.</ns0:p></ns0:div>
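A sketch of this mechanism is shown below (our illustration; the random draw is the only essential ingredient). The same check guards both directions of communication:

import random

def exchange(vehicle, comm_dev, success_rate):
    """Apply the success rate to both directions of a C2I exchange."""
    if random.random() < success_rate:     # vehicle -> infrastructure
        comm_dev.update_queue(vehicle.last_reward, vehicle.last_edge)
    if random.random() < success_rate:     # infrastructure -> vehicle
        vehicle.update_q_values(comm_dev.info())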
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> compares the approach with a 100% success rate (i.e., the performance already discussed for the two previous figures) to one where communication succeeds only 75% of the time. In Figure <ns0:ref type='figure'>6</ns0:ref>, we depict the cases with success rates of 25% and 50%.</ns0:p><ns0:p>For specific values, Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> lists the average travel times for all these cases at time step 20,000, since at that time the learning processes have nearly converged.</ns0:p><ns0:p>It is remarkable that the system not only tolerates some loss of information, but also performs slightly better when the success rate is 75% or even 50%. If one compares these cases to the one in which 100% of the messages reach their destinations, one sees that the learning process is accelerated if agents do not all receive the very same information. This is no surprise, as the literature points out the disadvantages of giving the same information to everyone. What is novel here is that we can show that this is also the case when information is shared only at the local level, and when the communication is between vehicles and the infrastructure rather than among the vehicles themselves.</ns0:p><ns0:p>As expected, when we look at the case with a low success rate of 25%, we observe several drawbacks, since the communication rate gets closer to no communication at all: (i) the average travel time increases, (ii) the learning process takes longer, and (iii) the standard deviation also increases (meaning that different trips may take very different travel times and, possibly, different routes).</ns0:p></ns0:div>
<ns0:div><ns0:head>Different Strategies for Storing Information at the Infrastructure</ns0:head><ns0:p>Apart from investigating what happens when information is lost, we also change the way the CommDevs compute and share the reward information with the driver agents. Here the main motivation was to test what happens when the infrastructure is constrained by a simpler type of hardware, namely one that can store much less information (recall that the original approach is based on a queue-like data structure).</ns0:p><ns0:p>To this aim, we conducted experiments whose goal was to test which type of information is best for the infrastructure to hold and pass on to the agents. We have devised three ways to do this: (i) the infrastructure holds and communicates only the highest travel time (hence the most negative reward) to the agents; (ii) the infrastructure communicates the lowest travel time (hence the least negative reward) to the agents; (iii) the infrastructure holds only the latest (most recent) travel time value received. Note that, in all these cases, the infrastructure only needs to store a single value, as opposed to the case in which the infrastructure stores a queue of values in order to compute a moving average.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> shows a comparison between the different policies. For clarity, we omit the deviations but note that they are of the same order as the previous ones.</ns0:p><ns0:p>The best case seems to be associated with the use of the most recent travel time information, as seen both in Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> and in Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref>. Communicating the lowest travel time might look good at first sight, but it has the drawback of leading all agents to act greedily and thus use the option with the least travel time. This ends up not being efficient, as seen in Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>. Conversely, communicating the highest travel time is motivated by the fact that the infrastructure might want to distribute the agents among the available options: communicating a high travel time leads to not all agents considering that option, since some would have experienced a better option before and hence have this knowledge in their Q-tables, so they will not use the information received. This proves to be the second-best strategy, behind the strategy of communicating the latest information. The reason for the good performance of the latter is that the latest information is diverse enough (i.e., it varies from recipient to recipient), so it also guarantees a certain level of diversity in the action selection, thus contributing to a more even distribution of routes.</ns0:p></ns0:div>
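A sketch of the three single-value alternatives is given below (our illustration); each CommDev keeps one stored travel time per edge instead of a queue:

class SingleValueCommDev:
    def __init__(self, strategy):
        assert strategy in ("highest", "lowest", "latest")
        self.strategy = strategy
        self.value = {}                 # one stored travel time per edge

    def update(self, edge, travel_time):
        old = self.value.get(edge)
        if old is None or self.strategy == "latest":
            self.value[edge] = travel_time
        elif self.strategy == "highest":
            self.value[edge] = max(old, travel_time)
        elif self.strategy == "lowest":
            self.value[edge] = min(old, travel_time)

    def info(self):
        # expected reward per edge: the negative of the stored travel time
        return {e: -t for e, t in self.value.items()}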
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>Wise route choice is becoming increasingly important as demand increases while road networks are not expanded in the same proportion. MARL is an attractive method for letting agents autonomously learn how to construct routes while they are traveling from A to B.</ns0:p><ns0:p>This paper presented a method that combines MARL with C2I communication. Vehicles interact with the infrastructure every time they reach an intersection. While they communicate the travel times they have experienced on nearby links, they also receive the expected travel times for their next possible link choices. We have extended a previous approach by relaxing the assumption that all messages are sent and received, i.e., that there is no loss of messages. To the best of our knowledge, this is a novel investigation of scenarios dealing with learning-based route choice in which local information is shared via C2I.</ns0:p><ns0:p>This work thus has the following contributions: we employ MARL for the task of learning route choices; we do this using a non-trivial scenario with more than one origin-destination pair; we do not assume that driver agents already know a set of (pre-computed) routes to select among; we use a microscopic, agent-based approach; and we connect MARL with new communication technologies in order to investigate whether the learning process can be accelerated. We have also employed our method to test some situations that may arise in the real world, namely communication losses and the need to use simpler hardware devices to store information at the infrastructure.</ns0:p><ns0:p>Our results show that, before deploying C2I communication in the real world, one has to take into account the various effects of sharing information, even at the local level. We were able to show that one has to strive to communicate information that is diverse enough, in order to avoid sub-optimal route choices, i.e., those made by drivers holding similar information. As these drivers tend to act greedily, a wise strategy for sharing information is key.</ns0:p><ns0:p>Specifically, our results indicate that our approach is tolerant to information losses; further, there was even a slight improvement in the overall performance (i.e., learning speed), since less information also means that not all agents will act the same way. As for the different strategies regarding storage of information at the infrastructure, we could show that communicating only the latest known travel time is able to speed up the learning process.</ns0:p><ns0:p>We remark that in all cases we tested, MARL was able to reach the user equilibrium; the major difference is the speed of this process.</ns0:p><ns0:p>For future work, one possible investigation is the addition of biased information provided by the infrastructure in order to reach a different outcome, namely the system optimum (a socially efficient distribution of routes to vehicles), rather than converging to the user equilibrium. We also plan to change the demand during simulation time, to check how the learners deal with such changes. Preliminary work on using Q-learning in such dynamic environments indicates that it is able to handle different situations.</ns0:p><ns0:p>However, it remains to be investigated whether this is also the case for changes in flow rates.
Moreover, we would like to study whether the proposed combination of Q-learning with C2I is able to speed up the learning process as much as was the case in the present work.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Scheme of the communication infrastructure 2</ns0:figDesc><ns0:graphic coords='6,183.09,379.01,330.85,159.62' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. QL with C2I vs DUA</ns0:figDesc><ns0:graphic coords='10,183.09,63.78,330.88,248.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. QL with C2I vs QL Without Communication</ns0:figDesc><ns0:graphic coords='11,172.75,145.97,351.55,263.67' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. QL with C2I: Comparison Between 75% and 100% Success Rate</ns0:figDesc><ns0:graphic coords='12,172.75,85.11,351.55,263.67' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. QL with C2I With Different Strategies</ns0:figDesc><ns0:graphic coords='13,172.75,63.78,351.55,263.67' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>. 5x5 Grid Network Demand per OD-pair</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Origin</ns0:cell><ns0:cell cols='2'>Destination Demand</ns0:cell></ns0:row><ns0:row><ns0:cell>Bottom0</ns0:cell><ns0:cell>Top4</ns0:cell><ns0:cell>102</ns0:cell></ns0:row><ns0:row><ns0:cell>Bottom1</ns0:cell><ns0:cell>Top3</ns0:cell><ns0:cell>86</ns0:cell></ns0:row><ns0:row><ns0:cell>Bottom3</ns0:cell><ns0:cell>Top1</ns0:cell><ns0:cell>86</ns0:cell></ns0:row><ns0:row><ns0:cell>Bottom4</ns0:cell><ns0:cell>Top0</ns0:cell><ns0:cell>102</ns0:cell></ns0:row><ns0:row><ns0:cell>Left0</ns0:cell><ns0:cell>Right4</ns0:cell><ns0:cell>102</ns0:cell></ns0:row><ns0:row><ns0:cell>Left1</ns0:cell><ns0:cell>Right3</ns0:cell><ns0:cell>86</ns0:cell></ns0:row><ns0:row><ns0:cell>Left3</ns0:cell><ns0:cell>Right1</ns0:cell><ns0:cell>86</ns0:cell></ns0:row><ns0:row><ns0:cell>Left4</ns0:cell><ns0:cell>Right0</ns0:cell><ns0:cell>102</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Travel time measured for DUA and QL with C2I at time step 50,000</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Travel Time at Step 50k</ns0:cell></ns0:row><ns0:row><ns0:cell>DUA</ns0:cell><ns0:cell>≈ 560</ns0:cell></ns0:row><ns0:row><ns0:cell>QL with C2I</ns0:cell><ns0:cell>≈ 470</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Travel time measured for QL and QL with C2I at time step 20,000</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Travel Time at Step 20k</ns0:cell></ns0:row><ns0:row><ns0:cell>QL</ns0:cell><ns0:cell>≈ 676</ns0:cell></ns0:row><ns0:row><ns0:cell>QL with C2I</ns0:cell><ns0:cell>≈ 483</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Travel time measured for each success rate at time step 20,000</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Success Rate</ns0:cell><ns0:cell>Travel Time at Step 20k</ns0:cell></ns0:row><ns0:row><ns0:cell>25%</ns0:cell><ns0:cell>≈ 501</ns0:cell></ns0:row><ns0:row><ns0:cell>50%</ns0:cell><ns0:cell>≈ 467</ns0:cell></ns0:row><ns0:row><ns0:cell>75%</ns0:cell><ns0:cell>≈ 461</ns0:cell></ns0:row><ns0:row><ns0:cell>100%</ns0:cell><ns0:cell>≈ 483</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Travel time measured for each strategy at time step 20,000</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Strategy</ns0:cell><ns0:cell>Travel Time at Step 20k</ns0:cell></ns0:row><ns0:row><ns0:cell>Highest Travel Time</ns0:cell><ns0:cell>≈ 472</ns0:cell></ns0:row><ns0:row><ns0:cell>Latest Travel Time</ns0:cell><ns0:cell>≈ 467</ns0:cell></ns0:row><ns0:row><ns0:cell>Lowest Travel Time</ns0:cell><ns0:cell>≈ 538</ns0:cell></ns0:row><ns0:row><ns0:cell>QL with C2I</ns0:cell><ns0:cell>≈ 483</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "CS-2020:10:54963:0:0:REVIEW – Answers to
Reviewers
Ana Bazzan
January 4, 2021
italics: reviewers’ comments
bold: our answers and/or comments
Reviewer / Editor
We thank the reviewers and the editor for all the feedback provided.
Reviewer 1
It is not very clear why authors do not consider their proposed method as a
”traffic assignment” method, as discussed in line 49 and on. After all, the proposed method is an iterative offline method that seeks to yield user equilibrium.
In which sense is it not a traffic assignment problem then?
Thanks for pointing out that this was not clear. We have reformulated, trying to make clear that traffic assignment is a term used
in transp. eng. for planning, while we are dealing with operations. If
this reviewer thinks that the term TAP could also be used in this case,
we could reformulate. However, to make a clear distinction between a
planning activity (which ours is not as the driver is not planning but
rather reacting to the environment at each intersection, and learning
from such interaction), we prefer to use a different term.
How is the impedance of intersections modelled? Are intersections trafficlight controlled or are they priority junctions?
The intersections are not controlled by traffic signals. At this
stage we are not involving this layer of control. We plan to integrate
our current approach with one in which signals are also modelled as
learning agents. However, this adds many other constraints and
issues. In order to specifically investigate the role of C2I, we opted for
not having other sources of control. We could have turned on SUMO’s
default traffic signals, but if they are fixed controllers, we can achieve the
same without them, i.e., the way we are doing now. Specifically to
your question, we tried out several of SUMO's intersection management schemes. We decided to use the allway-stop scheme (as presented in
SUMO documentation https://sumo.dlr.de/docs/Networks/PlainXML.
html#node_types), because, this way, SUMO gives no preference to
any direction.
2) How dense is the graph representing the communication network? Are all
CommDev connected to each other (i.e., |E| ∼ |V|^2)? What would be the impacts
of different assumptions for this graph on the results?
CommDevs are only connected to their neighboring devices, as
shown in the Figure 1 of the manuscript. Different schemes should not
yield different results, as the CommDevs only retrieve information
regarding their neighboring links, as we wanted the model to consider
only *local* information to be passed to agents.
Also, please provide some more details on how this work extends the one
previously published by the same authors, quite recently: (Santos Bazzan, 2020).
Much material in this paper seems to come as-is from the previous publication,
especially concerning the proposed method.
The reviewer is right that we used parts of the text of the referred paper (which is a local workshop paper, a preliminary version,
as mentioned in the title), since the background and related work are
the same. However, we have extended the methods, results, and the
analysis. The major difference between this and our previous works
regards the fact that we extended the way the communication behaves, which includes studies of the impact of communication failure
and also what type of information yields better results. A sentence
pointing to this was added.
However, some findings seem very expected and intuitive; also the fact that
providing all drivers with information won’t benefit the system performance and
considering a lower rate of informed drivers produces better outcomes, even in
real-time setups (for rather localised information, in space and in time). Are
there other findings that may stimulate the potential use of the proposed approach?
The reviewer is right to affirm that providing all drivers with similar information has a negative impact on the overall performance,
and that this is known. However, papers claiming (or showing) this
refer to situations as follows: either route guidance (by a single or
a few information providers such as Waze, Google, or the traffic authority) is the source of information (e.g. Wahle et al. 2000, Physica
A); or information is shared among the vehicles themselves (car to
car communication, as in Bazzan and Klügl 2020). To the best of
our knowledge, there is a gap in the literature regarding car-to-
infrastructure communication. This is important because this kind
of information is much more local than the other two cases aforementioned. We have added this to the discussion on related work, and
also added the corresponding reference.
it is not very clear, however, how is this method different from other user-equilibrium offline iterative methods, regarding the concept of traffic assignment
as a tool for planning, rather than for operational purposes.
The final product of both methods is the same, as this reviewer
points out. However, while the classical TAP is a centralized approach
that *assigns* (one could say, imposes) a route to each driver, learning-based approaches let the agents learn how to select routes in an
active and autonomous way, and also in a decentralized fashion. We
argue that this is how drivers in fact operate from day to day. Hence,
such an approach is more appropriate. In fact, such an agent-based
approach can then be used to model, in a bottom-up fashion, heterogeneous populations, different reactions to information received,
different willingness to share information, etc.
Reviewer 2
The experimental should be strengthened by providing comparison between their
method and some well-known methods to show the novelty.
The known method to reach an efficient assignment is based on
equilibrium and/or iterative procedures. This is easier to do when
the approach is macroscopic. Since we are using a microscopic simulator (as, otherwise, we would not be able to fully explore the local
communication), we can only compare to the method provided by
SUMO itself, namely the DUA method, whose results are reported
in the paper; this method is also iterative and similar to equilibrium-based methods for macroscopic models.
The novelty of the findings should be strengthened.
Thanks for pointing this out. The last paragraph of the introduction lists the contributions of our paper. We added some sentences in
the related work to stress the differences between our and previous
works. Finally, in the conclusion, we have reformulated the previous
version to stress the findings.
The model of TAP should be introduced.
Here we were not sure whether the reviewer means the classical
TAP approach, which is based on quadratic programming (thus, as
mentioned in the answers to rev. 1, a centralized approach), or,
rather, our route choice model. If the latter, it is described under Alg.
1. If the former, we can point out to a book where it is detailed but
we believe spending a page describing the mathematical formulation
of the optimization problem would steal the focus from the learning-based approach. We can revise this after this question is clarified.
The comparison between their method and some well-known RL schemes
should be provided to show the novelty.
We present the comparison against Q-learning, the best-known
RL algorithm; see Fig. 4.
" | Here is a paper. Please give your review comments after reading it. |
12 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Sandboxes are increasingly important building materials for secure software systems. In recognition of their potential to improve the security posture of many systems at various points in the development lifecycle, researchers have spent the last several decades developing, improving, and evaluating sandboxing techniques. What has been done in this space? Where are the barriers to advancement? What are the gaps in these efforts? We systematically analyze a decade of sandbox research from five top-tier security and systems conferences using qualitative content analysis, statistical clustering, and graph-based metrics to answer these questions and more. We find that the term 'sandbox' currently has no widely accepted or acceptable definition. We use our broad scope to propose the first concise and comprehensive definition for 'sandbox' that consistently encompasses research sandboxes. We learn that the sandboxing landscape covers a range of deployment options and policy enforcement techniques collectively capable of defending diverse sets of components while mitigating a wide range of vulnerabilities.</ns0:p><ns0:p>Researchers consistently make security, performance, and applicability claims about their sandboxes and tend to narrowly define the claims to ensure they can be evaluated. Those claims are validated using multi-faceted strategies spanning proof, analytical analysis, benchmark suites, case studies, and argumentation. However, we find two cases for improvement: (1) the arguments researchers present are often ad hoc and (2) sandbox usability is mostly uncharted territory. We propose ways to structure arguments to ensure they fully support their corresponding claims and suggest lightweight means of evaluating sandbox usability.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>2 What is a sandbox?</ns0:p><ns0:p>In order to systematically analyze the 'sandboxing' landscape we need to clarify the meaning of the term.</ns0:p><ns0:p>We reviewed definitions used by practitioners and in papers within the field, both in the substance of the definitions and in their quality as definitions. This section reviews those definitions and establishes a definition for our use here, which we advance as an improved definition for the field.</ns0:p><ns0:p>A definition should be a concise statement of the exact meaning of a word and may be accompanied by narration of some properties implied by the definition. In this case, it should clearly distinguish between mechanisms that are and are not sandboxes. To gain widespread use, a new definition must include all mechanisms that are already widely considered to be sandboxes.</ns0:p><ns0:p>In software security contexts, the term 'sandboxing' has grown ambiguous. In an early published use, it described an approach for achieving fault isolation <ns0:ref type='bibr' target='#b106'>(Wahbe et al., 1993)</ns0:ref>. Discussions where practicing programmers are trying to understand what sandboxing is often fail to achieve a precise resolution and instead describe the term by listing products that are typically considered to be sandboxes or cases where sandboxes are often used. 1 However, we did find cases where attempts were made at a concise and general definition. A representative and accepted StackOverflow answer 2 started with, 'In the context of IT security, 'sandboxing' means isolating some piece of software in such a way that whatever it does, it will not spread havoc elsewhere'-a definition that is not sufficiently precise to separate sandboxes from other defensive measures.</ns0:p><ns0:p>Even recently published surveys of sandbox literature have either acknowledged the ambiguity, then used overly-broad definitions that include mechanisms not traditionally considered to be sandboxes <ns0:ref type='bibr' target='#b95'>(Schreuders et al., 2013)</ns0:ref>, or have relied entirely on the use of examples instead of a precise definition (Al Ameiri and <ns0:ref type='bibr' target='#b5'>Salah, 2011)</ns0:ref>. Schreuders writes, 'Although the terminology in use varies, in general a sandbox is separate from the access controls applied to all running programs. Typically sandboxes only apply to programs explicitly launched into or from within a sandbox. In most cases no security context changes take place when a new process is started, and all programs in a particular sandbox run with the same set of rights.</ns0:p><ns0:p>Sandboxes can either be permanent where resource changes persist after the programs finish running, or ephemeral where changes are discarded after the sandbox is no longer in use. ...' This definition suffers from three problems. First, it is still overly reliant on examples and thus is unlikely to capture all security mechanisms that are uncontroversially called sandboxes. Along the same lines, characterizations prefaced with, 'In most cases...', are not precise enough to reliably separate sandboxes from non-sandboxes. 
Finally, the comparison to access controls is not conclusive because it does not clarify which, if any, access control mechanisms applied to a subset of running programs are not sandboxes.</ns0:p><ns0:p>In this section we aim to resolve this ambiguity to lay the groundwork for our analysis's inclusion criteria.</ns0:p><ns0:p>While this definition serves our purposes, we believe it can strengthen future attempts to communicate scientifically about sandboxes by adding additional precision. We derive a clear, concise definition for what a 'sandbox' is using papers that appear in five top-tier security and operating system conferences, selected because their topics of interest are broad enough to include sandboxing papers most years. While we do not attempt to thoroughly validate our definition using commercial and open source sandboxes, it does encompass the tools with which we are most familiar.</ns0:p><ns0:p>We found 101 potential sandboxing papers. Out of these papers, 49 use the term 'sandbox' at least once, and 14 provide either an explicit or implicit definition of the term that is clear enough to characterize. The remaining papers that use the term make no attempt at a definition or provide an ambiguous explanation, intertwined with other ideas, and spread over multiple sentences. Within the set of definitions we identify two themes: sandboxing as encapsulation and sandboxing as policy enforcement.</ns0:p><ns0:p>Sandboxing as encapsulation has a natural analogy: sandboxes on playgrounds provide a place for children to play with indisputably-defined bounds, making the children easier to watch, and where they are less likely to get hurt or hurt someone else. They also contain the sand, thus preventing it from getting strewn across neighboring surfaces. A similar analogy is used in an answer on the Security StackExchange to the question, 'What is a sandbox?' Indeed, Wahbe was working to solve the problem of encapsulating software modules 1 http://stackoverflow.com/questions/2126174/what-is-sandboxing http://security.stackexchange.com/questions/16291/are-sandboxes-overrated http://en.wikipedia.org/w/index.php?title=Sandbox_(computer_security)&oldid=596038515 Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>: Definitions that speak about 'sandboxing' in terms of isolation. Reference Quote <ns0:ref type='bibr'>(Zhang et al., 2013)</ns0:ref> 'SFI (Software(-based) Fault Isolation) uses instruction rewriting but provides isolation (sandboxing) rather than hardening, typically allowing jumps anywhere within a sandboxed code region.' <ns0:ref type='bibr' target='#b119'>(Zeng et al., 2013)</ns0:ref> 'It is a code-sandboxing technique that isolates untrusted modules from trusted environments. ... In SFI, checks are inserted before memoryaccess and control-flow instructions to ensure memory access and control flow stay in a sandbox. A carefully designed interface is the only pathway through which sandboxed modules interact with the rest of the system.' <ns0:ref type='bibr' target='#b36'>(Geneiatakis et al., 2012)</ns0:ref> 'Others works have also focused on shrinking the attack surface of applications by reducing the parts that are exposed to attack, and isolating the most vulnerable parts, using techniques like sandboxing and privilege separation.' <ns0:ref type='bibr' target='#b29'>(De Groef et al., 2012)</ns0:ref> 'Isolation or sandboxing based approaches develop techniques where scripts can be included in web pages without giving them (full) access to the surrounding page and the browser API.' <ns0:ref type='bibr' target='#b14'>(Cappos et al., 2010)</ns0:ref> 'Such sandboxes have gained widespread adoption with web browsers, within which they are used for untrusted code execution, to safely host plug-ins, and to control application behavior on closed platforms such as mobile phones. Despite the fact that program containment is their primary goal, flaws in these sandboxes represent a major risk to computer security.' <ns0:ref type='bibr' target='#b88'>(Reis et al., 2006)</ns0:ref> 'Wagner et al. use system call interposition in Janus to confine untrusted applications to a secure sandbox environment.' <ns0:ref type='bibr' target='#b26'>(Cox et al., 2006)</ns0:ref> 'Our work uses VMs to provide strong sandboxes for Web browser instances, but our contribution is much broader than the containment this provides.'</ns0:p><ns0:p>(to keep a fault in a distrusted module from effecting other modules) when he popularized the term in this domain. 3 Table <ns0:ref type='table'>1</ns0:ref> lists the definitions we found that we characterize as falling within the theme of sandboxing as isolation. Many of these definitions use the term 'isolation,' but we prefer the use of encapsulation. In Object Oriented Programming, an object encapsulates related components and selectively restricts access to some of those components. Isolation, on the other hand, sometimes refers to a stronger property in which modules use entirely different resources and therefore cannot interfere with each other at all. Sandboxed components often need to cooperate to be useful. Cooperation and the idea of disjoint resources are present in Wahbe's original use of the term 'sandbox': Wahbe was trying to reduce the communication overhead present in hardware fault isolation by instead creating software domains that run in the same hardware resources, but that do not interfere when faulty. One potential counterpoint to our use of 'encapsulation' is that the term typically is used to refer to cases where the inside (e.g. of an object) is protected from the outside, but sandboxes often protect the external system from the contents of the sandbox. 
While this is a fair point, this paper does discuss sandboxes that protect their contents from the outside and sandboxes exist that simultaneously defend the inside from the outside and vice versa <ns0:ref type='bibr' target='#b56'>(Li et al., 2014)</ns0:ref>. Furthermore, one can consider that a sandbox encapsulates an external system that must be protected from a potentially malicious component. Given these points, we maintain that encapsulation's recognition of cooperation is important enough to use the term over isolation. Nevertheless, we retain the use of isolation when discussing existing definitions.</ns0:p><ns0:p>3 While it is clear from at least one publication that the term sandbox was used in computer security earlier than <ns0:ref type='bibr'>Wahbe's paper (Neumann, 1990)</ns0:ref>, many early software protection papers cite Wahbe as the origin of the 'sandbox' method <ns0:ref type='bibr' target='#b124'>(Zhong et al., 1997;</ns0:ref><ns0:ref type='bibr' target='#b107'>Wallach et al., 1997;</ns0:ref><ns0:ref type='bibr' target='#b93'>Schneider, 1997)</ns0:ref>. At least one early commentator felt that this use of the term 'sandbox' was merely renaming 'trusted computing bases' (TCB) <ns0:ref type='bibr' target='#b67'>(McLean, 1997)</ns0:ref>. We believe this section makes it clear that sandboxes meet common TCB definitions, but that not all TCBs are sandboxes. Manuscript to be reviewed <ns0:ref type='bibr'>(Xu et al., 2012</ns0:ref>) 'We automatically repackage arbitrary applications to attach user-level sandboxing and policy enforcement code, which closely watches the applications behavior for security and privacy violations such as attempts to retrieve a users sensitive information, send SMS covertly to premium numbers, or access malicious IP addresses.' <ns0:ref type='bibr' target='#b19'>(Chandra et al., 2011)</ns0:ref> 'The re-executed browser runs in a sandbox, and only has access to the clients HTTP cookie, ensuring that it gets no additional privileges despite running on the server.' <ns0:ref type='bibr' target='#b84'>(Politz et al., 2011)</ns0:ref> 'ADsafe, like all Web sandboxes, consists of two inter-dependent components: (1) a static verifier, called JSLint, which filters out widgets not in a safe subset of JavaScript, and (2) a runtime library, adsafe.js, which implements DOM wrappers and other runtime checks.' <ns0:ref type='bibr' target='#b105'>(Tang et al., 2010)</ns0:ref> 'Fundamentally, rule-based OS sandboxing is about restricting unused or overly permissive interfaces exposed by todays operating systems.' <ns0:ref type='bibr' target='#b102'>(Sun et al., 2008)</ns0:ref> 'Sandboxing is a commonly deployed proactive defense against untrusted (and hence potentially malicious) software. It restricts the set of resources (such as files) that can be written by an untrusted process, and also limits communication with other processes on the system.' <ns0:ref type='bibr' target='#b65'>(McCamant and Morrisett, 2006)</ns0:ref> 'Executing untrusted code while preserving security requires that the code be prevented from modifying memory or executing instructions except as explicitly allowed. Software-based fault isolation (SFI) or sandboxing enforces such a policy by rewriting the untrusted code at the instruction level.' 
<ns0:ref type='bibr' target='#b85'>(Provos, 2003)</ns0:ref> 'For an application executing in the sandbox, the system call gateway requests a policy decision from Systrace for every system call.'</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> presents seven quotes that discuss sandboxing in terms of restrictions or policy enforcement.</ns0:p><ns0:p>These definitions reflect different dimensions of the same idea: A security policy can state what is allowed, verboten, or both. The 'sandbox' is the subject that enforces the policy or 'sandboxing' is the act of enforcing a policy. In short, these quotes cast sandboxing as policy enforcement.</ns0:p><ns0:p>Careful inspection of our definition tables shows that the same technique, Software-based Fault Isolation (SFI), appears in both tables. The two themes are complementary: encapsulation bounds a component, while a policy states what the encapsulated component is or is not allowed to do. To use another childhood analogy, we often sandbox children when we place them in the corner as a punishment. We isolate them by moving them away from everyone else and placing them in a specific, bounded location, then we impose a security policy on them by making statements such as, 'Do not speak, look straight ahead, and think about what you did.' We resolve ambiguity in the use of the term 'sandbox' by combining these themes:</ns0:p><ns0:p>Sandbox An encapsulation mechanism that is used to impose a security policy on software components.</ns0:p><ns0:p>This definition concisely and consistently describes the research sandboxes we identify in the remainder of this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Methodology</ns0:head><ns0:p>In this section, we discuss the steps we took in order to select and analyze sandboxing papers and the sandboxes they describe. Our methodology is primarily based on the book 'Qualitative Content Analysis in Figure <ns0:ref type='figure'>1</ns0:ref>: The iterative process used to define research questions, build a dataset, and interpret the set to answer the questions. This process is inspired by QCA <ns0:ref type='bibr' target='#b94'>(Schreier, 2012)</ns0:ref> Practice' (QCA) <ns0:ref type='bibr' target='#b94'>(Schreier, 2012)</ns0:ref>. <ns0:ref type='bibr' target='#b8'>Barnes (2013)</ns0:ref> provides a succinct summary of the methodology in Section 5.3 of his dissertation. This methodology originates in the social sciences <ns0:ref type='bibr' target='#b9'>(Berelson, 1952;</ns0:ref><ns0:ref type='bibr' target='#b52'>Krippendorff, 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Denzin and Lincoln, 2011)</ns0:ref> and is intended to repeatably interpret qualitative data to answer a set of research questions. Figure <ns0:ref type='figure'>1</ns0:ref> summarizes the iterative process we used to define our questions, pick and interpret papers (Sections 3.1 and 3.2), and develop our results (Section 3.3).</ns0:p><ns0:p>QCA goes well beyond a systematic literature review <ns0:ref type='bibr' target='#b13'>(Budgen and Brereton, 2006;</ns0:ref><ns0:ref type='bibr' target='#b50'>Kitchenham et al., 2009)</ns0:ref>. While both QCA and systematic reviews require the definition of research questions and repeatable processes for collecting source material, reviews stop short of detailed analysis. QCA carries on where reviews end. When performing QCA, researchers define coding frames to clearly and repeatably establish how the source material will be interpreted to answer the research questions. The frames contain codes that summarize blocks of data and definitions for each code. Furthermore, QCA methodologies dictate how the coding frames are to be applied, by segmenting the entirety of the data such that each segment can labeled with at most one code. This ensures that the data is coded without missing relevant data and while reducing the researcher's bias towards some bits of data. Finally, QCA requires researchers to test their full process before carrying out the analysis. 4 Together, these steps allow researchers to reliably and effectively interpret text to answer research questions that are not possible to answer using a purely quantitative analysis. For example, Schreier points out that a quantitative analysis can determine how many women appear in magazine advertisements relative to men, but a qualitative analysis (e.g. QCA) is required to determine whether or not women are more likely to be placed within trivial contexts than men in those ads <ns0:ref type='bibr'>(Schreier, 2012, p. 2)</ns0:ref>.</ns0:p><ns0:p>The sandboxes we describe in this paper were selected from the proceedings of five conferences: IEEE Symposium on Security and Privacy (Oakland), Usenix Security, ACM Conference on Computer and Communications Security (CCS), ACM Symposium on Operating System Principles (SOSP), and Usenix Symposium on Operating System Design and Implementation (OSDI). We restricted our selection to particular conferences to improve reproducibility-because of this choice, the set of papers evaluated against our inclusion criteria is very well defined. 
To select these conferences, we collected all of the sandboxing papers we were aware of and the selected five venues contained far more sandboxing papers than any other venue. 5</ns0:p><ns0:p>4 We followed the QCA methodology specified by Schreier with one major deviation. We did not segment the text because the vast majority of the content in the papers is irrelevant to our needs. Most uses of QCA attempt to capture content of a text in its entirety. This was not our goal so we analyzed text more selectively.</ns0:p><ns0:p>5 Based on earlier criticism of this paper, we reevaluated our data set by looking at the past four years of proceedings at unselected venues such as the USENIX Annual Technical Conference (ATC), Programming Language Design and Implementation (PLDI), and Object-Oriented Programming, Systems, Languages and Applications (OOPSLA). These venues contained fewer sandboxing papers than our selected venues, and those that appeared were not significantly different in form or content from those in selected venues. In fact, with rare exceptions, the sandboxing papers at the unselected venues were written by the same authors as one or more paper in our data set.</ns0:p></ns0:div>
<ns0:div><ns0:p>The selected conferences are widely regarded as the top-tier conferences in software security and operating systems. 6 Therefore, our data reflects the consensus of large communities.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> presents our twelve research questions, the areas each question attempts to illuminate, and a comprehensive list of their answers as manifested by our paper corpus. We derived an initial set of questions by considering which broad aspects of sandboxes are poorly understood and where better understanding may change how the community performs research in this space. As a result, the questions are necessarily biased by our own backgrounds and personal experiences. In particular, this led to an emphasis on questions about how mechanisms and policies are derived, applied, and evaluated. We added questions while we performed the analysis when we found that we had the data to answer new and interesting questions. Overall, these questions aim to capture a comprehensive snapshot of the current state of sandboxing research, with an emphasis on where sandboxes fit into the process of securing software systems, what policies are enforced and how they are defined and constructed, and what claims are made about sandboxes and how those claims are validated.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Picking Papers</ns0:head><ns0:p>We selected papers from 10 years worth of proceedings at the five conferences mentioned above. We decided whether a paper was included in our sample based on rigorous inclusion criteria so the process of including/excluding papers is repeatable. The most important criterion is that the paper describes a sandbox that meets the definition given in Section 2. The remaining criteria were added as we carried out the study to exclude papers that are incapable of answering the research questions and to clarify relevant nuances in the definition.</ns0:p><ns0:p>Papers were included if they met the following criteria:</ns0:p><ns0:p>• The paper documents the design of a novel tool or technique that falls under the sandbox definition</ns0:p><ns0:p>• The paper is a full conference paper</ns0:p><ns0:p>• The paper is about an instance of a sandbox (e.g. not a component for building new sandbox tools, theoretical constructs for sandboxes, etc.)</ns0:p><ns0:p>• Techniques are applied using some form of automation (e.g. not through entirely manual re-architecting)</ns0:p><ns0:p>• A policy is imposed on an identifiable category of applications or application subsets</ns0:p><ns0:p>-The policy is imposed locally on an application (e.g. not on the principal the application executes as, not on network packets in-transit, etc.)</ns0:p><ns0:p>-The category encompasses a reasonable number of real-world applications (e.g. doesn't require the use of (1) a research programming language, (2) extensive annotations, or (3) non-standard hardware)</ns0:p><ns0:p>We gathered papers by reading each title in the conference proceedings for a given year. We included a paper in our initial dataset if the title gave any indication that the paper could meet the criteria. We refined the criteria by reviewing papers in the initial dataset from Oakland before inspecting the proceedings from other venues. We read the remaining papers' abstracts, introductions, and conclusions and excluded papers as they were being interpreted if they did not meet the criteria. We maintained notes about why individual papers were excluded from the final set. 7</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Categorizing the Dataset</ns0:head><ns0:p>To interpret papers we developed coding frames 8 where a category is a research question and a code is a possible answer to the question. To ensure consistency in coding, our frames include detailed definitions and examples for each category and code. Our codes are not mutually exclusive: A question may have multiple answers. We developed the majority of our frames before performing a detailed analysis of the data, but with consideration for what we learned about sandboxing papers while testing the inclusion criteria above on our data from Oakland. We learned that evaluative questions were quite interesting while coding papers, thus frames concerning what claims were made about a sandbox and how those claims were validated became more fine-grained as the process progressed. Whenever we modified a frame, we updated the interpretations of all previously coded papers.</ns0:p><ns0:p>We tested the frames by having two coders interpret different subsets of the Oakland segment of the initial dataset. To interpret a paper, each category was assigned the appropriate code(s) and a quote justifying each code selection was highlighted and tagged in the paper's PDF. 9 While testing, the coders swapped quotes sans codes and independently re-assigned codes to ensure consistency, but we did not measure inter-rater reliability. Code definitions were revised where they were ambiguous. While there is still some risk that different coders would select different quotes or assign codes to the same quote, we believe our methodology sufficiently mitigated the risk without substantially burdening the process given the large scope of this effort.</ns0:p><ns0:p>After coding every paper, we organized the codes for each paper by category in a unified machine-readable file 10 (hereafter referred to as the summary of coded papers) for further processing.</ns0:p></ns0:div>
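The summary of coded papers is plain data, so it is straightforward to keep in a machine-readable form. The following is a minimal sketch (not our actual artifact) of one plausible layout, assuming a CSV with hypothetical columns paper_id, category, and code, one code assignment per row; it simply loads the file into per-paper, per-category code sets for the analysis described in the next section.

```python
# Hypothetical sketch of a machine-readable summary of coded papers.
# File name and column names are assumptions, not the released artifact.
import csv
from collections import defaultdict

def load_coded_papers(path):
    """Return {paper_id: {category: set(codes)}} from a CSV with columns
    paper_id, category, code (one code per row)."""
    summary = defaultdict(lambda: defaultdict(set))
    with open(path, newline='') as fh:
        for row in csv.DictReader(fh):
            summary[row['paper_id']][row['category']].add(row['code'])
    return summary

# Example (hypothetical identifiers):
# papers = load_coded_papers('coded_papers.csv')
# papers['provos2003']['security_validation'] -> {'casestudy', 'argumentation'}
```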
<ns0:div><ns0:head n='3.3'>Analyzing the Dataset</ns0:head><ns0:p>To summarize the differences and similarities between sandboxing papers, we attempted to identify clusters of similar sandboxing techniques. To do so, we first calculated a dissimilarity matrix for the sandboxes. For category k, let p_{ijk} be the number of codes that sandboxes i and j share, divided by the total number of codes in that category they could share. For categories in which each sandbox is interpreted with one and only one code, p_{ijk} is either 1 or 0; for other categories, it falls in the interval [0, 1]. Then the dissimilarity between i and j is</ns0:p><ns0:formula xml:id='formula_0'>d_{ij} = \sum_{k} (1 - p_{ijk}).</ns0:formula><ns0:p>We fed the resulting dissimilarity matrix into a hierarchical agglomerative clustering algorithm <ns0:ref type='bibr' target='#b46'>(Kaufman and Rousseeuw, 2009)</ns0:ref> (implemented in R with the cluster package (R Core Team, 2014; <ns0:ref type='bibr' target='#b61'>Maechler et al., 2014)</ns0:ref>). This algorithm begins by treating each sandbox as its own cluster, and then iteratively merges the clusters that are nearest to each other, where distance between two clusters is defined as the average dissimilarity between the clusters' members. The agglomerative clustering process is displayed in dendrograms. We stopped the agglomerative process at the point at which there were two clusters remaining, producing two lists of sandboxes, one list for each cluster. To interpret the resulting clusters, we produced bar charts displaying the code membership by cluster. We conducted this analysis three times: once using all of the categories to define dissimilarity, once using all categories except those for claims, validation, and availability, and once using the validation categories. We do not present the plots from the analysis that ignored claims, validation, and availability because it did not produce results different from those generated using all categories.</ns0:p><ns0:p>We conducted correlational analyses to learn whether sandbox validation techniques have improved or worsened over time, or whether sandbox publications with better (or worse) validation received more citations. The validation codes were ordered in the following way: proof > analytical analysis > benchmarks > case study > argumentation > none. This ordering favors validation techniques that are less subjective.</ns0:p><ns0:p>While it is possible for a highly ranked technique to be applied less effectively than a lower ranked technique (e.g. a proof that relies on unrealistic assumptions relative to a thorough case study), this ranking was devised after coding the papers and is motivated by the real world applications of each technique in our dataset.</ns0:p><ns0:p>Each claim type (security, performance, and applicability), then, was an ordinal random variable, so rank-based methods were appropriate. When a sandbox paper belonged to two codes in a particular validation category, we used its highest-ordered code to define its rank, and lower-ordered codes to break ties. So, for instance, if paper A and paper B both included proofs, and paper A also included benchmarks, paper A would be ranked higher than paper B. To test if a claim type was improving over time, we estimated the Spearman correlation <ns0:ref type='bibr' target='#b100'>(Spearman, 1904)</ns0:ref> between its codes and the year of publication, and hence tested for a monotonic trend.
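To make the computation concrete, the following Python sketch mirrors the steps above. It is illustrative only: the published analysis was run in R with the cluster package, and this sketch assumes the per-paper code sets from the summary file and reads the denominator of p_{ijk} as the union of the two sandboxes' codes in that category, which is one plausible reading rather than a detail taken from the paper.

```python
# Illustrative sketch of the dissimilarity and clustering computation; treat it
# as a paraphrase of the R-based analysis, not the analysis itself.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

def dissimilarity_matrix(papers, categories):
    """d_ij = sum_k (1 - p_ijk). `papers` maps paper id -> {category: set of codes}.
    p_ijk is computed here as |codes_i & codes_j| / |codes_i | codes_j| (assumption)."""
    ids = sorted(papers)
    d = np.zeros((len(ids), len(ids)))
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            total = 0.0
            for k in categories:
                ci = papers[ids[a]].get(k, set())
                cj = papers[ids[b]].get(k, set())
                union = ci | cj
                p = len(ci & cj) / len(union) if union else 1.0
                total += 1.0 - p
            d[a, b] = d[b, a] = total
    return ids, d

# ids, d = dissimilarity_matrix(papers, categories)
# Z = linkage(squareform(d), method='average')           # average-linkage merging
# two_clusters = fcluster(Z, t=2, criterion='maxclust')  # stop at two clusters

# Monotonic-trend test for one claim type (hypothetical input lists):
# rho, p_value = spearmanr(publication_years, validation_ranks)
```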
Testing if papers with better validation, in a particular category, received more citations necessitated accounting for year of publication, since earlier papers typically have higher citation counts. To do so, we regressed paper citation rank against both publication year and category rank. (We used the rank of papers' citation counts as the dependent variable, as opposed to the citation counts themselves, due to the presence of an influential outlier-Terra <ns0:ref type='bibr' target='#b35'>(Garfinkel et al., 2003)</ns0:ref>. Scatterplots show the relationship between citation ranks and publication year to be approximately linear, so a linear adjustment should suffice.) There was a 'validation effect' if the coefficient on the validation measure was significantly different from zero.</ns0:p><ns0:p>We conducted four separate regression analyses: one in which citation ranks were regressed on publication year and category ranks of all three validation criteria, one in which citation ranks were regressed on publication year and security validation only, one in which citation ranks were regressed on publication year and performance validation only, and one in which citation ranks were regressed on publication year and applicability validation only.</ns0:p><ns0:p>As a final means of better understanding the sandboxing landscape, we constructed a citation graph using the papers in our set as nodes and citations as edges. We clustered the nodes in this graph using the same clusters found statistically, using the process described above, and using common topics of interest we observed. The topics of interest are typically based on the techniques the sandboxes apply (e.g. Control Flow Integrity (CFI), artificial diversity, etc.). We evaluate these clusters using the modularity metric, which enables us to compare the quality of the different categorizations. Modularity is the fraction of edges that lie within a partition, above the number that would be expected if edges were distributed randomly.</ns0:p>
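The modularity comparison can be reproduced with standard graph libraries. The sketch below uses NetworkX, which is an assumption on our part rather than a record of the tooling behind the published numbers; it expects a list of citation edges between papers in the set and a mapping from each paper to its cluster label.

```python
# Sketch of the modularity computation for the citation graph; NetworkX is an
# assumed tool here, not necessarily what produced the reported scores.
import networkx as nx
from networkx.algorithms.community import modularity

def partition_modularity(citation_edges, cluster_of):
    """citation_edges: iterable of (citing_paper, cited_paper) pairs.
    cluster_of: dict mapping every paper in the graph to a cluster label."""
    G = nx.Graph()                      # undirected view of the citation graph
    G.add_edges_from(citation_edges)
    labels = {cluster_of[n] for n in G}
    communities = [{n for n in G if cluster_of[n] == c} for c in labels]
    return modularity(G, communities)

# A score near zero means the partition explains citations no better than
# chance; the topic-based partition discussed in Section 4 scores about 0.33.
```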
<ns0:div><ns0:head n='4'>Results</ns0:head><ns0:p>We derived our results from the various statistical clusters of our summary of coded papers, trends explicit in this dataset, and observations made while reading the papers or analyzing our summarized data. As our dataset is public, we encourage readers to explore the data themselves. Note while interpreting the statistical clusters that they are not representative of how papers are related in terms of broad topics of interest. When we applied the statistical clusters to the citation graph of the papers in our set the modularity scores were -0.04 and 0.02 when papers were clustered based on all of the attributes we coded and just validation attributes respectively. These modularity scores mean that the statistical clusters are no better than randomly clustering papers when considering how they cite each other.</ns0:p><ns0:p>These poor modularity scores make sense because authors are much more likely to cite papers that use similar techniques or tackle similar problems than use similar validation strategies. We confirmed the latter observation by computing that the modularity for overlapping groups <ns0:ref type='bibr' target='#b54'>(Lázár et al., 2009)</ns0:ref> based on validation is -0.198, which confirms that partitions built from the validation techniques do not direct citation graph structure. Indeed, when we clustered papers in the citation graph based on topics of interest we observed while interpreting the set, the modularity score, 0.33, is significantly better than a random cluster. The citation graph with topic clusters is shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. While these clusters are potentially of sociotechnical interest to the community, we must look at lower-level attributes to understand how sandboxes are to be applied in practice and how they improve the security posture of real systems. The statistical clusters fill that role.</ns0:p><ns0:p>Figures <ns0:ref type='figure'>3 and 4</ns0:ref> show the codes that are members of the fixed policy and user-defined policy clusters respectively when all categories are considered. The dendrogram for these clusters appears in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>.</ns0:p><ns0:p>Many of our results are interpretations of these charts. Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> succinctly describes our results per research question and references later sections where more details are found. The remainder of this section presents those details.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Sandboxes: Building Materials for Secure Systems</ns0:head><ns0:p>Sandboxes are flexible security layers ready to improve the security posture of nearly any type of application.</ns0:p><ns0:p>While the deployment requirements and details vary from sandbox to sandbox, collectively they can be applied at many different points in a system's architecture and may be introduced at any phase in an application's development lifecycle, starting with the initial implementation. In fact, sandboxes can even be applied well after an application has been abandoned by its maintainer to secure legacy systems.</ns0:p></ns0:div><ns0:div><ns0:p>Where in a system's architecture are policies enforced?</ns0:p><ns0:p>There is an emphasis on enforcing policies in the operating system or transforming applications to enforce a policy over using application hosts (e.g. language-hosting virtual machines, browsers, etc.).</ns0:p></ns0:div>
<ns0:div><ns0:head>4.1</ns0:head><ns0:p>When are policies imposed? Static, dynamic, and hybrid strategies are roughly equally favored in all domains but with a slight preference for strictly static or dynamic approaches.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.1</ns0:head><ns0:p>What application resources are protected by sandboxes?</ns0:p><ns0:p>Sandboxes with fixed policies tend to prevent memory corruption or protect properties of application code (e.g. control flow). User-defined policies are correlated with policies that are more diverse and cover the gamut of application-managed resources.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.1</ns0:head><ns0:p>What types of components are protected by sandboxes?</ns0:p><ns0:p>Sandboxes that use fixed policies tend to require the user to target specific components, while those with user-defined policies tend to allow for broader targeting.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.1</ns0:head><ns0:p>At what point in the process of an attack will an exploit violate sandbox policies?</ns0:p><ns0:p>Sandboxes are primarily pro-active by disrupting exploits before a payload can be executed. Where users must define a policy, sandboxes tend to be pro-active in attempting to stop exploits, but also limit the range of possible behaviors a payload can exhibit.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.1</ns0:head><ns0:p>What are the requirements of people applying sandboxes?</ns0:p><ns0:p>Sandboxes that have fewer requirements for people tend to have more requirements for the application.</ns0:p><ns0:p>Similarly, having a fixed policy is correlated with more requirements of the application, while userdefined policies are correlated with more requirements of the user.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.2</ns0:head><ns0:p>What are the requirements of components being sandboxed?</ns0:p><ns0:p>Sandboxes with fixed policies most-often require that applications be compiled using a special compiler.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.2</ns0:head><ns0:p>Who defines sandbox policies? Policies are most often defined by the sandbox developer at design time.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.2</ns0:head><ns0:p>How are policies managed? Policy management is largely ignored, even where users must write their own policies.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.2</ns0:head><ns0:p>How are policies constructed? Most policies are hardcoded in the sandbox.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>What claims are made about sandboxes?</ns0:head><ns0:p>Applicability to new cases is often the impetus for improving existing techniques, but strong security and better performance are more often claimed.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.3</ns0:head><ns0:p>How are claims validated?</ns0:p><ns0:p>Benchmarks and case studies are the most favored validation techniques for all types of claims. Where security claims are not validated using both benchmarks and case studies, ad-hoc arguments are heavily favored.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.3</ns0:head><ns0:p>In what forms are sandboxes made available for review?</ns0:p><ns0:p>There is a recent slight increase in the release of sandbox source code, but generally no implementation artifacts are made available for review.</ns0:p></ns0:div><ns0:div><ns0:p>Figure 4: Breakdown of the representation of all codes for papers that emphasize user-defined policies. Some sandboxes support a fixed-policy with an optional user-defined policy (e.g. <ns0:ref type='bibr' target='#b99'>(Siefers et al., 2010)</ns0:ref>). Cases where a claim was made but not validated are labeled with an 'x'.</ns0:p><ns0:p>In our dataset, the policy enforcement mechanism for a sandbox is always deployed as a system component, as a component of an application host, or by insertion directly into the component that is being encapsulated. While application hosts are becoming more popular as many applications are moved into web browsers and mobile environments, they are currently the least popular place to deploy policy enforcement mechanisms for research sandboxes. Our set includes ten sandboxes where policies are enforced in the application host, twenty-six in the component being encapsulated, 11 and thirty-two in a system component.</ns0:p><ns0:p>We believe that application hosts are less represented because many existing hosts come with a sandbox (e.g. the Java sandbox, Android's application sandbox, NaCl in Google Chrome, etc.). Indeed, all but one of the sandboxes deployed in application hosts are for the web, where applications can gain substantial benefits from further encapsulation and there is currently no de facto sandbox. The one exception is Robusta <ns0:ref type='bibr' target='#b99'>(Siefers et al., 2010)</ns0:ref>, which enhances the Java sandbox to encapsulate additional non-web computations.</ns0:p><ns0:p>System components are heavily represented because any sandbox that is to encapsulate a kernel, driver, or other system component must necessarily enforce the policy in a system component. Fifteen of the sandboxes fall into this category because they are encapsulating either a kernel or hypervisor. The remainder could potentially enforce their policies from a less privileged position, but take advantage of the full access to data and transparency to user-mode applications available to system components. This power is useful when enforcing information flow across applications, when preventing memory corruption, or when otherwise enforcing the same policy on every user-mode application.</ns0:p><ns0:p>Research sandboxes almost universally embed their enforcement mechanism in the application that is being encapsulated when the application runs in user-mode. Application deployment is correlated with fixed policies where modifying the application itself can lead to higher performance and where it makes sense to ensure the enforcement mechanism exists anywhere the application is, even if the application moves to a different environment.</ns0:p>
<ns0:p>Fixed-policies with embedded enforcement mechanisms are correlated with another important deployment concern: statically imposed policies.</ns0:p><ns0:p>11 <ns0:ref type='bibr' target='#b97'>(Sehr et al., 2010)</ns0:ref> is counted twice because the enforcement mechanism is spread across the application and its host.</ns0:p><ns0:p>Imposing a policy statically, most often using a special compiler or program re-writer, is advantageous because the policy and its enforcement mechanism can travel with the application and overhead can be lower as enforcement is tailored to the targeted code. There are some cons to this approach. For example, the process of imposing the policy cannot be dependent on information that is only available at run-time and the policy is relatively unadaptable after it is set. Furthermore, because the policies are less adaptable, sandboxes that statically impose security policies typically only encapsulate components that are targeted by the person applying the sandbox. These are cases where dynamic mechanisms shine. Given these trade-offs, it makes sense that papers in our set fall into one of two clusters when all codes are considered: Those that are protecting memory and software code, which are relatively easy to encapsulate with a fixed policy, and those managing behaviors manifested in external application communications or interactions with user-data and files that are more easily encapsulated with an adaptable (typically user-defined) policy.</ns0:p><ns0:p>Generally hybrid deployments are used when the approach is necessarily dynamic but static pre-processing lowers overhead. Sometimes, techniques begin as hybrid approaches and evolve to fully dynamic approaches as they gain traction. For example, early papers that introduce diversity in binaries to make reliable exploits harder to write (e.g. code randomization) tend to rely on compiler-introduced metadata, while later papers did not need the extra help. This evolution broadens the applicability of the sandboxing technique. We observed other techniques such as SFI and CFI evolve by reducing the number of requirements on the application, the person applying the sandbox, or both.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Policy Flexibility as a Usability Bellwether</ns0:head><ns0:p>Requiring more work out of the user or more specific attributes of an application lowers the odds that a sandbox will be applied, thus it is natural that research on specific techniques reduces these burdens over time. We find that the nature of the policy has an influence on how burdensome a sandbox is. About half of sandboxes with fixed policies require that the application be compiled using a special compiler or use a sandbox-specific framework or library. Many fixed-policy sandboxes also require the user to run a tool, often a program re-writer, or to install some sandbox component. In comparison, nearly all sandboxes with flexible policies require the user to write a policy manually, but few have additional requirements for the application. Given the burdens involved in manually writing a security policy, the message is clear-easy-to-use sandboxes reduce the user-facing flexibility of the policies they impose.</ns0:p><ns0:p>Forty-eight sandboxes, more than two-thirds of our sample, use a fixed policy. In all of these cases the policy itself exists within the logic of the sandbox. Of the remaining cases, two encode the policy in the logic of the application (e.g. through the use of the sandbox as a framework), and the remaining seventeen require the user to manually write a policy.</ns0:p><ns0:p>In cases where the user must manually write the policy, it would help the user if the sandbox supported a mechanism for managing policies-to ensure policies do not have to be duplicated repeatedly for the same application, to generate starter policies for specific cases, to ensure policies can apply to multiple applications, etc. This type of management reduces the burden of having to manually write policies in potentially complex custom policy languages. Support for the policy writer is also important because the policies themselves can be a source of vulnerabilities <ns0:ref type='bibr' target='#b91'>(Rosenberg, 2012)</ns0:ref>. Eight out of twenty-six cases where policy management is appropriate offered some central mechanism for storing existing policies, where they could potentially be shared among users. However, none of the papers in our sample list policy management as a contribution, nor do any of the papers attempt to validate any management constructs that are present. It is possible that there are papers outside of our target conferences that explicitly discuss management. For example, programming languages and software engineering conferences are more focused on policy authoring concerns, and management may therefore be the focus of a paper that appears in one of those conferences.</ns0:p><ns0:p>However, in spite of the fact that two of the authors of this paper are active researchers in the Programming Language community and three are active in the Software Engineering community, we are not aware of any such paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>The State of Practice in Sandbox Validation</ns0:head><ns0:p>There is little variation in the claims that are made about sandboxes. Most claim to either encapsulate a set of threats or to increase the difficulty of writing successful exploits for code-level vulnerabilities. All but four measure the performance overhead introduced by the sandbox. Thirty-seven papers, more than half, make claims about the types of components the sandbox applies to, typically because the paper applies an existing technique to a different domain or extends it to additional components.</ns0:p><ns0:p>While there is wide variety in how these claims are validated, we observe measurable patterns. In our data set, proof and analytical analysis were, by far, the least used techniques. The lack of analytical analysis is due to the fact that the technique is primarily useful when the security of the mechanism depends on randomness, which is true of few sandboxes in our set. However, proof does appear in two forms: (1) proofs of properties of data flows and (2) proofs of the correctness of a mechanism enforcing a fixed policy (the latter in six papers). The rarity of proof in the sandboxing domain is not surprising given the difficulty involved. Proof is particularly difficult in cases where one would ideally prove that a policy enforcement mechanism is capable of enforcing all possible policies a user can define, which we did not see attempted. Instead, claims are often validated empirically or in ways that are ad hoc and qualitative.</ns0:p><ns0:p>In empirical evaluations, case studies are the most common technique for all claims, often because proof was not attempted and there is no existing benchmark suite that highlights the novel aspects of the sandbox.</ns0:p><ns0:p>For example, papers for sandboxes with fixed policies often want to show a particular class of vulnerabilities can no longer be exploited in sandboxed code, thus examples of vulnerable applications and exploits for their vulnerabilities must be gathered or, very rarely, synthesized. When claims were empirically validated, the results were not comparable in fifteen out of sixty-two cases for performance, twenty-two out of forty-two cases for security, and twenty-four out of thirty-one cases for applicability because non-public data was used in the discussed experiments. Non-public data takes the form of unlabeled exploits, undisclosed changes to public applications, and unreleased custom example cases (e.g. applications built using a sandbox's framework where the examples were not released).</ns0:p><ns0:p>Security claims are notoriously difficult to formalize, hence the pervasive lack of proof. Many papers instead vet their security claims using multi-faceted strategies, often including both common empirical approaches: case studies and experiments using benchmark suites. However, Figures <ns0:ref type='figure' target='#fig_6'>6 and 7</ns0:ref> illustrate an interesting finding: In twenty-nine papers where multi-faceted strategies are not used, authors pick one empirical tactic and argue that their claims are true. Argumentation in this space is problematic because all of the arguments are ad hoc, which makes evaluations that should be comparable difficult to compare at best but more often incomparable. Furthermore, we observed many cases where arguments essentially summarize as, 'Our sandbox is secure because the design is secure,' with details of the design occupying most of the paper in entirely qualitative form.
Not only are these types of arguments difficult to compare in cases where sandboxes are otherwise quite similar, but it is also harder to see if they are complete in the sense that every sub-claim is adequately addressed.</ns0:p><ns0:p>Our correlational analyses show no significant trends in security or applicability analyses; however, performance validation has improved over time. Table <ns0:ref type='table' target='#tab_9'>5</ns0:ref> summarizes the Spearman correlations and their p-values per validation category. Spearman correlations fall in the range [-1,1], where a value of 0 is interpreted as no correlation, positive values show a positive correlation, and negative values a negative correlation. The magnitude of the coefficient grows towards 1 as time and the validation rank become closer to perfect monotonic functions (i.e. when a positive and perfect monotonic relationship exists, the Spearman correlation is 1).</ns0:p><ns0:p>Performance validation is positively, and statistically significantly, correlated with the passage of time.</ns0:p><ns0:p>We observe that performance validation has advanced from a heavy reliance on benchmark suites to the use of multi-faceted strategies that include the use of benchmark suites and case studies (typically to perform micro-benchmarks) that make use of public data-which ensures the results are comparable with future sandboxes. While the applicability validation correlation is not statistically significant, we observe that argumentation was abandoned early on in favor of case studies, with some emphasis on including benchmark suites in later years. There is no apparent change in security validation over time.</ns0:p><ns0:p>We fit linear models to each validation category separately and together relative to ranked citation counts to see if validation practices are predictive of future citations. All of the models achieved an R-squared value of 0.54, which suggests that passage of time and validation practices jointly explain about half of the variance in citation count ranks.</ns0:p>
<ns0:div><ns0:p>Validation practices on their own are not predictive of how highly cited a paper will become.</ns0:p></ns0:div>
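For readers who want to run this kind of check on their own data, the following is a minimal sketch of the rank-based regression described in Section 3.3, with hypothetical inputs and plain ordinary least squares fitting: citation ranks are regressed on publication year and a single validation rank, and the 'validation effect' is the fitted coefficient on the validation measure.

```python
# Minimal sketch of the citation-rank regression; variable names are
# hypothetical and this is not the authors' original model-fitting code.
import numpy as np
from scipy.stats import rankdata

def validation_effect(publication_years, citation_counts, validation_ranks):
    y = rankdata(citation_counts)            # rank citations to damp outliers
    X = np.column_stack([
        np.ones(len(y)),                     # intercept
        np.asarray(publication_years, dtype=float),
        np.asarray(validation_ranks, dtype=float),
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[2]                           # near zero -> no validation effect
```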
<ns0:div><ns0:head n='5'>Strengthening Sandboxing Results</ns0:head><ns0:p>The existing body of knowledge within the sandboxing community provides a strong basis for securing current and future software systems. However, the results in Section 4 highlight several gaps. In this section we discuss how structured arguments can solve the problems presented by incomparable and incomplete ad hoc arguments (Section 5.1) and possible ways to enhance sandbox and policy usability (Section 5.2).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Structured Arguments</ns0:head><ns0:p>Sandboxes are often evaluated against coarse criteria such as the ability to stop exploits against certain classes of vulnerabilities, to encapsulate certain categories of operations, or to function in new environments.</ns0:p><ns0:p>However, these coarse criteria typically require the sandbox to address a number of sub-criteria. For example, <ns0:ref type='bibr' target='#b123'>Zhang and Sekar (2013)</ns0:ref> provide CFI without requiring compiler support or a priori metadata, unlike earlier implementations. To ensure the technique is secure, they must be sure that independently transformed program modules maintain CFI when composed. Details that clarify how an individual criterion is fulfilled can easily be lost when ad hoc arguments are used in an effort to persuade readers that the criterion has been met, particularly in sandboxes with non-trivial design and implementation details. This can leave the reader unable to compare similar sandboxes or confused about whether or not contributions were validated.</ns0:p><ns0:p>Since many of the security criteria are repeated across most papers, the cost of developing substructure can be amortized across lots of communal use. There are many possible ways to structure arguments in support of security claims:</ns0:p><ns0:p>• Assurance cases <ns0:ref type='bibr' target='#b112'>(Weinstock et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b47'>Kelly, 1999)</ns0:ref> provide graphical structures that explicitly tie claims together in trees that show how claims are narrowed. <ns0:ref type='bibr' target='#b51'>Knight (2015)</ns0:ref> provides a concise introduction to the topic. These structures also explicitly link leaf claims to the evidence that supports the claim. Assurance cases were created in response to several fatal accidents resulting from failures to systematically and thoroughly understand safety concerns in physical systems. Their use has spread to security and safety critical systems of nearly every variety in recent decades with case studies from aerospace <ns0:ref type='bibr' target='#b39'>(Graydon et al., 2007)</ns0:ref> and a sandbox called S3 (Rodes et al., 2015) that was not analyzed as part of this study <ns0:ref type='bibr' target='#b72'>(Nguyen-Tuong et al., 2014)</ns0:ref>. Sandboxing papers can use assurance cases to decompose claims into their simplest components, then link those components to relevant evidence in the paper (e.g. a summary of specific results, a specific section reference, etc.).</ns0:p><ns0:p>• <ns0:ref type='bibr' target='#b60'>Maass et al. (2014)</ns0:ref> use a qualitative framework to compare sandboxes based on what happens when a sandbox fails, is bypassed, or holds. Authors could structure their arguments by using the framework to describe their specific sandbox without performing explicit comparisons.</ns0:p><ns0:p>• Structured abstracts <ns0:ref type='bibr' target='#b40'>(Hartley, 2004;</ns0:ref><ns0:ref type='bibr' target='#b86'>R. Brian Haynes et al., 1990)</ns0:ref> are used in many medical journals to summarize key results and how those results were produced.
These abstracts have the benefit of being quick to read while increasing the retention of information, largely thanks to the use of structure to guide authors in precisely summarizing their work.</ns0:p><ns0:p>• Papers could provide a table summarizing their contributions and the important design or implementation details that reflect the contribution.</ns0:p><ns0:p>All of these approaches provide the reader with data missing in ad hoc arguments: A specific map from the claims made about a sandbox to the evidence that justifies that each claim has been met. They are also necessarily qualitative, but as we saw earlier, arguments are often used where more rigorous approaches are currently intractable. We believe that adding structure to these arguments is a reasonable advancement of the state of practice in sandbox validation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Sandbox and Policy Usability</ns0:head><ns0:p>Sandbox and policy usability are concerns of interest to the following stakeholders: Practitioners that must correctly use sandboxes to improve the security postures of their systems and users that must work with sandboxed applications. Some security researchers do attempt to make their sandboxes more usable by providing policy management or reducing requirements on the user, but usability is definitely not a focus of any of the papers in our sample.</ns0:p><ns0:p>Our data shows that, with very few exceptions, sandbox researchers thoroughly evaluate the performance of their sandboxes. Why is there focus on this practical concern but not on usability? We observe that a focus on performance evaluation is partially motivated by the fact that overhead is relatively easy to quantify, but we also saw many cases where researchers were explicitly concerned with whether or not a sandbox was too resource intensive for adoption. The latter is a reasonable concern; <ns0:ref type='bibr'>Szekeres et al. (2013)</ns0:ref> pointed out that many mitigations for memory corruption vulnerabilities are not adopted because performance concerns outweigh protection merits.</ns0:p><ns0:p>While the idea that performance is an important adoption concern is compelling and likely reflects reality, we cannot correlate performance with the adoption of the sandboxes in our set. We cannot find a correlation because the sandboxes and their techniques in our set remain almost entirely unadopted. We only found four cases where sandboxes in our set were either directly adopted or where the techniques they evaluate are clearly implemented in a different but adopted sandbox. A lack of adoption is present even for techniques where performance and applicability have been improved over multiple decades (e.g. SFI). Three of the adopted sandboxes were created by the industry itself or by entities very closely tied to it: Google NaCl was designed with the intention of adopting it in Google Chrome in the short term <ns0:ref type='bibr' target='#b117'>(Yee et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b97'>Sehr et al., 2010)</ns0:ref> and the paper on systrace was published with functioning open source implementations for most Unix-like operating systems <ns0:ref type='bibr' target='#b85'>(Provos, 2003)</ns0:ref>. While the case for adoption is weaker, Cells <ns0:ref type='bibr' target='#b6'>(Andrus et al., 2011</ns0:ref>) is a more advanced design than one VMware developed in parallel <ns0:ref type='bibr' target='#b11'>(Berlind, 2012)</ns0:ref>, although the sandboxes both aim to partition phones into isolated compartments using virtualization (e.g. one for work and one for personal use). More recently, Microsoft has stated that Visual Studio 2015 will ship with an exploit mitigation that we believe is equivalent to what the research community calls CFI <ns0:ref type='bibr' target='#b42'>(Hogg, 2015)</ns0:ref>.</ns0:p><ns0:p>A third party analysis supports this belief, however the uncovered implementation details differ from the techniques implemented in published research <ns0:ref type='bibr' target='#b104'>(Tang, 2015)</ns0:ref>.</ns0:p><ns0:p>We argue that the need to evaluate the usability of our sandboxes is evidenced by the observation that performance and security evaluation are not sufficient to drive adoption. 
Usability is of particular concern in cases where the sandbox requires developers without security expertise (1) to re-architect applications to apply the sandbox and/or (2) to develop a security policy. In practice, it is quite common for developers without a security focus to apply sandboxes, particularly Java's. In fact, usability issues have factored into widely publicized vulnerabilities in how sandboxes were applied to Google Chrome and Adobe Reader as well as the many vulnerable applications of the Java sandbox <ns0:ref type='bibr' target='#b24'>(Coker et al., 2015)</ns0:ref>. In all of these cases applying the sandbox is a relatively manual process where it is difficult for the applier to be sure he is fully imposing the desired policy without missing relevant attack surfaces. These usability issues have caused vulnerabilities that have been widely exploited to bypass the sandboxes. We call on the community to evaluate the following usability aspects of their sandboxes where appropriate:</ns0:p><ns0:p>• The intended users are capable of writing policies for the component(s) to be sandboxed that are neither over- nor under-privileged.</ns0:p><ns0:p>• Policy enforcement mechanisms can be applied without missing attack surfaces that compromise the sandbox in the targeted component(s).</ns0:p><ns0:p>• Source code transformations (e.g. code re-writing or annotations) do not substantially burden future development or maintenance.</ns0:p><ns0:p>• The sandbox, when applied to a component, does not substantially alter a typical user's interactions with the sandboxed component.</ns0:p><ns0:p>Ideally many of these points would be evaluated during user studies with actual stakeholders. However, we believe that we can make progress on all of these points without the overhead of a full user study, particularly because we are starting from a state where no usability evaluations are performed. For example, authors can describe correct ways to determine what privileges in their policy language a component needs or even provide tools to generate policies to mitigate the risks presented by under- and over-privileged policies.</ns0:p><ns0:p>Similarly, tooling can be provided to help users install policy enforcement mechanisms or check that manual applications of a mechanism are correct. Sandbox developers can transform or annotate representative open source applications and use repository mining 12 to determine how sandbox alterations are affected by code evolution present in the repository <ns0:ref type='bibr' target='#b45'>(Kagdi et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b116'>Yan et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b63'>Mauczka et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b101'>Stuckman and Purtilo, 2014)</ns0:ref>. Finally, a summary of how the sandbox qualitatively changes a user's experience with a sandboxed component would provide a gauge for how much the sandbox burdens end-users.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>Enabling Meta-Analysis</ns0:head><ns0:p>We believe a key contribution of this work is the use of multi-disciplinary and systematic methodologies for drawing conclusions about a large body of security techniques. In this section, we discuss the generalizability of our methodology and suggest other areas to which it can be applied. Then, we discuss some challenges that we faced when doing this research and suggest changes that would address these challenges.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Generalizability of Methodology</ns0:head><ns0:p>The methodology employed in this paper is based on two research approaches: Qualitative Content Analysis and Systematic Literature Reviews. Qualitative Content Analysis is primarily used in the humanities and social sciences. Systematic Literature Reviews were first applied to medical studies and are used primarily in empirical fields. The differences between sandboxing papers are bigger than the differences between studies of a particular cancer treatment. In addition, sandboxing papers do not fit into the 'native' domains of either approach-their primary contributions are designs, techniques, and implementations.</ns0:p><ns0:p>The result of these differences is that most literature reviews and systemizations in computing are done in an ad hoc manner. Our computing research is worthy of a more rigorous approach and we think the methodology applied in this paper can and should be applied to other topics. In fact, any topic of active research where the primary contribution is an engineered artifact, but without a clear and precise definition, would be amenable to our approach. These topics span computing research from software engineering (e.g. service oriented architecture, concurrent computation models) to systems (e.g. green computing, no instruction set computing) to human-computer interaction (e.g. GUI toolkits, warning science).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Meta-analysis Challenges and Suggested Solutions</ns0:head><ns0:p>In our experience, the biggest roadblock standing in the way of applying the same techniques to other segments of the research community lies in the difficulty involved in collecting analyzable metadata about papers. We experienced several fixable issues:</ns0:p><ns0:p>• The major publishers in computer science-IEEE, ACM, and Usenix-do not provide publicly available mechanisms to collect metadata and either rate limit or outright ban scraping. 13 In our case, the painstaking process of collecting and curating analyzable metadata across several sources limited our ability to explore hypotheses about our dataset's papers and their relationships to publications not in the set.</ns0:p><ns0:p>• The metadata is limited and contains little semantic content-typically the metadata includes the authors, title, date, and DOI, but little else. If abstracts and keywords were easier to harvest, we could have more systematically derived topics of interest within the sandboxing community.</ns0:p><ns0:p>• Links to papers on publisher websites use internal identifiers (e.g. http://dl.acm.org/citation.cfm?id=2498101) instead of DOI. This makes it difficult to reference papers across publisher repositories.</ns0:p><ns0:p>• Conference websites have inconsistent layouts, which increases the difficulty of data collection.</ns0:p><ns0:p>We believe easier access to this data would have allowed us to draw more conclusions about how sandboxing papers are related and how the sandboxing landscape has evolved over time. For example, we explored the idea of using a more developed citation graph than Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> to trace the lineage of sandboxing techniques, but found the required resource expenditures were outside of our means. This data may provide support for explanations regarding the lack of advancement in security validation practices (e.g. by showing an emphasis on a different but important dimension of advancement). These points are important to understand how we got to the current state of practice, thus improving our ability to recognize and advance means for enhancing our results.</ns0:p><ns0:p>On another data collection point, we averaged about 45 minutes per paper to code the data necessary to answer our research questions. While we do not claim that our research questions are of universal interest to the sandboxing community, we did observe that papers that answer all or most of the questions in the abstract are often clearly written throughout and easy to interpret. A small minority of sandboxing papers have far less specific abstracts. In these cases, the papers often took double the average time to comprehend and interpret. It may be useful to strive to clearly answer questions like ours in future papers to show practitioners the value sandbox researchers bring to the table.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>Threats to Validity</ns0:head><ns0:p>Due to the complexity of the text and concepts we are interpreting, there is some risk that other coders would assign quotes to different codes. Different codes will change the results, but we believe this risk is mitigated through our tests of the coding frame and by our efforts to select clear quotes. Furthermore, the correlative nature of our results ensures that a few code divergences will not dramatically change the analysis's outcomes.</ns0:p><ns0:p>The primary risk is that we are missing relevant quotes that add codes to our dataset. This is typically mitigated in QCA by fully segmenting the text, but we decided against that strategy because of the very large data set we studied and irrelevance of most of the text to our goals. We did search PDFs for relevant keywords we observed were commonly linked to specific codes throughout the process (e.g. 'proof', 'available' to find the availability of sandbox artifacts for evaluation, 'experiment' to signal a case study or benchmark, etc.) to decrease the odds of missing a code. While this does mitigate the risk, it is still likely that our results under-approximate the state of the sandboxing landscape.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>Conclusion</ns0:head><ns0:p>We systematically analyzed the sandboxing landscape as it is represented by five top-tier security and systems conferences. Our analysis followed a multidisciplinary strategy that allowed us to draw conclusions backed by rigorous interpretations of qualitative data, statistics, and graph analysis. Based on our results, we conclude that the sandbox research community will benefit from the use of structured arguments in support of security claims and the validation of sandbox and policy usability. We suggested lightweight ways to move forward in achieving these goals. Our data also shows that there is a dearth of science regarding the management of security policies for sandboxes, although we did not discuss this gap in depth.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: The citation graph for the papers in our set. The colors represent clusters based on topics of interest (modularity = 0.33). Papers cluster based on topics of interest, not necessarily their technical attributes or validation strategies, thus we must look at lower level attributes to gain a broad understanding of the sandboxing landscape. Papers that were not linked to any of the other papers in the set are not shown. Categories bridging Mandatory Integrity and Access Control (MI/AC) were collapsed to simply Mandatory Access Control (MAC) for this graph. Our citation data can be found in sandboxbib.zip in our supplemental materials.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: A dendrogram displaying the clusters for sandboxing papers taking into account all categories. At the top most level, where two clusters exist, the clusters respectively represent sandboxes that use fixed policies and those that use user-defined policies.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: A dendrogram displaying the clusters for sandboxing papers taking into account validation categories. At the top most level, where two clusters exist, the clusters respectively represent sandboxes that emphasize multi-faceted empirical security validation and those that do not.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure7: Breakdown of the representation of validation codes per claim type for the three validation clusters found in our dataset. Each row contains the data for one cluster. The bottom two clusters include papers that do not emphasize multi-faceted security validation strategies, instead relying on case studies and arguments that security claims are true. Cases where a claim was made but not validated are labeled with an 'x'.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Definitions that speak about 'sandboxing' in terms of policy enforcement.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Quote</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Our research questions, the areas each question attempts to illuminate, and potential answers. The answers are codes in the content analysis process we apply. Answers are not necessarily mutually exclusive. Definitions for the terms in this table appear in our coding frames (see coding frames.pdf in our supplemental material) with examples.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Question area</ns0:cell><ns0:cell>Question</ns0:cell><ns0:cell>Possible answers</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Where in the architecture are policies enforced?</ns0:cell><ns0:cell>Component, Application, Host</ns0:cell></ns0:row><ns0:row><ns0:cell>Sandbox Lifecycle</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>How and when are policies imposed?</ns0:cell><ns0:cell>Statically, Dynamically, Hybrid</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>What resources do the</ns0:cell><ns0:cell>Memory, Code/Instructions, Files,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>sandboxes protect?</ns0:cell><ns0:cell>User Data, Communications</ns0:cell></ns0:row><ns0:row><ns0:cell>Security outcomes</ns0:cell><ns0:cell>Which components do the sandboxes protect?</ns0:cell><ns0:cell>Component, Application, Application Class</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>At what point will sandboxes catch exploits?</ns0:cell><ns0:cell>Pre-exploit, Post-exploit</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>What must be done to</ns0:cell><ns0:cell>Nothing, Select Pre-made Policy,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>apply the sandboxes?</ns0:cell><ns0:cell>Write Policy, Run Tool, Install Tool</ns0:cell></ns0:row><ns0:row><ns0:cell>Effort and applicability</ns0:cell><ns0:cell>What are the requirements on sandboxed components?</ns0:cell><ns0:cell>None, Source Code, Annotated Source Code, Special Compiler, Compiler-introduced Metadata, Sandbox Framework/Library Components</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Sandbox Developer (Fixed), Sandbox</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Who defines policies?</ns0:cell><ns0:cell>User (User-defined), Application</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Developer (Application-defined)</ns0:cell></ns0:row><ns0:row><ns0:cell>Policy provenance</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>and manifestation</ns0:cell><ns0:cell>How are policies man-</ns0:cell><ns0:cell>Central Policy Repository, No Manage-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>aged?</ns0:cell><ns0:cell>ment</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>How are policies</ns0:cell><ns0:cell>Encoded in Sandbox Logic, Encoded</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>constructed?</ns0:cell><ns0:cell>in Application Logic, User Written</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>What claims are made</ns0:cell><ns0:cell>Performance, Security, Applicability</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>about sandboxes?</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Research claims</ns0:cell><ns0:cell /><ns0:cell>Proof, Analytical Analysis, Benchmark</ns0:cell></ns0:row><ns0:row><ns0:cell>and validation</ns0:cell><ns0:cell>How are claims validated?</ns0:cell><ns0:cell>Suite, Case Studies, Argumentation,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Using Public Data</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>How are 
sandboxes released for review?</ns0:cell><ns0:cell>Source Code, Binaries, Not Available</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Summary of our research questions and results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Research Question</ns0:cell><ns0:cell>Results</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 3: Breakdown of the representation of all codes for papers that emphasize fixed policies. Cases where a claim was made but not validated are labeled with an 'x'.</ns0:figDesc><ns0:note>[Figure residue removed; the panels show the proportion (0.0 to 1.0) of papers per code for: Where is policy enforced?; When is policy imposed?; What resources protected?; What components protected?; Requirements of Person; Requirements of App; Policy Type; Exploit Disruption; Policy Management; Policy Construction; Claims; Security Validation; Performance Validation; Applicability Validation; Availability.]</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The Spearman correlations and their statistical significances per validation category. Data with correlation coefficients closer to 1 have stronger correlations.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Correlation (ρ)</ns0:cell><ns0:cell>p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>Security Validation</ns0:cell><ns0:cell>-0.02</ns0:cell><ns0:cell>0.894</ns0:cell></ns0:row><ns0:row><ns0:cell>Performance Validation</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>Applicability Validation</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.105</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Claims made about sandboxes ( : Security, : Performance, and : Applicability) and their validation strategies ( : Proof, : Analytical Analysis, : Benchmarks, : Case Studies, and : Argumentation). Grayed out icons mean a claim was not made or a strategy was not used. Icons made by Freepik from www.flaticon.com.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Citation</ns0:cell><ns0:cell>Conference Claims</ns0:cell><ns0:cell>Val.</ns0:cell><ns0:cell>Val.</ns0:cell><ns0:cell>Val.</ns0:cell></ns0:row><ns0:row><ns0:cell>Other (Syscall)</ns0:cell><ns0:cell>Provos (2003)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Garfinkel et al. (2003)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diversity</ns0:cell><ns0:cell>Bhatkar et al. (2005)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (Syscall)</ns0:cell><ns0:cell>Linn et al. (2005)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Abadi et al. (2005)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (Memory)</ns0:cell><ns0:cell>Ringenburg and Grossman (2005)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MAC</ns0:cell><ns0:cell>Efstathopoulos et al. (2005)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Cox et al. (2006)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SFI</ns0:cell><ns0:cell>McCamant and Morrisett (2006)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI, SFI</ns0:cell><ns0:cell>Erlingsson et al. (2006)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (DFI)</ns0:cell><ns0:cell>Castro et al. (2006)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Reis et al. (2006)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (InfoFlow)</ns0:cell><ns0:cell>Zeldovich et al. (2006)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MI/AC</ns0:cell><ns0:cell>Li et al. (2007)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Bandhakavi et al. (2007)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Chen et al. (2007)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Petroni and Hicks (2007)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Seshadri et al. (2007)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Criswell et al. 
(2007)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Wang et al. (2007)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (InfoFlow)</ns0:cell><ns0:cell>Krohn et al. (2007)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Akritidis et al. (2008)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Payne et al. (2008)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MI/AC</ns0:cell><ns0:cell>Sun et al. (2008)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (TaintTrack)</ns0:cell><ns0:cell>Chang et al. (2008)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Oda et al. (2008)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (OS)</ns0:cell><ns0:cell>Williams et al. (2008)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SFI</ns0:cell><ns0:cell>Yee et al. (2009)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell cols='2'>Louw and Venkatakrishnan (2009) Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Parno et al. (2009)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (Memory)</ns0:cell><ns0:cell>Akritidis et al. (2009)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Wang et al. (2009)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SFI</ns0:cell><ns0:cell>Castro et al. (2009)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>McCune et al. (2010)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Meyerovich and Livshits (2010)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (Memory)</ns0:cell><ns0:cell>Akritidis (2010)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SFI</ns0:cell><ns0:cell>Sehr et al. (2010)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Louw et al. (2010)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (OS)</ns0:cell><ns0:cell>Wurster and van Oorschot (2010)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>SFI, Other (UserPolicy) Siefers et al. (2010)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Feldman et al. (2010)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MI/AC</ns0:cell><ns0:cell>Owen et al. 
(2011)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (Transactions)</ns0:cell><ns0:cell>Jana et al. (2011)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Zeng et al. (2011)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Saxena et al. (2011)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Chen et al. (2011)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Zhang et al. (2011)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SFI</ns0:cell><ns0:cell>Mao et al. (2011)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Andrus et al. (2011)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diversity</ns0:cell><ns0:cell>Pappas et al. (2012)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diversity</ns0:cell><ns0:cell>Hiser et al. (2012)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SFI</ns0:cell><ns0:cell>Payer et al. (2012)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Kemerlis et al. (2012)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diversity</ns0:cell><ns0:cell>Giuffrida et al. (2012)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MI/AC</ns0:cell><ns0:cell>Xu et al. (2012)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diversity</ns0:cell><ns0:cell>Wartell et al. (2012)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web, Other (InfoFlow)</ns0:cell><ns0:cell>De Groef et al. (2012)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Dunn et al. (2012)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web (MI/AC)</ns0:cell><ns0:cell>Giffin et al. (2012)</ns0:cell><ns0:cell>OSDI</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Zhang et al. (2013)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Zhang and Sekar (2013)</ns0:cell><ns0:cell>Usenix</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI, SFI</ns0:cell><ns0:cell>Niu and Tan (2013)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Diversity</ns0:cell><ns0:cell>Homescu et al. (2013)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Other (OS)</ns0:cell><ns0:cell>Moshchuk et al. 
(2013)</ns0:cell><ns0:cell>CCS</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Virtualization</ns0:cell><ns0:cell>Nikolaev and Back (2013)</ns0:cell><ns0:cell>SOSP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>CFI</ns0:cell><ns0:cell>Criswell et al. (2014)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web</ns0:cell><ns0:cell>Mickens (2014)</ns0:cell><ns0:cell>Oakland</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='8'>Our full coding frames are available in coding frames.pdf in our supplemental materials.</ns0:note>
<ns0:note place='foot' n='9'>A full list of quotes with code assignments is available in paper quotes.pdf in our supplemental materials.</ns0:note>
<ns0:note place='foot' n='10'>The summarized version of our dataset is available as summary of coded papers.xlsx in our supplemental materials. This spreadsheet was converted to a CSV to perform statistical and graph-based analyses.</ns0:note>
<ns0:note place='foot' n='12'>http://msrconf.org</ns0:note>
<ns0:note place='foot' n='13'>In at least one case ACM provided a copy of their digital library for scraping <ns0:ref type='bibr' target='#b10'>(Bergmark et al., 2001)</ns0:ref>.</ns0:note>
</ns0:body>
" | "PeerJ Manuscript CS-2015:09:6669
Maass, Sales, Chung, and Sunshine, A systematic analysis of the science of sandboxing
Dear Editor,
We appreciate the reviewers' comments and have submitted revised text and graphics. We believe we have adequately addressed all of the key points as well as lower level comments. We include many of these comments below with individual responses (mostly documenting our fixes) for your reference.
Please let us know if we failed to address an important point.
Thank You,
Michael Maass
Adam Sales
Benjamin Chung
Joshua Sunshine
Feedback from Editor
-- you should clarify that your work focuses on sandboxes intended to encapsulate software through the constraints imposed by a security policy. Assuming that this is actually your intent, this will mitigate a comment on this topic made by the first reviewer, while addressing a suggestion of the second reviewer.
Response: We adopted Spinellis’ suggested definition.
-- provide for better reproducibility of your results by storing your data in a more stable long term repository
Response: We have included all of the data as PeerJ supplemental files and have updated the footnotes to refer to specific files by name.
-- ensure that all references, including those in footnotes, are useable and that the paper is self-contained.
Response: Shortened links have been reverted to their original forms and our data is now included as supplemental material.
-- clarify 'usability'. The generality of the term implies a large user population, whereas the sandboxes may be used by domain experts.
Response: Clarified that we are concerned with usability from the perspective of people applying sandboxes and that appliers are typically not security experts. Provided additional relevant peer-reviewed citation.
Feedback from Spinellis
-- The 13 research questions studied and the possible answers are clearly defined in Table 3. It is however unclear how the research questions were derived. The questions seem to place considerable weight on security policies, their usability, and evaluation, while e.g. placing less focus on other possible attributes, such as portability, security, or performance. The authors allude to this bias on lines 169-173 and 554-555. Focusing on a specific area is fine, as long as the authors explicitly clarify (and maybe also justify) the focus.
Response: Added context regarding research question derivation to the end of the initial methodology section.
-- The choice of the conferences used can be also examined and justified by reference to the CORE Conference Ranking activity <http://www.core.edu.au/index.php/conference-rankings>.
Response: Added link to CORE in footnote 6 after confirming rankings.
-- Regarding QCA: The method is well described, though the reader must reference the supplementary data in order to understand its use. A few concrete examples could help the reader's understanding.
Response: Expanded this discussion with an example that also compares QCA to a related approach (to illustrate the important outcomes QCA enables).
-- It would be nice to provide the rationale for the criteria used for picking papers (lines 180-190). Theoretical constructs for sandboxes could also be explicitly listed as outside the paper's scope (line 183).
Response: Added suggested explicit exclusion and reworded ending of 3.1’s first paragraph to clarify rationale behind criteria.
-- The regression method used for adjusting citation ranks according to the publication year (line 289) should be explained.
Response: We’ve expanded this portion of the methodology section.
-- It is unclear how the authors found that performance cannot be correlated with a sandbox's adoption (lines 470-474).
Response: Re-wrote the beginning of this paragraph to clarify our intent and that we cannot even begin to find a correlation between performance and adoption because essentially none of the investigated sandboxes or their techniques have been adopted. We undertook our own search for evidence of adoption and spoke with authors at relevant conferences.
-- The provided definition of a sandbox (line 137) can be improved by stating that it is an encapsulation mechanism *that is used to impose* a security policy on software components
Response: Made this change.
Feedback from Reviewer 1
-- Nit: line 359: you cannot say that you don't look at these conferences and in the next line that you are not aware of any such paper. If you are not evaluating these conferences then you are naturally not aware of the existence of such a paper.
Response: Added clause clarifying that the paper's authors are active researchers in the named communities.
-- In Figure 2: names of authors, names of techniques, and names of mechanisms are mixed together. Can you make this figure more consistent?
Response: Within the graph itself the name in each box is the name of the sandbox if one is available or the name of the first author otherwise. We chose this strategy because the sandbox name is often more meaningful than the author's name, particularly when the author has published more than one sandbox. Prior versions of this graph used citations instead, but the graph was essentially unreadable in this form.
-- Figures 5 and 6 are very similar (yet 5 is in much worse quality).
Response: Replaced Figure 5 with a high quality vector graphic version.
-- Typo, line 312: is anywhere the application is: remove first is
Response: FIXED
-- Typo, line 328: diversity to binaries to: diversity in binaries
Response: FIXED
" | Here is a paper. Please give your review comments after reading it. |
13 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A large number of clinical concepts are categorized under standardized formats that ease the manipulation, understanding, analysis, and exchange of information. One of the most extended codifications is the International Classification of Diseases (ICD) used for characterizing diagnoses and clinical procedures. With formatted ICD concepts, a patient profile can be described through a set of standardized and sorted attributes according to the relevance or chronology of events. This structured data is fundamental to quantify the similarity between patients and detect relevant clinical characteristics. Data visualization tools allow the representation and comprehension of data patterns, usually of a high dimensional nature, where only a partial picture can be projected.</ns0:p><ns0:p>In this paper, we provide a visual analytics approach for the identification of homogeneous patient cohorts by combining custom distance metrics with a flexible dimensionality reduction technique. First we define a new metric to measure the similarity between diagnosis profiles through the concordance and relevance of events. Second we describe a variation of the STAD (Simplified Topological Abstraction of Data) dimensionality reduction technique to enhance the projection of signals preserving the global structure of data.</ns0:p><ns0:p>The MIMIC-III clinical database is used for implementing the analysis into an interactive dashboard, providing a highly expressive environment for the exploration and comparison of patients groups with at least one identical diagnostic ICD code. The combination of the distance metric and STAD not only allows the identification of patterns but also provides a new layer of information to establish additional relationships between patient cohorts. The method and tool presented here add a valuable new approach for exploring heterogeneous patient populations. In addition, the distance metric described can be applied in other domains that employ ordered lists of categorical data.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head></ns0:div>
<ns0:div><ns0:p>Patient profiling and selection are a crucial step in the setup of clinical trials. The process involves analytical methods to handle the increasing amount of healthcare data but is still extremely labor-intensive <ns0:ref type='bibr' target='#b56'>(Sahoo et al., 2014)</ns0:ref>. Nevertheless, the input from an expert in this selection is important.</ns0:p></ns0:div>
<ns0:div><ns0:p>To support the expert in the selection of suitable patients, visual analytics solutions can enable the exploration of a patient population, make recruitment consistent across studies, enhance selection accuracy, increase the number of selected participants, and significantly reduce the overall cost of the selection process <ns0:ref type='bibr' target='#b17'>(Fink et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b14'>Damen et al., 2013)</ns0:ref>. Visual analytics relies on interactive and integrated visualizations for exploratory data analysis in order to identify unexpected trends, outliers, or patterns. It can indicate relevant hypotheses that can be complemented with additional algorithms, and help define the case-based reasoning algorithms <ns0:ref type='bibr' target='#b32'>(Kolodner, 2014)</ns0:ref>, such as the detection of consistent cohorts of patients within a patient population. One of the remaining open challenges in the analysis of patient similarity is to establish relevant and practical ways of measuring it based on clinical concepts <ns0:ref type='bibr' target='#b29'>(Jia et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Many types of information about the patient profile such as diagnosis, procedures, and prescriptions are available under standardized categories contained in taxonomies or dictionaries, e.g., the International Classification of Diseases (ICD), the Medical Dictionary for Regulatory Activities (MedDRA) and the Anatomical Therapeutic Chemical (ATC) Classification System. Each patient is for example linked to an ordered list of diagnoses, which are semantic concepts that are (in the case of MIMIC <ns0:ref type='bibr' target='#b30'>(Johnson et al., 2016)</ns0:ref>) ordered from most to least important (as per the MIMIC-III documentation, 'ICD diagnoses are ordered by priority -and the order does have an impact on the reimbursement for treatment').</ns0:p><ns0:p>These standardized formats provide a non-numerical data structure facilitating both understanding and management of the data. Several methods have been proposed to define similarity between lists of clinical concepts based on the presence or absence of specific terms <ns0:ref type='bibr' target='#b21'>(Gottlieb et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b64'>Zhang et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brown 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Girardi et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rivault et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Jia et al. 2019)</ns0:ref>. However, the diagnostic profile of a patient is not merely an independent list of semantic concepts but also includes an intrinsic order, indicated by the position of the terms in the list, reflecting their relevance vis-a-vis the actual patient status. To the best of our knowledge, no previous work has combined the categorical and ordinal nature of clinical events into a single distance function. This dualism can contribute to improving the detection of cohorts through diagnostic and procedural data. This can have a significant impact as diagnoses or procedures are part of the recruitment criteria in most clinical trials <ns0:ref type='bibr' target='#b6'>(Boland et al., 2012)</ns0:ref>.</ns0:p><ns0:p>In this paper, a novel approach for exploring clinical patient data is introduced. In particular, we focus on patient profiles represented by a set of diagnosis ICD codes sorted by relevance.
The distance metric considers the sorted concepts as input, and the resulting pairwise values are projected into a dimensionality reduction graph.</ns0:p><ns0:p>The remaining part of this paper is organized as follows. In the section 'Background', we give an overview of related work on similarity measures for categorical events and graphical projections of patient similarity. The section 'Materials and Methods' describes the proposed distance metric and the modifications applied to the base STAD algorithm for visualizing a patient population. In 'Results', we demonstrate the effectiveness of the approach on a real-world dataset. The section 'Discussion' compares other methods and alternative metrics for similar data. Finally, the section 'Conclusion' presents conclusions and possible directions for future work.</ns0:p></ns0:div>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>The exploration and analysis of patients through similarity measures has been presented in different areas of bioinformatics and biomedicine, and also data mining and information visualization. In this section, we review the related literature on these areas below, and we focus on the notion of similarity measures for categorical events and graphical representation of patient similarity.</ns0:p></ns0:div>
<ns0:div><ns0:head>Patient similarity and distance measures for categorical events</ns0:head><ns0:p>Different distance metrics exist for unordered lists of categorical data, including the overlap coefficient <ns0:ref type='bibr' target='#b62'>(Vijaymeena and Kavitha, 2016)</ns0:ref>, the Jaccard index <ns0:ref type='bibr' target='#b52'>(Real and Vargas, 1996)</ns0:ref>, and the simple matching coefficient <ns0:ref type='bibr'>( Šulc and Řezanková, 2014)</ns0:ref>. These methods compute the number of matched attributes between two lists using different criteria. Although they treat each entry in the list as independent of the others, they have been used successfully to measure patient similarity to support clinical decision making and have demonstrated their effectiveness in exploratory and predictive analytics <ns0:ref type='bibr' target='#b64'>(Zhang et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b34'>Lee et al. 2015)</ns0:ref>. Similarly, different ways of computing distances between ordered lists are available <ns0:ref type='bibr' target='#b61'>(Van Dongen and Enright, 2012)</ns0:ref>. The Spearman's rank coefficient <ns0:ref type='bibr' target='#b11'>(Corder and Foreman, 2014</ns0:ref>) is useful for both numerical and categorical data and has been used in clinical studies <ns0:ref type='bibr' target='#b41'>(Mukaka, 2012)</ns0:ref>. However, correlation between ordered lists cannot be calculated when the lists are of different lengths <ns0:ref type='bibr' target='#b50'>(Pereira et al., 2009)</ns0:ref>.</ns0:p><ns0:p>In the context of medical diagnoses, the ICD (International Classification of Diseases) codes have been widely used for describing patient similarity. However, these typically consider the hierarchical structure of the ICD codes. <ns0:ref type='bibr' target='#b21'>Gottlieb et al. (2013)</ns0:ref>, for example, proposed a method combining the Jaccard score of two lists with the nearest common ancestor in the ICD hierarchy. The similarity measure for the ICD ontology was previously presented in <ns0:ref type='bibr' target='#b51'>Popescu and Khalilia (2011)</ns0:ref>. Each term is assigned to a weight Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>based on its importance within the hierarchy, which was defined as 1 − 1/n where n corresponded to its level in the hierarchy.</ns0:p><ns0:p>In our work, however, we will not leverage the hierarchical structure of the ICD codes, but employ the ICD grouping as described by Healthcare Cost and Utilization Project (2019). Our approach takes the position of the term in the list of diagnoses into account, which is a proxy to their relevance for the patient status. The metric assigns a higher weight to terms located in first positions than the last ones.</ns0:p><ns0:p>Alternative approaches such as those by <ns0:ref type='bibr' target='#b33'>Le and Ho (2005)</ns0:ref> and <ns0:ref type='bibr' target='#b1'>Ahmad and Dey (2007)</ns0:ref> consider the similarity between two attributes as the shared relationship with the other elements in the sample, i.e., two elements are similar if they appear with a common set of attributes. From a different perspective, the latent concept of these metrics is also present in the identification of comorbidity diseases <ns0:ref type='bibr' target='#b40'>(Moni et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b54'>Ronzano et al. 2019</ns0:ref>) although these studies aim to find heterogeneous types of diseases rather than different profiles of patients. The main drawback of metrics based on co-occurrence is the assumption of an intrinsic dependency between attributes without considering their relevance. The work presented by <ns0:ref type='bibr' target='#b26'>Ienco et al. (2012)</ns0:ref> and <ns0:ref type='bibr' target='#b28'>Jia et al. (2015)</ns0:ref> use the notion of context which identifies the set of relevant categories to a defined attribute. The similarity measure in <ns0:ref type='bibr' target='#b28'>Jia et al. (2015)</ns0:ref> is determined by the correlation of their context attributes.</ns0:p></ns0:div>
<ns0:div><ns0:head>Graphical projections of patient similarity</ns0:head><ns0:p>Visually representing pairwise distance matrices remains a challenge. Most often, dimensionality reduction techniques are used to bring the number of dimensions down to two so that the data can be represented in a scatterplot <ns0:ref type='bibr' target='#b43'>(Nguyen et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b20'>Girardi et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b60'>Urpa and Anders 2019)</ns0:ref>. Such scatterplots can not only indicate clusters and outliers, but are also very useful for assessing sample quality. In the case of patient data, each point in such plot represents a patient, and relative positions between them in the 2D plane correspond to the distance between them in the original higher dimensional space. Multidimensional scaling (MDS) is arguably one of the most commonly used dimensionality reduction methods <ns0:ref type='bibr' target='#b42'>(Mukherjee et al., 2018)</ns0:ref>. It arranges points on two or three dimensions by minimizing the discrepancy between the original distance space and the distance in the two-dimensional space. Derived MDS methods have been presented, proposing modified versions of the minimization function but conserving the initial aim <ns0:ref type='bibr' target='#b55'>(Saeed et al., 2018)</ns0:ref>. Besides MDS, recent methods have been proposed to highlight the local structure of the different patterns in high-dimensional data. For example, t-distributed stochastic neighbor embedding (t-SNE) <ns0:ref type='bibr' target='#b37'>(Maaten and Hinton, 2008)</ns0:ref> and uniform manifold approximation (UMAP) <ns0:ref type='bibr' target='#b38'>(McInnes et al., 2018)</ns0:ref> have been used in many publications on heterogeneous patient data <ns0:ref type='bibr' target='#b0'>(Abdelmoula et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b58'>Simoni et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Becht et al. 2019)</ns0:ref>. Unlike MDS, t-SNE projects the conditional probability instead of the distances between points by centering a normalized Gaussian distribution for each point based on a predefined number of nearest neighbors. This approach generates robustness in the projection, which allows the preservation of local structure in the data. In a similar fashion, UMAP aims to detect the local clusters but at the same time generates a better intuition of the global structure of data.</ns0:p><ns0:p>In addition to scatterplot representations, alternative visual solutions are also possible, for example heatmaps <ns0:ref type='bibr' target='#b3'>(Baker and Porollo, 2018)</ns0:ref>, treemaps <ns0:ref type='bibr' target='#b65'>(Zillner et al., 2008)</ns0:ref>, and networks. The latter are often built using a combination of dimensionality reduction and topological methods <ns0:ref type='bibr' target='#b35'>(Li et al. 2015;</ns0:ref><ns0:ref type='bibr'>Nielson et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b12'>Dagliati et al. 2019)</ns0:ref>. This approach has for example been used with success to visually validate the automated patient classification in analytical pipelines <ns0:ref type='bibr' target='#b48'>(Pai and Bader 2018;</ns0:ref><ns0:ref type='bibr' target='#b49'>Pai et al. 
2019)</ns0:ref>.</ns0:p><ns0:p>In general, the created network encodes the distance between two datapoints in high-dimensional space into an edge between them, and the full dataset can therefore be represented as a fully connected graph.</ns0:p><ns0:p>The STAD method (Alcaide and Aerts, 2020) reduces the number of edges, allowing a more scalable visualization of distances. The original distance in high-dimensional space between two datapoints corresponds to the path length between them in the resulting graph. The main advantage of networks for displaying high-dimensional data is that users can perceive patterns not only through the location of points but also through the connections between elements, thereby increasing trust in the data signals.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIAL AND METHODS</ns0:head><ns0:p>The International Classification of Diseases (ICD) is a diagnosis and procedure coding system used by hospitals to bill for care provided. They are further used by health researchers in the study of electronic medical records (EMR) due to the ease of eliciting clinical information regarding patient status. Although these administrative databases were not designed for research purposes, their efficiency compared to the manual review of records and the demonstrated reliability of the extracted information have democratized the analysis of health data in this way <ns0:ref type='bibr' target='#b24'>(Humphries et al., 2000)</ns0:ref>. Even though ICD codification is hierarchically organized, some concepts in the database may be under-reported <ns0:ref type='bibr' target='#b9'>(Campbell et al., 2011)</ns0:ref>. To make analysis feasible, the ICD codes are in practice often grouped in higher categories to reduce noise and facilitate the comparison and analysis with automatic systems <ns0:ref type='bibr' target='#b10'>(Choi et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b39'>Miotto et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Baumel et al. 2018)</ns0:ref>.</ns0:p><ns0:p>In our approach, we adopt the ICD generalization introduced by the Clinical Classifications Software (CCS), which groups diseases and procedures into clinically meaningful sections (Healthcare Cost and Utilization Project, 2019). Here we introduce a method to compare unequal sets of ordered lists of categories and explore the different cohorts of patients through visual representations of data. This approach employs a custom distance metric, presented in the section 'Diagnosis similarity and distances', within the visual analytics method presented in the section 'Simplified Topological Abstraction of Data'.</ns0:p></ns0:div>
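As a concrete illustration of this grouping step, the following minimal sketch (not the authors' code) collapses ICD-9 codes into CCS categories through a crosswalk table; the file name and column names are hypothetical placeholders for the single-level CCS mapping distributed by HCUP.

```python
import csv

def load_ccs_map(path="ccs_single_level_dx.csv"):
    """Read a hypothetical ICD-9 to CCS crosswalk with columns 'icd9' and 'ccs_category'."""
    with open(path, newline="") as f:
        return {row["icd9"].strip(): row["ccs_category"].strip() for row in csv.DictReader(f)}

def group_profile(icd_codes, ccs_map):
    """Map an ordered list of ICD-9 codes to CCS categories, preserving the order.
    Codes missing from the crosswalk are kept as-is; how repeated categories are
    handled downstream is left to the similarity metric."""
    return [ccs_map.get(code, code) for code in icd_codes]
```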
<ns0:div><ns0:head>Diagnosis similarity and distances</ns0:head><ns0:p>In the MIMIC dataset which was used for this work <ns0:ref type='bibr' target='#b30'>(Johnson et al., 2016)</ns0:ref>, each patient's diagnosis is a list of ICD codes, as exemplified in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The average number of concepts per profile in the MIMIC III dataset is 13 with a standard deviation of 5. Diagnoses are sorted by relevance for the patient status. This order determines the reimbursement for treatment, and, from an analysis perspective, can help us to distinguish similar medical profiles even with different initial causes. The similarity between two patients (diagnosis profiles) A and B is based on which diagnoses (i.e. ICD9 codes) are present in both, as well as the position of these elements in the list. Consider a match M between two concepts c_A and c_B, which contributes to the similarity according to the following formula:</ns0:p><ns0:formula xml:id='formula_0'>M_C(A, B) = \ln\left(1 + \frac{1}{\max(\mathrm{position}(c_A),\ \mathrm{position}(c_B))}\right)</ns0:formula><ns0:p>The position mentioned in the formula corresponds to the positional index in the list. As an example, the individual contribution of the concept 'Sepsis' for patients A and B in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> is</ns0:p><ns0:formula xml:id='formula_1'>M_{Sepsis} = \ln\left(1 + \frac{1}{\max(2, 4)}\right) = \ln(1.25) \approx 0.22</ns0:formula><ns0:p>Distance measures in categorical variables are built based on a binary statement of zero or one. Unlike other data types, categorical data generate a bimodal distribution, which can be considered as approximately normal when the element contains multiple dimensions <ns0:ref type='bibr' target='#b57'>(Schork and Zapala, 2012)</ns0:ref>. The diagnosis similarity metric not only depends on the matching of elements but also on their positions in the list. These two conditions tend to generate left-skewed distance distributions, as shown in Figure <ns0:ref type='figure' target='#fig_1'>1A</ns0:ref>. In other words, most patients are very different from other patients.</ns0:p></ns0:div>
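To make the computation concrete, here is a minimal Python sketch of the matching contribution defined above (an illustration, not the authors' implementation); the example profiles are hypothetical, and the final conversion of the similarity into a distance is an assumption rather than the paper's documented choice.

```python
import math

def diagnosis_similarity(profile_a, profile_b):
    """Similarity between two ordered lists of (grouped) ICD-9 codes.
    Each shared code contributes ln(1 + 1/max(pos_a, pos_b)), with 1-based positions."""
    pos_a = {code: i + 1 for i, code in enumerate(profile_a)}
    pos_b = {code: i + 1 for i, code in enumerate(profile_b)}
    shared = set(pos_a) & set(pos_b)
    return sum(math.log(1 + 1 / max(pos_a[c], pos_b[c])) for c in shared)

# Sepsis (99591) sits at position 2 in A and position 4 in B,
# so it contributes ln(1 + 1/4), roughly 0.22, as in the worked example above.
patient_a = ["0389", "99591", "5849", "4019"]            # hypothetical profiles
patient_b = ["486", "51881", "4019", "99591", "2859"]
similarity = diagnosis_similarity(patient_a, patient_b)
distance = 1 / (1 + similarity)                          # assumed similarity-to-distance conversion
```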
<ns0:div><ns0:head>Simplified Topological Abstraction of Data</ns0:head><ns0:p>Simplified Topological Abstraction of Data (STAD) (Alcaide and Aerts, 2020) is a dimensionality reduction method which projects the structure of a distance matrix D X into a graph U. This method converts datapoints in multi-dimensional space into an unweighted graph in which nearby points in input space are mapped to neighboring vertices in graph space. This is achieved by maximizing the Pearson correlation between the original distance matrix and a distance matrix based on the shortest paths between any two nodes in the graph (which is the objective function to be optimized). STAD projections of multi-dimensional data allow the extraction of complex patterns. The input for a STAD transformation consists of a distance matrix of the original data, which in this case is based on the metric as defined in the previous section.</ns0:p><ns0:p>As mentioned above, high dissimilarity between datapoints (i.e. patients) results in a left-skewed distance distribution. Unfortunately, this skew poses a problem for STAD analysis. As mentioned above, the STAD method visualizes the distances between elements by means of the path length between nodes.</ns0:p><ns0:p>Hence, to represent a big distance between two elements, STAD needs to use a set of intermediate connections that help to describe a long path. In case no intermediate nodes can be found, the algorithm forces a direct connection between the two nodes. As a result, in a left-skewed distribution, STAD tends to generate networks with an excessively high number of links, even when high correlation can be achieved as shown in Figure <ns0:ref type='figure' target='#fig_1'>1B and D</ns0:ref>. This means that the principle that nodes that are closely linked are also close in the original space (i.e. are similar) does not hold anymore <ns0:ref type='bibr' target='#b31'>(Koffka, 2013)</ns0:ref>.</ns0:p><ns0:p>Therefore, we propose a modification of the STAD algorithm, named STAD-R (where the R stands for 'Ratio'), which avoids the problem on datasets of dissimilar items through the use of a modified objective function. To reduce the number of links between dissimilar datapoints we alter the STAD method to Manuscript to be reviewed , in which the sum of d network edge refers to the sum of distances of edges included in the network (see Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). Note that edges represent the distance between two elements of the dataset and constitute a cell in the pairwise distance matrix.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>This ratio R is added to the objective function of the algorithm, which maximizes the correlation ρ between the distance matrices D X (of the input dataset) and D U (based on shortest path distances in the graph). When including the ratio R, the objective function in STAD-R is not only a maximization problem based on the Pearson correlation but also a maximization of ratio R. Table <ns0:ref type='table'>2</ns0:ref> shows the difference between STAD and STAD-R.</ns0:p><ns0:p>The ratio R is the sum of those distances of datapoints in D X that are directly connected in network U.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> provides an intuition of the creation of a STAD-R network during different iterations.</ns0:p><ns0:p>The result of STAD-R over STAD is presented in Figure <ns0:ref type='figure' target='#fig_1'>1E</ns0:ref>. 
The network has a considerably lower number of links (Figure <ns0:ref type='figure' target='#fig_1'>1C</ns0:ref>), and patterns in the data are much more apparent.</ns0:p><ns0:p>The STAD-R algorithm generates networks with a considerably lower number of links compared to the correlation-based version. The ratio R restricts the inclusion of dissimilarities and, therefore, the number of edges in the network. This new constraint also alters the number of edges in networks generated from other distribution types, e.g., right-skewed or normal. Nevertheless, the general 'shape' of the resulting network remains the same. An example is presented in Figure <ns0:ref type='figure' target='#fig_5'>3A</ns0:ref>, where the distance distribution leads to networks with different numbers of edges for STAD and STAD-R, respectively.</ns0:p><ns0:p>However, the structure is still preserved in both networks (Figure <ns0:ref type='figure' target='#fig_5'>3D and E</ns0:ref>).</ns0:p></ns0:div>
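The construction loop described above and illustrated in Figure 2 can be sketched as follows (assuming NetworkX and SciPy): start from the minimum spanning tree, add the remaining edges in order of increasing distance, and score each candidate graph by the Pearson correlation between the input distances and the hop-count distances, together with the sum of included edge distances that enters the ratio R. The exact combination of these two quantities into the STAD-R objective is given in the paper's Table 2 and is not reproduced here.

```python
import itertools
import networkx as nx
import numpy as np
from scipy.stats import pearsonr

def stad_r_candidates(D, max_extra_edges=100):
    """For a dense distance matrix D (n x n), evaluate candidate networks built from
    the minimum spanning tree plus the k shortest remaining edges. Returns (k, rho, R)
    triples: rho is the Pearson correlation between D and the unweighted shortest-path
    distances of the candidate graph, R is the sum of distances of included edges."""
    n = D.shape[0]
    pairs = list(itertools.combinations(range(n), 2))
    full = nx.Graph()
    full.add_weighted_edges_from((i, j, D[i, j]) for i, j in pairs)

    G = nx.minimum_spanning_tree(full)                        # initial candidate (Figure 2C)
    remaining = sorted((D[i, j], i, j) for i, j in pairs if not G.has_edge(i, j))
    original = np.array([D[i, j] for i, j in pairs])

    results = []
    for k in range(min(max_extra_edges, len(remaining)) + 1):
        if k > 0:                                             # add next-shortest edge (Figure 2D)
            d, i, j = remaining[k - 1]
            G.add_edge(i, j, weight=d)
        lengths = dict(nx.all_pairs_shortest_path_length(G))  # hop counts in the abstract graph
        projected = np.array([lengths[i][j] for i, j in pairs])
        rho, _ = pearsonr(original, projected)
        R = sum(data["weight"] for _, _, data in G.edges(data=True))
        results.append((k, rho, R))
    return results
```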
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We applied this approach to the MIMIC-III database <ns0:ref type='bibr' target='#b30'>(Johnson et al., 2016)</ns0:ref>, which is a publicly available dataset developed by the MIT Lab for Computational Physiology, containing anonymized health data from intensive care unit admissions between 2008 and 2014. The MIMIC-III dataset includes the diagnosis profiles of 58,925 patients. Their diagnoses are described using the ICD-9 codification and sorted according to their relevance to the patient. To reduce the number of distinct terms in the list of diagnoses, ICD codes were first grouped as described in the ICD guidelines of the Healthcare Cost and Utilization Project (2019). The proof-of-principle interface as well as the underlying code can be found at http://vda-lab.be/mimic.html.</ns0:p><ns0:p>The interface is composed of two main parts: an overview node-link network visualization including all patients (Figure <ns0:ref type='figure' target='#fig_7'>4A</ns0:ref>), and a more detailed view of selected profile groups (Figure <ns0:ref type='figure' target='#fig_7'>4B</ns0:ref>). Networks for each ICD code are precomputed: for each ICD-9 code the relevant patient subpopulations were extracted from the data, and the diagnosis distances and the resulting graph were computed using STAD-R. When the user selects an ICD-9 code from the interface (in this case code 2910; alcohol withdrawal delirium), the corresponding precomputed network is displayed.</ns0:p><ns0:p>The output of Louvain community detection <ns0:ref type='bibr' target='#b15'>(De Meo et al., 2011)</ns0:ref> is added as post-hoc annotation to facilitate the selection and exploration of the most evident patterns. The Louvain algorithm defines clusters by measuring the density of links inside the group compared to the links between them, which is close to the user interpretation of networks. However, the interpretation of a STAD-R network is not limited to discrete clusters. It aims to represent all relationships between points, including other types of patterns, such as trends or loops. The user can subsequently select either a cluster in this visualisation or individual patients, which will then trigger the display of a barchart which gives more information for that particular cluster (Figure <ns0:ref type='figure' target='#fig_7'>4B</ns0:ref>). This stacked barchart shows how different ICD codes are spread across the selected group of patients.</ns0:p></ns0:div>
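The precomputation and annotation steps can be sketched as follows (a simplified illustration, not the released code): the file name is a placeholder for the MIMIC-III DIAGNOSES_ICD table, diagnosis_similarity refers to the earlier sketch, CCS grouping is omitted, and the Louvain call assumes NetworkX 2.8 or later.

```python
import numpy as np
import pandas as pd
import networkx as nx

# MIMIC-III stores one row per (admission, diagnosis) with a priority rank (SEQ_NUM);
# the CSV path is a placeholder.
diagnoses = pd.read_csv("DIAGNOSES_ICD.csv")

def profiles_with_code(df, code):
    """Ordered diagnosis lists of every patient having the given ICD-9 code."""
    patients = df.loc[df["ICD9_CODE"] == code, "SUBJECT_ID"].unique()
    subset = df[df["SUBJECT_ID"].isin(patients)].sort_values(["SUBJECT_ID", "SEQ_NUM"])
    return subset.groupby("SUBJECT_ID")["ICD9_CODE"].apply(list).to_dict()

profiles = profiles_with_code(diagnoses, "2910")     # alcohol withdrawal delirium
ids = sorted(profiles)
D = np.zeros((len(ids), len(ids)))
for a in range(len(ids)):
    for b in range(a + 1, len(ids)):
        s = diagnosis_similarity(profiles[ids[a]], profiles[ids[b]])  # from the earlier sketch
        D[a, b] = D[b, a] = 1.0 / (1.0 + s)          # assumed similarity-to-distance conversion

# `network` stands for the STAD-R graph built from D (see the previous sketch);
# Louvain communities are attached as a node attribute for use in the interface.
network = nx.karate_club_graph()                     # stand-in graph so the annotation step runs
for label, members in enumerate(nx.community.louvain_communities(network, seed=42)):
    for node in members:
        network.nodes[node]["community"] = label
```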
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The definition of a custom similarity metric together with a flexible dimensionality reduction technique constitute the key elements of our approach. In this section, we evaluate the benefits of STAD to detect patterns in diagnostic data compared to other popular methods and further discuss the application of the presented distance metric in a different but similar context.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparing STAD to other dimensionality reduction methods</ns0:head><ns0:p>The projection of distances in STAD-R aims to enhance the representation of similarities using networks.</ns0:p><ns0:p>Similar groups of patients tend to be inter-connected, which are perceived as a homogeneous cohort. The outputs of three popular algorithms (MDS, t-SNE, and UMAP) are compared with STAD-R in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>.</ns0:p><ns0:p>The population used in this example is the collection of MIMIC-III patients with alcohol withdrawal delirium (ICD-9 291.0), which was also used for Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. The MDS projection endeavors to approximate all distances in data by defining the two most informative dimensions. Dimensionality methods such as t-SNE and UMAP favor the detection of local structures over the global, although UMAP also retains part of the general relations. Conversely, the abstract graph produced by STAD-R must still be embedded to be visualized, and the selection of the layout may produce slightly different results. Unlike scatterplots, node-link representations provide a more flexible platform for exploring data, especially when node positions can be readjusted according to the analyst and data needs <ns0:ref type='bibr' target='#b23'>(Henry et al., 2007)</ns0:ref>.</ns0:p><ns0:p>In the four plots of Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, the same points were highlighted to ease the comparison between them.</ns0:p><ns0:p>These groups correspond to three communities identified by the Louvain method in the interface. For Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>instance, community 1 and 3 correspond to the patients analyzed in section 'Results'. Community 1 were patients diagnosed with alcohol withdrawal delirium as the primary diagnosis (Group A in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>); community 3 are patients with fractures of bones as the primary diagnosis (Group B in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>); community 2 are patients with intracranial injuries such as concussions. Despite the simple comparison presented, further analysis between these groups confirmed qualitative differences between profiles and a closer similarity between communities 2 and 3 than 1. The initial causes of communities 2 and 3 are associated with injuries while the primary diagnosis of patients in community 1 is the delirium itself.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, we can see that communities that are defined in the network (Figure <ns0:ref type='figure' target='#fig_8'>5A</ns0:ref>) are relatively well preserved in t-SNE (Figure <ns0:ref type='figure' target='#fig_8'>5C</ns0:ref>) but less so in MDS (Figure <ns0:ref type='figure' target='#fig_8'>5B</ns0:ref>). However, t-SNE does not take the global structure into account which is apparent from the fact that communities 2 and 3 are very far apart in t-SNE but actually are quite similar (STAD-R and MDS). UMAP (Figure <ns0:ref type='figure' target='#fig_8'>5D</ns0:ref>) improves on the t-SNE output and results in a view similar to MDS.</ns0:p><ns0:p>Although the interpretation of these visualizations is difficult to assess, quality metrics may help quantify the previous intuitions. Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> presents the quantitative measures for global distance and local distance preservation of projections in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>. Global distance preservation was measured using the Spearman rank correlation (ρ Sp ). It compares the distances for every pair of points between the original data space and the two-dimensional projection <ns0:ref type='bibr' target='#b63'>(Zar, 2005)</ns0:ref>. Local distance preservations were measured by the proportion of neighbors identified in the projection. This metric quantifies how many of the neighbors in the original space are neighbors in the projection <ns0:ref type='bibr' target='#b16'>(Espadoto et al., 2019)</ns0:ref>. We evaluated this metric using a neighborhood of fourteen neighbors, which is the average cluster size in the MIMIC-III dataset using Louvain community detection (14 − nn).</ns0:p><ns0:p>The richness of the node-link diagram representation of STAD-R cannot be captured using node position in the 2D plane alone. Therefore, STAD-R is analyzed from two perspective. First, the abstract graph as generated by STAD-R (STAD-R graph) and, second, the two-dimensional projection after graph drawing (STAD-R layout). The abstract graph only considers the connections between nodes to determine the distances between them, whereas the graph drawing results only consider the node placement in the 2D plane.</ns0:p><ns0:p>Based on the values from The selection of fourteen neighbors corresponds to the average cluster size in the MIMIC-III dataset using Louvain community detection. 
Column 'STAD-R graph' represents the abstract graph and column 'STAD-R layout' represents the node placement generated by a ForceAtlas2 layout <ns0:ref type='bibr' target='#b27'>(Jacomy et al., 2014)</ns0:ref> which is the layout implemented in the interface.</ns0:p></ns0:div>
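One way to compute the two quality measures reported in Table 3 for any projection is sketched below (not the authors' evaluation code); for the 'STAD-R graph' column, the matrix of shortest-path lengths between nodes takes the place of the low-dimensional distances.

```python
import numpy as np
from scipy.stats import spearmanr

def global_preservation(D_high, D_low):
    """Spearman rank correlation between all pairwise distances in the original
    space and in the projection (closer to 1 means better global preservation)."""
    iu = np.triu_indices_from(D_high, k=1)
    rho, _ = spearmanr(D_high[iu], D_low[iu])
    return rho

def neighborhood_preservation(D_high, D_low, k=14):
    """Average fraction of the k nearest neighbours shared between the original
    space and the projection; k = 14 matches the mean Louvain cluster size."""
    def knn(D):
        return np.argsort(D, axis=1)[:, 1:k + 1]   # column 0 is the point itself (distance 0)
    nn_high, nn_low = knn(D_high), knn(D_low)
    shared = [len(set(a) & set(b)) / k for a, b in zip(nn_high, nn_low)]
    return float(np.mean(shared))

# D_high: diagnosis distance matrix. D_low: pairwise distances of the 2D embedding
# (e.g. scipy.spatial.distance.squareform(pdist(coords)) for MDS, t-SNE, UMAP or a
# graph layout), or the shortest-path length matrix for the abstract STAD-R graph.
```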
<ns0:div><ns0:head>Similarity measures for ICD procedures</ns0:head><ns0:p>The diagnosis similarity described in the section 'Diagnosis similarity and distances' is designed for assessing distance between diagnosis profiles, but the principles presented here can be generalized to other terminologies. For example, the procedures which patients receive during a hospital stay are also recorded and also follow an ICD codification: they also contain a list of categories similar to diagnoses. Unlike ICD diagnosis lists, which encode priority, the order of procedure code lists indicates the sequence in which the procedures were performed. Thus the weight distribution used for the diagnosis metric must be adapted to the nature of the procedure data. Therefore, we can alter the formula to include the relative distance between the positions of matched elements instead of the top position used in the diagnosis case. Formally, the similarity between two procedure concepts can be described as follows:</ns0:p><ns0:formula xml:id='formula_2'>M_C(A, B) = \ln\left(1 + \frac{1}{|\mathrm{position}(c_A) - \mathrm{position}(c_B)| + 1}\right)</ns0:formula><ns0:p>As with diagnosis similarity, the metric is estimated as the sum of the individual contributions of matched concepts, S(X,Y) = \sum_{i=1}^{n} M_i(X \cap Y).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> shows a STAD network generated using this adapted similarity for procedures. This example illustrates the population of patients with partial hip replacement (ICD 9: 81.52) in the MIMIC-III population. We can identify three clusters which describe three types of patients: group A are patients with the largest list of activities and are often characterized by venous catheterization and mechanical ventilation; patients in group B are mainly patients with a single procedure of partial hip replacement; patients in group C are characterized by the removal of an implanted device and a blood transfusion (data not shown).</ns0:p></ns0:div>
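A minimal sketch of this adapted contribution (again an illustration rather than the authors' implementation) only changes the per-match weight used in the diagnosis version:

```python
import math

def procedure_similarity(procs_a, procs_b):
    """Similarity between two ordered lists of ICD procedure codes: each shared code
    contributes ln(1 + 1/(|pos_a - pos_b| + 1)), so procedures performed at similar
    points in the sequence weigh more."""
    pos_a = {c: i + 1 for i, c in enumerate(procs_a)}
    pos_b = {c: i + 1 for i, c in enumerate(procs_b)}
    return sum(
        math.log(1 + 1 / (abs(pos_a[c] - pos_b[c]) + 1))
        for c in set(pos_a) & set(pos_b)
    )
```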
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper, we introduced a custom distance metric for lists of diagnoses and procedures, as well as an extension to STAD that improves its effectiveness for highly dissimilar datapoints. The diagnosis similarity measure can be applied to any ordered list of categories in a manner that is not possible with the measures available in the literature so far.</ns0:p><ns0:p>The metric is designed to identify differences between patients through standardized concepts (diagnoses and procedures), where the weights of matching concepts are adapted to highlight the most relevant terms.</ns0:p><ns0:p>As mentioned in <ns0:ref type='bibr' target='#b7'>Boriah et al. (2008)</ns0:ref>, selecting a similarity measure must be based on an understanding of how it handles different data characteristics. The projection of data using STAD-R allows both the detection of local structures and the representation of the global data structure. While no dimensionality reduction output from a high-dimensional dataset can completely project all relationships in the data, the connection of nodes in the graph allows a granular selection and exploration of cohorts. Furthermore, the embedding of the network into an interactive dashboard provides a level of convenience that supports interpretation of the analysis results of the network.</ns0:p><ns0:p>Moreover, as discussed previously, STAD-R can reveal data signals at multiple levels equivalent to those of other dimensionality reduction methods. Quantitative and qualitative (user) evaluation of the method can be further extended with other datasets to assess both the information captured by the graph and the benefits of node-link diagrams to represent the similarity between datapoints. Following this direction,</ns0:p></ns0:div>
<ns0:div><ns0:p>we plan to further explore STAD-R in collaboration with domain experts in diverse case studies. In addition, we plan to build a more robust interface that allows the computation and exploration of STAD-R networks in a friendlier environment.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Distance distributions of a population of patients with sepsis, STAD, and STAD-R projections. The dataset is composed of a selection of 1,271 patients from MIMIC-III diagnosed with sepsis (ICD-9: 99591). Predefined conditions cause more homogeneous populations that mitigate the skewness of the diagnosis similarity distribution. (A) Distribution of diagnosis distance. (B) Correlation between original distance matrix and distance matrix based on STAD graph, given different numbers of edges. (C) Idem as (B) using STAD-R. (D) STAD network. (E) STAD-R network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:09:52883:1:1:NEW 21 Nov 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Creation of the STAD-R network for different iterations. (A) Distance matrix D X : Pairwise distances between all elements in a point cloud are calculated using a defined distance metric. (B) Distance list: Transformation of the matrix into a edges list. Edges are sorted by their distance. Smaller distances are first candidates to become part of the network U. (C) The Minimum spanning tree connects all nodes with minimum distance. It guarantees that a path exists between all nodes and becomes the initial iteration in the evaluation of the optimal STAD network (D) The addition of edges over the MST may improve the correlation between the two distance matrices. Edges are added in sequential order following the list in B. (E) The optimal network is found at the iteration with the maximum combination of correlation between D X and D U and the ratio R.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.56,315.69' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>, showing a right-skewed distancemax ρ(D X , D U ) max ρ(D X , D U )R = max ρ ∑ 1−d network edges ∑ 1+d network edges Table 2.Objective function in STAD and STAD-R. The correlation ρ is computed between the original distance matrix D X and the distance matrix derived from the shortest path graph in D U . The ratio R is calculated from the network at each iteration considering the edges included in the network. Note that distance d network edge are normalized values between zero and one.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distance distributions of traffic activity, STAD, and STAD-R projections. The dataset contains the traffic activity in the city of Barcelona from October 2017 until November 2018. The dataset was presented and analyzed in (Alcaide and Aerts, 2020). (A) Distribution of diagnosis distance. (B) Correlation between original distance matrix and distance matrix based on STAD graph, given different numbers of edges. (C) Idem as (B) using STAD-R. (D) STAD network. (E) STAD-R network.</ns0:figDesc><ns0:graphic coords='8,141.73,165.31,413.54,148.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:09:52883:1:1:NEW 21 Nov 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The interface to explore the diagnosis profiles in the MIMIC-III database. (A) Network visualization of those patients who have alcohol withdrawal delirium as one of their diagnoses. The network is visualized using a force-directed layout. Node colors are assigned automatically following Louvain community detection. (B) Bar-charts to compare the diagnosis profiles of selected groups in the network. Color corresponds to ICD category. In this example Group A contains patients with alcohol withdrawal delirium as the primary diagnosis; in contrast, Group B lists closed fractures as the most relevant diagnosis, and alcohol withdrawal delirium is only in the 2nd to 8th position.</ns0:figDesc><ns0:graphic coords='9,193.43,63.79,310.17,483.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of STAD-R, MDS, t-SNE and UMAP using the population of patients with patients with alcohol withdrawal delirium (ICD-9 291.0). The three communities were determined by the Louvain algorithm. Community 1 are patients diagnosed with alcohol withdrawal delirium in the first positions of the list. Community 2 were patients with intracranial injuries as concussions. Community 3 are patients with fractures of bones as the primary diagnosis.</ns0:figDesc><ns0:graphic coords='10,193.43,63.80,310.15,329.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:09:52883:1:1:NEW 21 Nov 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The population of patients who received a partial hip replacement (ICD 9: 81.52). The network was computed using STAD-R, and distances were estimated using an adapted version of diagnosis similarity for procedures. Color is based on Louvain community detection.</ns0:figDesc><ns0:graphic coords='12,245.13,63.78,206.79,206.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:09:52883:1:1:NEW 21 Nov 2020)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The similarity metric presented in this work takes this duality into account and provides support for comparing profiles with an unequal length of elements. Objective function in STAD and STAD-R. The correlation ρ is computed between the original distance matrix D X and the distance matrix derived from the shortest path graph in D U . The ratio R is calculated from the network at each iteration considering the edges included in the network. Note that distance d network edge are normalized values between zero and one.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Patient A (115057)</ns0:cell><ns0:cell cols='3'>Patient B (117154)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ICD section Label (ICD9)</ns0:cell><ns0:cell /><ns0:cell cols='2'>ICD section Label (ICD9)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>996-999.</ns0:cell><ns0:cell>Infection and inflammatory</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>430-438.</ns0:cell><ns0:cell>Unspecified intracranial</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>reaction due to other vas-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>hemorrhage (4329)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>cular device, implant, and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>graft (99662)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>990-995.</ns0:cell><ns0:cell>Sepsis (99591)</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>430-438.</ns0:cell><ns0:cell>Cerebral artery occlusion,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>unspecified with cerebral</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>infarction (43491)</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>590-599.</ns0:cell><ns0:cell>Urinary tract infection,</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>996-999.</ns0:cell><ns0:cell>Iatrogenic cerebrovascular</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>site not specified (5990)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>infarction or hemorrhage</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(99702)</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>401-405.</ns0:cell><ns0:cell>Unspecified essential hy-</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>990-995.</ns0:cell><ns0:cell>Sepsis (99591)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>pertension (4019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>590-599.</ns0:cell><ns0:cell>Urinary tract infection,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>site not specified (5990)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>401-405.</ns0:cell><ns0:cell>Unspecified essential hy-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>pertension (4019)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='7'>Global/local focus Measure MDS t-SNE UMAP STAD-R graph STAD-R layout</ns0:cell></ns0:row><ns0:row><ns0:cell>Global</ns0:cell><ns0:cell>ρ Sp</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>Local</ns0:cell><ns0:cell>14 − nn</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.62</ns0:cell><ns0:cell>0.52</ns0:cell></ns0:row></ns0:table><ns0:note>, the STAD-R obtained equivalent results to other dimensionality reduction methods in the preservation of the global and local structures. The abstract graph (STAD-R graph) is a not directly visible object as other methods. However, the node placement (STAD-R layout) is able to capture the information from the graph obtaining measures, in this example, close to UMAP, both global and local distances. Besides, node-link diagrams provide the intangible information of links that enhance the interpretation of relationships and allow thorough exploration through interactions such as drag nodes to other positions.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Distance preservation measures of projections in Figure5. The table describes the Spearman's rank correlation (ρ Sp ) and the proportion of the first fourteen nearest neighbors preserved (14 − nn).</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Editor,
We thank you and the reviewers for your valuable comments on our manuscript.
Below, we will explain how your comments were considered and incorporated into the
revised manuscript as we send it to you today.
Thank you very much for your consideration.
Yours Sincerely,
Daniel Alcaide and Prof. Jan Aerts
Reviewer 1 raises questions about semantic information loss in the use of rank-based
metrics, and the use of patient similarity measures in general. They also ask if a holistic view
of the MIMIC database could be achieved with a more complex metric analysed with
STAD-R.
Indeed, semantic links between different codes in the ICD codes for a patient is not taken
into account. To do this would however make the distance metric much more complex,
although we already have valid groups using our approach. This could be investigated in the
future, though.
Regarding the holistic view, it is indeed possible to create a STAD network containing all
individuals. However, as patients are already coming in with specific pathologies, it makes
sense to do this pre-selection. As added in the section 'Conclusion', we are extending
STAD-R in collaboration with domain-experts to improve data exploration at multiple data
levels. The STAD-R method can generate a first view of data, but the exploration phase
requires other visual and analytical tools to recognize and interpret all their data
characteristics.
Reviewer 2 highlights that figure legends to be made larger, and that the Louvain method
employed in this work was not properly introduced. They also highlight text that should be
revised for clarity in line 254 (as position in list increases) and line 285 (t-SNE & global
structure). Here I also recommend you consider PeerJ's figure instructions
(https://peerj.com/about/author-instructions/#figures - specifically, using capitals for multipart
figure labels, and not 'highlighting' the subfigure label with a yellow background).
We reviewed the text and the figures accordingly.
Regarding the proper introduction of the Louvain method, we clarified in section 'Results'
that community detection was added as post-hoc annotation in the interface to facilitate the
selection and exploration of the most evident pattern. Still, it should not be considered as
part of the approach presented in this work.
1
In addition I offer my own comments and suggested revisions:
A. Demonstration application
I found this RShiny app reasonably usable though I found the 'Select Patients' button
unintuitive. Also, with some selections it is possible to create an empty plot in the Bar Chart
section (this could be due to the minimum length slider setting, but was also after running
through several primary ICD codes, subgroups and interactive selecting events, so could be
a bug).
Suggestion: It could also be useful to be able to re-select the patient(s) in the network
diagram that are shown in a particular chart.
We considered your comments, but found technical limitations that prevent improving this on the current
platform. We are currently working on a new interface to import, analyze, and explore data
using STAD-R. The selection of patients will be changed to lasso selection, facilitating user
interaction with the interface.
B. Code review
Only the processed data has been provided along with the demo application. Ideally,
deployment instructions and scripts for processing the sample data is needed to
demonstrate the application can be reused (this also helps since several data items are
referred to but not defined in the R code). Inspection of the R code reveals clear comments
highlighting different sections, but I note some aspects (e.g. a decision tree feature) are
commented out.
1. Please also provide the scripts needed to generate the processed data, or make clear
where they can be obtained (eg. in line 241).
We included the scripts on GitHub (https://github.com/vda-lab/ICD_diagnosis_explorer) to
compute distance metrics and STAD-R networks for all ICD codes in the MIMIC-III database
(See README.md). However, we cannot provide direct access to the database as it
requires permission from the MIT Lab for Computational Physiology.
Online docs
The equations for the STAD metric appear to not be rendered in mathML, making it
somewhat difficult for a reader not familiar with the precise notation used to interpret them.
2. If possible please revise the equation so it renders correctly.
We reviewed it accordingly.
2
Manuscript.
I have provided an annotated PDF detailing suggested revisions and comments. Questions
and critical issues are summarised here:
3. Clarify the use of ICD term weighting in similarity/projection metrics. You note in lines
104-107 that your approach resembles that of Goodall (1966), i.e. that pairs of less common
attributes receive higher similarity scores than pairs of common attributes. However, I noted
no subsequent evidence that the metric encodes such 'prior-like' behaviour. Please either
more clearly explain this aspect, or omit the statement and reference.
We corrected the text as we did not use this weighing.
4. Line 156 - 'what is meant by 'reliable results' - see suggested revisions in PDF.
5. line 164 - the distance metric described probably isn't 'new' (i.e. novel), but has perhaps
not yet been applied to patent ICD term lists. It may be better to qualify your statement rather
than say it is an entirely new metric.
6. The wording and notation used in the equation at line 177 M(c_A,c_B) suggests that c_A
and c_B may be *different concepts). Suggest this is revised to M_c(A,B) - since (as I
understand it) M is defined for a particular concept (indeed, it is referred to as such in line
179).
7. line 189-192 - these two sentences describing how the bimodal behaviour of the metric
combined with the scaling effect of the order leads to a left-skewed distribution should be
improved for clarity. Ideally, a reference to this cumulative behaviour could be given, since it
is a behaviour common to many similarity measures composed of discrete scores (e.g.
pairwise alignments).
8. Line 269-70. Simpler and more formal to simply say 'STAD-R's graph needs to be laid
out/embedded to be visualised'.
9. line 270-274. Is the detailed description of alternative 2D layout/embedding methods
relevant to the discussion ? I'm also unconvinced that interactive layout is critical for this
application without evidence from user evaluation.
We reviewed the text to clarify/correct all previous comments provided (4-9).
10. line 275-293 gives a fairly in-depth description of the appearance of the different louvain
clusters under different projection methods as contrasted to the STAD-R graph embedding.
The distinctness of the clusters under these projections could be quantified rather than
simply described - was this attempted ? A quantitative analysis could also allow
effectiveness of each visualisation method to be systematically evaluated across all ICD
code graphs.
Clusters identified by Louvain methods were used to facilitate the comparison of plots. We
considered the quantitative analysis and included a new table with two quantitative indicators
about the global and local distance preservation, which provide a more representative
3
measure of the projection. However, the comparison of these projections should not be
limited to distance only in the projected space, as important information is contained in the
links between nodes in the node-link diagram.
11. Line 288-293 - it isn't clear to me why the outliers in community G have been described
in detail - is this an advantage of the method or a problem with the community analysis
technique ?
The STAD algorithm requires all data points to be connected in some way in a single graph.
Even though this concerns outlier patients, they are still connected to the 'closest' group of other patients, in this case
community G. We removed this detailed description because it distorts the main message of
the comparison of projections.
12. Line 298-300. This could be reworded for clarity. Here I think you mean to explain that
unlike ICD diagnostic lists, which encode 'priority', the order of procedure code lists indicate
the sequence in which procedures were performed. It's meaningless to say 'the position of a
procedure is equally important across the list'.
13. Line 305 - the modified similarity metric for ICD procedures includes | position(C_a)+position(C_b) | - did you mean '-' here ?
We reviewed the text to clarify/correct previous comments (12-13).
14. I was surprised to find there was no discussion concerning planned further work, such as
formal user evaluation to quantitate the effectiveness of the approach. I hope that such work
will be carried out in the future!
Agreed. We included future work in the 'Conclusion' section
4
" | Here is a paper. Please give your review comments after reading it. |
14 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A large number of clinical concepts are categorized under standardized formats that ease the manipulation, understanding, analysis, and exchange of information. One of the most extended codifications is the International Classification of Diseases (ICD) used for characterizing diagnoses and clinical procedures. With formatted ICD concepts, a patient profile can be described through a set of standardized and sorted attributes according to the relevance or chronology of events. This structured data is fundamental to quantify the similarity between patients and detect relevant clinical characteristics. Data visualization tools allow the representation and comprehension of data patterns, usually of a high dimensional nature, where only a partial picture can be projected.</ns0:p><ns0:p>In this paper, we provide a visual analytics approach for the identification of homogeneous patient cohorts by combining custom distance metrics with a flexible dimensionality reduction technique. First we define a new metric to measure the similarity between diagnosis profiles through the concordance and relevance of events. Second we describe a variation of the STAD (Simplified Topological Abstraction of Data) dimensionality reduction technique to enhance the projection of signals preserving the global structure of data.</ns0:p><ns0:p>The MIMIC-III clinical database is used for implementing the analysis into an interactive dashboard, providing a highly expressive environment for the exploration and comparison of patients groups with at least one identical diagnostic ICD code. The combination of the distance metric and STAD not only allows the identification of patterns but also provides a new layer of information to establish additional relationships between patient cohorts. The method and tool presented here add a valuable new approach for exploring heterogeneous patient populations. In addition, the distance metric described can be applied in other domains that employ ordered lists of categorical data.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head></ns0:div>
<ns0:div><ns0:head>33</ns0:head><ns0:p>Patient profiling and selection are a crucial step in the setup of clinical trials. The process involves 34 analytical methods to handle the increasing amount of healthcare data but is still extremely labor-intensive 35 <ns0:ref type='bibr' target='#b55'>(Sahoo et al., 2014)</ns0:ref>. Nevertheless, the input from an expert in this selection is important.</ns0:p></ns0:div>
<ns0:div><ns0:head>36</ns0:head><ns0:p>To support the expert in the selection of suitable patients, visual analytics solutions can enable the 37 exploration of a patient population, make recruitment consistent across studies, enhance selection accuracy, 38 increase the number of selected participants, and significantly reduce the overall cost of the selection 39 process <ns0:ref type='bibr' target='#b18'>(Fink et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b14'>Damen et al., 2013)</ns0:ref>. Visual analytics relies on interactive and integrated 40 visualizations for exploratory data analysis in order to identify unexpected trends, outliers, or patterns. It 41 can indicate relevant hypotheses that can be complemented with additional algorithms, and help define of the case-based reasoning algorithms <ns0:ref type='bibr' target='#b32'>(Kolodner, 2014)</ns0:ref> such as the detection of consistent cohorts of patients within a patient population. One of the remaining open challenges in the analysis of patient similarity is to establish relevant and practical ways based on clinical concepts <ns0:ref type='bibr' target='#b29'>(Jia et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Many types of information about the patient profile such as diagnosis, procedures, and prescriptions are available under standardized categories contained in taxonomies or dictionaries, e.g., the International Classification of Diseases (ICD), Medical Dictionary for Regulatory Activities (MedDRA) and the Anatomical Therapeutic Chemical (ATC) Classification System. Each patient is for example linked to an ordered list of diagnoses, which are semantic concepts that are (in the case of MIMIC <ns0:ref type='bibr' target='#b30'>(Johnson et al., 2016)</ns0:ref>) ordered from most to least important (as per the MIMIC-III documentation 'ICD diagnoses are ordered by priority -and the order does have an impact on the reimbursement for treatment').</ns0:p><ns0:p>These standardized formats provide a non-numerical data structure facilitating both understanding and management of the data. Several methods have been proposed to define similarity between lists of clinical concepts based on presence of absence of specific terms <ns0:ref type='bibr' target='#b21'>(Gottlieb et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b63'>Zhang et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brown 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Girardi et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b52'>Rivault et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Jia et al. 2019)</ns0:ref>. However, the diagnostic profile of a patient is not merely an independent list of semantic concepts but also includes an intrinsic order indicated by the position of the terms in the list reflecting the relevance vis-a-vis the actual patient status. To the best of our knowledge, no previous work has combined the categorical and ordinal nature of clinical events into a single distance function. This dualism can contribute to improving the detection of cohorts through diagnostic and procedural data. This can significantly impact clinical trials when diagnoses or procedures are part of the recruitment criteria <ns0:ref type='bibr' target='#b6'>(Boland et al., 2012)</ns0:ref>.</ns0:p><ns0:p>In this paper, a novel approach for exploring clinical patient data is introduced. In particular, we focus on patient profiles represented by a set of diagnosis ICD codes sorted by relevance. 
The distance metric considers the sorted concepts as input, and the resulting pairwise values are used to create a graph where similar patients are connected.</ns0:p><ns0:p>The remaining part of this paper is organized as follows. In the section 'Background', we give an overview of related work in categorical events and graphical projections of patient similarity. The section 'Materials and Methods' describes the proposed distance metric and modifications applied on the base algorithms STAD for visualizing patient population. In 'Results', we demonstrate the effectiveness of the approach in a real-world dataset. The section 'Discussion' compares other methods and alternative metrics for similar data. Finally, the section 'Conclusion' presents conclusions and possible directions for future work.</ns0:p></ns0:div>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>The exploration and analysis of patients through similarity measures has been presented in different areas of bioinformatics and biomedicine, and also data mining and information visualization. In this section, we review the related literature on these areas below, and we focus on the notion of similarity measures for categorical events and graphical representation of patient similarity.</ns0:p></ns0:div>
<ns0:div><ns0:head>Patient similarity and distance measures for categorical events</ns0:head><ns0:p>Different distance metrics exist for unordered lists of categorical data, including the overlap coefficient <ns0:ref type='bibr' target='#b61'>(Vijaymeena and Kavitha, 2016)</ns0:ref>, the Jaccard index <ns0:ref type='bibr' target='#b51'>(Real and Vargas, 1996)</ns0:ref>, and the simple matching coefficient <ns0:ref type='bibr'>( Šulc and Řezanková, 2014)</ns0:ref>. These methods compute the number of matched attributes between two lists using different criteria. Although they treat each entry in the list as independent of the others, they have been used successfully to measure patient similarity to support clinical decision making and have demonstrated their effectiveness in exploratory and predictive analytics <ns0:ref type='bibr' target='#b63'>(Zhang et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b34'>Lee et al. 2015)</ns0:ref>. Similarly, different ways of computing distances between ordered lists are available <ns0:ref type='bibr' target='#b60'>(Van Dongen and Enright, 2012)</ns0:ref>. The Spearman's rank coefficient <ns0:ref type='bibr' target='#b11'>(Corder and Foreman, 2014</ns0:ref>) is useful for both numerical and categorical data and has been used in clinical studies <ns0:ref type='bibr' target='#b42'>(Mukaka, 2012)</ns0:ref>. However, correlation between ordered lists cannot be calculated when the lists are of different lengths <ns0:ref type='bibr' target='#b49'>(Pereira et al., 2009)</ns0:ref>.</ns0:p><ns0:p>In the context of medical diagnoses, the ICD (International Classification of Diseases) codes have been widely used for describing patient similarity. However, these typically consider the hierarchical structure of the ICD codes. <ns0:ref type='bibr' target='#b21'>Gottlieb et al. (2013)</ns0:ref>, for example, proposed a method combining the Jaccard score of two lists with the nearest common ancestor in the ICD hierarchy. The similarity measure for the ICD ontology was previously presented in <ns0:ref type='bibr' target='#b50'>Popescu and Khalilia (2011)</ns0:ref>. Each term is assigned to a weight</ns0:p></ns0:div>
<ns0:div><ns0:head>2/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52883:2:0:NEW 13 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>based on its importance within the hierarchy, which was defined as 1 − 1/n where n corresponded to its level in the hierarchy.</ns0:p><ns0:p>In our work, however, we will not leverage the hierarchical structure of the ICD codes, but employ the ICD grouping as described by Healthcare Cost and Utilization Project (2019). Our approach takes the position of the term in the list of diagnoses into account, which is a proxy to their relevance for the patient status. The metric assigns a higher weight to terms located earlier in the list.</ns0:p><ns0:p>Alternative approaches such as those by <ns0:ref type='bibr' target='#b33'>Le and Ho (2005)</ns0:ref> and <ns0:ref type='bibr' target='#b1'>Ahmad and Dey (2007)</ns0:ref> consider two elements similar if they appear together with a high number of common attributes. They must share the same relationships with other elements in the sample. The latent concept of these metrics is to find groups of co-occurrence such as the identification of comorbidity diseases <ns0:ref type='bibr' target='#b40'>(Moni et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b53'>Ronzano et al. 2019</ns0:ref>) although these studies aim to find heterogeneous types of diseases rather than different profiles of patients. The main drawback of metrics based on co-occurrence is the assumption of an intrinsic dependency between attributes without considering their relevance. The work presented by <ns0:ref type='bibr' target='#b26'>Ienco et al. (2012)</ns0:ref> and <ns0:ref type='bibr' target='#b28'>Jia et al. (2015)</ns0:ref> use the notion of contexts to evaluate pairs of categories. A context is an additional dimension used to determine the similarity between pairs. If the context is another categorical dimension, the similarity between the two categories is determined by the resulting co-occurrence table's correlation.</ns0:p></ns0:div>
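For reference, the unordered set-based measures mentioned above can be written in a few lines. The sketch below is illustrative only; it makes explicit that these coefficients discard the ordering information that the metric introduced later in this paper exploits.

```python
# Illustrative sketch: two standard set-based similarities for lists of codes.
# Both ignore the order (priority) of the codes.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def overlap_coefficient(a, b):
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0
```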
<ns0:div><ns0:head>Graphical projections of patient similarity</ns0:head><ns0:p>Visually representing pairwise distance matrices remains a challenge. Most often, dimensionality reduction techniques are used to bring the number of dimensions down to two so that the data can be represented in a scatterplot <ns0:ref type='bibr' target='#b44'>(Nguyen et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b20'>Girardi et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b59'>Urpa and Anders 2019)</ns0:ref>. Such scatterplots can not only indicate clusters and outliers, but are also very useful for assessing sample quality. In the case of patient data, each point in such plot represents a patient, and relative positions between them in the 2D plane correspond to the distance between them in the original higher dimensional space.</ns0:p><ns0:p>Multidimensional scaling (MDS) is arguably one of the most commonly used dimensionality reduction methods <ns0:ref type='bibr' target='#b43'>(Mukherjee et al., 2018)</ns0:ref>. It arranges points on two or three dimensions by minimizing the discrepancy between the original distance space and the distance in the two-dimensional space. Since its first use, many variations of classical MDS methods have been presented, proposing modified versions of the minimization function but conserving the initial aim <ns0:ref type='bibr' target='#b54'>(Saeed et al., 2018)</ns0:ref>. Besides MDS, recent methods have been proposed to highlight the local structure of the different patterns in high-dimensional data. For example, t-distributed stochastic neighbor embedding (t-SNE) <ns0:ref type='bibr' target='#b37'>(Maaten and Hinton, 2008)</ns0:ref> and uniform manifold approximation (UMAP) <ns0:ref type='bibr' target='#b38'>(McInnes et al., 2018)</ns0:ref> have been used in many publications on heterogeneous patient data <ns0:ref type='bibr' target='#b0'>(Abdelmoula et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>Simoni et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Becht et al. 2019)</ns0:ref>. Unlike MDS, t-SNE projects the conditional probability instead of the distances between points by centering a normalized Gaussian distribution for each point based on a predefined number of nearest neighbors. This approach generates robustness in the projection, which allows the preservation of local structure in the data. In a similar fashion, UMAP aims to detect the local clusters but at the same time generates a better intuition of the global structure of data.</ns0:p><ns0:p>In addition to scatterplot representations, alternative visual solutions are also possible, for example heatmaps <ns0:ref type='bibr' target='#b3'>(Baker and Porollo, 2018)</ns0:ref>, treemaps <ns0:ref type='bibr' target='#b64'>(Zillner et al., 2008)</ns0:ref>, and networks. The latter are often built using a combination of dimensionality reduction and topological methods <ns0:ref type='bibr' target='#b35'>(Li et al. 2015;</ns0:ref><ns0:ref type='bibr'>Nielson et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b12'>Dagliati et al. 2019)</ns0:ref>. This approach has for example been used with success to visually validate the automated patient classification in analytical pipelines <ns0:ref type='bibr' target='#b47'>(Pai and Bader 2018;</ns0:ref><ns0:ref type='bibr' target='#b48'>Pai et al. 
2019)</ns0:ref>.</ns0:p><ns0:p>In general, the created network encodes the distance between two datapoints in high-dimensional space into an edge between them and the full dataset can therefore be represented as a fully connected graph.</ns0:p><ns0:p>The STAD method <ns0:ref type='bibr' target='#b2'>(Alcaide and Aerts, 2020)</ns0:ref> reduces the number of edges allowing a more scalable visualization of distances. The original distance in high-dimensional space between two datapoints is correspondent to the path-length in the resulting graph between these datapoints. The main advantage of networks to display high-dimensional data is that users not only can perceive patterns by the location of points but also by the connection of elements, thereby increasing trust in the data signals.</ns0:p></ns0:div>
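As a rough illustration of how such scatterplot projections are typically obtained from a precomputed patient-distance matrix (this is not the authors' pipeline; the parameter choices below are placeholders, and umap-learn is a separate package):

```python
# Illustrative sketch: 2D projections of a dense, symmetric distance matrix D.
import numpy as np
from sklearn.manifold import MDS, TSNE
import umap

def project_all(D):
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
    # With a precomputed metric, t-SNE requires a non-PCA initialisation.
    tsne = TSNE(n_components=2, metric="precomputed", init="random").fit_transform(D)
    um = umap.UMAP(metric="precomputed").fit_transform(D)
    return mds, tsne, um
```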
<ns0:div><ns0:head>MATERIAL AND METHODS</ns0:head><ns0:p>The International Classification of Diseases (ICD) is a diagnosis and procedure coding system used by hospitals to bill for care provided. They are further used by health researchers in the study of electronic <ns0:ref type='table' target='#tab_2'>-2020:09:52883:2:0:NEW 13 Jan 2021)</ns0:ref> Manuscript to be reviewed Computer Science medical records (EMR) due to the ease of eliciting clinical information regarding patient status. Although these administrative databases were not designed for research purposes, their efficiency compared to the manual review of records and demonstrated reliability of information extracted have democratized the analysis of health data in this way <ns0:ref type='bibr' target='#b24'>(Humphries et al., 2000)</ns0:ref>. Even though ICD codification is hierarchically organized, some concepts in the database may be under-reported <ns0:ref type='bibr' target='#b9'>(Campbell et al., 2011)</ns0:ref>. To make analysis feasible, the ICD codes are in practice often grouped in higher categories to reduce noise and facilitate the comparison and analysis with automatic systems <ns0:ref type='bibr' target='#b10'>(Choi et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b39'>Miotto et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Baumel et al. 2018)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>3/14 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:p>In our approach, we adopt the ICD generalization introduced by the Clinical Classification Software (CSS) which groups diseases and procedures into clinically meaningful sections (Healthcare Cost and Utilization Project, 2019). Here we introduce a method to compare unequal sets of ordered lists of categories and explore the different cohorts of patients through visual representations of data. This approach employs a custom distance metric presented in section 'Diagnosis similarity and distances' within the visual analytics method as presented in section 'Spanning Trees as Abstraction of Data'.</ns0:p></ns0:div>
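A hypothetical sketch of the grouping step described above; the file name and column names are assumptions, since the single-level CCS mapping is distributed separately by HCUP.

```python
# Hypothetical preprocessing: collapse raw ICD-9 codes into broader CCS-style
# sections before computing similarities.
import csv

def load_ccs_mapping(path="ccs_single_level_dx.csv"):
    # Assumed columns: 'icd9' and 'ccs_category'.
    with open(path, newline="") as f:
        return {row["icd9"]: row["ccs_category"] for row in csv.DictReader(f)}

def group_profile(icd9_codes, mapping):
    """Map an ordered list of ICD-9 codes to CCS categories, keeping the order."""
    return [mapping.get(code, "UNKNOWN") for code in icd9_codes]
```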
<ns0:div><ns0:head>Diagnosis similarity and distances</ns0:head><ns0:p>In the MIMIC dataset which was used for this work <ns0:ref type='bibr' target='#b30'>(Johnson et al., 2016)</ns0:ref>, each patient's diagnosis is a list of ICD codes, as exemplified in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The average number of concepts per profile in the MIMIC III dataset is 13 with a standard deviation of 5. Diagnoses are sorted by relevance for the patient status. This order determines the reimbursement for treatment and, from an analysis perspective, can help us to distinguish similar medical profiles even with different initial causes. The similarity between two patients (diagnosis profiles) A and B is based on which diagnoses (i.e. ICD9 codes) are present in both, as well as the position of these elements in the list. Consider a match M between two concepts c_A and c_B, which contributes to the similarity according to the following formula:</ns0:p><ns0:formula xml:id='formula_1'>M_C(A, B) = ln( 1 + 1 / max( position(c_A), position(c_B) ) )</ns0:formula><ns0:p>The position mentioned in the formula corresponds to the positional index in the list. As an example, the individual contribution of the concept 'Sepsis' for patients A and B in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> is ln(1 + 1/max(2, 4)) = ln(1.25) ≈ 0.22, since 'Sepsis' occupies position 2 in profile A and position 4 in profile B. The overall similarity between two profiles is the sum of the individual contributions of all matched concepts. Distance measures in categorical variables are built based on a binary statement of zero or one. Unlike other data types, categorical data generate a bimodal distribution, which can be considered as normal when the element contains multiple dimensions <ns0:ref type='bibr' target='#b56'>(Schork and Zapala, 2012)</ns0:ref>. The similarity in the diagnosis metric not only depends on the matching of elements but also on their positions in the list. These two conditions tend to generate left-skewed distance distributions, as shown in (Figure <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>). In other words, most patients are very different from other patients.</ns0:p></ns0:div>
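A minimal sketch of the diagnosis similarity defined above, assuming each profile is an ordered list of (grouped) ICD codes; it reproduces the 'Sepsis' example from Table 1. This is an illustration only, not the authors' implementation, and it stops at the similarity score (the conversion to a distance is not spelled out in this excerpt).

```python
# Illustrative sketch: diagnosis similarity between two ordered ICD profiles.
# A matched code contributes more when it appears near the top of both lists.
import math

def diagnosis_similarity(diag_a, diag_b):
    """Sum of ln(1 + 1/max(pos_a, pos_b)) over codes present in both profiles."""
    # 1-based positions; codes are assumed to be unique within a profile.
    pos_a = {code: i + 1 for i, code in enumerate(diag_a)}
    pos_b = {code: i + 1 for i, code in enumerate(diag_b)}
    shared = set(pos_a) & set(pos_b)
    return sum(math.log(1 + 1 / max(pos_a[c], pos_b[c])) for c in shared)

# 'Sepsis' (99591) sits at position 2 for patient A and position 4 for patient B
# in Table 1, so its individual contribution is ln(1 + 1/4) ~= 0.22.
```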
<ns0:div><ns0:head>Simplified Topological Abstraction of Data</ns0:head><ns0:p>Simplified Topological Abstraction of Data (STAD) (Alcaide and Aerts, 2020) is a dimensionality reduction method which projects the structure of a distance matrix D X into a graph U. This method converts datapoints in multi-dimensional space into an unweighted graph in which nearby points in input space are mapped to neighboring vertices in graph space. This is achieved by maximizing the Pearson correlation between the original distance matrix and a distance matrix based on the shortest paths between any two nodes in the graph (which is the objective function to be optimized). STAD projections of multi-dimensional data allow the extraction of complex patterns. The input for a STAD transformation consists of a distance matrix of the original data, which in this case is based on the metric as defined in the previous section.</ns0:p><ns0:p>As mentioned above, high dissimilarity between datapoints (i.e. patients) results in a left-skewed distance distribution. Unfortunately, this skew poses a problem for STAD analysis. As mentioned above, the STAD method visualizes the distances between elements by means of the path length between nodes.</ns0:p><ns0:p>Hence, to represent a big distance between two elements, STAD needs to use a set of intermediate connections that help to describe a long path. In case no intermediate nodes can be found, the algorithm forces a direct connection between the two nodes. As a result, in a left-skewed distribution, STAD tends to generate networks with an excessively high number of links, even when high correlation can be achieved as shown in Figure <ns0:ref type='figure' target='#fig_0'>1B and D</ns0:ref>. This means that the principle that nodes that are closely linked are also close in the original space (i.e. are similar) does not hold anymore <ns0:ref type='bibr' target='#b31'>(Koffka, 2013)</ns0:ref>.</ns0:p><ns0:p>Therefore, we propose a modification of the STAD algorithm, named STAD-R (where the R stands for</ns0:p></ns0:div>
<ns0:div><ns0:head>5/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52883:2:0:NEW 13 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science , in which the sum of d network edge refers to the sum of distances of edges included in the network (see Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). Note that edges represent the distance between two elements of the dataset and constitute a cell in the pairwise distance matrix.</ns0:p><ns0:p>This ratio R is added to the objective function of the algorithm, which maximizes the correlation ρ between the distance matrices D X (of the input dataset) and D U (based on shortest path distances in the graph). When including the ratio R, the objective function in STAD-R is not only a maximization problem based on the Pearson correlation but also a maximization of ratio R. Table <ns0:ref type='table'>2</ns0:ref> shows the difference between STAD and STAD-R.</ns0:p><ns0:p>The ratio R is the sum of those distances of datapoints in D X that are directly connected in network U. other distributions types, e.g., right-skewed or normal. Nevertheless, the general 'shape' of the resulting network remains the same. An example is presented in Figure <ns0:ref type='figure' target='#fig_3'>3A</ns0:ref>, showing a right-skewed distance distribution, leading to networks with different numbers of edges for STAD and STAD-R, respectively.</ns0:p><ns0:p>However, the structure is still preserved in both networks (Figure <ns0:ref type='figure' target='#fig_3'>3D and E</ns0:ref>).</ns0:p></ns0:div>
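The following sketch (not the reference STAD-R implementation) shows how a single candidate network can be scored under this scheme: the Pearson correlation ρ between D_X and the graph's shortest-path distances, and the ratio R computed over the edges kept in the network. The full algorithm starts from the minimum spanning tree and adds edges in order of increasing distance, keeping the iteration that maximizes the combination of ρ and R (Figure 2). Distances in D_X are assumed to be normalized to [0, 1].

```python
# Illustrative sketch: score one candidate STAD-R network U against D_X.
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

def stad_r_score(D_X, edges):
    n = D_X.shape[0]
    U = nx.Graph()
    U.add_nodes_from(range(n))
    U.add_edges_from(edges)
    # D_U: unweighted shortest-path (hop) distances in the candidate network;
    # disconnected pairs (absent once the MST is included) fall back to n.
    hops = dict(nx.all_pairs_shortest_path_length(U))
    D_U = np.array([[hops[i].get(j, n) for j in range(n)] for i in range(n)])
    iu = np.triu_indices(n, k=1)
    rho, _ = pearsonr(D_X[iu], D_U[iu])
    # Ratio R rewards networks that keep only short (similar) edges.
    d_edges = np.array([D_X[i, j] for i, j in edges])
    R = (1 - d_edges).sum() / (1 + d_edges).sum()
    return rho, R
```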
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We applied this approach to the MIMIC-III database <ns0:ref type='bibr' target='#b30'>(Johnson et al., 2016)</ns0:ref>, which is a publicly available dataset developed by the MIT Lab for Computational Physiology, containing anonymized health data from intensive care unit admissions between 2008 and 2014. The MIMIC-III dataset includes the diagnosis profiles of 58,925 patients. Their diagnoses are described using the ICD-9 codification and sorted according to their relevance to the patient. To reduce the number of distinct terms in the list of diagnoses, ICD codes were first grouped as described in the ICD guidelines Healthcare Cost and Utilization Project ( <ns0:ref type='formula'>2019</ns0:ref>). The proof-of-principle interface as well as the underlying code can be found on http://vda-lab.be/mimic.html.</ns0:p><ns0:p>The interface is composed of two main parts: an overview node-link network visualization including all patients (Figure <ns0:ref type='figure' target='#fig_5'>4A</ns0:ref>), and a more detailed view of selected profile groups (Figure <ns0:ref type='figure' target='#fig_5'>4B</ns0:ref>). Networks for each ICD code are precomputed: for each ICD-9 code the relevant patient subpopulations were extracted from the data, diagnosis distances and the resulting graph were computed using STAD-R. When the user selects an ICD-9 code from the interface (in this case code 2910; alcohol withdrawal delirium), the corresponding precomputed network is displayed.</ns0:p><ns0:p>The output of Louvain community detection <ns0:ref type='bibr' target='#b15'>(De Meo et al., 2011)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The definition of a custom similarity metric together with a flexible dimensionality reduction technique constitute the key elements of our approach. In this section, we evaluate the benefits of STAD to detect patterns in diagnostic data compared to other popular methods and further discuss the application of the presented distance metric in a different but similar context.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparing STAD to other dimensionality reduction methods</ns0:head><ns0:p>The projection of distances in STAD-R aims to enhance the representation of similarities using networks.</ns0:p><ns0:p>Similar groups of patients tend to be inter-connected, which are perceived as a homogeneous cohort.</ns0:p><ns0:p>The outputs of three popular algorithms (MDS, t-SNE, and UMAP) are compared with STAD-R in Manuscript to be reviewed</ns0:p><ns0:p>Computer Science node-link representations provide a more flexible platform for exploring data, especially when node positions can be readjusted according to the analyst and data needs <ns0:ref type='bibr' target='#b23'>(Henry et al., 2007)</ns0:ref>.</ns0:p><ns0:p>In the four plots of Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, the same points were highlighted to ease the comparison between them.</ns0:p><ns0:p>These groups correspond to three communities identified by the Louvain method in the interface. For instance, community 1 and 3 correspond to the patients analyzed in section 'Results'. Community 1 were patients diagnosed with alcohol withdrawal delirium as the primary diagnosis (Group A in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>); community 3 are patients with fractures of bones as the primary diagnosis (Group B in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>); community 2 are patients with intracranial injuries such as concussions. Despite the simple comparison presented, further analysis between these groups confirmed qualitative differences between profiles and a closer similarity between communities 2 and 3 than 1. The initial causes of communities 2 and 3 are associated with injuries while the primary diagnosis of patients in community 1 is the delirium itself.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, we can see that communities that are defined in the network (Figure <ns0:ref type='figure' target='#fig_7'>5A</ns0:ref>) are relatively well preserved in t-SNE (Figure <ns0:ref type='figure' target='#fig_7'>5C</ns0:ref>) but less so in MDS (Figure <ns0:ref type='figure' target='#fig_7'>5B</ns0:ref>). However, t-SNE does not take the global structure into account which is apparent from the fact that communities 2 and 3 are very far apart in t-SNE but actually are quite similar (STAD-R and MDS). UMAP (Figure <ns0:ref type='figure' target='#fig_7'>5D</ns0:ref>) improves on the t-SNE output and results in a view similar to MDS.</ns0:p><ns0:p>Although the interpretation of these visualizations is difficult to assess, quality metrics may help quantify the previous intuitions. Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> presents the quantitative measures for global distance and local distance preservation of projections in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>. Global distance preservation was measured using the Spearman rank correlation (ρ Sp ). It compares the distances for every pair of points between the original data space and the two-dimensional projection <ns0:ref type='bibr' target='#b62'>(Zar, 2005)</ns0:ref>. Local distance preservations were measured by the proportion of neighbors identified in the projection. This metric quantifies how many of the neighbors in the original space are neighbors in the projection <ns0:ref type='bibr' target='#b17'>(Espadoto et al., 2019)</ns0:ref>. 
We evaluated this metric using a neighborhood of the first fourteen neighbors, since fourteen is the average cluster size in the MIMIC-III dataset found using Louvain community detection (14 − nn).</ns0:p><ns0:p>The richness of the node-link diagram representation of STAD-R cannot be captured using node position in the 2D plane alone. Therefore, STAD-R is analyzed from two perspectives: first, as the abstract graph generated by STAD-R (STAD-R graph) and, second, as the two-dimensional projection after graph drawing (STAD-R layout). The abstract graph only considers the connections between nodes to determine the distances between them, whereas the graph drawing results only consider the node placement in the 2D plane.</ns0:p><ns0:p>Based on the values in Table 3, STAD-R obtained results equivalent to the other dimensionality reduction methods in the preservation of both the global and the local structure. The selection of fourteen neighbors corresponds to the average cluster size in the MIMIC-III dataset using Louvain community detection. Column 'STAD-R graph' represents the abstract graph and column 'STAD-R layout' represents the node placement generated by a ForceAtlas2 layout <ns0:ref type='bibr' target='#b27'>(Jacomy et al., 2014)</ns0:ref>, which is the layout implemented in the interface. These results were obtained from a single execution, and stochastic methods such as t-SNE and ForceAtlas2 may provide different values between executions.</ns0:p></ns0:div>
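As a sketch of the post-hoc annotation and layout step discussed here (illustrative only; the interface itself uses ForceAtlas2, for which a spring layout is used below as a stand-in):

```python
# Illustrative sketch: colour a STAD-R graph by Louvain communities and compute
# a force-directed node placement for the 2D view.
import networkx as nx

def annotate_graph(U, seed=42):
    communities = nx.community.louvain_communities(U, seed=seed)
    membership = {node: c for c, nodes in enumerate(communities) for node in nodes}
    positions = nx.spring_layout(U, seed=seed)   # stand-in for ForceAtlas2
    return membership, positions
```

Because both the community detection and the layout are stochastic, fixing the seed (as above) is one way to make a single execution reproducible.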
<ns0:div><ns0:head>Similarity measures for ICD procedures</ns0:head><ns0:p>The diagnosis similarity described in section 'Diagnosis similarity and distances' is designed for assessing distance between diagnosis profiles, but the principles presented here can be generalized to other termi- Manuscript to be reviewed</ns0:p><ns0:p>Computer Science nologies. For example, the procedures which patients receive during a hospital stay are also recorded and also follow an ICD codification: they also contain a list of categories similar to diagnosis. Unlike ICD diagnoses lists, which encodes priority, the order of procedure code lists indicate the sequence in which encode procedures were performed. Thus the weight distribution in the similarity that was used for the diagnosis metric must be adapted to the nature of the procedure data. We can alter the formula to include the relative distance between positions of matched elements instead of the top position in the diagnosis case. Formally, the similarity between two procedure concepts can be then described as follows:</ns0:p><ns0:formula xml:id='formula_2'>M C (A, B) = ln 1 + 1 |position(C A ) − position(C B )| + 1</ns0:formula><ns0:p>As with diagnosis similarity, the metric is estimated as the sum of individual contributions of matched concepts, S(X,Y ) = ∑ n i=1 M(X ∩Y ).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref> shows a STAD network generated using this adapted similarity for procedures. This example illustrates the population of patients with partial hip replacement (ICD 9: 81.52) in the MIMIC-III population. We can identify three clusters which describe three types of patients: group A are patients with the largest list of activities and are often characterized by venous catheterization and mechanical ventilation; patients in group B are mainly patients with a single procedure of partial hip replacement;</ns0:p><ns0:p>patients in group C are characterized by the removal of an implanted device and a blood transfusion (data not shown).</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper, we introduced a custom distance metric for lists of diagnoses and procedures, as well as an extension to STAD to improve its effectiveness for dissimilar datapoints. The diagnosis similarity measure can be applied to any ordered list of categories in a manner that is not possible with the measures available in the literature so far. The metric is designed to identify differences between patients through standardized concepts (diagnosis and procedures) where the weights of matching concepts are adapted to highlight the most relevant terms. As mentioned in <ns0:ref type='bibr' target='#b7'>Boriah et al. (2008)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science exploration of cohorts. Furthermore, the embedding of the network into an interactive dashboard provides a level of convenience that supports interpretation of the analysis results of the network.</ns0:p><ns0:p>Moreover, as discussed previously, STAD-R can reveal equivalent data signals at multiple levels to other dimensionality reduction methods. Quantitative and qualitative (user) evaluation of the method can be further extended with other datasets to assess both the information captured by the graph and the benefits of node-links diagrams to represent the similarity between datapoints. In future work, we plan to further explore STAD-R in collaboration with domain experts in diverse case studies. We also plan to build a more robust interface that allows the computation and exploration of STAD-R networks in a tailored environment.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Distance distributions of a population of patients with sepsis, STAD, and STAD-R projections. The dataset is composed of a selection of 1,271 patients from MIMIC-III diagnosed with sepsis (ICD-9: 99591). Predefined conditions cause more homogeneous populations that mitigate the skewness of the diagnosis similarity distribution. (A) Distribution of diagnosis distance. (B) Correlation between original distance matrix and distance matrix based on STAD graph, given different numbers of edges. (C) Idem as (B) using STAD-R. (D) STAD network. (E) STAD-R network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Creation of the STAD-R network for different iterations. (A) Distance matrix D X : Pairwise distances between all elements in a point cloud are calculated using a defined distance metric. (B) Distance list: Transformation of the matrix into a edges list. Edges are sorted by their distance. Smaller distances are first candidates to become part of the network U. (C) The Minimum spanning tree connects all nodes with minimum distance. It guarantees that a path exists between all nodes and becomes the initial iteration in the evaluation of the optimal STAD network (D) The addition of edges over the MST may improve the correlation between the two distance matrices. Edges are added in sequential order following the list in B. (E) The optimal network is found at the iteration with the maximum combination of correlation between D X and D U and the ratio R.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.56,315.69' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 illustrates the creation of a STAD-R network during different iterations. The result of STAD-R over STAD is presented in Figure 1E. The network has considerably fewer links (Figure 1C), and patterns in the data are much more apparent. The STAD-R algorithm generates networks with a considerably lower number of links compared to the correlation-based version. The ratio R restricts the inclusion of dissimilarities and, therefore, the number of edges in the network. This new constraint also alters the number of edges in networks generated from</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distance distributions of traffic activity, STAD, and STAD-R projections. The dataset contains the traffic activity in the city of Barcelona from October 2017 until November 2018. The dataset was presented and analyzed in (Alcaide and Aerts, 2020). (A) Distribution of diagnosis distance. (B) Correlation between original distance matrix and distance matrix based on STAD graph, given different numbers of edges. (C) Idem as (B) using STAD-R. (D) STAD network. (E) STAD-R network.</ns0:figDesc><ns0:graphic coords='8,141.73,165.31,413.54,148.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>is added as post-hoc annotation to facilitate the selection and exploration of the most evident patterns. The Louvain algorithm defines clusters by measuring the density of links inside the group compared to the links between them, which is close to the user interpretation of networks. However, the interpretation of a STAD-R network is not limited to discrete clusters. It aims to represent all relationships between points, including other types of patterns, such as trends or loops. The user can subsequently select either a cluster in this visualisation or 7/14 PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52883:2:0:NEW 13 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The interface to explore the diagnosis profiles in the MIMIC-III database. (A) Network visualization of those patients who have alcohol withdrawal delirium as one of their diagnoses. The network is visualized using a force-directed layout. Node colors are assigned automatically following Louvain community detection. (B) Bar-charts to compare the diagnosis profiles of selected groups in the network. Color corresponds to ICD category. In this example Group A contains patients with alcohol withdrawal delirium as the primary diagnosis; in contrast, Group B lists closed fractures as the most relevant diagnosis, and alcohol withdrawal delirium is only in the 2nd to 8th position.</ns0:figDesc><ns0:graphic coords='9,193.43,63.79,310.17,483.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of STAD-R, MDS, t-SNE and UMAP using the population of patients with alcohol withdrawal delirium (ICD-9 291.0). Panel (A) shows the ForceAtlas2 embedding of the STAD-R graph; panels (B), (C) and (D) are MDS, t-SNE and UMAP projections of the same distance matrix used to compute the STAD-R graph, respectively. The three communities were determined by the Louvain algorithm. Community 1 are patients diagnosed with alcohol withdrawal delirium in the first positions of the list. Community 2 are patients with intracranial injuries such as concussions. Community 3 are patients with fractures of bones as the primary diagnosis.</ns0:figDesc><ns0:graphic coords='10,193.43,63.80,310.15,329.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The population used in this example is the collection of MIMIC-III patients with alcohol withdrawal delirium (ICD-9 291.0), which was also used for Figure 4. The MDS projection endeavors to approximate all distances in data within a single 2D plane. Dimensionality methods such as t-SNE and UMAP favor the detection of local structures over the global, although UMAP also retains part of the general relations. Conversely, the abstract graph produced by STAD-R must still be embedded to be visualized, and the selection of the layout may produce slightly different results. Unlike scatterplots,</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The population of patients who received a partial hip replacement (ICD 9: 81.52). The network was computed using STAD-R, and distances were estimated using an adapted version of diagnosis similarity for procedures. Color is based on Louvain community detection.</ns0:figDesc><ns0:graphic coords='12,245.13,63.78,206.79,206.79' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The similarity metric presented in this work takes this duality into account and provides support for comparing profiles with an unequal length of elements. Objective function in STAD and STAD-R. The correlation ρ is computed between the original distance matrix D X and the distance matrix derived from the shortest path graph in D U . The ratio R is calculated from the network at each iteration considering the edges included in the network. Note that distance d network edge are normalized values between zero and one.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Patient A (115057)</ns0:cell><ns0:cell cols='3'>Patient B (117154)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ICD section Label (ICD9)</ns0:cell><ns0:cell /><ns0:cell cols='2'>ICD section Label (ICD9)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>996-999.</ns0:cell><ns0:cell>Infection and inflammatory</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>430-438.</ns0:cell><ns0:cell>Unspecified intracranial</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>reaction due to other vas-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>hemorrhage (4329)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>cular device, implant, and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>graft (99662)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>990-995.</ns0:cell><ns0:cell>Sepsis (99591)</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>430-438.</ns0:cell><ns0:cell>Cerebral artery occlusion,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>unspecified with cerebral</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>infarction (43491)</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>590-599.</ns0:cell><ns0:cell>Urinary tract infection,</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>996-999.</ns0:cell><ns0:cell>Iatrogenic cerebrovascular</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>site not specified (5990)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>infarction or hemorrhage</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(99702)</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>401-405.</ns0:cell><ns0:cell>Unspecified essential hy-</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>990-995.</ns0:cell><ns0:cell>Sepsis (99591)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>pertension (4019)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>590-599.</ns0:cell><ns0:cell>Urinary tract infection,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>site not specified (5990)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>401-405.</ns0:cell><ns0:cell>Unspecified essential hy-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>pertension (4019)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 ,</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>STAD-R obtained equivalent results to other dimensionality reduction methods in the preservation of global and local structures. Although MDS captured global relationships most effectively, STAD-R layout obtained a correlation value equal to UMAP. Local community structure was most effectively captured in the t-SNE layout (at the expense of global structure).</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='7'>Whilst STAD-R's graph is more effective, this local structure is lost on embedding. In comparison with</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>other projection methods, we note that node-link diagrams provide tangible information through links,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>which enhance the interpretation of relationships and allow thorough exploration through interactions,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>such as dragging nodes to other positions.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>Global/local focus Measure MDS t-SNE UMAP STAD-R graph STAD-R layout</ns0:cell></ns0:row><ns0:row><ns0:cell>Global</ns0:cell><ns0:cell>ρ Sp</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>Local</ns0:cell><ns0:cell>14 − nn</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.62</ns0:cell><ns0:cell>0.52</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Distance preservation measures of projections in Figure5. The table describes the Spearman's rank correlation (ρ Sp ) and the proportion of the first fourteen nearest neighbors preserved (14 − nn).</ns0:figDesc><ns0:table /></ns0:figure>
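<ns0:div><ns0:p>The two measures reported in Table 3 (the global rank correlation and the preserved fraction of the 14 nearest neighbours) can be computed from the original distance matrix and an embedding as in the Python sketch below. This is an illustrative reconstruction rather than the authors' evaluation code, and it assumes numpy and scipy are available.</ns0:p><ns0:p>
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform

def global_local_scores(original_dist, embedding, k=14):
    # original_dist: square pairwise distance matrix of the data
    # embedding: 2D coordinates produced by MDS, t-SNE, UMAP or a layout
    embed_dist = squareform(pdist(embedding))

    # Global focus: Spearman correlation over the upper-triangular distances
    iu = np.triu_indices_from(original_dist, k=1)
    rho = spearmanr(original_dist[iu], embed_dist[iu]).correlation

    # Local focus: average fraction of the k nearest neighbours preserved
    preserved = []
    for i in range(original_dist.shape[0]):
        nn_orig = set(np.argsort(original_dist[i])[1:k + 1])
        nn_embed = set(np.argsort(embed_dist[i])[1:k + 1])
        preserved.append(len(nn_orig.intersection(nn_embed)) / k)
    return rho, float(np.mean(preserved))
</ns0:p></ns0:div>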
</ns0:body>
" | "Dear Editor,
We thank you and the reviewers for your valuable comments on our manuscript.
Below, we will explain how your comments were considered and incorporated into the
revised manuscript as we send it to you today.
Thank you very much for your consideration.
Yours Sincerely,
Daniel Alcaide and Prof. Jan Aerts
Editor: I note a number of minor revisions to address grammatical issues and suggested
rewordings to more clearly communicate your findings. These are described in the attached
PDF.
We reviewed the text to correct these issues.
In lines 304-308 an interpretation of the results of Table 3 is given. This is one of the most
substantial of the new additions to the manuscript, and whilst the results are very clear, I
found their reporting and interpretation needs work in order for the manuscript to be
accepted for publication.
We clarified the findings identified in Table 3. In addition, we added a sentence in the table
caption to describe that these values were obtained from a single execution, and stochastic
methods may provide different values between executions.
" | Here is a paper. Please give your review comments after reading it. |
15 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A lot of hard work and years of research are still needed for developing successful Blockchain (BC) applications. Although it is not yet standardized, BC technology has been proven to be an enhancement factor for security, decentralization, and reliability, leading to its successful implementation in the cryptocurrency industry. Fog computing (FC) is one of the recently emerged paradigms that needs to be improved to serve the Internet of Things (IoT) environments of the future. As hundreds of projects, ideas, and systems have been proposed, one can find great R&D potential in integrating BC and FC technologies.</ns0:p><ns0:p>Examples of organizations contributing to the R&D of these two technologies, and their integration, include Linux, IBM, Google, Microsoft, and others. To validate an integrated Fog-Blockchain protocol or method implementation before the deployment phase, a suitable and accurate simulation environment is needed. Such validation should save a great deal of cost and effort for researchers and companies adopting this integration. Currently available simulation environments facilitate Fog simulation or BC simulation, but not both. In this paper, we introduce a Fog-Blockchain simulator, namely FoBSim, whose main goal is to ease the experimentation and validation of integrated Fog-Blockchain approaches. According to our proposed workflow of simulation, we implement different Consensus Algorithms (CA), different deployment options of the BC in the FC architecture, and different functionalities of the BC in the simulation. Furthermore, technical details and algorithms on the simulated integration are provided. We validate FoBSim by describing the technologies used within FoBSim, highlighting the novelty of FoBSim compared to the state-of-the-art, discussing the event validity in FoBSim, and providing a clear walk-through validation. Finally, we simulate case studies, then present and analyze the obtained results, where deploying the BC network in the fog layer shows enhanced efficiency in terms of total run time and total storage cost.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The remainder of the paper is organized as follows: Section 2 presents and discusses the state-of-the-art simulation environments that may be suitable for simulating FC-BC systems. To properly introduce FoBSim, we discuss, in detail, how FC architectural elements are deployed in Section 3. Additionally, we discuss the categories of BC systems, each with its properties and components, in Section 4. Accordingly, we propose the components, the algorithms, and the functions of the FoBSim environment in Section 5.</ns0:p><ns0:p>To validate FoBSim, we simulate some use cases and present the simulation results in Section 6. Finally, we present our future work and conclude in Section 7.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Searching the literature for tools specifically implemented for simulating FC-BC integration scenarios, we found that no previous work has directly targeted our objective. That is, we found several simulation tools that mimic fog-enhanced cloud systems, IoT-Fog-Cloud scenarios, etc., and several tools that mimic BC scenarios, each with specific constraints on the used CAs. Nevertheless, some proposals for IoT-BC simulation tools can be somewhat related to our work. For example, the ABSOLUT tool, investigated in <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>, models the deployment of BCs in IoT environments. Accordingly, some critical analysis were provided regarding network latency, effects of miners number on the overall efficiency of the IoT network, and simulation errors.</ns0:p><ns0:p>Liaskos et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> proposed a general architecture that a BC simulation needs to follow in order to be considered comprehensive. Further, some properties were declared as necessary for encouraging the adoption and re-usability of the simulation. The proposed architecture includes extensible connection strategies, BC nodes, BC chains, Transactions (TX) and Transaction pools, users, events, Blocks, and most importantly Consensus mechanisms. Events can include different triggers to other events -that may be performed by any entity of the network -(such as TX/block arrival, TX/block validation, connection requests, etc.). Also, Events need to be handled by concise and well implemented strategies.</ns0:p><ns0:p>In light of the lack of simulation tools similar to our proposal, we found it more suitable to present this section in two separate groups: namely FC simulation tools, and BC simulation tools.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>FC simulation tools</ns0:head><ns0:p>Recently, our research group has started to investigate the state-of-the-art related to cloud, IoT and fog simulation tools in <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>. Within this study, several simulation tools were classified, compared, and analyzed, such as the DockerSim tool <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>, FogNetSim++ <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>, and EdgeCloudSim <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>. Furthermore, technical details, advantages, vulnerabilities, and software quality issues were also discussed.</ns0:p><ns0:p>Rahman et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> surveyed 15 simulation tools for cloud and data center network scenarios. The tools were discussed and compared according to several criteria, such as the Graphical User Interface (GUI) availability, the language with which the simulator was implemented, and the communications model.</ns0:p><ns0:p>Consequently, they proposed the Nutshell tool, which addresses some drawbacks that were ignored by most of the surveyed simulators. For example, most surveyed simulators had an abstract network implementation and low-level details were missing. Further, none of the studied tools provided an addressing scheme, a congestion control mechanism, or a traffic pattern recognition mechanism.</ns0:p></ns0:div>
<ns0:div><ns0:p>Out of those 15 presented simulation tools, seven were defined as extensions of the CloudSim toolkit <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>.</ns0:p><ns0:p>Yousefpour et al. <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> presented a complete survey about FC, referencing 450 publications specifically concerned with FC development and applications. Within their extended survey, some FC simulation tools, such as iFogSim <ns0:ref type='bibr' target='#b23'>[23,</ns0:ref><ns0:ref type='bibr' target='#b24'>24]</ns0:ref>, Emufog <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>, Fogbed <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>, and MyiFogSim <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> were discussed. As iFogSim was conceptually built using the CloudSim communications model, it inherited some of its properties, such as the ability to co-execute multiple tasks at the same time and the availability of pluggable resource management policies.</ns0:p><ns0:p>Generally speaking, any cloud simulation tool can be extended to be a fog-enabled simulation tool. This is because of the fundamental property of the fog layer acting as a bridge between end-users and the cloud. In other words, adding a fog module to a cloud simulation tool, describing communications, roles, services, and parameters of fog nodes, is sufficient to claim that the tool is a fog-enhanced cloud simulation tool. Additionally, in projects that target Fog-BC integration applications, many researchers used a reliable, general-purpose fog simulator and implemented the BC as if it were an application case, such as in <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. The results of such a simulation approach can be trusted as valid for limited cases, such as providing a proof of concept of the proposal. However, critical issues, such as scalability and heterogeneity in huge networks, need to be simulated in more specialized simulation environments. To mention one critical case, the BC protocols deployed in different CAs require a more precise and accurate deployment of the BC entities and their inter-operation in different layers of a Fog-enhanced IoT-Cloud paradigm. Consequently, as some simulation scenarios need an event-driven implementation, while others need a data-driven implementation, a scenario's outputs may differ when simulated using different simulation environments.</ns0:p><ns0:p>Such a possibility of fluctuating simulation outputs should normally lead to unreliable simulation results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>BC simulation tools</ns0:head><ns0:p>As we have previously investigated how a Fog-Blockchain integration is envisioned, we started the implementation of FoBSim with a simple BC simulation tool described in <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>. Consequently, we discuss the state of the art regarding BC simulation tools available in the literature. In later sections, we describe how FoBSim serves as a reliable tool to mimic an FC-BC integration scenario.</ns0:p><ns0:p>Anilkumar et al. <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref> have compared different available simulation platforms specifically mimicking the Ethereum BC, namely Remix Ethereum <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>, Truffle Suite [32], Mist <ns0:ref type='bibr' target='#b32'>[33]</ns0:ref>, and Geth <ns0:ref type='bibr' target='#b33'>[34]</ns0:ref>. The comparison included some guidelines and properties such as the initialization and the ease of deployment.</ns0:p><ns0:p>The authors concluded that truffle suite is ideal for testing and development, Remix is ideal for compilation and error detection and correction, while Mist and Geth are relatively easy to deploy. Alharby et al. <ns0:ref type='bibr' target='#b34'>[35]</ns0:ref> and Faria et al. <ns0:ref type='bibr' target='#b35'>[36]</ns0:ref> proposed a somewhat limited simulation tool, namely BlockSim, implemented in Python, which specifically deploys the PoW algorithm to mimic the BitCoin and Ethereum systems.</ns0:p><ns0:p>Similarly, Wang et al. <ns0:ref type='bibr' target='#b36'>[37]</ns0:ref> proposed a simulation model to evaluate what is named Quality of Blockchain (QoB). The proposed model targeted only the PoW-based systems aiming to evaluate the effect on changing different parameters of the simulated scenarios on the QoB. For example, average block size, number of TXs per block/day, the size of the memPool, etc. affecting the latency measurements. Furthermore, the authors identified five main characteristics that must be available in any BC simulation tool, namely the ability to scale through time, broadcast and multi-cast messages through the network, be Event-Driven, so that miners can act on received messages while working on other BC-related tasks, process messages in parallel, and handle concurrency issues.</ns0:p><ns0:p>Gervais et al. <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref> analyzed some of the probable attacks and vulnerabilities of PoW-based BCs through emulating the conditions in such systems. Sub-consequently, they categorized the parameters affecting the emulation into consensus-related, such as block distribution time, mining power, and the distribution of the miners, and network-related parameters, such as the block size distribution, the number of reachable network nodes, and the distribution of those nodes. However, they basically presented a quantitative framework to objectively compare PoW-based BCs rather than providing a general-purpose simulation tool.</ns0:p><ns0:p>Memon et al. <ns0:ref type='bibr' target='#b39'>[39]</ns0:ref> simulated the mining process in PoW-based BC using the Queuing Theory, aiming to provide statistics on those, and similar systems. Zhao et al. <ns0:ref type='bibr' target='#b40'>[40]</ns0:ref> simulated a BC system for specifically validating their proposed Proof-of-Generation (PoG) algorithm. Hence, the implementation objective was comparing the PoG with other CAs such as PoW and PoS. Another limited BC implementation was proposed by Piriou et al. 
in <ns0:ref type='bibr' target='#b41'>[41]</ns0:ref>, where only the blocks appending and broadcasting aspects are considered.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Ref.</ns0:p></ns0:div>
<ns0:div><ns0:head>PL</ns0:head><ns0:p>PoW PoS PoA SC DM PM IDM F <ns0:ref type='bibr' target='#b34'>[35,</ns0:ref><ns0:ref type='bibr' target='#b35'>36]</ns0:ref> Python</ns0:p><ns0:formula xml:id='formula_0'>χ χ χ χ χ [37] Python χ χ χ χ χ χ [39] Java χ χ χ χ χ χ [40] Python χ χ χ χ χ [41] Python χ χ χ χ χ χ χ [42] Java χ χ χ χ χ FoBSim Python</ns0:formula></ns0:div>
<ns0:div><ns0:head>Table 2. Blockchain simulation tools and their properties</ns0:head><ns0:p>The tool was implemented using Python, and it aimed at performing Monte Carlo simulations to obtain probabilistic results on consistency and ability to discard double-spending attacks of BC protocols. In <ns0:ref type='bibr' target='#b42'>[42]</ns0:ref>,</ns0:p><ns0:p>the eVIBES simulation was presented, which is a configurable simulation framework for gaining empirical insights into the dynamic properties of PoW-based Ethereum BCs. However, the PoW computations were excluded in eVIBES, and the last updates on the code were committed in 2018.</ns0:p><ns0:p>To highlight the comparison between the mentioned BC simulation tools and our proposed FoBSim tool, we gathered the differences in Table <ns0:ref type='table'>2</ns0:ref>. PL, PoW, PoS, PoA, SC, DM, PM, IDM, and F are abbreviations for Programming Language, Proof-of-Work, Proof-of-Stake, Proof-of-Authority, Smart Contracts, Data Management, Payment Management, Identity Management, and Fog-enhanced, respectively. As shown in the table, none of the previously proposed BC simulation tools made the PoA algorithm available for simulation scenarios, provided a suitable simulation environment for identity management applications, or, most importantly, facilitated the integration of FC in a BC application.</ns0:p><ns0:p>Many other references can be found in the literature, in which a part of a BC system, or a specific mechanism is implemented. The simulated 'part' is only used to analyze a specific property in strict conditions, or to validate a proposed technique or mechanism under named and biased circumstances, such as in <ns0:ref type='bibr' target='#b43'>[43]</ns0:ref> and <ns0:ref type='bibr' target='#b45'>[44]</ns0:ref>. It is also worth mentioning here that some open-source BC projects are available and can be used to simulate BC scenarios. For example, the HyperLedger <ns0:ref type='bibr' target='#b46'>[45]</ns0:ref> projects administered by the Linux Foundation are highly sophisticated and well implemented BC systems. One can locally clone any project that suits the application needs and construct a local network. However, those projects are not targeting the simulation purposes as much as providing realized BC services for the industrial projects.</ns0:p><ns0:p>Additionally, most of these projects, such as Indy, are hard to re-configure and, if re-configured, very sensitive to small changes in their code. Indy, for example, uses specifically a modified version of PBFT CA, namely Plenum, while Fabric uses RAFT.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>FC ARCHITECTURAL ELEMENTS</ns0:head><ns0:p>The FC layer can be studied in three levels, namely the node level, the system level, and the service level <ns0:ref type='bibr' target='#b47'>[46]</ns0:ref>. The fog consists of several nodes connected to each other and to the cloud. The main purpose of the fog layer is to provide cloud services, when possible, closer to end-users. Further, the fog layer, conceptually, provides enhanced security and latency measures. Hence, an FC system uses its components in the fog layer to provide the services that end-users request from the cloud.</ns0:p><ns0:p>In a simple scenario, the fog receives a service request from end-users, perform the required tasks in the most efficient method available, and sends the results back to end-users. As the clouds mainly provide Infrastructure, Software, and Platform -as-a-Service models, those three models can be used for computational tasks, storage tasks, or communication tasks <ns0:ref type='bibr' target='#b48'>[47]</ns0:ref>.</ns0:p><ns0:p>In a Fog-enhanced Cloud system, a general overview of the workflow is presented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. As presented in the figure, the service is requested from end-users and the fog layer provides this service if possible, otherwise, the request is forwarded to the cloud where complex and time consuming actions are performed. However, information of the complexity of the system, and the decision making process in the fog layer, should not be within the concern of end-users. That is, end-users require their tasks to be performed within a privacy-aware context and the QoS measures implications that were agreed on.</ns0:p><ns0:p>In FoBSim, the fog layer can be configured according to the scenario that needs to be simulated. For example, the number of fog nodes, the communications within the fog layer and with other entities of the simulated system, and the services provided by the fog, can all be modified.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>BC ARCHITECTURAL ELEMENTS</ns0:head><ns0:p>BC as a DLT that consists of several elements which need to efficiently interact with each other, in order to achieve the goal of the system. A general view of BC systems suggests some fundamental components that need to be present in any BC system. A BC system implies end-users who request certain types of services from a BC network. The BC network consists of multiple nodes, who do not trust each other, that perform the requested services in a decentralized environment. Consequently, the service provided by a BC network can only be valid if the BC network deployed a trusted method, i.e. CAs, to validate the services provided by its untrusted entities.</ns0:p><ns0:p>In FoBSim, the BC network can provide two models of services; namely data storage, and computations. Meanwhile, the communications within the BC network and with the fog layer are configurable. Data storage service model implies that pieces of data are saved on the immutable distributed ledger. Such data may be of any type including data records, IDs, digital payment registration, or reputation measures of end-users or Fog components. It can also be noted that some applications require assets to be transferred between clients, such as cryptocurrency transfer applications or real estate ownership applications. Other applications do not require transferring assets rather than saving data on the chain only, such as voting applications and eHealth applications. However, the mentioned second type of applications may also need, on some level, a digital payment method be embedded. In such cases, SCs on other payment platforms can be implemented and generated, such as Bitcoin or Ethereum.</ns0:p><ns0:p>Performing computations for end-users is the second service model that the BC in FoBSim can be configured to provide. That is, computational tasks can be sent by end-users/fog entities to the BC in the form of SC, which are small chunks of code, run by BC nodes upon fulfillment of algorithmically verifiable conditions <ns0:ref type='bibr' target='#b49'>[48]</ns0:ref>. After running the SCs, the results can be saved in a centralized or decentralized form according to the pre-run configuration. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> presents how the services, classically provided by a Cloud/Fog system, can be interpreted into the form of services that can be provided by a BC system. We can notice in the figure that SCs can be considered relevant to cloud computational services, while different types of data saved on the decentralized BC can be considered a relevant option to the centralized storage model provided by a cloud system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Consensus Algorithms</ns0:head><ns0:p>Several approaches were proposed as a solution for the aforementioned needs, among which are the most famous Proof-of-Work (PoW) CA. PoW was deployed in 2009 in the first BC system, i.e. Bitcoin <ns0:ref type='bibr' target='#b50'>[49]</ns0:ref>, and is currently used in other robust BC systems; such as Ethereum <ns0:ref type='bibr' target='#b51'>[50]</ns0:ref>. Although PoW methods have proven strong security and support to BC systems, it has some drawbacks, such as high energy consumption and high latency, that encouraged the R&D communities to search for other trusted methods.</ns0:p></ns0:div>
<ns0:div><ns0:p>The Proof-of-Stake (PoS) algorithm <ns0:ref type='bibr' target='#b52'>[51]</ns0:ref> was proposed a couple of years later in order to solve the high energy consumption problem implied by PoW. PoS is currently being optimized to provide advantages similar to those of PoW. Ethereum, for example, is planning to substitute PoW with PoS in the very near future. However, some drawbacks of PoS need to be solved before its official deployment, such as the Monopoly Problem <ns0:ref type='bibr' target='#b53'>[52]</ns0:ref>, the Bribe Attack <ns0:ref type='bibr' target='#b54'>[53,</ns0:ref><ns0:ref type='bibr' target='#b55'>54]</ns0:ref>, and relatively low reliability <ns0:ref type='bibr' target='#b56'>[55]</ns0:ref>.</ns0:p><ns0:p>In PoW-based BCs, a BC node proves the validity of its generated block of data by coupling a puzzle solution within the block. The puzzle solution is generally hard to obtain, while it can easily be validated once found. Generally, the puzzle is a mathematical problem that requires high computational power to solve. In PoS-based BCs, the BC node that is allowed to generate the next block is chosen randomly by the system. To encourage the system to pick a specific BC node, staking more digital coins in deposit increases the probability of being chosen. This provides high trust measures, as faulty generated blocks are not tolerated by the system, and the staked coins of the malicious/faulty BC node would be burned as a penalty.</ns0:p><ns0:p>Other approaches were proposed that provide trust in BCs. Examples include the Proof-of-Elapsed-Time (PoET) <ns0:ref type='bibr' target='#b57'>[56]</ns0:ref> and the Proof-of-Authority (PoA) <ns0:ref type='bibr' target='#b58'>[57]</ns0:ref>. PoET-based BCs generate randomly selected wait times for BC nodes. The node whose randomly picked time elapses first is the one granted the opportunity to generate the next block. PoA, on the other hand, implies that only blocks signed by authorized members are validated and confirmed by the BC network. Those authorized nodes must be known, trusted participants that can be tracked and penalized in case of faulty behaviour. Both of these CAs share the property of being suitable for private and permissioned BCs, while PoW and PoS are known for being suitable for public and permissionless BCs.</ns0:p><ns0:p>FoBSim allows choosing the suitable CA according to the simulated scenario. While there are many versions of each mentioned CA, we currently provide the simplest version of each so that modifications can be performed with no complexities. More detailed information about them can be found in <ns0:ref type='bibr' target='#b60'>[58,</ns0:ref><ns0:ref type='bibr' target='#b61'>59,</ns0:ref><ns0:ref type='bibr' target='#b62'>60]</ns0:ref>.</ns0:p></ns0:div>
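<ns0:div><ns0:p>In a simulation, the PoET-style selection described above reduces to drawing a random wait time per node and picking the smallest one; the following Python sketch is an illustrative simplification, not a FoBSim module:</ns0:p><ns0:p>
import random

def poet_round(nodes, max_wait=10.0):
    # Every node draws a random wait time; the node whose timer would
    # elapse first mints the next block (no real sleeping is needed to
    # determine the winner in a simulation).
    wait_times = {node: random.uniform(0, max_wait) for node in nodes}
    return min(wait_times, key=wait_times.get)

print(poet_round(['node_a', 'node_b', 'node_c']))
</ns0:p></ns0:div>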
<ns0:div><ns0:head n='4.2'>Transactions</ns0:head><ns0:p>In a very simple scenario, an end-user sends a request to the BC network, which consists of BC nodes, to perform a defined TX. As stated in the beginning of this section, TXs may be data to be stored (i.e. payment data, reputation data, identity data, etc.), or can be SCs whose results can be either saved in a centralized (in the case of Cloud) or distributed manner (in the cases of fog or BC). Once the TX is performed, it should be agreed on by the majority of BC nodes if to be saved on the distributed ledger and, sub-consequently, be added to the chain saved in all BC nodes.</ns0:p><ns0:p>On the other hand, if the fog layer is controlling and automating the communications between the end-user layer and the BC network, as in <ns0:ref type='bibr' target='#b63'>[61]</ns0:ref>, the TXs are sent from end-users to the fog. After that, some communications take place between the fog layer and the BC network in order to successfully perform the tasks requested by end-users. In such system model, we assume that the BC network lays in a different layer than the fog layer. The case where the BC network is placed in the fog layer is covered in Subsection 4.4. Nevertheless, a feedback with the appropriate result of each TX should be easily Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='4.3'>Distributed Ledger</ns0:head><ns0:p>In the case were data needs to be stored in a decentralized manner, no Trusted Third Party (TTP) needs to be included in the storing process. The entity considered as a TTP in regular Fog-enhanced Cloud systems is the cloud, where data is stored. However, computations can take place in the fog layer to enhance the QoS.</ns0:p><ns0:p>Within DLT-enabled systems, such as BC, groups of data are accumulated in blocks, and coupled with a proof of validity, as explained in Subsection 4.1. Once a new block of TXs is generated, and the proof is coupled with them, the new block is broadcast among all BC nodes. Nodes who receive the new blocks verify the proof and the data within each TX, and if everything is confirmed valid, the new block is added to the local chain. With each BC node behaving this way, the new block is added to the chain in a distributed manner. That is, a copy of the same chain, with the same exact order of blocks, exists in each BC node. Further, a hash of the previous block is added to the new block, so that any alteration attack of this block in the future will be impractical, and hence almost impossible.</ns0:p></ns0:div>
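<ns0:div><ns0:p>The chaining property described above can be illustrated with a short Python sketch; this is illustrative code, and field names such as 'previous_hash' are assumptions rather than FoBSim's exact keys:</ns0:p><ns0:p>
import hashlib
import json

def block_hash(block):
    # Hash every field except the stored hash itself
    payload = {k: v for k, v in block.items() if k != 'hash'}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def chain_is_consistent(chain):
    # Each block must store the hash of its predecessor, and no stored
    # hash may have been altered; this is what makes rewriting a
    # confirmed block impractical.
    for prev, curr in zip(chain, chain[1:]):
        if curr['previous_hash'] != prev['hash'] or prev['hash'] != block_hash(prev):
            return False
    return True
</ns0:p></ns0:div>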
<ns0:div><ns0:head n='4.4'>Functionality of the BC Deployment</ns0:head><ns0:p>As a BC-assisted FC system can provide computational and storage services, the BC placement within the the FC architecture may differ. That is, BC can be placed in the fog layer, the end-user layer, or the cloud layer. In FoBSim, however, we consider only the first two mentioned placement cases.</ns0:p><ns0:p>When the BC is deployed in the fog layer, storage and computational services are performed by the fog nodes themselves. In other words, fog nodes wear a second hat, which is a BC network hat. Thus, when storage to be provided by the fog while fog nodes are also BC nodes, data is stored in all fog nodes in the fog layer. A simple system model is demonstrated in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>.a, where only one chain is constructed in the lower fog layer and one fog control point in the upper layer monitors the BC functionality. However, such model is not practical and more complexities appear in a real-life scenario, including heterogeneous fog nodes, multiple BCs deployment, different CAs, and different service models. In such complex systems, FoBSim can be easily extended by adding the needed classes and modules and, hence, cover necessary proposed scenario entities. A note is worth underlining here is the importance of differentiating between the services provided by fog nodes who are BC nodes, and the services provided by fog nodes who are not BC nodes. The first type gets incentivized by end-users for providing both fog services and BC services, while the second type gets incentivized by end-users for providing only fog services. Such critical issues need to be taken care of, when simulating Fog-BC scenarios, to maximize the reliability of the obtained results.</ns0:p><ns0:p>In a system model where the BC is deployed in the end-user layer, we can distinguish two types of end-users; namely task requester and BC node. In a Fog-enhanced BC system, the fog controls the communications between the two types of end-users. Specifically, BC nodes perform the tasks that were sent to the BC network by the fog, which originally were requested by task requester end-users.</ns0:p><ns0:p>Further, the fog can control the privacy preserving of data and incentivize BC nodes in the form of digital currency, as in <ns0:ref type='bibr' target='#b64'>[62]</ns0:ref>. To be specific, BC nodes can be further sub-categorized according to the scenario to be simulated. Adding other types of BC nodes is up to the developers and the system model. For example,</ns0:p></ns0:div>
<ns0:div><ns0:p>the Bitcoin system is modeled in a simpler way, where the BC is directly connected to task requester end-users, and it only provides a payment ledger service. Ethereum, on the other hand, provides computational and data management services. This makes Ethereum surpass Bitcoin because it can provide more services to end-users. However, FoBSim improves both system models by optionally adding the fog layer. The system model provided by FoBSim when the BC is deployed in the end-user layer is demonstrated in</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>THE FOBSIM ENVIRONMENT</ns0:head><ns0:p>To cover all architectural elements described in Sections 3 and 4, we implemented FoBSim according to the conceptual workflow demonstrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> of the Appendices. The current version of FoBSim covers all the architectural elements of a BC system and an FC system. This means that FoBSim successfully inlines with the general architecture of a reliable BC simulation presented in <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. In fact, many more services and scenarios can be simulated using FoBSim, covering the fog layer inclusion besides the BC. As presented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> of the Appendices, different CAs can be used, different services of the BC network can be declared, and different placement scenarios of the BC network can be chosen.</ns0:p><ns0:p>When the BC network is located in the fog layer, the number of BC nodes does not need to be input because, as described earlier, each fog node is also a BC node. Nevertheless, number of task requester end-users connected to each fog node needs to be input, while some fog nodes in a PoA-based scenario might be not authorized to mint new blocks. Once the network is built, running and testing the system model can take place.</ns0:p><ns0:p>The FoBSim environment is implemented using Python v3.8, with the inclusion of some common packages such as: random, randrange, multiprocessing, time, and hashlib. The current version of FoBSim can be cloned and directly run as all the variables, lists, dictionaries, and sets have been given initial values. However, these parameters can be modified before running the code in the Sim parameters.json file. FoBSim tool is open-source and freely available at <ns0:ref type='bibr' target='#b65'>[63]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>FoBSim Modules</ns0:head><ns0:p>To facilitate the understanding of FoBSim, we demonstrate the methods within each FoBSim module in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> of the Appendices. Further, we conclude the classes and methods of FoBSim modules in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> to Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref> of the Appendices. Some notes to be taken care of need to be underlined as well:</ns0:p><ns0:p>1. There is a big opportunity for developers to implement new methods in the fog layer. For example, the fog nodes can be extensible to provide privacy-preserving mechanisms (such as described in <ns0:ref type='bibr' target='#b66'>[64]</ns0:ref>), computational services (such as described in <ns0:ref type='bibr' target='#b67'>[65]</ns0:ref>), or reputation and trust management services (such as described in <ns0:ref type='bibr' target='#b68'>[66]</ns0:ref>).</ns0:p><ns0:p>2. memPool.py: In this module, the mempool, where TXs are accumulated, is a python multiprocessing queue that allows different processes to synchronously add() and get() TXs.</ns0:p><ns0:p>3. There are other minor methods from other modules are also called by FoBSim entities that mints a new Block, or receives a new TX/Block, in order to synchronously and smoothly apply each different CA's policies, as declared in its simple version. 4. After each simulation run, some temporary files can be found in the temporary folder of FoBSim.</ns0:p><ns0:p>These files are originally initiated by the main module, the BC module, or the miner module. The temporary files are used synchronously by different FoBSim entities, mimicking the real-world interaction between BC entities. The current version of FoBSim generates some or all of the following files depending on the simulated scenario:</ns0:p><ns0:p>• Miners' local chains.</ns0:p><ns0:p>• Miners' local records of users' wallets.</ns0:p><ns0:p>• Log of blocks confirmed by the majority of miners.</ns0:p><ns0:p>• Log of final amounts in miners' wallets (initial values -staked values + awards).</ns0:p><ns0:p>• Log of coin amounts which were staked by miners. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>• The longest confirmed chain.</ns0:p><ns0:p>• Forking log.</ns0:p></ns0:div>
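<ns0:div><ns0:p>The mempool mentioned in the notes above can be sketched as a multiprocessing queue shared between producer and miner processes. The snippet below is illustrative only; the add() call mentioned earlier presumably wraps the standard put() of multiprocessing.Queue:</ns0:p><ns0:p>
from multiprocessing import Queue

# Shared mempool: end-users / fog nodes enqueue TXs, miner processes
# dequeue them when assembling a block (assumed layout, not FoBSim's code).
mempool = Queue()

def submit_transaction(tx):
    mempool.put(tx)

def collect_transactions(max_tx_per_block):
    block_txs = []
    for _ in range(max_tx_per_block):
        if mempool.empty():   # empty() is only a hint across processes
            break
        block_txs.append(mempool.get())
    return block_txs
</ns0:p></ns0:div>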
<ns0:div><ns0:head n='5.2'>Genesis Block Generation</ns0:head><ns0:p>The first block added to the chain in each simulation run is the most important block of the chain. Different scenarios imply different formats of this block, and different methods to broadcast it among, and be accepted by, miner nodes. In the current version of FoBSim, however, a genesis block is initiated with a list of TXs containing only the string 'genesis block' and the labels of the miners available when this block was generated. The block number is 0, the nonce is 0, the generator id is 'The Network', previous hash is 0, and the hash is generated using the hashing function in the blockchain.py module. The timestamp of genesis block indicates when the chain was launched, hence all blocks shall have bigger timestamp values than the genesis's timestamp. Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> in the Appendices shows a standard FoBSim genesis block, generated in a BC network that consists of two miner nodes.</ns0:p></ns0:div>
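<ns0:div><ns0:p>Based on the fields listed above, a FoBSim-style genesis block can be sketched as follows; this is illustrative Python, and the exact key names and the hashing routine of blockchain.py may differ:</ns0:p><ns0:p>
import time
import json
import hashlib

def hash_block(block):
    # Stand-in for the hashing function in blockchain.py (assumption)
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def genesis_block(miner_labels):
    block = {
        'number': 0,                              # block number is 0
        'nonce': 0,                               # nonce is 0
        'generator_id': 'The Network',            # generated by the network itself
        'previous_hash': 0,                       # no predecessor
        'transactions': ['genesis block'] + miner_labels,
        'timestamp': time.time(),                 # launch time of the chain
    }
    block['hash'] = hash_block(block)
    return block

print(genesis_block(['miner_1', 'miner_2']))
</ns0:p></ns0:div>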
<ns0:div><ns0:head n='5.3'>FoBSim Consensus Algorithms</ns0:head><ns0:p>Currently, there are three available CAs ready to be used in different simulation scenarios. Next, we describe each one individually as to facilitate any modifications by developers. However, we need to indicate that the three included CAs are in their simplest versions and may require some individual modification in case of the need of more complicated ones. Before delving into the CAs, however, we need to discuss the Gossip protocol in FoBSim, as it is deployed regardless of what CA is chosen.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3.1'>Gossip Protocol</ns0:head><ns0:p>A Gossip Protocol <ns0:ref type='bibr' target='#b69'>[67]</ns0:ref> is usually deployed in peer-to-peer (P2P) systems for maintaining the consistency of distributed data saved in decentralized networks. Specifically in BC systems, miner nodes regularly, yet randomly, gossip to their neighbours about their current version of the chain, aiming to reach consensus finality as soon as possible. According to specific characteristics of the BC, the locally saved chains are updated so that all confirmed chains are equivalent at any given moment <ns0:ref type='bibr' target='#b71'>[68]</ns0:ref>. The equivalency that any BC system is seeking is defined by the contents similarity of the chains (i.e. TXs, hashes, etc.), and the order similarity of the confirmed blocks. That is, a chain</ns0:p><ns0:formula xml:id='formula_1'>[b 1 , b 2 , b 3 ] is not equivalent to [b 1 , b 3 , b 2 ]</ns0:formula><ns0:p>despite the fact that both have similar contents.</ns0:p><ns0:p>Gossiping protocols are usually fault tolerant as many failing nodes do not affect the protocol. Furthermore, they can adapt to the dynamics of the network, so some solutions have been proposed in the literature for nodes joining and leaving the network. However, gossiping is an iterative method that never quits as long as the network is up, and it may take time to converge. Additionally, high level of communication costs is expected for gossiping, while randomly chosen neighbors are informed about updates. Thus, one cannot provide precise analysis about the time needed for the network agreement on a piece of data.</ns0:p><ns0:p>Although the implementation of such protocol is relatively simple, it is differently implemented in different systems. Some famous examples of efficient gossiping protocols include the Push-Sum protocol <ns0:ref type='bibr' target='#b72'>[69]</ns0:ref>, the Push-Flow algorithm <ns0:ref type='bibr' target='#b73'>[70]</ns0:ref>, and different versions of the Push-Pull averaging protocol <ns0:ref type='bibr' target='#b74'>[71]</ns0:ref>. Furthermore, we found that its application in FoBSim was useful, when the PoW CA is used in a multiprocessing scenario, with a relatively low puzzle difficulty. Additionally, it can be easily noted that the number of simulated TXs/blocks and the initial TX per block configuration affects the speed of the system to reach consensus finality. That is, for low number of TXs, blocks, and low ratios of TX per block, miners might not have the required time to converge locally saved chains. Accordingly, final versions of local chains in some FoBSim simulations, under such circumstances, may not coincide, which is normal and expected as described in <ns0:ref type='bibr' target='#b75'>[72]</ns0:ref>. Nevertheless, we deployed a simple Push-Pull Gossip version in FoBSim that works perfectly fine, so that modifications can be easily conducted if needed. In the current version of FoBSim, a Time To Live (TTL) parameter was not added to the Pull requests when gossiping. This, as expected, floods the network with Pull and Push requests each time a node wants to gossip. Nevertheless, we faced no problem whatsoever when the network consisted up to 1500 miners. If more miners need to be deployed in the simulation scenario,where gossiping is activated, we recommend either configuring the gossiping requests to have a TTL (i.e. 
a number of hops the request performs before it is terminated), and/or decreasing the number of neighbors to which the gossiping node sends the gossip request. That is, instead of gossiping with all neighbors, a miner can randomly choose a neighbor to gossip with. Consequently, each neighbor will gossip with a randomly chosen neighbor of its own, etc.</ns0:p><ns0:p>More details on such an implementation approach can be found in <ns0:ref type='bibr' target='#b76'>[73]</ns0:ref>, while a detailed analysis regarding the success rate of gossiping, with a given TTL in a given P2P network, can be found in <ns0:ref type='bibr' target='#b77'>[74]</ns0:ref>.</ns0:p><ns0:p>Algorithm 1 describes how the Pull request in the default Gossip protocol of the current version of FoBSim works. If the gossiping property is set to true, each miner runs this algorithm each time the Gossip() function is called for that miner (by default, the Gossip function is called each time a miner is triggered to build a new block and when a new block is received). As demonstrated in the algorithm, a default FoBSim miner requests information about the longest chain, and adopts it if its contents were agreed on by the majority of the network, which is a condition tested using Algorithm 2. Additionally, if a miner receives a new valid block, and the resulting local chain is longer than the global chain, the miner updates the global chain instantly, which represents the Push request of the Gossip protocol in FoBSim.</ns0:p><ns0:p>In big BC networks, the mentioned issues need to be carefully designed, so that the consistency of the distributed ledger by the end of the simulation run is guaranteed, while the efficiency of the algorithm is optimized.</ns0:p></ns0:div>
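<ns0:div><ns0:p>The push-pull behaviour described above can be summarised in a short Python sketch; the data structures here are simplified stand-ins for FoBSim's temporary files and confirmation log, so this is illustrative only:</ns0:p><ns0:p>
def gossip_pull(miner, global_chain, confirmations, network_size):
    # Adopt the longest known chain only if its blocks were confirmed
    # by a majority of the miners.
    if len(global_chain) > len(miner['local_chain']):
        majority = network_size // 2 + 1
        agreed = all(confirmations.get(block['hash'], 0) >= majority
                     for block in global_chain)
        if agreed:
            miner['local_chain'] = list(global_chain)

def gossip_push(miner, global_chain):
    # Publish the local chain whenever it becomes the longest one.
    if len(miner['local_chain']) > len(global_chain):
        global_chain[:] = miner['local_chain']
</ns0:p></ns0:div>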
<ns0:div><ns0:head n='5.3.2'>The Proof of Work</ns0:head><ns0:p>In a simplified scenario of a PoW-based BC, miners collect TXs from the mempool (which is a shared queue in FoBSim) and accumulate them in blocks that they mint. Specifically, all available miners compete to produce the next block that will be added to the chain. The fastest miner producing the next block is the miner whose block is accepted by all other miners of the BC. Synchronously, all blocks that are being minted by other miners are withdrawn, and all TXs within are sent back to the mempool. To mimic this competition, FoBSim can run all miners in parallel; each miner then works within an isolated core of the device on which the simulation is conducted.</ns0:p><ns0:p>Using this approach is doable and explainable in simple scenarios, where each process needs to access one or a few shared objects. However, we found it challenging to mimic complex scenarios, where a huge number of processes requires access to the same shared lists. For example, when the BC functionality is payment, the BC is deployed in the fog layer, and the CA is PoS, the wallets of end-users, fog nodes, and mining nodes all need to be global for read and update by all processes. We also experimented with the Python package multiprocessing.shared_memory, which partially solved the problem, as multiple processes can read and update values in a ShareableList object. However, as declared in the official Python documentation <ns0:ref type='bibr' target='#b78'>[75]</ns0:ref>, the ShareableList object lacks the dynamicity required in terms of length and slicing. According to the mentioned insights, we implemented two approaches for PoW mining in FoBSim: the first starts all miners in parallel (using the multiprocessing package), while the second sequentially calls miners to mint new blocks (using a FOR loop). Both approaches are available in the miners_trigger() function in the main.py module, and developers are free to use either. We do encourage developers, however, to be cautious and carefully test their results when using the parallel processing approach, as each different scenario may require a different access management scheme for different FoBSim entities.</ns0:p><ns0:p>Hence, a complex scenario simulation may require some modifications to some variables and lists so that they become shareable by all processes in different modules. Detailed instructions for implementing different memory-sharing scenarios can be found in the Python official documentation <ns0:ref type='bibr' target='#b79'>[76]</ns0:ref>.</ns0:p><ns0:p>When a miner receives a new block, it checks whether the hash of the block (in which the nonce, or the puzzle solution, is included) is in line with the acceptance condition enforced by the blockchain.py module. Further, the receiver miner checks whether sender end-users have a sufficient amount of digital coins to perform the TX (in the case of payment functionality). Unlike PoS and PoA, all miners work at the same time for achieving the next block. Hence, any miner is authorized to produce a block and there is no miner verification required. Algorithm 3 presents how PoW is implemented in FoBSim.</ns0:p></ns0:div>
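<ns0:div><ns0:p>The competitive nonce search performed by each PoW miner can be sketched as follows; this is illustrative Python, and the leading-zeros acceptance condition is an assumption standing in for the condition enforced by blockchain.py:</ns0:p><ns0:p>
import hashlib
import json

def mine_block(block, difficulty=4):
    # Increment the nonce until the block hash satisfies the acceptance
    # condition (here: a fixed number of leading zeros).
    target = '0' * difficulty
    nonce = 0
    while True:
        block['nonce'] = nonce
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith(target):
            block['hash'] = digest   # easy for any receiver to re-check
            return block
        nonce += 1
</ns0:p></ns0:div>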
<ns0:div><ns0:head n='5.3.3'>The Proof of Stake</ns0:head><ns0:p>In a simplified version of PoS, miners stake different amounts of digital coins (which they are temporarily not allowed to claim) in the BC network. The network then randomly chooses a miner to mint the next block, with a higher probability of being chosen for miners who stake more coins. Once a miner is chosen, it is the only one authorized to mint and broadcast the next block. In case of faulty TXs/blocks, the minter loses its staked coins as a penalty, while in case of correct blocks, the minter is awarded some digital coins. To mimic this in FoBSim, each miner is initialized with a specific amount of coins in its wallet. After that, a randomly generated number of coins (up to the amount in its wallet) is staked by each miner. In this way, every miner has a different probability of being chosen by the network. Next, the network randomly chooses, say, 10% of the available miners and picks the one with the highest stake. This chosen miner's address is immediately broadcast to all miners so that any block received from any other miner is rejected.</ns0:p><ns0:p>Once the new block is received, it is validated and added to the local chain. Algorithm 4 presents how PoS is implemented in FoBSim.</ns0:p><ns0:p>Here, a wide design space is available for implementing reputation management schemes in FoBSim.</ns0:p></ns0:div>
<ns0:div><ns0:p>Different scenarios and different applications require different parameters affecting entities' reputation.</ns0:p><ns0:p>Further, adding other types of miners, end-users, or even fogs implies that different DBs can be proposed.</ns0:p><ns0:p>It is also worth mentioning here that we found it unnecessary to use the multiprocessing package, because only one miner is working on the next block. Hence, no competition is implied in the PoS scenario.</ns0:p></ns0:div>
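A minimal sketch of the minter-selection step described above (illustrative only; the miner objects are assumed to expose a stake attribute, and the 10% sample ratio mirrors the example in the text):

import random

def choose_minter(miners, sample_ratio=0.1):
    # Randomly sample a fraction of the available miners (at least one),
    # then pick the candidate with the highest staked amount.
    sample_size = max(1, int(len(miners) * sample_ratio))
    candidates = random.sample(miners, sample_size)
    return max(candidates, key=lambda m: m.stake)

The returned miner's address would then be broadcast to all other miners, which reject any block coming from a different generator.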
<ns0:div><ns0:head n='5.3.4'>The Proof of Authority</ns0:head><ns0:p>In a simplified version of the PoA algorithm, only network entities authorized by the network administrators are eligible to mint new blocks. Regardless of the BC functionality, there is no need to deploy the multiprocessing package for PoA-based scenarios because, as in PoS-based scenarios, there is no competition. To mimic PoA in FoBSim, we allow the user to declare which entities are authorized to mint new blocks. This declaration is requested from the user when the BC is deployed in either the fog or the end-user layer. That is, each fog node administers a group of end-users and provides communication (and possibly computation) services to them. However, not every fog node in the fog layer has to be a BC node as well; it may remain only a fog node. Authorized fog nodes then wear both hats: fog node and BC miner. When the BC is deployed in the end-user layer, authorized miners are responsible for minting new blocks and maintaining the distributed ledger.</ns0:p></ns0:div>
<ns0:div><ns0:p>Meanwhile, unauthorized miners are only responsible for validating new blocks received from their neighbors and maintaining the distributed ledger. This approach allows for comfortably emulating a scenario where the BC is deployed in the fog layer and only part of the fog nodes participate in the BC functionality. Notice that a fog node that is also a BC node performs all the required tasks in logical isolation. This means that a fog node administering a group of end-users has a buffer to save the end-users' TXs, but it does not use these TXs to mint a new block. Rather, it sends these TXs to the mempool as required and then, only if it is authorized, collects TXs from the mempool. Notice also that the mempool is a simple queue in FoBSim, yet for some scenarios it can be implemented as a priority queue. Our implementation of isolating the services provided by a fog node that is also a BC miner facilitates the simulation of scenarios where TXs need to be processed according to their priority. For example, miner nodes in Ethereum usually choose the SCs with the highest Gas/award provided by end-users. This is a type of prioritizing that can be simulated in FoBSim. Similarly, in Bitcoin, a priority value is computed for each TX according to Equation (1), and TXs with higher fees and higher priority values are processed faster <ns0:ref type='bibr' target='#b81'>[77]</ns0:ref>. The default PoA algorithm implemented in FoBSim is clarified in Algorithm 5.</ns0:p><ns0:formula xml:id='formula_2'>Priority = (∑ inputAge × inputValue) / TXsize (1)</ns0:formula></ns0:div>
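As an illustration of how the default FIFO mempool could be replaced by a priority queue driven by Equation (1) (a sketch only; the TX dictionary fields 'inputs', 'size', and 'id' are hypothetical and do not correspond to FoBSim's actual TX model):

import queue

def tx_priority(tx):
    # Equation (1): sum of (input age x input value), divided by the TX size.
    return sum(inp['age'] * inp['value'] for inp in tx['inputs']) / tx['size']

mempool = queue.PriorityQueue()

def push_tx(tx):
    # PriorityQueue serves the smallest key first, so the priority is negated;
    # the TX id breaks ties between equal priorities.
    mempool.put((-tx_priority(tx), tx['id'], tx))

def pop_tx():
    _, _, tx = mempool.get()
    return tx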
<ns0:div><ns0:head n='5.4'>Transaction/Block Validation in FoBSim</ns0:head><ns0:p>Here, we need to underline some differences between the terms Verification, Validation and Confirmation, and to see how FoBSim differentiates between those terms in different scenarios. As we have touched on these differences in <ns0:ref type='bibr' target='#b63'>[61]</ns0:ref>, we need to accurately define each of these terms in order to correctly describe how FoBSim works.</ns0:p><ns0:p>Validation is the process in which a miner (either a minter or a receiver) checks the correctness of a claim.</ns0:p><ns0:p>That is, in the case of a minter miner, the puzzle solution (or nonce) provided with the minted block needs to be correct before the block is broadcast. If the nonce is valid, the block is broadcast; otherwise, a new solution is searched for. In the case of a receiver miner, the nonce is checked once: if the solution is valid, the block is accepted; otherwise, the block is rejected.</ns0:p><ns0:p>In the case of the payment functionality, the validity of TXs fetched from the mempool is tested. This means that the amount of coins in the wallet of each TX's sender is compared to the amount to be transferred. If the wallet contains less than the transferred amount, the TX is withdrawn from the block. Later, when the new block is received by a miner, the same hash and TX validation takes place, except that if any TX is invalid, the whole block is rejected. In the case of a block rejection, the minter miner is usually reported in a reputation-aware context. If all the contents of a newly received block are valid (i.e. the hash, the TXs, the wallets, the block number, and the nonce), the block is added to the locally saved chain. Here, we can say that the TXs are confirmed, because the block is added to the chain (i.e. the block is confirmed).</ns0:p><ns0:p>Verification, on the other hand, is the process of verifying the identity of an entity. For example, in the case of PoA, only authorized miners are allowed to mint new blocks. Similarly, in the case of PoS, a received block should be generated by the miner that all other miners expect to receive the new block from.</ns0:p><ns0:p>Additionally, public information about end-users' wallets needs to be accessible by miners to validate their TXs. Thus, a received block containing TXs generated by end-users who do not have wallets, or whose wallet contents are not readable by miners, cannot be validated and confirmed. Failing to confirm a TX is not necessarily caused by end-users not having sufficient coins to transfer, but may also happen for end-users who cannot be verified.</ns0:p><ns0:p>All of these critical principles are, by default, taken care of in FoBSim. All miners are informed about the end-users' public identities and wallet contents. After that, transferred coins are updated locally by each miner. Consequently, a new TX from the same end-user will be compared to the updated amount of coins in its wallet. Invalid TXs are not included in a block being minted, while an invalid TX in a received block causes the rejection of the whole block. Once a block's contents are validated and the TX/block generators are verified, the TXs are confirmed, the locally saved wallet amounts are updated, and the block is locally confirmed and added to the chain.
Interestingly, the very small probability of a double spend attack <ns0:ref type='bibr' target='#b82'>[78]</ns0:ref>, which can appear in PoW-based scenarios, can be easily simulated in FoBSim.</ns0:p><ns0:p>All processes actually take place during each simulation run, rather than being substituted with a</ns0:p></ns0:div>
<ns0:div><ns0:p>small delay as in most BC simulation tools we checked. Hence, the validation, verification, and confirmation processes can be modified according to the scenario to be simulated. Nevertheless, Bitcoin decreases the double spend attack probability by regularly raising the difficulty of the puzzle, which is a property that can be modified in FoBSim as well. To facilitate the simulation of such critical scenarios, we deployed two broadcasting approaches for newly minted blocks. The first performs the broadcast using a simple FOR loop, where miners sequentially validate and confirm new blocks. The second performs the broadcast using the multiprocessing package, which lets all miners receive and process new blocks at the same time. Accordingly, developers need to be cautious when using the second approach, because of critical challenges similar to those mentioned in Subsection 5.3.2.</ns0:p></ns0:div>
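A minimal sketch of the payment-validation rule described in this subsection (illustrative only; the 'from', 'to', and 'value' field names are assumptions, not FoBSim's exact TX structure). TXs whose senders lack sufficient coins are dropped, and the locally tracked wallets are updated as each TX is accepted:

def validate_payment_txs(txs, wallets):
    # Returns the subset of TXs whose senders hold enough coins,
    # updating the locally tracked wallet balances along the way.
    valid = []
    for tx in txs:
        sender, receiver, amount = tx['from'], tx['to'], tx['value']
        if wallets.get(sender, 0) >= amount:
            wallets[sender] -= amount
            wallets[receiver] = wallets.get(receiver, 0) + amount
            valid.append(tx)
    return valid

On the receiver side, a single failing TX would instead cause the whole received block to be rejected, as described above.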
<ns0:div><ns0:head n='5.5'>Awarding winning miners</ns0:head><ns0:p>Generally speaking, BC miners get rewarded by two system entities for providing the BC service (i.e.</ns0:p></ns0:div>
<ns0:div><ns0:head>BC functionality).</ns0:head><ns0:p>The first is the end-user who generated the TX, who pays a small fee once the TX is confirmed (e.g. Gas in Ethereum). The second is the BC network itself (i.e. all miner nodes), which updates the winning miner's wallet once a new block (minted by the winning miner) is confirmed. We can notice here how important it was to clarify the difference between validation, verification, and confirmation.</ns0:p><ns0:p>That is, a miner is verified by its public label and public wallet key/address (ID). Then, the claim that a miner is authorized to mint a new block is validated. Finally, a miner is awarded for minting a confirmable block (its wallet is updated).</ns0:p><ns0:p>In FoBSim, we implemented the second mechanism, where miners get rewarded for their services by the network. This part is non-trivial because the reward, too, needs to be agreed on by the majority of BC miners (i.e. at least 51%), which requires that they confirm the block. The default implementation of FoBSim handles this. As for the first incentivization mechanism, we found that it is not applicable in many scenarios, hence we left it for developers to add if needed. For example, to allow end-users to provide fees for getting tasks into the BC, one field can be added to generated TXs, containing the amount of fees the end-user is willing to pay for the service. Once a miner picks a TX (typically, TXs with higher fees are picked and processed faster) and the block containing the TX is confirmed, all miners add the TX fees to the winning miner's wallet. Figure <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>-a of the Appendices presents a screenshot of FoBSim output, concluding that a new block was received from Miner 2 by Miner 3, and that the BC module just obtained the needed confirmations to consider the new block confirmed by the whole BC network. Thus, the minter is awarded. Later, the receiver miner presents its updated local chain according to the successful network confirmation. On the other hand, Figure <ns0:ref type='figure' target='#fig_10'>4-b</ns0:ref> presents a screenshot of the miner wallets log after a simulation run, where the PoA CA was used and all miners, except for Miner 5, were authorized to mint new blocks (initial wallet value was 1000).</ns0:p></ns0:div>
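A hedged sketch of how the optional end-user fee mechanism suggested above could be added by developers (the 'fee', 'from', 'transactions', and 'generator' fields are hypothetical names; the mining award of 5 coins matches the value used in the case studies):

def reward_winning_miner(block, wallets, mining_award=5):
    winner = block['generator']
    # Network reward, credited once the block obtains enough confirmations.
    wallets[winner] = wallets.get(winner, 0) + mining_award
    # Optional end-user fees: each confirmed TX transfers its fee to the winner.
    for tx in block['transactions']:
        fee = tx.get('fee', 0)
        if fee:
            wallets[winner] += fee
            wallets[tx['from']] = wallets.get(tx['from'], 0) - fee
    return wallets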
<ns0:div><ns0:head n='5.6'>Strategies in FoBSim</ns0:head><ns0:p>As has been discussed so far, there are some default strategies used by FoBSim entities throughout each simulation run. For instance, TXs are picked by miners with no preference (e.g. for the highest Gas or priority). Also, the default chain is a single linear chain, and new blocks are added to its top. Some applications, however, have multiple chains or multi-dimensional chains, e.g. Directed Acyclic Graph (DAG)-based chains. Additionally, if two competing blocks appear in the network, the block accepted by the majority of miners is confirmed, whereas in some BC systems the older block is confirmed even if only a minority accepted it. Further, a valid block is immediately announced to the FoBSim network once found, while in some applications there might be a conditional delay. For instance, if a selfish mining attack scenario is to be simulated, miners would prefer to keep their newly found blocks secret, hoping they will find the next block as well <ns0:ref type='bibr' target='#b83'>[79]</ns0:ref>.</ns0:p><ns0:p>The current version of FoBSim assumes that the data flows from end-users to the fog, and from the fog to the BC network. However, there are other possible data flow schemes that can be simulated, as depicted in Figure <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>. For example, the BC in the current version provides DLT services to end-users, which communicate with the BC through the fog layer, while in some applications services might be provided by the fog layer to the BC network or from the BC network to the fogs. Further, an application where end-users may need to request data directly from the BC might be possible, which implies a different data flow scheme as well. FoBSim facilitates the modification of the data flow in the simulated application, and presents an extra Cloud module that can add more possibilities to the application.</ns0:p><ns0:p>Network connectivity characteristics are a major and critical concern in any BC system. To facilitate the job of network architects, FoBSim allows defining the number of nodes in each layer and the number of neighbors per miner. Accordingly, the effect of manipulating the topology of simulated networks can be easily captured.</ns0:p></ns0:div>
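For orientation, the kind of topology and data-flow parameters discussed above could be grouped as below. Note that these key names are purely illustrative and do not necessarily match FoBSim's actual configuration keys, although the values mirror the settings used in the case studies (Tables 3 and 8):

# Hypothetical parameter grouping; the real FoBSim configuration keys may differ.
sim_parameters = {
    'bc_placement': 'fog',             # or 'end_user'
    'consensus_algorithm': 'PoW',      # 'PoS' or 'PoA'
    'bc_function': 'data_management',  # 'payment', 'identity', 'computation'
    'num_fog_nodes': 10,
    'users_per_fog_node': 2,
    'txs_per_user': 2,
    'num_miners': 10,
    'neighbors_per_miner': 3,
    'txs_per_block': 5,
    'puzzle_difficulty': 20,
    'gossip_activated': True,
    'delay_between_neighbors_ms': 12,  # 12 ms in the fog layer, 1000 ms in the end-user layer
}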
<ns0:div><ns0:head n='5.7'>FoBSim Constraints</ns0:head><ns0:p>Some properties have not been implemented in the current version of FoBSim, such as Merkle Trees, Digital Signatures and Mining Pools. Additionally, the FoBSim source code can be run on a PC with Microsoft Windows or Linux OS, but it may need some modifications to run on a PC with macOS (some functions require access to OS operations such as deleting or modifying files located in secondary memory). Finally, the default recursion limit in Python may restrict the number of miners to 1500, raising an error regarding the maximum allowed memory use by the interpreter. To solve this, one can raise the limit using sys.setrecursionlimit() in the main function, as sketched below.</ns0:p></ns0:div>
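For example, the recursion limit can be raised at the top of the main function before a large run is started (the value 10000 is only an example; choose one that fits the simulated network size and the available memory):

import sys

# Allow deeper recursion so that more than ~1500 miners can be simulated.
sys.setrecursionlimit(10000)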
<ns0:div><ns0:head n='5.7.1'>Merkle Trees</ns0:head><ns0:p>A Merkle Tree (MT), or hash tree, is a data structure, mostly a binary tree, whose leaves are chunks of data. Subsequently, each leaf is hashed together with its sibling to produce their parent node, which represents its two children. Hashes are recursively hashed together, in a binary manner, until obtaining one root that represents the whole tree. MTs are used in BCs such as Bitcoin to decrease the probability of security attacks, along with other security measures, to reach the level where it is (a) easy for lightweight nodes to validate new TXs and (b) computationally impractical to attack/alter a BC. That is, each TX in any given block is hashed with the next, and so on, so that one root hash of all TXs is saved in the block header. Using this root hash, and other components of the block, the hash of the block is generated. This means that not only is a confirmed block practically impossible to alter, but so is any confirmed TX within it.</ns0:p><ns0:p>However, not all BC systems deploy an MT approach, due to probable conflicts with system requirements or objectives. Thus, we decided to leave this to be implemented by developers according to the systems that need to be simulated, and in the default configuration of the current version of FoBSim all miners are full-node miners. That is, every miner locally stores a complete copy of the chain so that any TX can be validated according to TXs recorded locally.</ns0:p><ns0:p>Additionally, there are different deployment models of MT approaches in different BC systems. That is, some BCs may deploy MTs for hashing other chunks of data/tokens instead of TXs.</ns0:p><ns0:p>To implement an MT approach in FoBSim, one can add a function that loops through all TXs in a newly minted block. After that, the root of the MT is added to the block before it is broadcast to the BC, and the hash of the block is computed accordingly. Miners who receive a new block shall, accordingly, validate the added root. Hence, a validation step, testing the correctness of the MT root against the TXs within the new block, needs to be added to the validation function in the miner module of FoBSim. To make use of such an added property, one can define a lightweight miner type that saves only the header of a newly confirmed block instead of the whole block. Accordingly, such miners validate new TXs against this light chain of headers, hence consuming less time, energy, and storage to maintain the work of the BC system.</ns0:p></ns0:div>
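A minimal sketch of such an MT root function (illustrative only; single SHA-256 hashing is used here for brevity, whereas Bitcoin applies double hashing, and the JSON serialization of TXs is an assumption):

import hashlib
import json

def merkle_root(txs):
    # Hash every TX, then pairwise hash each layer until one root remains.
    layer = [hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()
             for tx in txs]
    if not layer:
        return None
    while len(layer) > 1:
        if len(layer) % 2 == 1:
            layer.append(layer[-1])      # duplicate the last hash on odd layers
        layer = [hashlib.sha256((layer[i] + layer[i + 1]).encode()).hexdigest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

The returned root would be added to the block header before broadcasting, and receiver miners would recompute it from the received TXs to validate the block.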
<ns0:div><ns0:head n='5.7.2'>Digital Signatures</ns0:head><ns0:p>As our main aim is to generally simulate TX generation, validation, and confirmation in different BC-based, inter-operation, and consensus scenarios, we did not target security issues. This is because such issues are determined individually for each case to be simulated, leading to different mining economics.</ns0:p><ns0:p>Security techniques and approaches in BC-based Fog and IoT systems have been discussed in many previous works, such as <ns0:ref type='bibr' target='#b84'>[80]</ns0:ref>. Specifically, digitally signed coins/tokens are primarily used in real-world applications of cryptocurrencies in order to prevent security attacks, such as the double spending attack. What the default version of FoBSim provides, however, is a simplified protocol of coin transfer between users. That is, each miner holds a locally saved record of user wallets, which is used in TX validation in the case of the payment BC functionality. We found that this approach can output results similar to those output by systems with signed coins, except that it allows a double spending attack in case of malicious end-users. If a scenario is to be simulated where there are faulty/malicious entities among system users (not implemented in the default version of FoBSim), then digitally signed coins need to be implemented as well. Additionally, miner nodes in FoBSim are assumed to be trusted to send reports of confirmed blocks. Thus, reports sent by miner nodes to the network in order to participate in voting on winning miners are always assumed legitimate. To sum up, FoBSim miners can track who paid whom and how much, and they are trusted to participate in voting without a cryptographic proof.</ns0:p><ns0:p>In other implementation approaches, FoBSim miners may track who has transferred what units of which stocks (i.e. digitally signed coins/tokens) to whom, and their votes regarding winning miners must be verified by network entities (i.e. by also adding the new block to their local chains and following this addition with other new blocks, each newly added block can be considered, in a sense, a confirmation).</ns0:p><ns0:p>Similarly, end-users who generate new TXs do not need to sign their generated TXs as they are assumed trusted (i.e. the default implementation of FoBSim does not include malicious end-users).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.7.3'>Mining Pools</ns0:head><ns0:p>Pool mining is the collaboration between miners to form mining pools and distribute the earned rewards in accordance with pool policies, so that each miner earns a steady income <ns0:ref type='bibr' target='#b85'>[81]</ns0:ref>. Examples of such mining pools include BTC.com, F2Pool, and Slush Pool. Mining pools provide the advantages of making mining profits more predictable to miners and allowing small miners to participate. However, the existence of pool mining increases the probability of system centralization and discourages full nodes. The necessity of adding a mining pool extension to FoBSim is dependent on the scenario to be simulated. The general idea of mining pools is to allow miners to perform mining under the umbrella of a named group: if one of the group's miners finds a block, the award is divided among all group members according to the computational power each member provides. A mining pool is managed by a pool manager, whose protocol is defined according to the business model of the pool.</ns0:p><ns0:p>In the current version of FoBSim, all miners are full-node miners. That is, each miner attempts to solve the puzzle using its own resources, validates newly generated TXs, and accumulates them into new blocks. When a block is received by a full node, it is validated and confirmed locally (all miners save the whole BC for validation, verification, and confirmation). Consequently, any profits and awards obtained through the full miner's work are directly added to the miner's wallet. In a mining pool, by contrast, a miner receives an award proportional to the computational power it provides, even if it was the one that found the next block.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CASE STUDIES</ns0:head><ns0:p>Following the validation and verification methods of simulation models presented in <ns0:ref type='bibr' target='#b86'>[82]</ns0:ref>, we have so far discussed the technologies and the paradigms lying within our proposed FoBSim environment. Further, we highlighted our proposal's novelty compared to other related works, discussed the event validity in FoBSim, and presented the algorithms and modules lying within to facilitate a structured walk-through validation.</ns0:p><ns0:p>Next, we follow an operational validity approach by presenting case studies that we simulated using FoBSim. The setup and behaviour of FoBSim is discussed, and the results of the simulation runs are presented afterwards. Other miners then accept the new block if it was generated by the minter they were informed about, hence the verification process takes nearly no time (assuming that the transmission delay between miners is set to 0). In simple versions of those two algorithms, all miners have the same source code, thus all miners may be minters, verifiers, and chain maintainers.</ns0:p><ns0:p>The PoA algorithm is the tricky one, though. This is because all authorized miners mint new blocks, verify newly minted blocks, and maintain the chain locally. Meanwhile, other BC nodes verify new blocks and maintain the chain, but do not mint new blocks <ns0:ref type='bibr' target='#b87'>[83]</ns0:ref>. Consequently, every BC node has a list of authorized entities, including the methods to verify their newly minted blocks. This implies that the more authorized entities there are, the more complex the verification can be on the receiver side. Accordingly, it is advised that a small number of entities be given authorization, to decrease the complexity of verification <ns0:ref type='bibr' target='#b88'>[84]</ns0:ref>. Meanwhile, the more maintainers in a PoA-based BC, the higher the overall security level of the system.</ns0:p><ns0:p>In this case study, we ran FoBSim several times, deploying different CAs under similar conditions. The simulation runs specifically targeted the measurement of the average time consumed by each CA, from the moment a miner is triggered to mint a new block until the minted block is confirmed by at least 51% of the other BC miners. To accurately measure this average, we added variables holding the starting time and the elapsed time, exactly before calling the build_block() function and right after a block is confirmed by reaching the required number of confirmations.</ns0:p><ns0:p>As described in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>, we changed the difficulty of the puzzle during the PoW-based BC simulation runs from an easy level (5), to a harder level (10), and finally to very hard levels (15) and (20). During the runs where PoA was used, we changed the number of authorized miners from 2/5 (2 authorized out of a total of 5 miners) to 5/10, 10/20, and 25 authorized miners for the remaining runs.</ns0:p><ns0:p>As we wanted to abstractly measure the average confirmation time, we avoided the Computational Services and the Payment functionality, because both imply extra time consumption for performing the computational tasks and validating the payments, respectively.</ns0:p>
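A rough sketch of the timing measurement described above (illustrative; trigger_minting and is_confirmed are hypothetical callables standing in for FoBSim's block-building call and its confirmation-log check, not actual FoBSim function names):

import time

def timed_block_confirmation(trigger_minting, is_confirmed, poll_interval=0.01):
    # Start the clock when the miner is triggered to mint a new block ...
    start = time.time()
    block = trigger_minting()
    # ... and stop it once at least 51% of the miners have confirmed the block.
    while not is_confirmed(block):
        time.sleep(poll_interval)
    return time.time() - start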
<ns0:p>We also avoided the Identity management functionality because the number of TXs per end-user is limited by the number of ID attributes required to be saved on the chain. Hence, our best choice was the data management functionality. We kept the total number of TXs delivered to the mempool unchanged, which gives equivalent input for all simulation runs.</ns0:p><ns0:p>However, we changed the number of TXs generated by each user to be equal to the number of miners in each run. More precisely, the total number of TXs is determined using Equation <ns0:ref type='formula'>2</ns0:ref>, where a, b and c are the number of fog nodes, the number of end-users per fog node, and the number of TXs per end-user, respectively. Additionally, winning miners were awarded 5 coins for each confirmed block they had minted.</ns0:p><ns0:formula xml:id='formula_3'>|TXs| = a × b × c (2)</ns0:formula><ns0:p>We deployed the FoBSim environment on the Google Cloud Platform, using a C2-standard-16 VM (up to 3.8 GHz, 16 vCPUs, 64 GB memory) with Debian OS. We chose to place the BC in the end-user layer for all runs, for no reason other than testing the reliability and stability of FoBSim components and results in such complex, inter-operable <ns0:ref type='bibr' target='#b89'>[85]</ns0:ref> Edge-Fog-BC scenarios.</ns0:p><ns0:p>In this case, we compare BC deployment efficiency in the fog layer and the end-user layer. The efficiency we are seeking is determined by both the total time needed to perform all requested BC services and the total storage cost. That is, less time and storage needed to perform all tasks (e.g. confirm all newly minted blocks or run the generated SCs) indicates higher efficiency of the BC system. To fairly compare the BC efficiency when deployed in those two layers, we fix all BC parameters that are configurable in FoBSim, except for the number of miner nodes, to deduce the trend in total time consumption when the network dynamically allows new nodes to join. We deployed the FoBSim tool on the Google Cloud Platform, using a C2-standard-16 VM (up to 3.8 GHz, 16 vCPUs, 64 GB memory) with Ubuntu OS. The detailed parameter configuration for the described scenarios is presented in Table <ns0:ref type='table' target='#tab_11'>8</ns0:ref>.</ns0:p><ns0:p>Recalling the results presented in <ns0:ref type='bibr' target='#b92'>[88]</ns0:ref> and <ns0:ref type='bibr' target='#b93'>[89]</ns0:ref>, the average transmission delay between miners in the fog layer can be estimated at 12 ms, while between miners in the end-user layer it can be estimated at 1000 ms (higher transmission delays were reported in well-known BC networks, such as Bitcoin, in <ns0:ref type='bibr' target='#b94'>[90]</ns0:ref>). We simulated the data management BC service and PoW consensus with gossiping activated. According to Equation 2, the number of requested tasks was automatically modified due to the continuous change in the number of fog nodes (since we oscillated the number of fog nodes to deduce the trend of total time consumption).</ns0:p>
<ns0:p>The total average time for performing the requested BC services, in similar simulation settings, while the BC is deployed in the end-user and fog layers, is compared in Figure <ns0:ref type='figure' target='#fig_15'>7</ns0:ref>.a.</ns0:p><ns0:p>To accurately measure the storage cost during the simulation run, we implemented an independent Python script, available in the FoBSim repository, namely storage_cost_analysis.py. As described in Section 5.1, the output analysis files, ledgers, wallets, etc. of running a given simulation scenario using FoBSim are automatically saved in a folder titled 'temporary' within the same repository. Thus, our storage analyzer regularly (every second by default) measures the size of this temporary folder while the simulation is running. The measured sizes are then saved into an Excel sheet to facilitate the analysis we are seeking. To exemplify this, the total storage used by the BC network is compared in Figure <ns0:ref type='figure' target='#fig_15'>7</ns0:ref>.b, where similar simulation settings were configured (detailed in Table <ns0:ref type='table' target='#tab_11'>8</ns0:ref>), except for the layer where the BC is deployed.</ns0:p><ns0:p>It can be noted from the results presented in the third case that deploying the BC network in the fog layer may enhance its efficiency in terms of the total time consumed to perform similar tasks in a similar configuration, and in terms of the total storage used by the BC network to maintain the same distributed ledger (same number of confirmed blocks by the end of the simulation run).</ns0:p></ns0:div>
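A simplified sketch of such a storage analyzer (illustrative only; the actual storage_cost_analysis.py script writes to a spreadsheet, whereas this version logs to a CSV file and assumes the default 'temporary' folder name):

import csv
import os
import time

def folder_size(path):
    # Total size, in bytes, of all files under the given folder.
    return sum(os.path.getsize(os.path.join(root, name))
               for root, _, files in os.walk(path) for name in files)

def monitor_storage(path='temporary', interval=1.0, out_file='storage_log.csv'):
    # Sample the folder size once per second and log it for later analysis.
    with open(out_file, 'w', newline='') as fh:
        writer = csv.writer(fh)
        writer.writerow(['elapsed_s', 'size_bytes'])
        start = time.time()
        while os.path.isdir(path):
            writer.writerow([round(time.time() - start, 1), folder_size(path)])
            fh.flush()
            time.sleep(interval)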
<ns0:div><ns0:head n='7'>CONCLUSIONS</ns0:head><ns0:p>In this paper, we proposed a novel simulation tool called FoBSim that mimics the interaction between the entities of an integrated Fog-Blockchain system. We briefly described the architectural elements of Fog Computing (FC) and Blockchain (BC) technologies, and designed FoBSim in order to cover all the elements we described. We deployed three different consensus algorithms, namely PoW, PoS and PoA, and different deployment options of the BC in an FC architecture, namely the end-user layer and the fog layer. Additionally, we fine-tuned the FoBSim modules so that various services, provided by FC and BC, can be adopted for any proposed integration scenario. The services that can be simulated are distributed Payment services, distributed Identity services, distributed Data storage and distributed Computational services (through Smart Contracts). In our paper, we described the modules of FoBSim, the TX modelling, the Genesis block generation, the gossiping in FoBSim, the Consensus Algorithms, TX and block validation, incentive mechanisms, and other FoBSim strategies. We validated FoBSim with case studies: the first compares the average time consumption for block confirmation in different consensus algorithms, while the second analyzes the effect of gossiping on the consistency of the distributed ledger under fluctuating puzzle difficulty and transmission delay configurations. The last case compared the efficiency of the BC network, in terms of the total time consumption and total storage required to perform similar tasks, when deployed in the fog layer against the end-user layer. The results of the first case showed that the PoS algorithm provides the lowest average block confirmation time, followed by PoA and PoW, respectively. The results of the second case showed how the gossiping protocol implemented within FoBSim effectively contributes to enhancing the consistency of the distributed ledger. The last case showed that deploying the BC network in the fog layer may drastically enhance the BC performance, in terms of total execution time and total storage cost, due to the low transmission delay between miners.</ns0:p><ns0:p>In future releases of FoBSim, we plan to make more CAs available, as well as to enhance the identity management scheme in FoBSim. We will further investigate adding the Reputation management service in a generalized and simple manner so that analysis can be provided, while proposed reputation management ideas, conditions, or properties can be easily implemented/modified.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Service models provided by Cloud/Fog systems, and their relevant service models provided by BC systems</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.56,144.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. FC-BC integration system model, where (a) the BC is deployed in the fog layer, and (b) the BC is deployed in the end-user layer</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,413.57,122.45' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3.b.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Algorithm 3:</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The default PoW mining algorithm in FoBSim miner Result: New block β confirmation initialization Self(miner µ g ); Collect TXs from memPool; Gossip(); if BC function is Payment then validate collected TXs else if BC function is Computational Services then eval(TXs); add the evaluation results to TXs end Accumulate TXs in a new BC block β ; Find the puzzle solution of β (nonce); Broadcast β to neighbors; end if New block β is received then Gossip(); if β nonce is correct then if BC function is Payment then validate and confirm TXs in β end add block β to the local chain; Broadcast β to neighbors; report a successful block addition [β , µ g ] end end</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Possible data flow schemes in an integrated Fog-BC system</ns0:figDesc><ns0:graphic coords='18,141.73,63.78,413.55,193.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>double spending attack. Different BC-based cryptocurrency systems used different mechanisms and protocols regarding signing and minting new coins, hence, different simulated scenario would require the implementation of the reference coins and digital signing techniques to be simulated. Examples might include a research work that aims at comparing different signing protocols in different CAs. This being said, FoBSim does not target a specific cryptocurrency system, such as BitCoin, yet it provides the generalized environment used in such systems, where problems and solutions can be implemented and emulated by researchers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Average block confirmation time (a) consumed by PoS-based BC vs. PoA-based BC, relatively to the number of miner nodes (b) consumed by PoW-based BC (the cases of difficulty = 5, 10, 15, and 20), relatively to the number of miner nodes</ns0:figDesc><ns0:graphic coords='20,141.73,63.78,413.57,130.02' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The effect of activating the gossiping protocol in FoBSim, on the number of chain versions at the end of PoW-based BC simulation runs, where (a) the puzzle difficulty fluctuates from 5 to 20 and (b) the transmission delay between neighboring miners fluctuates from 0 to 25 ms</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. BC efficiency comparison while deployed in end-user layer vs. fog layer, in terms of (a) Total elapsed time for the BC network to perform requested services, and (b) Total storage used by the BC network to perform requested services</ns0:figDesc><ns0:graphic coords='23,141.73,63.78,413.55,129.69' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,141.73,146.75,413.56,136.52' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Description of abbreviations used within the manuscript 5. FoBSim is the first simulation environment whose primary goal is to mimic integration scenarios of FC and BC technologies.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Abbreviation Description</ns0:cell><ns0:cell cols='2'>Abbreviation Description</ns0:cell></ns0:row><ns0:row><ns0:cell>BC</ns0:cell><ns0:cell>Blockchain</ns0:cell><ns0:cell>PoW</ns0:cell><ns0:cell>Proof of Work</ns0:cell></ns0:row><ns0:row><ns0:cell>FC</ns0:cell><ns0:cell>Fog Computing</ns0:cell><ns0:cell>PoS</ns0:cell><ns0:cell>Proof of Stake</ns0:cell></ns0:row><ns0:row><ns0:cell>IoT</ns0:cell><ns0:cell>Internet of Things</ns0:cell><ns0:cell>PoET</ns0:cell><ns0:cell>Proof of Elapsed Time</ns0:cell></ns0:row><ns0:row><ns0:cell>CA</ns0:cell><ns0:cell>Consensus Algorithms</ns0:cell><ns0:cell>PoA</ns0:cell><ns0:cell>Proof of Authority</ns0:cell></ns0:row><ns0:row><ns0:cell>IoE</ns0:cell><ns0:cell>Internet of Everything</ns0:cell><ns0:cell>TTP</ns0:cell><ns0:cell>Trusted Third Party</ns0:cell></ns0:row><ns0:row><ns0:cell>DLT</ns0:cell><ns0:cell>Distributed Ledger Technology</ns0:cell><ns0:cell>P2P</ns0:cell><ns0:cell>Peer-to-Peer</ns0:cell></ns0:row><ns0:row><ns0:cell>SC</ns0:cell><ns0:cell>Smart Contracts</ns0:cell><ns0:cell>TX</ns0:cell><ns0:cell>Transaction</ns0:cell></ns0:row><ns0:row><ns0:cell>GUI</ns0:cell><ns0:cell>Graphical User Interface</ns0:cell><ns0:cell>TTL</ns0:cell><ns0:cell>Time To Live</ns0:cell></ns0:row><ns0:row><ns0:cell>QoB</ns0:cell><ns0:cell>Quality of Blockchain</ns0:cell><ns0:cell>DAG</ns0:cell><ns0:cell>Directed Acyclic Graph</ns0:cell></ns0:row><ns0:row><ns0:cell>PoG</ns0:cell><ns0:cell>Proof of Generation</ns0:cell><ns0:cell>MT</ns0:cell><ns0:cell>Merkle Tree</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 1: The default Gossip protocol in FoBSim Result: Confirmed Local chain in µ g</ns0:figDesc><ns0:table><ns0:row><ns0:cell>for block in C do</ns0:cell></ns0:row><ns0:row><ns0:cell>if Condition 1 OR Condition 2 then</ns0:cell></ns0:row><ns0:row><ns0:cell>chain is confirmed = False;</ns0:cell></ns0:row><ns0:row><ns0:cell>break</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>return chain is confirmed</ns0:cell></ns0:row></ns0:table><ns0:note>initialization: Self(miner µ g ); confirmed chain = self.local chain; temporary global chain = longest chain; Condition 1 = len(temporary global chain) > len(confirmed chain); Condition 2 =blocks in temporary global chain are confirmed by network majority; if Condition 1 AND Condition 2 then confirmed chain = temporary global chain; self.local chain = confirmed chain; self.top block = confirmed chain[str(len(confirmed chain)-1)]; if BC function is Payment then self.log users wallets = confirmed chain from.log users wallets end end Algorithm 2: The default chain confirmation function in FoBSim Result: bool chain is confirmed Passed parameters: Chain C, network size; initialization: chain is confirmed = True; block confirmation log = blockchain.confirmation log; Condition 1 = not (C[block]['hash'] in block confirmation log); Condition 2 = block confirmation log[chain[block]['hash']]['votes'] <= (network size / 2);</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Algorithm 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The default PoS mining algorithm in FoBSim Result: Confirmed new block β initialization miners µ [0,1,..n] , miners.wallets, stake random no. of coins from each miner.;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The Network:;</ns0:cell></ns0:row><ns0:row><ns0:cell>while mempool.qsize() > 0 do</ns0:cell></ns0:row><ns0:row><ns0:cell>Randomly choose a predefined no. of miners;</ns0:cell></ns0:row><ns0:row><ns0:cell>Choose the miner with the highest Stake value;</ns0:cell></ns0:row><ns0:row><ns0:cell>Inform all miners of the ID of the next block generator µ g ;</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>The Miner:;</ns0:cell></ns0:row><ns0:row><ns0:cell>if a new ID µ g is received from the Network then</ns0:cell></ns0:row><ns0:row><ns0:cell>if MyAddress == µ g then</ns0:cell></ns0:row><ns0:row><ns0:cell>Collect TXs from memPool;</ns0:cell></ns0:row><ns0:row><ns0:cell>if BC function is Payment then</ns0:cell></ns0:row><ns0:row><ns0:cell>validate collected TXs</ns0:cell></ns0:row><ns0:row><ns0:cell>else</ns0:cell></ns0:row><ns0:row><ns0:cell>if BC function is Computational Services then</ns0:cell></ns0:row><ns0:row><ns0:cell>eval(TXs);</ns0:cell></ns0:row><ns0:row><ns0:cell>add the evaluation results to TXs</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>Accumulate TXs in a new BC block β ;</ns0:cell></ns0:row><ns0:row><ns0:cell>Broadcast β ;</ns0:cell></ns0:row><ns0:row><ns0:cell>else</ns0:cell></ns0:row><ns0:row><ns0:cell>Wait for a new block from µ g ;</ns0:cell></ns0:row><ns0:row><ns0:cell>if β is received then</ns0:cell></ns0:row><ns0:row><ns0:cell>if µ g == β .generator then</ns0:cell></ns0:row><ns0:row><ns0:cell>if BC function is Payment then</ns0:cell></ns0:row><ns0:row><ns0:cell>validate and confirm TXs in β</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>add block β to the local chain;</ns0:cell></ns0:row><ns0:row><ns0:cell>Broadcast β to neighbors;</ns0:cell></ns0:row><ns0:row><ns0:cell>report a successful block addition [β , µ g ]</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Simulation parameters configuration for Case 1</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Case 1: Comparing time consumption of PoW, PoS, and PoA</ns0:head><ns0:label /><ns0:figDesc>When we compare PoW, PoS and PoA in terms of the average time consumed for block confirmation, PoW is expected to present the highest time consumption. This is because of the mathematical puzzle that each minter needs to solve in order to prove its eligibility to mint the next block. In PoS, on the other hand, the network algorithm randomly chooses the next minter, while it slightly prefers a miner with a higher amount of staked coins. Once a minter is chosen, all miners are informed about the generator of the next block</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='6'>M=5 M=10 M=20 M=50 M=100 M=500</ns0:cell></ns0:row><ns0:row><ns0:cell>PoS algorithm</ns0:cell><ns0:cell>0.018</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.18</ns0:cell><ns0:cell>0.046</ns0:cell><ns0:cell>0.09</ns0:cell><ns0:cell>0.19</ns0:cell></ns0:row><ns0:row><ns0:cell>PoA algorithm</ns0:cell><ns0:cell>0.002</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>2.94</ns0:cell></ns0:row><ns0:row><ns0:cell>PoW-5 algorithm</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>2.1</ns0:cell><ns0:cell>1.31</ns0:cell><ns0:cell>6.15</ns0:cell><ns0:cell>60.6</ns0:cell></ns0:row><ns0:row><ns0:cell>PoW-10 algorithm</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell>2.1</ns0:cell><ns0:cell>2.03</ns0:cell><ns0:cell>5.21</ns0:cell><ns0:cell>58.9</ns0:cell></ns0:row><ns0:row><ns0:cell>PoW-15 algorithm</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>2.23</ns0:cell><ns0:cell>2.26</ns0:cell><ns0:cell>6.18</ns0:cell><ns0:cell>74.76</ns0:cell></ns0:row><ns0:row><ns0:cell>PoW-20 algorithm</ns0:cell><ns0:cell>6.02</ns0:cell><ns0:cell>9.5</ns0:cell><ns0:cell>24.2</ns0:cell><ns0:cell>59.62</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Results of Case-1, where the PoW puzzle difficulty ranged from 5 to 20, and the number of Miners (M) ranged from 5 to 500.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Simulation parameters configuration for Case-2, where the Gossiping property is interchangeably activated and deactivated</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Configuration</ns0:cell><ns0:cell cols='4'>diff.=5 diff.=10 diff.=15 diff.=20</ns0:cell></ns0:row><ns0:row><ns0:cell>Gossip activated</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>Gossip deactivated</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell>98</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>67</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Results of Case-2, where the puzzle difficulty ranged from 5-20, and the Gossiping in FoBSim was interchangeably activated and deactivated and, thus, the minter needs to perform no tasks other than accumulating TXs in a new standard block.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Capturing the effect of using the Gossip protocol</ns0:head><ns0:label /><ns0:figDesc>Table 4 presents the exact results we obtained, which are depicted in Figures 5.a and 5.b. According to the results obtained from the simulation runs, one can notice that PoW-based BCs consume much more time to confirm a block than PoA- and PoS-based BCs, which is in line with the theoretical and experimental results of most previous research. Additionally, the average block confirmation time in PoW-based and PoA-based BCs seems to be directly proportional to the BC network size, which complies with the results recently presented in <ns0:ref type='bibr' target='#b90'>[86]</ns0:ref>. Comparatively, the average block confirmation time in a PoS-based BC seems unaffected by the network size, which complies with the expectations recently presented in <ns0:ref type='bibr' target='#b91'>[87]</ns0:ref>.</ns0:figDesc><ns0:note>Case 2: In this case, we compare the number of chain forks at the end of several simulation runs, where we interchangeably activate and deactivate the gossiping property in a PoW-based BC. Accordingly, one can notice the effect of gossiping on ledger finality under different conditions, namely the puzzle difficulty and the transmission delay between miners. As mentioned in Subsection 5.3.1, gossiping is a continuous process during the lifetime of the network, which implies that miners would mostly have different chain versions at any given moment. In this case, we detect the number of chain versions at the end of simulation runs, which can be decreased to one version under strictly designed parameters, such as medium network size, high puzzle difficulty, low transmission delay, low number of neighbors per miner, etc. Nevertheless, our goal in this case is to demonstrate how the activation of the gossiping property during a simulation run on FoBSim can decrease the number of chain versions and, thus, positively contribute to the consistency of the distributed ledger. For this case, we also deployed the</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>presents the initial configuration in each simulation scenario, while Tables 6 and 7 present the results we obtained by running the described scenarios, which are depicted in Figures 6.a and 6.b. As can be noted from the results, the default gossip protocol in FoBSim could decrease the number of chain versions at the end of each simulation run. Although the number of chain versions did not reach the optimum value (i.e. one chain version), it is obvious that activating the gossiping property decreases the number of chain versions at each simulation run and, thus, enhances the distributed ledger consistency.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Simulation parameter</ns0:cell><ns0:cell>For total time efficiency</ns0:cell><ns0:cell>For total storage efficiency</ns0:cell></ns0:row><ns0:row><ns0:cell>no. of Fog Nodes</ns0:cell><ns0:cell>10-100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>no. of users per fog node</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>no. of TX per user</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>no. of miners</ns0:cell><ns0:cell>10-100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>no. of neighbours per miner</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>no. of TX per Block</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>puzzle difficulty</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>15</ns0:cell></ns0:row><ns0:row><ns0:cell>Max end-user payment</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>miners initial wallet value</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>1000</ns0:cell></ns0:row><ns0:row><ns0:cell>mining award</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>delay between neighbors</ns0:cell><ns0:cell>fog layer: 12 ms.,</ns0:cell><ns0:cell>fog layer: 12 ms.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>end-user layer: 1000 ms.</ns0:cell><ns0:cell>end-user layer: 1000 ms.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Simulation parameters configuration for Case-3, where the efficiency of BC is assessed in the fog layer and end-user layer, in terms of total run time and total storage cost</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Case 3: Comparing deployment efficiency of BC in the fog layer vs. end-user layer</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "Manuscript no.: CS-2020:10:54599
Manuscript title: FoBSim: an extensible open-source simulation tool for integrated fog-blockchain systems
Authors: H. Baniata and A. Kertesz
Submitted: 29 Oct 2020
First Decision made: 27 Nov 2020
Revised version submission is due: 27 Dec 2020
Dear Respected Editor and Reviewers,
We are very grateful for the valuable suggestions and comments, which we have tried to
address completely in revising our article.
We have incorporated all the suggestions and comments in the revised version of our article and hope
that the updated version will satisfy the reviewers. Below we present our point-by-point responses to the
reviewers’ comments. Where applicable, we made corresponding modifications in the revised version of
our manuscript.
Together with this response letter, we enclose a clean version of the revised paper and a highlighted
version, where all modifications are highlighted.
P.S.: Due to an editorial request, we reduced the paper length and moved some figures and tables
into a supplemental material file.
Regards,
H. Baniata and A. Kertesz
Reviewer 1
1) The writing of the paper should be improved for example Section 4.4 Them selves.
Authors response: We performed detailed proofreading and revised the corresponding section.
2) It is said that the complex tasks are forwarded to the cloud. However, I don’t see any explanation for
these kind of tasks and how are they linked to the blockchain network? It needs explanation in the paper.
Authors response: In fact, we mentioned that the service is requested by end-users from the fog
layer, which provides the requested service, if possible. Otherwise, the request is forwarded to the
cloud where complex and time-consuming actions are performed (Section 3). In FoBSim, we
define a Cloud module, in which functions and classes (e.g. Virtual Machine, Scheduler, load
balancing) can be implemented according to the scenario to be simulated, or the type of cloud to
be deployed in the simulated system. An example of a system consisting of BC, Fog and Cloud
layers interacting with each other is briefly discussed referring to our previous work [PF-BTS, H.
Baniata et al., 2020, Information Processing and Management]. In PF-BTS, an earlier version of
FoBSim was used to validate our proposal. The current version of FoBSim is more mature and
robust, and even better tested. Thus, this property is available in FoBSim, but we did not present a
case where the cloud module is configured. There are many scenarios that can be simulated
using FoBSim (represented in Figures 1, 2, 3, and 4) and we believe that mentioning a possible
interaction, along with some cases to validate the correctness of the implementation is sufficient.
As FoBSim is primarily proposed to mimic Fog-BC interaction, the use cases discussed in Section
6 are now enhanced to present how FoBSim can validate and describe BC networks deployed in
the fog or the end-user layer. The cloud layer implementation is left for researchers depending on
the scenario to be simulated (we also plan to address this in our future work).
3) “Meanwhile, the communications within the BC network and with the fog layer are configurable”. What
exactly this statement signifies. What exactly is the definition of communication here?
Authors response: In the current (default) version of FoBSim, each fog component can send end-user tasks to a unified sink of the Blockchain, namely the mempool. Some researchers may want to
try to configure fog nodes to send tasks to another component of the Blockchain or the Fog layer.
We claim that such modification is easily doable in FoBSim, as the send_tasks() function in the
fog module can be reconfigured according to the scenario to be simulated. As this is only a
simulation tool, we described how communications are modeled in FoBSim, and we presented all
possible communications that can appear in a Fog-Blockchain integrated system, but we kept
FoBSim flexible to technical configurations of different systems to be simulated. A
communication can be defined as a direct connection between two components of the system.
Accordingly, we are implying that FoBSim provides flexible environment to simulate different
network architectures, where any component of a Fog-Blockchain integrated system can directly
connect to any other component depending on the scenario to be simulated. These pieces of
information are now clarified in Section 5.6 (lines: 609—616).
4) It is said that the blockchain can be placed in Fog-layer as well as End user layer. But how do we
decide this? Is there a possibility that BC layer act independently in the simulator, i.e., BC should not be
placed in Fog-layer and End user layer?
Authors response: Thank you. Your question had been investigated in many previous works,
including our survey (Baniata and Kertesz, 2020, IEEE-Access). The first paper that proposed an
integrated Fog-BC system was published only recently in 2016. Hence, the integration of these
innovative technologies have not yet become mature enough to answer your questions. In fact,
the main purpose of FoBSim is providing a suitable, specialized, and well implemented tool to
simulate Fog-BC systems. Maybe after some time and analysis we can answer the question
regarding the best deployment practices of BC in fog-enhanced systems. According to our
current, ongoing research results, we believe that there is a trade-off between the computational
capacity and transmission delays among miners. Additionally, the number of miner nodes,
number of neighbors per miner, number of blocks being minted at the same time, and the
difficulty of the puzzle, are effective parameters on the optimal deployment. FoBSim provides
researchers with easy methods to oscillate all these parameters.
Regarding the BC acting independently in the simulator, the answer is: “yes, it can”. As it was
discussed in the previous comment response (depicted in Figure 4), entities of the system can
connect directly to each other according to the scenario to be simulated. Simply, the end-user
module can be configured so that it sends its tasks directly to the BC mempool, instead of
sending them to the fog layer in the default version of FoBSim. To clarify, we refer to Figure 3-b in
our manuscript, where some end-users act as miners, and other end-users act as task requesters.
Note that the fog is, by default, the entity that receives end-user tasks and sends them to the
mempool of the simulated BC. If connections were configured to be direct between end-users and
miners (which is simply doable), then the system scenario becomes no more a fog-enhanced
system (which is an advantage that FoBSim provides, because it is an integration of two
technologies that can simulate one of them or both together in a reliable manner).
5) Similarly, it is said that the BC can be implemented in cloud? Is it reasonable to put BC nodes in cloud
architecture?
Authors response: For some systems, yes, it is an optimal solution. BC consists of miners and
miners may be virtual machines, schedulers, etc. Examples of such schemes are many, such as
[A. Wilczyński, J. Kołodziej, 2020] to mention one.
6) There is an initiate_miner() function. If BC functionality is implemented in End-users, all the nodes will
act as a miner or there is a possibility that few nodes act as a miner and few nodes functions normally? If
that is the case, can we tune the number of participating nodes in the blockchain network?
Authors response: Number of miners is configurable in the Sim_parameters.json file of FoBSim.
And the number of end-users is independently configurable within the same file. Hence, number
of miners can be greater than, equal to, or less than the number of end-users, yet they are not the
same (e.g. if there were 10 end-users and 10 miners, and the BC were deployed in the end-user layer,
then there would be a total of 20 end-users, half of which request tasks while the rest provide
BC services). Thus, the number of end-users, miners, fog nodes, and many other characteristics
of the system can be easily tuned before every simulation run. To address this ambiguity, we
clarify this in lines 330—333. However, if the BC is deployed in the fog layer, the maximum number
of miner nodes is equal to the number of fog nodes. In a scenario where some fog nodes should
not participate in the BC network, the number of fog nodes wearing two hats (i.e. as fog nodes
and miner nodes at the same time) can also be tuned once the tool is run (by the user). The
related description is given in lines 502-512 and in lines 510—515.
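For illustration only, the pre-run tuning described above could look like the following Python sketch; the key names ('num_miners', 'num_end_users') are placeholders, not confirmed FoBSim parameter names.

```python
# Hypothetical sketch only: the key names below are placeholders, not confirmed
# FoBSim parameters; they illustrate pre-run tuning of miner/end-user counts.
import json

def set_counts(path, miners, end_users):
    with open(path) as f:
        params = json.load(f)
    params['num_miners'] = miners        # placeholder key name
    params['num_end_users'] = end_users  # placeholder key name
    with open(path, 'w') as f:
        json.dump(params, f, indent=2)

set_counts('Sim_parameters.json', miners=10, end_users=10)
```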
7) Finally, along with the time consumption analysis, authors should also perform the storage cost
analysis for the transactions in FoBSim.
Authors response: We have now added a third case in Section 6, where we analyze the storage
cost of the BC. This added case includes comparisons between different deployments of the BC
network (i.e. in the end-user layer against the fog-layer). The added case showed that deploying
the BC in the fog layer may drastically enhance the BC efficiency in terms of storage cost and
total time consumption to perform similar tasks in both deployments. However, if the purpose of this
comment was the actual cost that an end-user pays in order to perform a transaction and save it
on the immutable ledger, then such costs can easily be simulated if the TX fee/Gas was
configured within FoBSim. We assume this was not the point of this reviewer as we clarified in
Section 5.5 (lines: 586—591) that this property is not available in the current version of FoBSim.
Reviewer 2
1. Does not evaluate different fog computing infrastructures, geo-distribution, layers, etc. I think
this is the selling point of fog computing, and I would expect experiments that show the effect of
fog computing on blockchain applications. However, the authors only evaluate block chain
applications in a fixed 'fog' infrastructure, which may leave the readers a bit disappointed. I'd
suggest adding experiments where a given blockchain application is tested how it behaves in
different fog computing infrastructures. This would shed light on the issue whether a given
blockchain application would perform well in different fog settings.
Authors response: Throughout Section 5, we clarify how FoBSim can be, and should be,
pre-configured according to the scenario to be simulated (all configurations can be
conducted into one file, namely Sim_parameters.json). Accordingly, the current version of
FoBSim does not allow dynamic nodes that join and leave the network while running.
However, we added a third case study, where the total elapsed time is analyzed, which is
needed to perform tasks and provide BC services to end-users. The added case compared
the efficiency of the BC network when deployed in the fog layer against the case when it is
deployed in the end-user layer (different layers). This also addresses another comment
(Comment no. 4) by this reviewer regarding the relation of Fog Computing to BC and how
the FC can benefit from the BC technology. Additionally, the added case study showed
that a fog-located BC outperforms a BC deployed in the end-user layer in terms of both
storage and total run time efficiency. Additionally, the use cases presented in Section 6
show that FoBSim allows different infrastructures to interact with each other systematically
(i.e. BC miner nodes, FC nodes, end-user nodes). Meanwhile, FoBSim allows easy
configuration of the transmission delay between each layer's nodes, which mimics the geo-distribution criteria of the scenario to be simulated. Accordingly, FoBSim allows the fog
layer to provide computations and storage services in case the BC is deployed in the fog
layer, while the fog layer provides communications services in case the BC is deployed in
the end-user layer. We illustrate these facts in lines 228—253, lines 295—296, and in
Figure 2.
2. Why is fog computing and blockchain a good match?
Authors response: As discussed in our previous works (along with hundreds of papers
concerned with such integration) fog computing provides enhanced latency, and nearly
real-time services, while blockchain technology provides high security in distributed
environments, where no Trusted Third Party (TTP) is available/preferable. Such integration
may also ease the optimization of several current Cloud-Edge issues, e.g. credibility and
resource efficiency. Additionally, decentralizing FC applications decreases the appearance
of single points of failure and the control of a centralized authority. To address this
comment in our manuscript, we discuss the advantages and challenges of integrating Fog
Computing and BC technologies in lines 53-75.
3. Important concepts are left unexplained. For instance, I miss an explanation of private vs.
public blockchains. The authors seem only to address public blockchains; but they also make
references to private blockchains like Hyperledger Fabric.
Authors response: Indeed. We intentionally ignored such discussion. The reason is that all
BC applications are usually tested privately (e.g. using Simulation tools, private locally
configured networks or Cloud VMs), and deployed later publicly only if they
achieved/fulfilled the targeted goals in the private setting. To make FoBSim suitable for
simulating both approaches, we deployed the PoW and PoS algorithms, which are typically
used in public BC systems (e.g. Bitcoin and Ethereum), and the PoA algorithm which is
typically used in private BC systems (e.g. Hyperledger Besu). We clarify this distinction in
lines 273-280. Furthermore, FoBSim is advantageous against other BC simulation tools,
because it allows permissioned and permission-less configuration as well. That is, if the
PoA is chosen by the user, he/she needs to determine which miners are trusted and
allowed to generate new blocks. Thus, those who are not permitted to generate new
blocks only maintain the Distributed Ledger by verifying and accepting/rejecting new
blocks. On the other hand, using the PoW or PoS algorithms allows FoBSim (by default) to
choose the miner that generates the next block randomly (according to algorithms 3 and
4), which mimics public BC behavior. Finally, public BCs usually provide mechanisms that
enforce end-users to pay fees/Gas in order to get their TXs/SCs performed. While this
policy can be easily added to FoBSim, we left it for the developers to add it to their
scenario to be simulated. To address this comment, we discuss this in detail in lines 255—
280, as well as in Section 5.5, which indicates that FoBSim is not only meant to simulate
public, but also private BCs.
4. There are no real blockchain applications evaluated, only consensus protocols. It stays unclear
how real applications would benefit (if at all) from fog computing. Further, as mentioned above,
different fog infrastructures are not compared to each other, which would be interesting.
Authors response:
a. The efficiency comparison of BC deployment in the fog against end-user layer is now
addressed by the added third case in Section 6.
b. Rather than proposing Fog-Blockchain applications, the main goal of proposing
FoBSim is to provide a suitable and flexible tool to simulate Fog-Blockchain
applications (also the ones proposed by other researchers). The cases we present in
our paper aim at validating the tool as a software, rather than validating the integration
concept itself, concerning Fog and Blockchain technologies. Thus, although we
provided extended state-of-the-art discussion (Section 2) and technical description of
both technologies (Sections 3 and 4), we believe that proposing new ideas for BC-FC
integration is beyond the scope of our paper. We respectfully encourage the reviewer
to check our previous works that surveys [FC-BC Survey, H. Baniata and A. Kertesz
2020, IEEE-Access], and proposes FC-BC integration solutions [PF-BVM, H. Baniata
and A. Kertesz 2020,CLOSER], [PF-BTS, H. Baniata et al., 2020, Information Processing
and Management] and others. Within each of these previous works, an earlier version
of FoBSim was used to validate our proposals. The current version of FoBSim is more
mature and robust, and even better tested. We are currently deploying FoBSim to
assess the Consistency of Distributed Ledgers in Blockchain and Fog Computing, but
we omitted this discussion in order to keep the FoBSim paper focused on the software
itself, rather than convincing the reader about the benefits of FC-BC integration.
5. The authors use too many abbreviations, like BC, FC, CA, SC, ... which make the text
sometimes hard to comprehend.
Authors response: To address this comment, we added Table 1, which lists all
abbreviations used within the text.
" | Here is a paper. Please give your review comments after reading it. |
16 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Rice leaf diseases often pose threats to the sustainable production of rice, affecting many farmers around the world. Early diagnosis and appropriate remedy of rice leaf infection are crucial to facilitate healthy growth of the rice plants and to ensure adequate supply and food security for the rapidly increasing population. A machine-driven disease diagnosis system could therefore mitigate the limitations of conventional leaf disease diagnosis techniques, which are often time-consuming, inaccurate, and expensive. Nowadays, computer-assisted rice leaf disease diagnosis systems are becoming very popular. However, several limitations, ranging from strong image backgrounds, vague symptom edges, dissimilarity in the image-capturing weather, lack of real-field rice leaf image data, variation in symptoms from the same infection, multiple infections producing similar symptoms, and the lack of an efficient real-time system, mar the efficacy of such systems and their usage. To mitigate the aforesaid problems, a faster region-based convolutional neural network (Faster R-CNN) is employed for the real-time detection of rice leaf diseases in the present research. The Faster R-CNN algorithm introduces an advanced RPN architecture that addresses the object location very precisely to generate candidate regions. The robustness of the Faster R-CNN model is enhanced by training the model with a publicly available online dataset and our own real-field rice leaf dataset. The proposed deep-learning-based approach was observed to be effective in the automatic diagnosis of three discriminative rice leaf diseases, namely rice blast, brown spot, and hispa, with accuracies of 98.09%, 98.85%, and 99.17%, respectively. Moreover, the model was able to identify a healthy rice leaf with an accuracy of 99.25%. The results obtained herein demonstrate that the Faster R-CNN model offers a high-performing rice leaf infection identification system that can diagnose the most common rice diseases.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Plant disease has become a serious threat towards the production as well as the provision of food security all over the world. For instance, it was reported that more than 800 million people globally lack sufficient food, about 10 percent of the world's food supply is lost due to plant disease which significantly affects over 1.3 billion people who survive on less than $1 per day <ns0:ref type='bibr' target='#b51'>(Strange & Scott, 2005)</ns0:ref> <ns0:ref type='bibr' target='#b6'>(Christou & Twyman, 2004)</ns0:ref>. It is worth noting that plant diseases lead to 10-16 percent annual losses by costing an estimated US$ 220 billion in global crop harvests ('Combating plant diseases is key for sustainable crops --ScienceDaily'). These statistics portrayed the lingering food scarcity as a result of damage to food production induced by plant diseases that have become a global issue which should not be overlooked by plant pathologists <ns0:ref type='bibr' target='#b51'>(Strange & Scott, 2005)</ns0:ref> <ns0:ref type='bibr' target='#b32'>(Ng, 2016)</ns0:ref>. Therefore, to ensure an adequate supply of food to the rapidly increasing population, agricultural production must be raised by up to 70 percent. Nonetheless, several factors militate against the provision as well as the supply of the food to satisfy the need of the teeming population globally.</ns0:p><ns0:p>Rice is amongst the widely consumed food in the world with the total consumption of 493.13 million metric tons in 2019-2020 and 486.62 in the year 2018-2019 ('Total global rice consumption 2020 | Statista'). This has shown an increase in the consumption of rice when compared with the metric tons consumed across the years. It is expected that the increase in the consumption of rice tallies with production rates. However, the absence or lack of proper monitoring of farmland often resulted in the destruction of a large amount of rice emanating from diseases related problems. Several diseases frequently occur in the cultivation of rice which is the key reason for major economic losses. In addition, the abundant utilization of chemicals, for example, bactericides, fungicides, and nematicides have produced adverse effects in the agroecosystem to combat plant diseases <ns0:ref type='bibr' target='#b30'>(Nagaraju & Chawla, 2020)</ns0:ref>.</ns0:p><ns0:p>Disease prediction and forecasting of rice leaves are essential in order to preserve the quantity and quality of rice production since detection at the initial stage of the disease are useful in ensuring that timely intervention could be provided to convert the growth of the disease to facilitate the healthy growth of the plant for increasing the production as well as the supply of the rice <ns0:ref type='bibr' target='#b2'>(Barbedo, 2016)</ns0:ref>. Generally, the rice diseases are sheath blight, bacterial blight, rice blast and symptoms characterized by texture, the color and the shape, that are typical of rapid occurrence and easy infection <ns0:ref type='bibr' target='#b54'>(Zarbafi & Ham, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b17'>(Han et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b49'>(Sibin, Duanpin & Xing-hua, 2010)</ns0:ref>. 
The artificial identification, querying rice diseases maps, and automated detection are currently considered as the procedure of rice disease detection.</ns0:p><ns0:p>The conventional means of rice diseases identification are often carried out manually and has shown to be unreliable, expensive as well as time-consuming. The mapping technique of rice disease detection is relatively clear and easier to perform; however, it is possible to misinterpret some extremely similar diseases that have negative impacts on the growth of the rice. The latest computer-based identification system is yet to be broadly implemented due to the large environmental effect, slow detection speed, and low accuracy. Hence, developing a rice disease detection technique that could provide quick and accurate decisions on rice diseases is of great significance. Rice diseases are localized in leaves, which can be directed by leaf diagnosis to guide growers on whether the crops should be sprayed. It is worth noting that to date, a substantial progress has been made in the detection of plant diseases through the leaf features <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref> <ns0:ref type='bibr' target='#b21'>(Hwang & Haddad, 1995)</ns0:ref> <ns0:ref type='bibr' target='#b33'>(Pantazi, Moshou & Tamouridou, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b38'>(Rahnemoonfar & Sheppard, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b59'>(Zhang et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Many researchers have worked on the automatic diagnosis of rice diseases through conventional means such as pattern recognition techniques <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Rahman et al., 2020)</ns0:ref>, support vector machine <ns0:ref type='bibr' target='#b34'>(Phadikar, 2012)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Prajapati, Shah & Dabhi, 2017)</ns0:ref>, digital image processing techniques (Arnal <ns0:ref type='bibr' target='#b0'>Barbedo, 2013)</ns0:ref> <ns0:ref type='bibr' target='#b61'>(Zhou et al., 2013)</ns0:ref> <ns0:ref type='bibr'>(Sanyal et al., 2008)</ns0:ref> <ns0:ref type='bibr'>(Sanyal & Patel, 2008)</ns0:ref> and computer vision <ns0:ref type='bibr' target='#b1'>(Asfarian et al., 2014)</ns0:ref> for enhancing the accuracy and rapidity of diagnosing the results. In an earlier study, <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref>, Phadikar et al. proposed a rice disease identification approach where the diseased rice images were classified utilizing Self Organizing Map (SOM) (via neural network) in which the train images were obtained by extracting the features of the infected parts of the leave while four different types of images were applied for testing purposes. A somewhat satisfactory classification results were reported. In a different study, <ns0:ref type='bibr' target='#b34'>Phadikar et al. (Phadikar, 2012)</ns0:ref> proposed an automated approach to classify the rice plant diseases, namely leaf brown spot and the leaf blast diseases based on the morphological changes. A total of 1000 spot images captured by Nikon COOLPIX P4 digital camera from a rice field were used. The results obtained were 79.5% and 68.1% accuracies from the Bayes' and SVM classifiers, respectively.</ns0:p><ns0:p>Support Vector Machine (SVM) technique was also utilized by Prajapati et al. 
<ns0:ref type='bibr' target='#b36'>(Prajapati, Shah & Dabhi, 2017)</ns0:ref> for multi-class classification to identify three types of rice diseases (bacterial leaf blight, brown spot, and leaf smut). The images of infected rice plants were captured using a digital camera from a rice field, and 93.33% accuracy was obtained on the training dataset and 73.33% accuracy on the test dataset. In <ns0:ref type='bibr' target='#b61'>(Zhou et al., 2013)</ns0:ref>, Zhou et al. investigated a technique to evaluate the degree of hopper infestation in rice crops where a fuzzy C-means algorithm was used to classify the regions into one of four classes: no infestation, mild infestation, moderate infestation and severe infestation. Their study illustrated that the accuracy reached 87% in differentiating cases in which rice plant-hopper infestation had occurred or not, whilst the accuracy in differentiating the four groups was 63.5%. <ns0:ref type='bibr'>Sanyal et al. (Sanyal et al., 2008)</ns0:ref> proposed an approach for detecting and classifying six types of mineral deficiencies in rice crops where each kind of feature (texture and color) was submitted to its own specific multi-layer perceptron (MLP) based neural network. Both networks consist of one hidden layer with a different number (40 for texture and 70 for color) of neurons in the hidden layer, and 88.56% of the pixels were correctly classified. Similarly, the same authors proposed another similar work <ns0:ref type='bibr'>(Sanyal & Patel, 2008)</ns0:ref> where two kinds of diseases (blast and brown spots) that affect rice crops were successfully identified. In <ns0:ref type='bibr' target='#b1'>(Asfarian et al., 2014)</ns0:ref>, <ns0:ref type='bibr'>Asfarian et al. developed</ns0:ref> a new approach of texture analysis to identify four rice diseases (bacterial leaf blight, blast, brown spot and tungro virus) using fractal Fourier. In their proposed study, the image of the rice leaf was converted to CIELab color space and the system was able to achieve an accuracy of 92.5%.</ns0:p><ns0:p>The feature extraction from diseased and unaffected leaf images, the grey level co-occurrence matrix (GLCM) and the color moment of the leaf lesion region were implemented by Ghyar and Birajdar to create a 21-D feature vector and related features <ns0:ref type='bibr' target='#b12'>(Ghyar & Birajdar, 2018)</ns0:ref>. The redundant features were eliminated with a genetic algorithm-based feature selection method to generate 14-D feature vectors and minimize complexity. The technique has shown promising results; however, more optimization procedures are needed to improve its detection accuracy. The brown spot and blast diseases of rice were described utilizing the color texture of rice leaf photos by <ns0:ref type='bibr'>Sanyal et al. (Sanyal & Patel, 2008)</ns0:ref>. However, the technological standard of identification of rice diseases needs to be strengthened. In <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref>, the entropy-based bipolar threshold technique was employed for segmentation of the image after improving its brightness and contrast. The author sought to integrate image processing and soft computing techniques for the detection of rice plants attacked by several types of diseases. The idea behind the technique was robust when utilized effectively.
However, the average accuracy of identification on the four datasets was 82 percent which indicates that more enhancement is still required. The image processing and machine learning methods were utilized to non-destructively screen seedlings with rickets by <ns0:ref type='bibr' target='#b7'>Chung et al. (Chung et al., 2016)</ns0:ref>. Moreover, genetic algorithms were employed to develop SVM classifiers in order to optimize feature selection and model parameters for differentiating healthy seedlings and infected ones. The overall accuracy achieved in their study was 87.9 percent. However, since various diseases may have several symptoms, this approach should be tested if it is needed to use in other diseases, suggesting that this procedure has some limitations.</ns0:p><ns0:p>Nonetheless, it is worth noting that researchers have also begun to move away from such techniques to deep learning models in an effort to detect diseases in various plants <ns0:ref type='bibr' target='#b10'>(DeChant et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b56'>(Zhang et al., 2018b)</ns0:ref> <ns0:ref type='bibr' target='#b58'>(Zhang & Zhang, 2010)</ns0:ref> <ns0:ref type='bibr' target='#b26'>(Liu et al., 2017)</ns0:ref>. The Convolutional Neural Networks (CNN) is a deep learning method that has become one of the best image classification technique which has already acquired great success <ns0:ref type='bibr' target='#b53'>(Xu et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b60'>(Zhao & Jia, 2016)</ns0:ref> <ns0:ref type='bibr' target='#b45'>(Sainath et al., 2015)</ns0:ref> <ns0:ref type='bibr' target='#b42'>(Ribeiro et al., 2016)</ns0:ref>(Dan Claudiu Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, 2011) <ns0:ref type='bibr' target='#b22'>(Kawasaki et al., 2015)</ns0:ref>. A rice disease identification system based on a deep convolutional neural network was reported by <ns0:ref type='bibr' target='#b28'>Lu, et al. (Lu et al., 2017b)</ns0:ref>. It was observed that the average identification rate was 95.48 percent for 10 common rice diseases using the 10-fold crossvalidation scheme. In <ns0:ref type='bibr'>(Zhou et al., 2019)</ns0:ref>, <ns0:ref type='bibr'>Zhou et al. suggested</ns0:ref> Faster R-CNN approach, which seems to be ideal for the detection of rice diseases due to its good speed and high accuracy. Another method suggested by <ns0:ref type='bibr' target='#b41'>(Ren et al., 2017)</ns0:ref> was capable of detecting plant diseases as well as enhancing the accuracy using Faster R-CNN. However, it is required to reduce the time for disease identification in order to allow it to be suitable for monitoring large-scale cultivation.</ns0:p><ns0:p>These advanced techniques are used not only for the rice diseases diagnosis but also for some other crops including wheat <ns0:ref type='bibr' target='#b27'>(Lu et al., 2017a)</ns0:ref> <ns0:ref type='bibr' target='#b23'>(Khairnar & Dagade, 2014)</ns0:ref>, maize <ns0:ref type='bibr'>(Zhang & Yang, 2014)</ns0:ref>, pumpkin <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2018a)</ns0:ref>, cotton <ns0:ref type='bibr' target='#b18'>(He et al., 2013)</ns0:ref> and tomato <ns0:ref type='bibr' target='#b52'>(Wang et al., 2019</ns0:ref><ns0:ref type='bibr'>), amongst others. DeChant et al. (DeChant et al., 2017)</ns0:ref> proposed a three-stage architecture (consisting of multiple convolutional neural networks (CNNs) where the stage-one model is trained on fullscaled images by dividing a single image into several smaller images. 
On the other hand, two improved deep convolution neural network models (GoogLeNet and Cifar10) were utilized by <ns0:ref type='bibr' target='#b56'>Zhang et al. (Zhang et al., 2018b)</ns0:ref> to improve the recognition accuracy of maize leaf diseases and to enhance the traditional identification techniques that often require long convergence times and large numbers of model parameters. It was shown from the study that average accuracies of 98.9% and 98.8%, respectively, are attainable. <ns0:ref type='bibr' target='#b26'>Liu et al. (Liu et al., 2017)</ns0:ref> proposed a leaf disease identification technique that includes generating sufficient pathological images and designing a novel architecture of a deep convolutional neural network based on AlexNet, which was able to achieve an overall accuracy of 97.62%. The CNN approach has also been applied by <ns0:ref type='bibr' target='#b29'>(Martin & Rybicki, 1998)</ns0:ref> to classify the Helminthosporium leaf spot of wheat, and an accuracy of 91.43 % and standard error of 0.83 % were recorded.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b11'>(Fuentes et al., 2017)</ns0:ref>, Fuentes et al. proposed a deep-learning-based approach using three architectures, namely, Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), that can effectively recognize nine different types of diseases and pests in tomato plants. In a recent study, Rahman et al. <ns0:ref type='bibr' target='#b37'>(Rahman et al., 2020)</ns0:ref> developed a CNN approach for detecting diseases and pests (five classes of diseases, three classes of pests, one class of healthy plant and others) from rice plant images. A total number of 1426 images were collected, captured using four different types of cameras, and the system achieved a mean validation accuracy of 94.33 %. In <ns0:ref type='bibr' target='#b22'>(Kawasaki et al., 2015)</ns0:ref>, <ns0:ref type='bibr'>Kawasaki et al.</ns0:ref> suggested a method to identify cucumber leaf disease based on CNNs, achieving 94.9 percent accuracy in distinguishing between melon yellow spot virus, zucchini yellow mosaic virus, and the non-diseased type. A new stacked CNN architecture is suggested by Rahman et al. <ns0:ref type='bibr' target='#b37'>(Rahman et al., 2020)</ns0:ref> which uses two-stage training to substantially reduce the model size while retaining high classification accuracy. It was found that the test accuracy was able to reach 95 percent using the stacked CNN compared to VGG16, while the model size was reduced by 98 percent.</ns0:p><ns0:p>The development of a technique for automatic identification of rice leaf diseases has hitherto faced many challenges. It is noted that the diagnosis, as well as the detection, involves processes that could render the specific area in which the symptoms manifest within the rice plant very difficult to segment correctly. The capture conditions are hard to handle, which can make the images harder to predict and the detection of the disease more difficult. Moreover, the symptoms caused by different diseases can be visually identical, and the approaches for discriminating them may have to rely on very tiny variations. Another very common issue is the discrepancy between the distribution of the data features used to train the model and the data that could be used to validate the model. This situation creates an overfitting problem.
This is very important when plant diseases are automatically detected because the symptoms can differ from the geographical position and fall into the overfitting problem. It has also been observed that many of the suggested rice leaf disease diagnostic architectures are off-line, and only a few experiments have been carried out in realtime. Usually, the image resolution is enhanced in real-time by which the computational complexity should also be enhanced. In addition, the difficulty of real-time operations increases with a large variety of disease features, complex backgrounds and obscure boundaries of the disease symptoms. In order to address these challenges, the current study endeavors to employ the latest deep learning approach based on Faster R-CNN to conduct real-time detection of rice leaf diseases. The present investigation is sought to mitigate the lingering problems in the process of developing a system of diagnosing rice disease. The key contributions of the research are summed up as follows:</ns0:p><ns0:p> Disease spot identification is considered as the basis of recognition for rice leaf disease, as such the accuracy of spot identification directly impacts on the accuracy of recognition of rice leaf disease. Hence, when choosing the target detection algorithm, recognition accuracy should be employed as the key indicator. YOLO, SSD and Faster R-CNN are the mainstream algorithms for the detection of the deep learning target. Among them, the Faster R-CNN algorithm creatively proposes the RPN structure to generate candidate regions, making the target positioning very precise. In addition, Faster R-CNN also has strong advantages in detection accuracy compared to YOLO and SSD. The proposed study employed Faster R-CNN as the key research algorithm due to its efficacy in detecting the spot of the disease reliably.</ns0:p><ns0:p> The data set for rice leaf disease is designed to provide a significant guarantee of the proposed model's generalization capability. Here, diseased rice leaf images with standardized and complex backgrounds are captured both in the lab and in real field conditions to improve the robustness of the Faster R-CNN model. In addition, naturaldiseased rice leaf images are processed to produce sufficient training images through data augmentation technology in order to solve the complexity of insufficient diseased rice leaf images and to avoid overfitting of Faster R-CNN models in the training phase.</ns0:p><ns0:p> A Faster R-CNN network is employed for the real-time detection of rice leaf diseases. With the proposed deep-learning method, the discriminatory features of diseased rice images will automatically be classified, and the three major types of rice leaf diseases are recognized with high accuracy. Furthermore, the proposed method could manage all the rice leaf images collected from the rice farmland in real conditions.</ns0:p><ns0:p>The present manuscript is structured as follows: rice leaf diseases dataset (RLDD) generation techniques are implemented in the Materials and Methodology section. A detail description of the development of the model for the detection for the rice leaf diseases is described in this section. 
Experimental outcomes that determine the accuracy of the proposed solution are described in the Results section; the Discussion section presents a comparison of the proposed model with other related studies, along with limitations and prospective solutions for rice leaf disease detection approaches; this is followed by the Conclusion section, which draws the outcome of the present study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methodology</ns0:head><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows the comprehensive procedure of real-time identification. First of all, RLDD is constructed using a combination of an online database and an own dataset that was collected in this experiment. The online database is freely accessible. The own dataset was created by capturing diseased rice leaf images in the laboratory which were collected by the authors from actual rice fields. The original RLDD is then annotated manually and expanded through the several data augmentation procedures. The entire dataset is subsequently split into two groups: training dataset and testing dataset. To train the Faster R-CNN model, training dataset is employed whereas testing dataset is utilized for performance assessment. The detection outcomes consist of the classes as well as the locations of the identified rice leaf diseases.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>: Complete architecture of the proposed study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>Due to the absence of adequate data for real-time rice leaf disease, some of our authors and material resources were committed at the start of our study to collect diseased rice leaves. The patterns of diseases of rice leaves are varied over the season and other factors including moisture, temperature, different insects and illuminance. For example, most conspicuous symptoms of brown leaf spot disease occur on leaves and glumes of maturing plants. In real-time operation, the data collection process is very important since the inappropriate information in a dataset may hamper the experimental result. Hence, during the data collection process, the standard rule should be introduced and maintained.</ns0:p><ns0:p>In this study, the rice images have been captured from the rice farmland and a different condition of the leaves were collected and brought to the lab. The rice leaf datasets were also collected from Online (Kaggle) ('Rice Diseases Image Dataset | Kaggle') due to the lack of suitable and different conditions data from real field. It also helps to check the validation of the proposed model. Then, the entire datasets were merged to train the model and each data has been checked individually to avoid misclassification. The rice leaf infected image database consists of healthy leaf and three diseases including rice blast, brown spot, and hispa. To enhance the robustness of the proposed system, our own captured rice leaf image is combined with a publicly available online database. From the dataset, 600 images of rice blast, 650 images of brown spot, 500 images of hispa and 650 images of healthy rice leaf have been collected. A Total number of 2400 images were collected. The total number of images collected from each database (Kaggle and own dataset) are summarized in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Total number of images collected from each database.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data augmentation</ns0:head><ns0:p>Data augmentation is the process of broadening the dataset to enhance the model's performance by generating different forms of images. It is also useful in mitigating the overfitting problem during the training stage. The overfitting problem occurs when the model fits random noise or errors rather than the underlying relationship. With the help of data augmentation, more images were generated from each image to train the model, since some irrelevant patterns may otherwise be learned during the training process. Several augmentation techniques were used, namely rotation transformations, horizontal and vertical flips, and intensity disturbance, which includes disturbances of brightness. A Gaussian noise processing scheme is also employed; in digital images, Gaussian noise typically arises from natural sources such as thermal noise and perturbs the gray values. Images with added Gaussian noise were therefore included in the training dataset for better results. With the above approaches, 7 new images are generated from each image, as shown in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. Finally, a dataset containing 16,800 images was created using the data augmentation technique.</ns0:p></ns0:div>
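As an illustration of the augmentation step described above (not the authors' exact code), a minimal Python/Pillow sketch that derives the seven variants from one image:

```python
# A minimal sketch of the augmentation described above: rotations, horizontal/vertical
# flips, a brightness disturbance and Gaussian noise yield seven new images per original.
import numpy as np
from PIL import Image, ImageEnhance, ImageOps

def augment(img: Image.Image):
    variants = [
        img.rotate(90), img.rotate(180), img.rotate(270),  # rotation transformations
        ImageOps.mirror(img),                              # horizontal flip
        ImageOps.flip(img),                                # vertical flip
        ImageEnhance.Brightness(img).enhance(1.4),         # intensity (brightness) disturbance
    ]
    noisy = np.asarray(img, dtype=np.float32)
    noisy = noisy + np.random.normal(0.0, 15.0, noisy.shape)  # additive Gaussian noise
    variants.append(Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)))
    return variants  # 7 augmented images
```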
<ns0:div><ns0:head>Image annotation</ns0:head><ns0:p>Image annotation plays a key role in labeling the positions and classes of object spots in the diseased and healthy images for multiclass object detection. In computer vision, Pascal VOC is a format that stores annotations in XML files, with a separate annotation file saved for each image. LabelIMG, a graphical image annotation tool developed in Python, is used for this process in VOC format. Pascal VOC provides standardized image datasets for object detection. We constructed an annotation file for every image of the dataset in the Pascal VOC format. The XML file created includes information such as the bounding box coordinate values and the disease classes. For training purposes, 400 images were annotated for each class (rice blast, hispa, brown spot, healthy) from the dataset, and the rest of the images were reserved for testing the model performance. Although the whole process is very challenging, owing to the fact that the diseased area is often tiny and difficult to detect, it is worth highlighting that a high detection performance was observed in our model. Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> shows the annotated images of the brown spots. Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> also shows that the image contains the object details. The object tag and its content are replicated when an image has several annotations. The object tag components are name, pose, truncated, difficult and bounding box. Name holds the class of the object to be detected. Truncated indicates that the bounding box does not cover the entire extent of the object: if an object is partly visible in the image, truncated is set to 1; otherwise, if it is completely visible, it is set to 0. Difficult marks an object that is hard to identify: if the object is difficult to recognize, difficult is set to 1, else it is set to 0. The axis-aligned rectangle of the bounding box indicates the extent of the object visible in the image. This technique contributed to understanding the specifics of the two common computer vision data formats.</ns0:p></ns0:div>
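A minimal sketch of how one such Pascal VOC annotation file could be read back in Python; the file name and class label shown are hypothetical:

```python
# Illustrative sketch: reading one LabelIMG/Pascal VOC annotation file.
import xml.etree.ElementTree as ET

def read_voc(xml_path):
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.iter('object'):
        bb = obj.find('bndbox')
        objects.append({
            'name': obj.findtext('name'),                      # disease class, e.g. 'brown spot'
            'truncated': int(obj.findtext('truncated', '0')),  # 1 if only partly visible
            'difficult': int(obj.findtext('difficult', '0')),  # 1 if hard to recognize
            'box': [int(float(bb.findtext(k))) for k in ('xmin', 'ymin', 'xmax', 'ymax')],
        })
    return objects

print(read_voc('brown_spot_001.xml'))  # hypothetical annotation file
```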
<ns0:div><ns0:head>Model architecture with Faster R-CNN</ns0:head></ns0:div>
<ns0:div><ns0:p>Faster R-CNN <ns0:ref type='bibr' target='#b41'>(Ren et al., 2017)</ns0:ref> was launched in 2015 by the target detection community team around Ross Girshick, after R-CNN <ns0:ref type='bibr' target='#b15'>(Girshick et al., 2014)</ns0:ref> and Fast R-CNN <ns0:ref type='bibr' target='#b13'>(Girshick, 2015)</ns0:ref> had been launched. Faster R-CNN does not need a fixed input size to detect rice disease images; as an image input, the length and width only have to be limited to a certain degree, thereby preventing distortion. The detection speed is significantly increased by the introduction of the Region Proposal Network (RPN). Instead of the selective search used in Fast R-CNN, Faster R-CNN may simply be regarded as a model of ''the region generation network + Fast R-CNN'' that employs the RPN as a recommendation algorithm for this purpose. Convolution layer / fully connected layer processing takes place on the feature map, and then position regression and classification are applied to the detected target. The region recommendation is utilized to secure a better location of the disease, while Fast R-CNN performs the detailed estimation of the frame position and of the object categories in the frame. The following steps of Faster R-CNN are used to build the model for rice leaf disease detection.</ns0:p></ns0:div>
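The paper's own pipeline is Caffe-based; purely as an illustration of the same two-stage idea (RPN proposals followed by box classification and regression), the sketch below uses torchvision's reference Faster R-CNN with the four foreground classes of this study. The image file and weight file names are assumptions.

```python
# Illustration only: the same two-stage detector built with torchvision's reference
# Faster R-CNN; the study itself uses its own Caffe-based pipeline.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

NUM_CLASSES = 5  # background + rice blast, brown spot, hispa, healthy leaf
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=NUM_CLASSES)
# model.load_state_dict(torch.load('rice_frcnn.pth'))  # hypothetical trained weights
model.eval()

img = to_tensor(Image.open('rice_leaf.jpg').convert('RGB'))  # hypothetical test image
with torch.no_grad():
    pred = model([img])[0]  # dict with 'boxes', 'labels', 'scores'
print(pred['boxes'].shape, pred['labels'], pred['scores'])
```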
<ns0:div><ns0:head>Box 1: Steps of the Faster R-CNN technique</ns0:head></ns0:div>
<ns0:div><ns0:head> RPNs for candidate regions</ns0:head><ns0:p>The main concept of the RPN is to produce explicitly suggested candidate regions utilizing a CNN. The shared convolution network is fed with the diseased rice leaf images, and the resulting feature map is used as the RPN input. The convolutional feature map points correspond to positions in the original image <ns0:ref type='bibr' target='#b16'>(Girshick et al., 2016)</ns0:ref>. At every feature map point there are nine anchor boxes of several sizes. There are two convolutional networks in the RPN. One is an 18-dimensional convolution using a 1×1 convolution kernel that decides whether each anchor box belongs to the foreground or not. The other is another 1×1 convolution structure: to obtain the relative position coordinates dx(A), dy(A) and dw(A) of the bounding box with respect to the ground truth, a 36-dimensional convolution is conducted with this kernel. Every point on the feature map is mapped back to the original image, where each such pixel describes an 'anchor point' <ns0:ref type='bibr' target='#b39'>(Ramaswamy et al., 2014)</ns0:ref>. Each anchor point is used to position multiple anchors of different sizes. The three most commonly used aspect ratios are 2:1, 1:1 and 1:2, for the popular scales 512², 128² and 256², respectively. The rectangular position produced by an anchor is first modified by the adjustment parameters in the proposal layer of the RPN. The position vectors of the two opposite corners of each target area are considered as the outcomes for the target areas of the original image. The RPN steps are as follows:</ns0:p><ns0:p>Box 2: Steps of the RPN for candidate regions</ns0:p></ns0:div>
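A small numerical sketch of the nine anchors described above (three scales with areas near 128², 256² and 512², three aspect ratios 2:1, 1:1 and 1:2) centred on one anchor point; this is illustrative, not the authors' implementation:

```python
# Illustrative anchor generation for one anchor point: 3 scales x 3 ratios = 9 anchors.
import numpy as np

def anchors_at(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)   # width/height chosen so the anchor area stays near s*s
            h = s / np.sqrt(r)
            boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)       # shape (9, 4): [x1, y1, x2, y2]

print(anchors_at(400, 300).round(1))
```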
<ns0:div><ns0:head> Feature extraction</ns0:head><ns0:p>The RPN output is sent to the RoI Pooling layer, which pools the candidate rice disease areas. By further enhancing the SPP-Net algorithm, the Faster R-CNN algorithm introduces region of interest (RoI) pooling. The RoI Pooling layer enables features of varying dimensions to be transformed into a fixed size, in line with the needs of the subsequent fully connected network. Every rice disease candidate area is divided equally by the RoI pooling layer into M×N blocks, and maximum pooling is performed per block <ns0:ref type='bibr' target='#b5'>(Chang et al., 2019)</ns0:ref>. On the rice disease feature map, disease candidates of different sizes are thus converted into standardized data and forwarded to the following layer. Although the sizes of the input image and of the feature map differ, a feature representation of a fixed dimension can be extracted for each area by applying the RoI pooling layer, which is later used for the disease classification.</ns0:p></ns0:div>
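To make the pooling step concrete, a brief sketch (using torchvision's roi_pool for convenience, which is an assumption rather than the authors' Caffe layer) showing how differently sized candidate regions map to a fixed-size grid:

```python
# Sketch of RoI pooling: candidate regions of different sizes are divided into a 7x7
# grid and max-pooled to a fixed-size feature, whatever the region size.
import torch
from torchvision.ops import roi_pool

feature_map = torch.randn(1, 256, 50, 50)         # [batch, channels, H, W] from the backbone
rois = torch.tensor([[0., 10., 12., 200., 180.],  # [batch_index, x1, y1, x2, y2] in image coords
                     [0., 30., 40., 90., 120.]])
pooled = roi_pool(feature_map, rois, output_size=(7, 7), spatial_scale=50 / 400.0)
print(pooled.shape)                               # torch.Size([2, 256, 7, 7]) for both RoIs
```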
<ns0:div><ns0:head> Classification, regression and location refinement</ns0:head><ns0:p>The diseases are classified, and the position is refined, by taking into account the pictures of rice diseases. The classification steps are as follows: first, an object/non-object classification is made for each of the k anchor boxes (two outputs per anchor box), and then k regression models (one for each anchor box) refine the corresponding positions. The formula (Equation <ns0:ref type='formula'>1</ns0:ref>) for the complete estimation of the classification layer is as follows:</ns0:p><ns0:formula xml:id='formula_0'>\begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ w_{31} & w_{32} \end{pmatrix} + \begin{pmatrix} b_1 & b_2 \end{pmatrix} = \begin{pmatrix} y_1 & y_2 \end{pmatrix} \quad (1)</ns0:formula><ns0:p>The rice disease location is determined by the size of the overlap region. Inaccuracy of the candidate frame and small overlaps are often the main reasons for unreliable test results. Therefore, a judicious identification of the location is non-trivial for attaining encouraging results. The feature vectors obtained for classification are passed through a fully connected layer and Softmax, which outputs the probability of each rice disease species. The anchor box regression is used to compensate for the offset of the region from its ground-truth (GT) position, bringing it closer to the real position of the rice disease detection frame.</ns0:p></ns0:div>
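A toy numerical example of Equation (1) followed by Softmax; the values are made up and serve only to show how a 3-dimensional feature vector is mapped to two class probabilities:

```python
# Toy numerical example of Equation (1): a 3-D feature vector x, weights W (3x2) and
# bias b (2) give two class scores y, which Softmax converts to probabilities.
import numpy as np

x = np.array([0.8, 0.1, 0.5])
W = np.array([[0.2, -0.3],
              [0.7,  0.1],
              [-0.4, 0.9]])
b = np.array([0.05, -0.02])
y = x @ W + b                        # Equation (1)
probs = np.exp(y) / np.exp(y).sum()  # Softmax over the two outputs
print(y, probs)
```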
<ns0:div><ns0:head> The training processes</ns0:head><ns0:p>The Caffe deep learning approach is used to carry out the experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Feature visualization process</ns0:head><ns0:p>Owing to the limited explanatory nature of CNNs, visualization methods are often used to further understand the CNN feature maps and to determine how CNNs learn the features of the different classes evaluated. This experiment is carried out to better comprehend the variations between the feature maps extracted from different diseased rice leaf images. The visualization outcomes are shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>, which suggests that all the disease spots are clearly identified from the background images. Therefore, the proposed model demonstrates excellent performance in the discrimination of rice leaf diseases. The visualization outcome for the healthy leaf is shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref> (C). Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref> (D) indicates hispa, which commonly occurs in a very small region whose boundaries are not explicit. For brown spot, the spots are divided into two laps, as shown in Figure <ns0:ref type='figure' target='#fig_4'>5(B</ns0:ref>). In addition, Figure <ns0:ref type='figure' target='#fig_4'>5(A</ns0:ref>) explores the rice blast, which is almost similar to the brown spot shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>(B); they can still be identified according to their minute differences. This experiment demonstrates the strong performance of the proposed model design for disease detection, and it clarifies how the CNNs can differentiate between classes by visualizing the features of different rice leaf diseases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Detection visualization and failure analysis</ns0:head><ns0:p>The outcomes for the identification of rice leaf disease are shown in Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>. The proposed approach can identify both a single object and multiple objects of a single class, also multiple objects of multiple classes. The proposed method therefore demonstrates high detection performance in both single and multi-class assessments. Although the model is excellent in terms of accuracy, there are inevitable detection failures which occur when the spot region of the leaf is too small. A rice leaf disease example is illustrated in Figure <ns0:ref type='figure' target='#fig_5'>6 (F, H</ns0:ref>) containing two leaf disease types in a single class. The proposed model is able to detect rice blast and hispa diseases of this class, but the small portion of hispa disease is not detected successfully. The model detects hispa and a healthy portion of the leaf successfully, as shown in Figure <ns0:ref type='figure' target='#fig_5'>6 (H)</ns0:ref>. On the other hand, from the Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref> (F), it is evident that the model can detect the multi-class disease (rice blast and hispa) efficiently; however, it fails to detect the very tiny portion of hispa. The reduction in the detection accuracy is attributed to the similar characteristics of the diseases as shown in Figure <ns0:ref type='figure' target='#fig_5'>6 (E)</ns0:ref>. Owing to the similar characteristics of brown-spot and rice-blast, the developed model was confused in some cases. Environmental variables including complex background, blurriness and lighting also influence the accuracy of identification. Furthermore, one of the factors contributing to increase the detection failure is the small size of the lesion. Hence, it will be difficult to extract and detect the feature if only a small part of the image is taken by the leaf or the diseased region. Despite of all the limitations, in most of the cases, the proposed model has the ability to detect the leaf spot as shown in Figure <ns0:ref type='figure' target='#fig_5'>6 (A, B, C, D, G</ns0:ref>). The detection ability of the leaf spot in a real rice field is presented in Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison of pre-network recognition accuracy</ns0:head><ns0:p>Object detection algorithms like Single Shot Detector (SSD), Deconvolutional Single Shot Detector (DSSD) and Rainbow Single Shot Detector (R-SSD) essentially consist of two components. The first element is the pre-network model used to extract the basic features. The other is an auxiliary structure that utilizes multi-scale detection of feature maps. Various deep convolution networks including ResNet-101, ResNet-50, and VGGNet-16 <ns0:ref type='bibr' target='#b50'>(Simonyan & Zisserman, 2015)</ns0:ref> <ns0:ref type='bibr' target='#b25'>(Liu & Deng, 2016)</ns0:ref>, and MobileNET <ns0:ref type='bibr' target='#b20'>(Howard, 2017)</ns0:ref> are trained and tested to compare the recognition performances of traditional networks with that of our proposed Faster R-CNN on RLDD. The stochastic gradient descent (SGD) algorithm is employed during training to learn about the neural network weights and biases, which reduces the loss function. A limited number of training sets are selected randomly by the SGD algorithm, known as the batch size. The batch size is set to 1 where the final number of iterations is fixed at 50965. The learning rate is set at 0.0002, although very small, it contributes towards more reliable results. The momentum, which acts as an additional factor to decide how quickly the SGD algorithm converges to the optimal point, is set at 0.9. The accuracy curve is indicated, as shown in Figure <ns0:ref type='figure' target='#fig_7'>8</ns0:ref> with the number of training iterations in the X-axis and corresponding Y-axis shows the training accuracy. The comparison of test accuracies of different pre-networks (VGGNet-16, ResNet-50, ResNet-101, MobileNet3 and Faster R-CNN) are defined in terms of accuracy curve, as shown in Figure <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>. The VGGNet-16 networks have higher convergence speed but lower accuracy. On the other hand, from the figure, it is evident that the Faster R-CNN model shows high accuracy on the RLDD as compared to other pre-trained models. </ns0:p></ns0:div>
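For reference, the reported optimisation settings (batch size 1, learning rate 0.0002, momentum 0.9, 50,965 iterations) can be written as the following PyTorch-style sketch; this illustrates the configuration only and is not the training code actually used, with a stand-in linear model replacing the detection network:

```python
# Sketch of the reported SGD settings; the model is a placeholder, not the detector.
import torch

model = torch.nn.Linear(10, 4)  # stand-in for the detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.0002, momentum=0.9)

for step in range(50965):                      # final number of iterations
    x = torch.randn(1, 10)                     # batch size = 1
    target = torch.randint(0, 4, (1,))
    loss = torch.nn.functional.cross_entropy(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```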
<ns0:div><ns0:head>Confusion matrix</ns0:head><ns0:p>When dealing with multiple classes of similar shape, classifiers may be confused. Infected rice leaf images at different levels or with different backgrounds can cause high complexity, which leads to lower performance for the patterns displayed in the same class. The classification accuracy of a model can be visually tested using a confusion matrix. The entire dataset of our study is split randomly into a training set and a testing set in order to train and test the model. To evaluate the proposed model, 50% of the dataset is used for training and the remaining 50% for testing. In total, 8400 observations are utilized for training the model, and another 8400 observations are utilized for testing it. Figure <ns0:ref type='figure' target='#fig_8'>9</ns0:ref> displays the confusion matrix of the final test results. The deeper the color in the visualization results, the greater the model's accuracy in the respective class. All correct predictions are located on the diagonal, whilst all wrong predictions are off-diagonal. The classification accuracy can be visually assessed based on these findings. The study reports results for the above three diseases and the healthy leaf: brown spot and hispa are significantly differentiated from the other classes by their features, with identification rates of 98.85% and 99.17%, respectively. For the healthy leaf, an accuracy of 99.25% is achieved. According to the confusion matrix, it is apparent that the detection model is more prone to confusion in distinguishing rice blast and brown spot compared with the other classes. Among the 2100 images in the testing set of rice blast, 31 images were detected incorrectly as brown spot. On the other hand, among the 2275 images in the testing set of brown spot, 20 images were detected incorrectly as rice blast. This misclassification may be caused by the geometrical feature similarities between the two diseases. However, the other classes are well distinguished. The confusion matrix thus describes the low inaccuracies in the identification of the different classes in the present investigation.</ns0:p></ns0:div>
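The per-class figures above can be tabulated from test predictions as in the following sketch; scikit-learn is used here only for brevity, and the tiny y_true/y_pred lists are placeholders for the 8400 test observations:

```python
# Sketch of building the confusion matrix and per-class accuracy from test predictions.
from sklearn.metrics import confusion_matrix

classes = ['rice blast', 'brown spot', 'hispa', 'healthy']
y_true = ['rice blast', 'brown spot', 'hispa', 'healthy', 'rice blast']  # placeholder data
y_pred = ['rice blast', 'rice blast', 'hispa', 'healthy', 'rice blast']  # placeholder data

cm = confusion_matrix(y_true, y_pred, labels=classes)
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)  # correct predictions per true class
print(cm)
print(per_class_accuracy)
```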
<ns0:div><ns0:head>Loss analysis of the proposed model</ns0:head><ns0:p>The research seeks to minimize the loss function, thereby reducing the errors of the model. To do so, the training algorithm repeats its calculations over many iterations until the loss plateaus. The learning rate plays a significant role in minimizing the loss function; in the proposed study, the learning rate is set to 0.0002. TensorBoard is a convenient tool for viewing these metrics and identifying potential problems, as it updates the measurements frequently and presents the outcomes to the user. For this purpose, the model is trained for 50,965 iterations on the training dataset. Figure <ns0:ref type='figure' target='#fig_9'>10</ns0:ref> depicts the loss analysis generated by TensorBoard, indicating that the total loss settles within the vicinity of 0.1.</ns0:p></ns0:div>
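<ns0:div><ns0:p>A loss curve such as the one in Figure 10 can be reproduced by logging a scalar to TensorBoard during training. The sketch below is illustrative only: the log directory name is an assumption, and a dummy decaying value stands in for the detector's real total loss.</ns0:p><ns0:p>import numpy as np
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/rice_leaf_faster_rcnn")  # assumed log directory

with writer.as_default():
    for step in range(1000):  # the paper reports 50,965 iterations; shortened here
        # Dummy decaying value standing in for the detector's total loss.
        total_loss = 2.0 * np.exp(-step / 200.0) + 0.1
        tf.summary.scalar("total_loss", total_loss, step=step)

# The curve can then be inspected with:  tensorboard --logdir logs</ns0:p></ns0:div>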
<ns0:div><ns0:head>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Comparison of the proposed model with other related studies</ns0:head><ns0:p>The comparison of the proposed model with existing related studies is presented in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. Most of the studies listed in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> have utilized either a publicly available dataset or their own captured dataset to validate their methods. Models validated with a publicly available dataset alone do not always ensure stable performance in a real-time approach. To address this issue, our proposed method is validated with both a publicly available dataset and our own dataset. Moreover, the total number of observations in our proposed study is higher than in the other studies tabulated in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. Despite the more challenging dataset, the performance of the proposed model is higher than that of the other models for rice leaf disease identification. The detection accuracy achieved by Rathore et al. (Narendra Pal Singh Rathore, 2020) is slightly higher than that of our proposed method, but their dataset consists of only one type of rice leaf disease (leaf blast); hence the discrepancy is acceptable. Therefore, considering the strength of the dataset, the real-time disease detection ability and the detection accuracy, our proposed method is somewhat superior to the other related approaches for rice leaf disease identification reported in the literature.</ns0:p></ns0:div>
<ns0:div><ns0:head>Uncertainties and limitations</ns0:head><ns0:p>Although the proposed model outperforms state-of-the-art rice leaf disease detection methods, some drawbacks have also been identified. The limitations of this study, together with prospective solutions to address them, are as follows:</ns0:p><ns0:p> The network looks at the whole image not in a single pass but by sequentially concentrating on parts of the image. Thus, the algorithm requires many passes over a single image to extract all objects, which is time-consuming. To address this issue, a network that can extract the objects of an image in a single pass would be preferable.</ns0:p><ns0:p> Since several processes are conducted one after the other, the performance of each stage depends on how the previous stage performed. Thus, the model should be trained carefully with appropriate datasets to achieve the desired performance.</ns0:p><ns0:p> Misclassification can occur as a result of the geometrical feature similarities between the diseases. To overcome this obstacle, the network should be trained with more data containing such similar geometrical features. It is also recommended to adopt a more efficient deep learning algorithm that can classify diseases whose features differ only slightly.</ns0:p><ns0:p> In a real-time approach, the condition of rice leaves varies with the season under different humidity, temperature, and illuminance. Hence, overfitting may emanate when random noise or errors are fitted rather than the underlying relationship, as previously described <ns0:ref type='bibr' target='#b19'>(Heisel et al., 2017)</ns0:ref>. During the training stage, the overfitting problem is expected to occur due to the lack of proper images captured under various conditions. To overcome these problems, in this study we have used data augmentation in the training stage of Faster R-CNN. During training, the proposed model learns from the large number of images generated by the data augmentation process rather than memorizing irrelevant patterns, which helps to reduce the overfitting problem and to achieve higher performance. More approaches such as saturation and hue adjustments and Generative Adversarial Networks (GANs) <ns0:ref type='bibr' target='#b4'>(Bowles et al., 2018)</ns0:ref> can be employed to further address this issue.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The signs of infection appear in various parts of the plant, and leaves are widely used to diagnose plant disease. Advanced computer vision technology encourages researchers around the world to carry out extensive experiments on plant disease recognition using leaf image analysis techniques. In the past few years, deep learning methods have notably been utilized to recognize plant leaf infection. This paper proposes a real-time rice leaf disease diagnosis framework based on the Faster R-CNN technique. The infected rice leaf image database consists of healthy leaves and three diseases, including rice blast, brown spot, and hispa. In order to enhance the robustness of the proposed system, our own captured rice leaf images are combined with a publicly available online database. Moreover, we have used several image augmentation schemes to enrich the dataset, which familiarizes the model with the different possible conditions of the images. This strategy also enhances the model's performance and generalization capability. The obtained results of the proposed study are very encouraging for diagnosing healthy leaves and the different types of infected leaves in both laboratory-based images and real-field images. However, an additional study should be carried out to segment the infected portions of the leaf image while minimizing the surrounding interference. The existing rice leaf disease diagnosis systems are designed using laboratory-captured images. Although we have implemented a real-time disease recognition architecture using real-field rice leaf images, the proposed system is still not fully automated. Therefore, further study should be carried out to implement a dynamic and automatic system to recognize large-scale rice leaf diseases. Such a system could be built upon a mobile terminal processor and the agricultural Internet of Things, which may be favorable for modernizing the agricultural industry.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Faster R-CNN technique:</ns0:head><ns0:p>Step 1: The entire rice disease image is fed into the CNN to acquire a feature map.</ns0:p><ns0:p>Step 2:</ns0:p><ns0:p>The convolutional feature map is then fed into the RPN to obtain the feature information of the candidate frames.</ns0:p><ns0:p>Step 3:</ns0:p><ns0:p>The features from each candidate box are examined to recognize whether they belong to a specific rice disease category, and are then classified.</ns0:p><ns0:p>Step 4:</ns0:p><ns0:p>The disease location is refined by a bounding-box regressor for each candidate frame assigned to a specific disease category.</ns0:p></ns0:div>
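<ns0:div><ns0:p>Once a detector has been trained and exported, the four steps above are executed end-to-end at inference time. The following minimal sketch assumes a model exported as a TensorFlow SavedModel (for example with the TensorFlow Object Detection API); the export path, the label map, the confidence threshold and the output dictionary keys are assumptions rather than details reported by the authors.</ns0:p><ns0:p>import numpy as np
import tensorflow as tf
from PIL import Image

LABELS = {1: "rice blast", 2: "brown spot", 3: "hispa", 4: "healthy"}  # assumed class ids

detect_fn = tf.saved_model.load("exported_model/saved_model")  # assumed export path

image = np.array(Image.open("rice_leaf.jpg").convert("RGB"))   # assumed input file
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]    # add a batch dimension

detections = detect_fn(input_tensor)
boxes = detections["detection_boxes"][0].numpy()     # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)

for box, score, cls in zip(boxes, scores, classes):
    if score >= 0.5:                                 # assumed confidence threshold
        print(LABELS.get(cls, "unknown"), round(float(score), 2), box)</ns0:p></ns0:div>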
<ns0:div><ns0:head n='2'>RPN steps for candidate regions:</ns0:head><ns0:p>Step 1: A window is slid over the rice disease feature map.</ns0:p><ns0:p>Step 2:</ns0:p><ns0:p>A neural network is formed to classify the leaf infections and regress the location of the frame.</ns0:p><ns0:p>Step 3:</ns0:p><ns0:p>Approximate distribution details of the leaf infection are provided according to the position of the sliding window.</ns0:p><ns0:p>Step 4:</ns0:p><ns0:p>A better location of the leaf infection is achieved with the box regression.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Data augmentation of rice leaf disease images: (A) original image (B) image rotated by 180 degrees (C) high brightness (D) Gaussian noise (E) horizontal flip (F) low brightness (G) vertical flip.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: The image annotation outcome in XML file.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>The training set of rice diseases was sent randomly to the neural network for training. The model was tested, and the test results were analyzed after the completion of the training process. The following phases reflect the Faster R-CNN training model: Box 3: Phases of the Training processes (Faster R-CNN training model) The complete architecture of a Faster R-CNN is presented in Figure 4.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Architecture of Faster R-CNN.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Activation visualization results (A) Rice Blast (B) Brown Spot (C) Healthy (D) Hispa.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Types of detection results (Images collected from Online and captured in the lab).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Types of detection results (Real Field Image).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Performance comparison with other pre-trained models.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Confusion matrix of the proposed approach.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: The classification loss of the proposed system.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Training processes: Different phases of the Faster R-CNN training model: Phase 1: After initializing the RPN structure with the pre-trained framework, the RPN is trained. The model and the layers unique to the RPN are revised when the training is finished. Phase 2: The Faster R-CNN architecture is formed. Subsequently, the proposals are calculated by utilizing the trained RPN and sent to the Faster R-CNN network. Following this, the network is trained, and the model and the layers unique to the Faster R-CNN are updated through the training process. Phase 3: The RPN network is initialized by employing the model that was formed in Phase 2. Then a second training is carried out on the RPN network. The layers unique to the RPN are altered during this training procedure while the shared model parameters remain unchanged. Phase 4: The model variables stated in Phase 3 are kept unaltered. The Faster R-CNN architecture is formed and the network is trained for a second time to optimize the specifications.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,453.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,219.37,525.00,261.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,508.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,522.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,249.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,263.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,291.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,391.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,429.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,362.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of the proposed model with other related studies.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Editor’s Comment:
Recent state-of-the-art works on plant leaf recognition must be discussed. The novelty and contribution of the paper must be explicitly stated. The results achieved should be compared with experimental results from related studies.
Author Response:
Thank you for giving us the opportunity to submit a revised draft of our manuscript titled 'A Real-Time Approach of diagnosing Rice Leaf Disease Using Deep Learning-based Faster R-CNN Framework'. We appreciate the time and effort that you and the reviewers have dedicated to providing your valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments on our paper. We have been able to incorporate changes to reflect most of the suggestions provided by the reviewers. Specifically, we have included a brief discussion of the recent state-of-the-art works on plant leaf recognition in the Introduction and inserted a table (Table 2) summarizing the recent studies on rice leaf disease identification. Moreover, the novelty of our proposed study is described in the Introduction along with the contributions of the work. In addition, we have validated the findings of our proposed method by comparing the obtained results with related works for rice leaf disease identification. A comparison table (Table 2: Comparison of the proposed model with other related studies) has been inserted that reflects the superiority of our proposed model over the other related studies. To address the above issue, a subsection labelled 'Comparison of the proposed model with other related studies' has been inserted in the revised manuscript. We have highlighted the changes within the manuscript.
Here is a point-by-point response to the reviewers’ comments and concerns.
Reviewer’s Comment (Reviewer 1):
Reviewer’s Comment-1
Your description of Sub-section 'Data collection' is not explanatory enough. Since the experiment was done using two datasets, the first dataset was collected from an online database (Kaggle database) and the second database is a primary dataset. Give a detailed summary of the total number of images collected from each database (you can use a Table).
Author Response-1
We appreciate the concern raised by the reviewer. A total of 2400 images were collected in our experiment. From the dataset, 600 images of rice blast, 650 images of brown spot, 500 images of hispa and 650 images of healthy rice leaf have been utilized. The total number of images collected from each database (Kaggle and our own dataset) is summarized in Table 1.
Reviewer’s Comment-2
For a better view and understanding of the different steps involved in your methodology. Kindly insert all the different stages steps in a textbox with appropriate caption such as:
• The steps of Faster R-CNN:
• RPNs for candidate regions
• Training processes: Faster R-CNN training model:
This would help the reader understand the steps better.
Author Response-2
We agree with the reviewer's concern. In the revised manuscript, the steps of each of the mentioned stages have been inserted in a textbox in the methodology section.
Reviewer’s Comment-3
Improve discussion of your findings in figures 6 and 7 of Section: Detection and visualization and failure analysis.
Author Response-3
Thank you for your suggestion. In the revised manuscript, we have included more discussion on the findings in figures 6 and 7 in the subsection 'Detection visualization and failure analysis'.
Reviewer’s Comment-4
Figure 9 shows the confusion matrix, it is expected to more detailed as the rate of misclassification of each class is not specified on the figure. Use a more labelled confusion matrix showing the wrong predictions on each class
Author Response-4
Thank you for pointing this out. We have modified Figure 9 (Confusion matrix of the proposed approach) by labeling the rate of misclassification of each class in detail in the confusion matrix. Moreover, we have briefly discussed the performance of each class by reporting its identification rate.
Reviewer’s Comment-5
To identify the validity of your findings, it is expected that the results achieved should be compared with experimental results from related studies. Since an online/ publicly available dataset was used (Kaggle database).
Author Response-5
We appreciate the reviewer's suggestion. In the revised manuscript, we have compared the findings of our proposed study with the existing related studies to validate the model's performance. In addition, we have inserted a table (Table 2: Comparison of the proposed model with other related studies) to support this discussion by reflecting the superiority of our proposed model. To address the above issue, a subsection labelled 'Comparison of the proposed model with other related studies' has been inserted in the revised manuscript.
Reviewer’s Comment-6
Highlights what are the limitations or challenges of your model and how it affects the overall performance.
Author Response-6
Thank you for the concern raised. Despite the excellent performance of our proposed model for rice leaf disease detection, some challenges have also been identified. To address this issue, we have inserted a subsection labelled 'Uncertainties and limitations' in the revised manuscript that discusses the limitations along with prospective solutions.
Reviewer’s Comment (Reviewer 2):
Reviewer’s Comment-1
The English language should be improved to ensure that an international audience can clearly understand your text. Some examples where the language could be improved include lines 164-165, 262 the current phrasing makes comprehension difficult.
Author Response-1
Thank you for the suggestion. In addition to addressing the above comments, the sentences pointed out by the reviewers have been rephrased.
Reviewer’s Comment-2
The summary of the existing technology in the introduction is not comprehensive. I suggest you add a more detailed description in lines 112-116.
Author Response-2
Thank you for pointing this out. We agree with this comment. Therefore, in the revised manuscript, we have added the description of the existing technology for rice leaf disease identification along with other plant disease detection to make the introduction comprehensive.
" | Here is a paper. Please give your review comments after reading it. |
17 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Diseases of rice leaves often pose threats to the sustainable production of rice, affecting many farmers around the world. Early diagnosis and appropriate remedy of rice leaf infection are crucial in facilitating healthy growth of the rice plants to ensure an adequate supply of food and food security for the rapidly increasing population. Therefore, a machine-driven disease diagnosis system could mitigate the limitations of conventional leaf disease diagnosis techniques, which are often time-consuming, inaccurate, and expensive. Computer-assisted rice leaf disease diagnosis systems are becoming popular nowadays; however, several limitations, ranging from strong image backgrounds, vague symptom edges, dissimilarity in the image-capturing weather, lack of real-field rice leaf image data, variation in symptoms from the same infection, and multiple infections producing similar symptoms, to the lack of an efficient real-time system, mar the efficacy of such systems and their usage. To mitigate the aforesaid problems, a faster region-based convolutional neural network (Faster R-CNN) is employed for the real-time detection of rice leaf diseases in the present research. The Faster R-CNN algorithm introduces an advanced RPN architecture that addresses the object location very precisely to generate candidate regions. The robustness of the Faster R-CNN model is enhanced by training the model with publicly available online and own real-field rice leaf datasets. The proposed deep-learning-based approach is observed to be effective in the automatic diagnosis of three discriminative rice leaf diseases, including rice blast, brown spot, and hispa, with accuracies of 98.09%, 98.85%, and 99.17% respectively. Moreover, the model was able to identify a healthy rice leaf with an accuracy of 99.25%. The results obtained herein demonstrate that the Faster R-CNN model offers a high-performing rice leaf infection identification system that can diagnose the most common rice diseases.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Plant disease has become a serious threat towards the production as well as the provision of food security all over the world. For instance, it was reported that more than 800 million people globally lack sufficient food, about 10 percent of the world's food supply is lost due to plant disease which significantly affects over 1.3 billion people who survive on less than $1 per day <ns0:ref type='bibr' target='#b48'>(Strange & Scott, 2005)</ns0:ref> <ns0:ref type='bibr' target='#b6'>(Christou & Twyman, 2004)</ns0:ref>. It is worth noting that plant diseases lead to 10-16 percent annual losses by costing an estimated US$ 220 billion in global crop harvests ('Combating plant diseases is key for sustainable crops --ScienceDaily'). These statistics portrayed the lingering food scarcity as a result of damage to food production induced by plant diseases that have become a global issue which should not be overlooked by plant pathologists <ns0:ref type='bibr' target='#b48'>(Strange & Scott, 2005)</ns0:ref> <ns0:ref type='bibr' target='#b32'>(Ng, 2016)</ns0:ref>. Therefore, to ensure an adequate supply of food to the rapidly increasing population, agricultural production must be raised by up to 70 percent. Nonetheless, several factors militate against the provision as well as the supply of the food to satisfy the need of the teeming population globally.</ns0:p><ns0:p>Rice is amongst the widely consumed food in the world with the total consumption of 493.13 million metric tons in 2019-2020 and 486.62 in the year 2018-2019 ('Total global rice consumption 2020 | Statista'). This has shown an increase in the consumption of rice when compared with the metric tons consumed across the years. It is expected that the increase in the consumption of rice tallies with production rates. However, the absence or lack of proper monitoring of farmland often resulted in the destruction of a large amount of rice emanating from diseases related problems. Several diseases frequently occur in the cultivation of rice which is the key reason for major economic losses. In addition, the abundant utilization of chemicals, for example, bactericides, fungicides, and nematicides have produced adverse effects in the agroecosystem to combat plant diseases <ns0:ref type='bibr' target='#b30'>(Nagaraju & Chawla, 2020)</ns0:ref>.</ns0:p><ns0:p>Disease prediction and forecasting of rice leaves are essential in order to preserve the quantity and quality of rice production since detection at the initial stage of the disease are useful in ensuring that timely intervention could be provided to convert the growth of the disease to facilitate the healthy growth of the plant for increasing the production as well as the supply of the rice <ns0:ref type='bibr' target='#b3'>(Barbedo, 2016)</ns0:ref>. Generally, the rice diseases are sheath blight, bacterial blight, rice blast and symptoms characterized by texture, the color and the shape, that are typical of rapid occurrence and easy infection <ns0:ref type='bibr' target='#b51'>(Zarbafi & Ham, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b18'>(Han et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b46'>(Sibin, Duanpin & Xing-hua, 2010)</ns0:ref>. 
The artificial identification, querying rice diseases maps, and automated detection are currently considered as the procedure of rice disease detection.</ns0:p><ns0:p>The conventional means of rice diseases identification are often carried out manually and has shown to be unreliable, expensive as well as time-consuming. The mapping technique of rice disease detection is relatively clear and easier to perform; however, it is possible to misinterpret some extremely similar diseases that have negative impacts on the growth of the rice. The latest computer-based identification system is yet to be broadly implemented due to the large environmental effect, slow detection speed, and low accuracy. Hence, developing a rice disease detection technique that could provide quick and accurate decisions on rice diseases is of great significance. Rice diseases are localized in leaves, which can be directed by leaf diagnosis to guide growers on whether the crops should be sprayed. It is worth noting that to date, a substantial progress has been made in the detection of plant diseases through the leaf features <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref> <ns0:ref type='bibr' target='#b22'>(Hwang & Haddad, 1995)</ns0:ref> <ns0:ref type='bibr' target='#b33'>(Pantazi, Moshou & Tamouridou, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b38'>(Rahnemoonfar & Sheppard, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b56'>(Zhang et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Many researchers have worked on the automatic diagnosis of rice diseases through conventional means such as pattern recognition techniques <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Rahman et al., 2020)</ns0:ref>, support vector machine <ns0:ref type='bibr' target='#b34'>(Phadikar, 2012)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Prajapati, Shah & Dabhi, 2017)</ns0:ref>, digital image processing techniques (Arnal <ns0:ref type='bibr' target='#b1'>Barbedo, 2013)</ns0:ref> <ns0:ref type='bibr' target='#b58'>(Zhou et al., 2013)</ns0:ref> <ns0:ref type='bibr'>(Sanyal et al., 2008)</ns0:ref> <ns0:ref type='bibr'>(Sanyal & Patel, 2008)</ns0:ref> and computer vision <ns0:ref type='bibr' target='#b2'>(Asfarian et al., 2014)</ns0:ref> for enhancing the accuracy and rapidity of diagnosing the results. In an earlier study, <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>identification approach where the diseased rice images were classified utilizing Self Organizing Map (SOM) (via neural network) in which the train images were obtained by extracting the features of the infected parts of the leave while four different types of images were applied for testing purposes. A somewhat satisfactory classification results were reported. In a different study, <ns0:ref type='bibr' target='#b34'>Phadikar et al. (Phadikar, 2012)</ns0:ref> proposed an automated approach to classify the rice plant diseases, namely leaf brown spot and the leaf blast diseases based on the morphological changes. A total of 1000 spot images captured by Nikon COOLPIX P4 digital camera from a rice field were used. The results obtained were 79.5% and 68.1% accuracies from the Bayes' and SVM classifiers, respectively.</ns0:p><ns0:p>Support Vector Machine (SVM) technique was also utilized by Prajapati et al. <ns0:ref type='bibr' target='#b36'>(Prajapati, Shah & Dabhi, 2017)</ns0:ref> for multi-class classification to identify three types of rice diseases (bacterial leaf blight, brown spot, and leaf smut). The images of infected rice plants were captured using a digital camera from a rice field and obtained 93.33% accuracy on training dataset and 73.33% accuracy on the test dataset. In <ns0:ref type='bibr' target='#b58'>(Zhou et al., 2013)</ns0:ref>, Zhou et al. investigated a technique to evaluate the degree of hopper infestation in rice crops where a fuzzy C-means algorithm was used to classify the regions into one of four classes: no infestation, mild infestation, moderate infestation and severe infestation. Their study illustrated that the accuracy reached 87% to differentiate cases in which rice plant-hopper infestation had occurred or not whilst the accuracy to differentiate four groups was 63.5%. <ns0:ref type='bibr'>Sanyal et al. (Sanyal et al., 2008)</ns0:ref> proposed an approach for detecting and classifying six types of mineral deficiencies in rice crops where each kind of feature (texture and color) was submitted to its own specific multi-layer perceptron (MLP) based neural network. Both networks consist of one hidden layer with a different number (40 for texture and 70 for color) of neurons in the hidden layer where 88.56% of the pixels were correctly classified. Similarly, the same authors proposed another similar work <ns0:ref type='bibr'>(Sanyal & Patel, 2008)</ns0:ref> where two kinds of diseases (blast and brown spots) that affect rice crops were successfully identified. In <ns0:ref type='bibr' target='#b2'>(Asfarian et al., 2014)</ns0:ref>, <ns0:ref type='bibr'>Asfarian et al. developed</ns0:ref> a new approach of texture analysis to identify four rice diseases (bacterial leaf blight, blast, brown spot and tungro virus) using fractal Fourier. In their proposed study, the image of the rice leaf was converted to CIELab color space and the system was able to achieve an of accuracy 92.5%.</ns0:p><ns0:p>The feature extraction from diseased and unaffected leaf images, the grey level co-occurrence matrix (GLCM) and the color moment of the leaf lesion region were implemented by Ghyar and Birjdar to create a 21-D feature vector and related features <ns0:ref type='bibr' target='#b12'>(Ghyar & Birajdar, 2018)</ns0:ref>. The redundant features were eliminated with the genetic algorithm-based feature selection method to generate 14-D feature vectors to minimize complexity. 
The technique has shown a promising result; however, to improve its detection accuracy there is need for more optimization procedure to take place. The rice disease from the brown spot and blast diseases was described utilizing the color texture of rice leaf photos by <ns0:ref type='bibr'>Sanyal et al. (Sanyal & Patel, 2008)</ns0:ref>. However, the technological standard of identification of rice diseases needs to be strengthened. In <ns0:ref type='bibr' target='#b35'>(Phadikar & Sil, 2008)</ns0:ref>, the entropy-based bipolar threshold technique was employed for segmentation of the image after improving its brightness and contrast. The author sought to integrate the image processing and soft computing technique for the detection of rice plant attacked by several types of diseases. The idea behind the technique was robust when utilized effectively. However, the average accuracy of identification on the four datasets was 82 percent which indicates that more enhancement is still required. The image processing and machine learning methods were utilized to non-destructively screen seedlings with rickets by <ns0:ref type='bibr' target='#b7'>Chung et al. (Chung et al., 2016)</ns0:ref>. Moreover, genetic algorithms were employed to develop SVM classifiers in order to optimize feature selection and model parameters for differentiating healthy seedlings and infected ones. The overall accuracy achieved in their study was 87.9 percent. However, since various diseases may have several symptoms, this approach should be tested if it is needed to use in other diseases, suggesting that this procedure has some limitations.</ns0:p><ns0:p>Nonetheless, it is worth noting that researchers have also begun to move away from such techniques to deep learning models in an effort to detect diseases in various plants <ns0:ref type='bibr' target='#b10'>(DeChant et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b53'>(Zhang et al., 2018b)</ns0:ref> <ns0:ref type='bibr' target='#b55'>(Zhang & Zhang, 2010)</ns0:ref> <ns0:ref type='bibr' target='#b26'>(Liu et al., 2017)</ns0:ref>. The Convolutional Neural Networks (CNN) is a deep learning method that has become one of the best image classification technique which has already acquired great success <ns0:ref type='bibr' target='#b50'>(Xu et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b57'>(Zhao & Jia, 2016)</ns0:ref> <ns0:ref type='bibr' target='#b43'>(Sainath et al., 2015)</ns0:ref> <ns0:ref type='bibr' target='#b41'>(Ribeiro et al., 2016)</ns0:ref>(Dan Claudiu Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, 2011) <ns0:ref type='bibr' target='#b23'>(Kawasaki et al., 2015)</ns0:ref>. A rice disease identification system based on a deep convolutional neural network was reported by <ns0:ref type='bibr' target='#b28'>Lu, et al. (Lu et al., 2017b)</ns0:ref>. It was observed that the average identification rate was 95.48 percent for 10 common rice diseases using the 10-fold crossvalidation scheme. In <ns0:ref type='bibr' target='#b59'>(Zhou et al., 2019)</ns0:ref>, <ns0:ref type='bibr'>Zhou et al. suggested</ns0:ref> Faster R-CNN approach, which seems to be ideal for the detection of rice diseases due to its good speed and high accuracy. Another method suggested by <ns0:ref type='bibr' target='#b40'>(Ren et al., 2017)</ns0:ref> was capable of detecting plant diseases as well as enhancing the accuracy using Faster R-CNN. 
However, it is required to reduce the time for disease identification in order to allow it to be suitable for monitoring large-scale cultivation.</ns0:p><ns0:p>These advanced techniques are used not only for the rice diseases diagnosis but also for some other crops including wheat <ns0:ref type='bibr' target='#b27'>(Lu et al., 2017a)</ns0:ref> <ns0:ref type='bibr' target='#b24'>(Khairnar & Dagade, 2014)</ns0:ref>, maize <ns0:ref type='bibr'>(Zhang & Yang, 2014)</ns0:ref>, pumpkin <ns0:ref type='bibr' target='#b52'>(Zhang et al., 2018a)</ns0:ref>, cotton <ns0:ref type='bibr' target='#b19'>(He et al., 2013)</ns0:ref> and tomato <ns0:ref type='bibr' target='#b49'>(Wang et al., 2019</ns0:ref><ns0:ref type='bibr'>), amongst others. DeChant et al. (DeChant et al., 2017)</ns0:ref> proposed a three-stage architecture (consisting of multiple convolutional neural networks (CNNs) where the stage-one model is trained on fullscaled images by dividing a single image into several smaller images. On the other hand, two improved deep convolution neural network models (GoogLeNet and Cifar10) were utilized by <ns0:ref type='bibr' target='#b53'>Zhang et al. (Zhang et al., 2018b)</ns0:ref> to improve the recognition accuracy of the maize leaf diseases and enhance the traditional identification techniques that often require long convergence times and large numbers of model parameters. It was shown from the study that an average accuracy of 98.9% and 98.8%, respectively are attainable. <ns0:ref type='bibr' target='#b26'>Liu et al. (Liu et al., 2017)</ns0:ref> Manuscript to be reviewed Computer Science designing a novel architecture of a deep convolutional neural network based on AlexNet that was able to achieve an overall accuracy of 97.62%. The CNN approach has also been applied by <ns0:ref type='bibr' target='#b29'>(Martin & Rybicki, 1998)</ns0:ref> to classify the Helminthosporium leaf spot of wheat, and an accuracy of 91.43 % and standard error of 0.83 % were recorded.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b11'>(Fuentes et al., 2017)</ns0:ref>, Fuentes et al. proposed a deep-learning-based approach using three architectures, namely, Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD) that can effectively recognize nine different types of diseases and pests in tomato plants. In a recent study, <ns0:ref type='bibr' target='#b37'>Rahman et al. (Rahman et al., 2020)</ns0:ref> developed a CNN approach for detecting diseases and pests (five classes of diseases, three classes of pests and one class of healthy plant and others) from rice plant images. A total number of 1426 images were collected that were captured using four different types of cameras and the system achieved a mean validation accuracy of 94.33 %. In <ns0:ref type='bibr' target='#b23'>(Kawasaki et al., 2015)</ns0:ref>, <ns0:ref type='bibr'>Kawasaki et al.</ns0:ref> suggested a method to identify cucumber leaf disease based on CNNs by achieving 94.9 percent accuracy in distinguishing between melon yellow spot virus, zucchini yellow mosaic virus, and non-diseased type virus. A new stacked CNN architecture is suggested by <ns0:ref type='bibr' target='#b37'>Rahman et al. (Rahman et al., 2020)</ns0:ref> which uses two-stage training to substantially reduce the model size while retaining high classification accuracy. 
It was found that the stacked CNN was able to achieve 95 percent test accuracy compared to VGG16, while the model size was reduced by 98 percent.</ns0:p><ns0:p>The development of a technique for automatic identification of rice leaf disease has hitherto faced many challenges. It is noted that the diagnosis, as well as detection, involves processes that could render the specific area in which the symptoms manifest within the rice plant very difficult to segment correctly. The capture conditions are hard to control, which can make predictions on the images less reliable and detection of the disease more difficult. Moreover, the symptoms caused by different diseases can be visually identical, and discrimination may have to rely on very tiny variations. Another very common issue is the discrepancy between the distribution of the data features used to train the model and that of the data used to validate the model. This situation creates an overfitting problem. This is very important when plant diseases are automatically detected because the symptoms can differ with geographical position, and the model can fall into the overfitting problem. It has also been observed that many of the suggested rice leaf disease diagnostic architectures are off-line, and only a few experiments have been carried out in real time. Usually, the image resolution is increased in real time, which also increases the computational complexity. In addition, the difficulty of real-time operation increases with a large variety of disease features, complex backgrounds and obscure boundaries of the disease symptoms. In order to address these challenges, the current study endeavors to employ the latest deep learning approach based on Faster R-CNN to conduct real-time detection of rice leaf diseases. The present investigation seeks to mitigate the lingering problems in the process of developing a system for diagnosing rice disease. The key contributions of the research are summed up as follows:</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p> Disease spot identification is considered as the basis of recognition for rice leaf disease, as such the accuracy of spot identification directly impacts on the accuracy of recognition of rice leaf disease. Hence, when choosing the target detection algorithm, recognition accuracy should be employed as the key indicator. YOLO, SSD and Faster R-CNN are the mainstream algorithms for the detection of the deep learning target. Among them, the Faster R-CNN algorithm creatively proposes the RPN structure to generate candidate regions, making the target positioning very precise. In addition, Faster R-CNN also has strong advantages in detection accuracy compared to YOLO and SSD. The proposed study employed Faster R-CNN as the key research algorithm due to its efficacy in detecting the spot of the disease reliably.</ns0:p><ns0:p> The data set for rice leaf disease is designed to provide a significant guarantee of the proposed model's generalization capability. Here, diseased rice leaf images with standardized and complex backgrounds are captured both in the lab and in real field conditions to improve the robustness of the Faster R-CNN model. In addition, naturaldiseased rice leaf images are processed to produce sufficient training images through data augmentation technology in order to solve the complexity of insufficient diseased rice leaf images and to avoid overfitting of Faster R-CNN models in the training phase.</ns0:p><ns0:p> A Faster R-CNN network is employed for the real-time detection of rice leaf diseases. With the proposed deep-learning method, the discriminatory features of diseased rice images will automatically be classified, and the three major types of rice leaf diseases are recognized with high accuracy. Furthermore, the proposed method could manage all the rice leaf images collected from the rice farmland in real conditions.</ns0:p><ns0:p>The present manuscript is structured as follows: rice leaf diseases dataset (RLDD) generation techniques are implemented in the Materials and Methodology section. A detail description of the development of the model for the detection for the rice leaf diseases is described in this section. Experimental outcomes to determine the accuracy of the proposed solution are described in the Results section and Discussion section exhibits a discussion on the comparison of the proposed model with other related studies along with limitations with prospective solutions for rice leaf disease detection approaches, followed by the Conclusion section, which draws the outcome of the present study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methodology</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows the comprehensive procedure of real-time identification. First of all, RLDD is constructed using a combination of an online database and an own dataset that was collected in this experiment. The online database is freely accessible. The own dataset was created by capturing diseased rice leaf images in the laboratory which were collected by the authors from actual rice </ns0:p></ns0:div>
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>Due to the absence of adequate data for real-time rice leaf disease, some of our authors and material resources were committed at the start of our study to collect diseased rice leaves. The patterns of diseases of rice leaves are varied over the season and other factors including moisture, temperature, different insects and illuminance. For example, most conspicuous symptoms of brown leaf spot disease occur on leaves and glumes of maturing plants. In real-time operation, the data collection process is very important since the inappropriate information in a dataset may hamper the experimental result. Hence, during the data collection process, the standard rule should be introduced and maintained.</ns0:p><ns0:p>In this study, the rice images have been captured from the rice farmland and a different condition of the leaves were collected and brought to the lab. The rice leaf datasets were also collected from Online (Kaggle) ('Rice Diseases Image Dataset | Kaggle') due to the lack of suitable and different conditions data from real field. It also helps to check the validation of the proposed model. Then, the entire datasets were merged to train the model and each data has been checked individually to avoid misclassification. The rice leaf infected image database consists of healthy leaf and three diseases including rice blast, brown spot, and hispa. To enhance the robustness of the proposed system, our own captured rice leaf image is combined with a publicly available online database. From the dataset, 600 images of rice blast, 650 images of brown spot, 500 images of hispa and 650 images of healthy rice leaf have been collected. A Total number of 2400 images were collected. The total number of images collected from each database (Kaggle and own dataset) are summarized in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Total number of images collected from each database.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data augmentation</ns0:head><ns0:p>Data augmentation is the process of broadening the dataset to enhance the model's performance by generating different forms of images. It also serves useful in mitigating the overfitting problem in the model during the training stage. The overfitting problem occurs when there is the presence of random noise or errors, rather than the underlying relationship. With the help of data augmentation, more image was generated from each image to train the model since some irrelevant patterns may occur during the training process of the model. For data augmentation operations, several techniques were used namely, rotation transformations, horizontal and vertical flips, as well as intensity disturbance which includes disturbances of brightness. A Gaussian noise processing scheme is employed in which the natural sources like thermal are responsible for the Gaussian noise. It is worth noting that in digital images, Gaussian noise interrupts the gray values. To train the model with training data set, Gaussian noise images were used for better results. With the above approaches, 7 new images are generated from each image as shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. Finally, the dataset containing 16,800 images were created using the data augmentation technique. </ns0:p></ns0:div>
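<ns0:div><ns0:p>The augmentation operations described above can be reproduced with standard Python imaging tools. The sketch below is a hedged illustration, not the authors' pipeline: the file names, rotation angle, brightness factors and noise strength are assumptions chosen only to mirror the kinds of transformations listed in the text.</ns0:p><ns0:p>import numpy as np
from PIL import Image, ImageEnhance

img = Image.open("rice_leaf.jpg").convert("RGB")   # assumed input file
arr = np.asarray(img)

augmented = {
    "rot180": img.rotate(180),
    "hflip": Image.fromarray(np.fliplr(arr).copy()),
    "vflip": Image.fromarray(np.flipud(arr).copy()),
    "bright": ImageEnhance.Brightness(img).enhance(1.4),   # assumed high-brightness factor
    "dark": ImageEnhance.Brightness(img).enhance(0.6),     # assumed low-brightness factor
}

# Additive Gaussian noise on the pixel values (assumed standard deviation of 15).
noise = np.random.normal(0.0, 15.0, arr.shape)
augmented["gaussian_noise"] = Image.fromarray(
    np.clip(arr.astype(np.float32) + noise, 0, 255).astype(np.uint8))

for name, im in augmented.items():
    im.save(f"rice_leaf_{name}.jpg")</ns0:p></ns0:div>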
<ns0:div><ns0:head>Image annotation</ns0:head><ns0:p>Image annotation plays a key role in labeling the positions and classes of object spots in the disease and healthy images for multiclass object detection. In computer vision, Pascal VOC is the method which stores annotation in the XML file and the separate annotation files are saved for each image. The LabelIMG is the graphical image tool used for this process in VOC format which is developed in python. The Pascal VOC provides standardized image data sets for object detection. We constructed a file for every image of the dataset in the Pascal VOC. The XML file created includes information such as the bounding box coordinate values and the disease classes. For training purposes, 400 images were annotated for each class (rice blast, hispa, brown spots, healthy) from the dataset and the rest of the images for testing our model performance. Although the whole process is very challenging owing to the fact that the disease area seems to be tiny and difficult to detect a times, nonetheless, it is worth highlighting that a high detection performance in our model was observed. The Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the annotated images of the brown spots. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows that the image contains the object details. The object tag and its content are replicated when images have several annotations. The object tag components are name, pose, truncated, difficult and bound box. These are the names of the objects that are to be detected. Truncated states the bounding box that the object specifies does not fit the entire extent of the object. If an object is partly visible in the image, the truncated is set to 1. Otherwise, the object truncated is set to 0, if completely visible. Difficult: When the object is seen as difficult to identify, an object is identified as difficult. If the object is difficult to recognize, then difficult is set to 1, else is set to 0. The axis-aligned rectangle in the bounding box indicates the size of the object is visible in the image. This technique contributed to understanding the specifics of the two common computer vision data formats.</ns0:p></ns0:div>
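<ns0:div><ns0:p>To make the Pascal VOC annotation structure concrete, the following sketch reads one LabelImg-style XML file with Python's standard xml.etree module. The file name is hypothetical; the tag names (object, name, bndbox, xmin, ymin, xmax, ymax) follow the standard VOC format referred to above.</ns0:p><ns0:p>import xml.etree.ElementTree as ET

tree = ET.parse("brown_spot_0001.xml")   # hypothetical annotation file
root = tree.getroot()

for obj in root.findall("object"):
    label = obj.find("name").text
    box = obj.find("bndbox")
    xmin = int(box.find("xmin").text)
    ymin = int(box.find("ymin").text)
    xmax = int(box.find("xmax").text)
    ymax = int(box.find("ymax").text)
    print(label, (xmin, ymin, xmax, ymax))</ns0:p></ns0:div>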
<ns0:div><ns0:head>Model architecture with Faster R-CNN</ns0:head><ns0:p>A new effort Faster R-CNN <ns0:ref type='bibr' target='#b40'>(Ren et al., 2017)</ns0:ref> was launched in 2015 by the team in the target detection community with Ross Girshick after R-CNN <ns0:ref type='bibr' target='#b16'>(Girshick et al., 2014)</ns0:ref> Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b14'>(Girshick, 2015)</ns0:ref> were launched. The R-CNN approach is very important for understanding proposal regions since the proposal regions are classified into object categories or background by training of CNNs end-to-end in R-CNN technique <ns0:ref type='bibr' target='#b16'>(Girshick et al., 2014)</ns0:ref>. Basically, R-CNN works as a network classifier. The accuracy of the model is based on the performance of the region proposal module <ns0:ref type='bibr' target='#b40'>(Ren et al., 2017)</ns0:ref>. Faster R-CNN does not need a fixed size to detect rice diseases image. As an image input, the length and width must be limited to a certain degree, thereby preventing distortion. The detection speed is significantly increased after the enhancement of the Regional Proposal Network (RPN). An integration of the region proposal algorithm into a CNN model will lead to a faster speedup implementation <ns0:ref type='bibr' target='#b40'>(Ren et al., 2017)</ns0:ref>. This approach is mainly conducted by Faster R-CNN in order to build a single and unified model that is consisted of region proposal network (RPN) and fast R-CNN with shared convolutional feature layers. Instead of selective search in Fast R-CNN, Faster R-CNN may be simply regarded as a model of ''the regional generation network + Fast R-CNN'' that employs the RPN which is a recommendation algorithm for this propose. The convolution layer / full connection layer processing takes place on the feature map, then a position regression and classification is applied to the detected target. The recommendation of the region is utilized to secure a better location of the disease. Fast R-CNN refers to the detailed measurement of the frame position and object categories in the frame. The following steps of Faster R-CNN are used to build the model for rice leaf disease detection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Box 1: Steps of the Faster R-CNN technique</ns0:head></ns0:div>
<ns0:div><ns0:head> RPNs for candidate regions</ns0:head><ns0:p>The main concept of RPN is to produce regions utilizing CNN that are explicitly suggested. The shared convolution network is fed by the rice leaf diseased images where feature map is achieved that is used as RPN input. The convolutional feature map points are the original image positions <ns0:ref type='bibr' target='#b17'>(Girshick et al., 2016)</ns0:ref>. The components on every map are nine anchor boxes of several sizes. There are two convolutional networks in the RPN. One is a convolution technique of 18-dimensional using a 1×1 convolution kernel to decide about a foreground image which belongs to the anchor box or not. The other is a 1×1 volume with which another convolution structure is passed. To achieve the relative position coordinates dx(A), dy(A) and dw(A) of the bounding box in the case of Ground Truth, a 36-dimensional convolution mechanism is conducted by the accumulative kernel. The original image is mapped with every point on the feature map in which 'anchor point' is described by each pixel <ns0:ref type='bibr' target='#b39'>(Ramaswamy et al., 2014)</ns0:ref>. Each anchor point is utilized to be positioned of multiple anchors of different sizes. The mostly utilized 3 varied aspect ratios are 2:1, 1:1 and 1:2 for popular scales 5122, 1282 and 2562 respectively. The new rectangular position achieved by the anchor is modified at first by the adjustment parameters in the proposal laying of the RPN. The position vector of the lower-left edge and top edge within each target area are considered as the outcomes for the target areas of the earliest photo. This is how the RPN measures are as follows:</ns0:p><ns0:p>Box 2: Steps of the RPN for candidate regions</ns0:p></ns0:div>
<ns0:div><ns0:head> Feature extraction</ns0:head><ns0:p>The processed RPN image is sent to the layer of RoI Pooling, which pools the areas of rice diseases. By further enhancing the SPP-Net algorithm, the Faster R-CNN algorithm suggests a region of interest (RoI Pooling). The RoI Pooling layer enables a number of dimensions to be transformed into a fixed size in line with the needs of the next fully connected network. Every rice disease candidate's area is equally divided by the ROI pooling layer in M×N blocks and performs maximum pooling per block <ns0:ref type='bibr' target='#b5'>(Chang et al., 2019)</ns0:ref>. On the rice disease map, disease candidates of different sizes are converted into standardized data and forwarded to the following layer. Although the size of the input image and the feature mapping is different, a feature representation of a fixed dimension can be extracted for each area by applying the ROI pooling layer to define the disease classification later.</ns0:p></ns0:div>
<ns0:div><ns0:head> Classification, regression and location refinement</ns0:head><ns0:p>The diseases are classified, and the position is refined by taking into account the pictures of rice diseases. The classification steps shall be: first, the classification of objects or non-objects for each of the two regions corresponding to the Anchor Box, k then models of regression (both equal to a different Anchor Box). The formula (Equation <ns0:ref type='formula'>1</ns0:ref>) for the complete estimation of the classification layer is as follows:</ns0:p><ns0:formula xml:id='formula_0'>(𝑥 1 𝑥 2 𝑥 3 ) ( 𝑤 11 𝑤 12 𝑤 21 𝑤 22 𝑤 31 𝑤 32 ) + (𝑏 1 𝑏 2 ) = (𝑦 1 𝑦 2 ) 1</ns0:formula><ns0:p>The rice disease location is determined by the size of the overlap region. The inaccuracy of the candidate's frame and the slight overlap are often the main reasons for unreliable test results. Therefore, a judicious identification of the location is non-trivial towards attaining encouraging results. The eigenvectors achieved in the classification are determined by a complete connection and Softmax, and a species is generated with a probability of a certain rice disease species. The anchor box regression is used to compensate the region from its actual GT position, hence closer to the real position of the rice disease detection frame. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>Faster R-CNN is optimized for a multi-task loss function <ns0:ref type='bibr' target='#b14'>(Girshick, 2015)</ns0:ref>. The multi-task loss function combines the losses of classification and bounding-box regression. For training RPNs, a binary class label (of being an object or not) is assigned to each anchor. Equation 2 represents the loss function for an image following the multi-task loss in Fast R-CNN [<ns0:ref type='bibr' target='#b0'>(Alamsyah & Fachrurrozi, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b40'>(Ren et al., 2017)</ns0:ref>]:</ns0:p><ns0:formula>L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*) \qquad (2)</ns0:formula><ns0:p>The bounding-box regression loss in Equation 2 is given by Equation 3:</ns0:p><ns0:formula xml:id='formula_1'>L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)<ns0:label>3</ns0:label></ns0:formula><ns0:p>where R is a robust loss function.</ns0:p><ns0:p>The complete architecture of a Faster R-CNN is presented in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>.</ns0:p></ns0:div>
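<ns0:div><ns0:p>In the Fast/Faster R-CNN papers cited above, the robust loss R is the smooth L1 function. The short sketch below is a hedged illustration rather than the authors' code; it evaluates the smooth L1 loss element-wise on example offset values.</ns0:p><ns0:p>import numpy as np

def smooth_l1(x):
    # Smooth L1 (robust) loss used for bounding-box regression in Fast/Faster R-CNN.
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < 1.0, 0.5 * x ** 2, np.abs(x) - 0.5)

t = np.array([0.2, -0.4, 1.5, 3.0])       # predicted box offsets (example values)
t_star = np.array([0.0, -0.1, 0.5, 2.0])  # ground-truth offsets (example values)
print(smooth_l1(t - t_star).sum())        # regression-loss contribution of one anchor</ns0:p></ns0:div>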
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Feature visualization process</ns0:head><ns0:p>Owing to the limited interpretability of CNNs, visualization methods are often used to better understand the CNN feature maps and to see how the network learns the features of each evaluated class. This experiment was carried out to better comprehend the variations between the feature maps extracted from different diseased rice leaf images. The visualization outcomes are shown in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, which suggests that all the disease spots are clearly separated from the background images. The proposed model therefore demonstrates excellent performance in the discrimination of rice leaf diseases. The visualization outcome for the healthy leaf is shown in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>(C).</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>(D) indicates Hispa, which commonly occurs in a very small region with boundaries that are not explicit. For brown spot, the spots are divided into two laps, as shown in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>(B). In addition, Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>(A) explores rice blast, which appears very similar to brown spot as shown in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>(B); they can still be distinguished by their minute differences. This experiment demonstrates the strong discriminative design of the proposed model for disease detection, and it clarifies how the CNN differentiates between classes by visualizing the features of the different rice leaf diseases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Detection visualization and failure analysis</ns0:head></ns0:div>
<ns0:div><ns0:p>The outcomes of rice leaf disease identification are shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>. The proposed approach can identify a single object, multiple objects of a single class, and multiple objects of multiple classes, and therefore demonstrates high detection performance in both single-class and multi-class assessments. Although the model is accurate, some detection failures inevitably occur when the spot region of the leaf is too small. A rice leaf example containing two disease types in a single image is illustrated in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> (F, H). The proposed model is able to detect the rice blast and hispa diseases in this case, but the small portion of hispa is not detected successfully. The model detects hispa and a healthy portion of the leaf successfully, as shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> (H). On the other hand, Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> (F) shows that the model can detect the multi-class diseases (rice blast and hispa) efficiently, yet it fails on the very tiny portion of hispa. The reduction in detection accuracy is attributed to the similar characteristics of the diseases, as shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> (E); owing to the similar characteristics of brown spot and rice blast, the developed model was confused in some cases. Environmental variables, including complex backgrounds, blurriness and lighting, also influence the identification accuracy. Furthermore, the small size of the lesion contributes to detection failures: it is difficult to extract and detect features if only a small part of the image is occupied by the leaf or the diseased region. Despite these limitations, in most cases the proposed model detects the leaf spots, as shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> (A, B, C, D, G). The detection ability in a real rice field is presented in Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison of pre-network recognition accuracy</ns0:head><ns0:p>Object detection algorithms such as the Single Shot Detector (SSD), Deconvolutional Single Shot Detector (DSSD) and Rainbow Single Shot Detector (R-SSD) essentially consist of two components. The first is the pre-network model used to extract the basic features; the other is an auxiliary structure that performs multi-scale detection on feature maps. Various deep convolutional networks, including ResNet-101, ResNet-50 and VGGNet-16 <ns0:ref type='bibr' target='#b47'>(Simonyan & Zisserman, 2015</ns0:ref>) <ns0:ref type='bibr' target='#b25'>(Liu & Deng, 2016)</ns0:ref>, as well as MobileNet <ns0:ref type='bibr' target='#b21'>(Howard, 2017)</ns0:ref>, are trained and tested to compare the recognition performance of traditional networks with that of our proposed Faster R-CNN on the RLDD. The stochastic gradient descent (SGD) algorithm is employed during training to learn the neural network weights and biases by reducing the loss function. SGD randomly selects a limited number of training samples at a time, known as the batch size. The batch size is set to 1 and the final number of iterations is fixed at 50,965. The learning rate is set to 0.0002; although very small, it contributes towards more reliable results. The momentum, an additional factor that decides how quickly the SGD algorithm converges to the optimal point, is set to 0.9. The accuracy curve is shown in Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref></ns0:p></ns0:div>
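For readers who want to reproduce the optimiser settings reported above, a minimal sketch in PyTorch is given below (the paper's experiments were run in a different framework, so this is only an assumed equivalent; the model and loss here are stand-ins).

```python
# SGD with the reported hyperparameters: lr = 0.0002, momentum = 0.9, batch size 1, 50,965 steps.
import torch

model = torch.nn.Linear(10, 4)                     # stand-in for the detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.0002, momentum=0.9)

for step in range(50965):
    optimizer.zero_grad()
    loss = model(torch.randn(1, 10)).sum()         # stand-in for the detection loss (batch size 1)
    loss.backward()
    optimizer.step()
```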
<ns0:div><ns0:head>Confusion matrix</ns0:head><ns0:p>Classifiers may be confused when dealing with multiple classes of similar shape. Infected rice leaf images captured at different levels or on different backgrounds introduce high complexity, which lowers performance for patterns displayed within the same class. The classification accuracy of a model can be visually inspected using a confusion matrix. The entire dataset of our study is split randomly into a training set and a testing set in order to train and test the model: 50% of the dataset is used for training and the remaining 50% for testing, i.e. 8400 observations for training and another 8400 observations for testing. Figure <ns0:ref type='figure' target='#fig_13'>9</ns0:ref> displays the confusion matrix of the final test results. The deeper the colour in the visualization, the greater the model's accuracy in the respective class. All correct predictions are located on the diagonal, whilst all wrong predictions are off-diagonal, so the classification accuracy can be assessed visually. Brown spot and hispa are clearly differentiated from the other classes by their features, with identification rates of 98.85% and 99.17%, respectively; for the healthy leaf, an accuracy of 99.25% is achieved. According to the confusion matrix, the detection model is most prone to confusing rice blast and brown spot. Among the 2100 images in the testing set of rice blast, 31 images were detected incorrectly as brown spot. On the other hand, among the 2275 images in the testing set of brown spot, 20 images were detected incorrectly as rice blast. This misclassification may be caused by the geometrical feature similarities between the two diseases. The other classes are well distinguished, and the confusion matrix confirms the low misclassification rates obtained in the present investigation.</ns0:p></ns0:div>
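A hedged sketch of how such a confusion matrix can be produced from the 50/50 split described above; the class names follow the paper, while the label vectors shown are placeholders standing in for the detector's outputs on the 8400 test images.

```python
# Building and plotting a confusion matrix for the four classes with scikit-learn.
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

classes = ["rice blast", "brown spot", "hispa", "healthy"]
y_true = [0, 1, 2, 3, 0, 1]        # placeholder ground-truth class indices
y_pred = [0, 1, 2, 3, 1, 1]        # placeholder predicted class indices
cm = confusion_matrix(y_true, y_pred, labels=list(range(len(classes))))
ConfusionMatrixDisplay(cm, display_labels=classes).plot()
```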
<ns0:div><ns0:head>Loss analysis of the proposed model</ns0:head><ns0:p>The training seeks to minimize the loss function, thereby reducing the errors of the model. In doing so, the learning algorithm repeats its calculations many times until the loss plateaus. The learning rate plays a significant role in minimizing the loss function; in the proposed study it is set to 0.0002. TensorBoard is a convenient tool for viewing these metrics and finding potential problems: it frequently updates the measurements and presents the outcomes to the user. For this purpose, the model was trained for 50,965 iterations on the training dataset. Figure <ns0:ref type='figure' target='#fig_2'>10</ns0:ref> depicts the loss analysis generated by TensorBoard, indicating that the total loss settles in the vicinity of 0.1.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>10</ns0:ref>: The classification loss of the proposed system.</ns0:p></ns0:div>
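An illustrative way to log the training loss so it can be inspected in TensorBoard, as described above; `train_one_step()` is a hypothetical helper, and any framework with a TensorBoard writer would work similarly.

```python
# Logging the scalar training loss to TensorBoard during the 50,965 iterations.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/rice-leaf-faster-rcnn")
for step in range(50965):
    total_loss = train_one_step()                  # hypothetical helper returning the scalar loss
    writer.add_scalar("loss/total", total_loss, global_step=step)
writer.close()
```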
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Comparison of the proposed model with other related studies. The comparison of the proposed model with existing related studies is presented in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Comparison of the proposed model with other related studies.</ns0:p><ns0:p>Most of the studies listed in Table <ns0:ref type='table'>2</ns0:ref> used either a publicly available dataset or their own captured dataset to validate their methods. Models validated only on publicly available datasets do not always ensure stable performance in a real-time setting. To address this issue, our proposed method is validated with both a publicly available dataset and our own dataset. Moreover, the total number of observations in our study is higher than in the other studies tabulated in Table <ns0:ref type='table'>2</ns0:ref>. Despite these facts, the performance of the proposed model is higher than that of the other models for rice leaf disease identification. The detection accuracy achieved by Rathore et al. (Narendra Pal Singh Rathore, 2020) is slightly higher than that of our proposed method, but their dataset consists of only one type of rice leaf disease (leaf blast), hence the discrepancy is acceptable. Therefore, considering the strong dataset, the real-time disease detection ability and the detection accuracy, our proposed method compares favourably with the other related approaches for rice leaf disease identification reported in the literature.</ns0:p></ns0:div>
<ns0:div><ns0:head>Uncertainties and limitations</ns0:head><ns0:p>Although the proposed model outperforms state-of-the-art rice leaf disease detection methods, some drawbacks were also identified. The limitations of this study, together with prospective solutions to address them, are as follows:</ns0:p><ns0:p> The network does not look at the whole image in one go but sequentially concentrates on parts of the image. The algorithm therefore requires many passes over a single image to extract all objects, which is time-consuming. To address this issue, a network that can extract all objects of an image in a single pass would be preferable.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p> Since several processes have been conducted one after the other, the performance of the further system depends on how the previous system performed. Thus, a model should be trained carefully with appropriate datasets to achieve the desired performance.</ns0:p><ns0:p> The misclassification issues could occur as a result of the geometrical feature similarities between the diseases. To overcome this obstacle, it should be required to train the network with more datasets which have similar geometrical features. It also recommended addressing more efficient deep learning algorithm which can classify the diseases containing small dissimilarities in features.</ns0:p><ns0:p> In a real-time approach, the rice leaves conditions vary with the season having different humidity, temperature, and illuminance. Hence, some overfitting problem may emanate when random noise or errors occurs rather than the underlying relationship, as previously described <ns0:ref type='bibr' target='#b20'>(Heisel et al., 2017)</ns0:ref>. During the training stage, the overfitting problem is expected to occur randomly due to the lack of proper images with various conditions. To overcome these problems, in this study we have used data augmentation in the process of the training stage of Faster R-CNN. During the training, the proposed model can learn huge irrelevant patterns through the large amounts of images which is generated by the data augmentation process. This phenomenon helps to reduce the overfitting problem and achieve the higher performance. More approaches such as saturation, hue and Generative Adversarial Networks (GANs) <ns0:ref type='bibr' target='#b4'>(Bowles et al., 2018)</ns0:ref> can be employed to overcome this issue.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The signs of infection appear in various sections of the plant, and leaves are widely used to diagnose plant disease. Advanced computer vision technology encourages researchers around the world to carry out extensive experiments on plant disease recognition using leaf image analysis techniques. In the past few years, deep learning methods have notably been utilized to recognize plant leaf infection. This paper proposes a real-time rice leaf disease diagnosis framework based on the Faster R-CNN technique. The rice leaf image database consists of healthy leaves and three diseases: rice blast, brown spot and hispa. In order to enhance the robustness of the proposed system, our own captured rice leaf images are combined with a publicly available online database. Moreover, several image augmentation schemes are used to enrich the dataset, which familiarizes the model with the different possible conditions of the images; this strategy also enhances the model's performance and generalization capability. The obtained results are very encouraging for diagnosing healthy leaves and the different types of infected leaves in both laboratory-based and real-field images. However, an additional study should be carried out to segment the infected portions of the leaf image while minimizing the surrounding interference. The existing rice leaf disease diagnosis systems are designed using laboratory-based captured images. Although we have implemented a real-time disease recognition architecture using real-field rice leaf images, the proposed system is still not fully automated. Therefore, further work should be carried out to implement a dynamic and automatic system that recognizes rice leaf diseases at large scale. Such a system could be built from a mobile terminal processor and the agricultural Internet of Things, which may be favourable for modernizing the agricultural industry.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Faster R-CNN technique:</ns0:head><ns0:p>Step 1: The entire rice disease image is fed into the CNN to acquire a feature map.</ns0:p><ns0:p>Step 2: The convolutional feature map is fed into the RPN to obtain the feature information of the candidate frames.</ns0:p><ns0:p>Step 3: The features of the candidate boxes are used to recognize whether they belong to a specific disease category, and the diseases are classified.</ns0:p><ns0:p>Step 4: The disease location is adjusted again by a regression stage for each candidate frame belonging to a specific disease.</ns0:p></ns0:div>
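An illustrative PyTorch/torchvision equivalent of the four steps above is sketched below. The paper itself uses a Caffe-based implementation, so this is an assumed, not authoritative, counterpart: a pre-built Faster R-CNN whose box predictor is replaced for the four rice-leaf classes plus background.

```python
# Faster R-CNN fine-tuning sketch for 4 classes (rice blast, brown spot, hispa, healthy) + background.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained backbone
num_classes = 5
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# During training the model returns the multi-task loss dictionary directly:
# losses = model(images, targets)   # targets hold 'boxes' and 'labels' per image
```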
<ns0:div><ns0:head n='2'>RPN steps for candidate regions:</ns0:head><ns0:p>Step 1: To slide a window on the map of rice disease.</ns0:p><ns0:p>Step 2:</ns0:p><ns0:p>To classify the leaf infections and revert back the location of the frame, a neural network is formed.</ns0:p><ns0:p>Step 3:</ns0:p><ns0:p>To provide approximate distribution details of leaf infection according to the position of the sliding window.</ns0:p><ns0:p>Step 4:</ns0:p><ns0:p>To achieve a better location of leaf infection with the box's regression. The model variables stated in Phase 3 are kept unaltered. The Faster R-CNN architecture is formed and trained the network for the 2nd attempt to optimize the specifications.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>, Phadikar et al. proposed a rice disease PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54808:2:0:NEW 13 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54808:2:0:NEW 13 Feb 2021)Manuscript to be reviewed Computer Science fields. The original RLDD is then annotated manually and expanded through the several data augmentation procedures. The entire dataset is subsequently split into two groups: training dataset and testing dataset. To train the Faster R-CNN model, training dataset is employed whereas testing dataset is utilized for performance assessment. The detection outcomes consist of the classes as well as the locations of the identified rice leaf diseases.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Complete architecture of the proposed study.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Data augmentation of rice leaf disease images: (A) original image (B) image rotated by 180-degree (C) high brightness (D) Gaussian noise (E) horizontal flip (F) low brightness (G) vertical flip.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: The image annotation outcome in XML file.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>and Fast R-CNN PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54808:2:0:NEW 13 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>The training processes and loss function The Caffe deep learning approach is used to carry out the experiment. The training set of rice diseases was sent randomly to the neural network for training. The model was tested, and the test results were analyzed after the completion of the training process. The following phases reflect the Faster R-CNN training model: Box 3: Phases of the Training processes (Faster R-CNN training model) PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54808:2:0:NEW 13 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Architecture of Faster R-CNN.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Activation visualization results (A) Rice Blast (B) Brown Spot (C) Healthy (D) Hispa.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Types of detection results (Images collected from Online and captured in the lab).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Types of detection results (Real Field Image).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>with the number of training iterations in the X-axis and corresponding Y-axis shows the training accuracy. The comparison of test accuracies of different pre-networks (VGGNet-16, ResNet-50, ResNet-101, MobileNet3 and Faster R-CNN) are defined in terms of accuracy curve, as shown in Figure 8. The VGGNet-16 networks have higher convergence speed but lower accuracy. On the other hand, from the figure, it is evident that the Faster R-CNN model shows high accuracy on the RLDD as compared to other pre-trained models.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Performance comparison with other pre-trained models.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Confusion matrix of the proposed approach.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Training processes: Different Phases of Faster R-CNN training model: Phase 1: After initializing the RPN structure with the pre-trained framework, the RPN is trained. The model's distinctive value and RPN are revised when the training is finished. Phase 2: The Faster R-CNN architecture is formed. Subsequently the proposal is calculated by utilizing the trained RPN and then the proposal is sent to the Faster R-CNN network. Following this, the network is trained. Then the model and the uniqueness of the Faster R-CNN is updated through the training process. Phase 3: The RPN network is initialized by employing the model that was formed in the Phase 2. Then a second training is carried out on the RPN network. The RPN's distinctive value is altered at the time of the training procedure while the model parameters remain unchanged. Phase 4:</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,453.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,219.37,525.00,261.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,508.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,522.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,249.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,263.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,291.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,391.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,429.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,362.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>developed an apple leaf disease identification technique that includes of generating sufficient pathological images and</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54808:2:0:NEW 13 Feb 2021)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4 illustrates the entire framework for object detection, which is a single, unified network. First, the Faster R-CNN receives the feature maps from the CNN. It then passes the collected features to the Region Proposal Network (RPN). Images of various sizes can be fed as input to the RPN; the outputs comprise a series of rectangular object proposals. The RPN is inserted after the last convolutional layer of the CNN: a small network is slid over the convolutional feature map produced by the last shared convolutional layer, taking an n × n spatial window of the feature map as input. At each sliding-window position, multiple region proposals are predicted simultaneously, and the window is projected to a lower-dimensional feature vector. The proposals from the RPN are fed to the RoI pooling layer, which generates fixed-size feature maps from inputs of different sizes; the fixed output dimension depends on the parameters of the layer. Finally, these features are fed into two fully connected branches, a box-classification layer (classifier) and a box-regression layer (regressor): the classifier assigns the object class, while the regressor refines the bounding box. The following equations [Equation 4 - Equation 11] <ns0:ref type='bibr' target='#b16'>(Girshick et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b40'>(Ren et al., 2017)</ns0:ref> are used for bounding box regression: t_x = (x - x_a)/w_a (4), t_y = (y - y_a)/h_a (5), t_w = log(w/w_a) (6), t_h = log(h/h_a) (7), t*_x = (x* - x_a)/w_a (8), t*_y = (y* - y_a)/h_a (9), t*_w = log(w*/w_a) (10), t*_h = log(h*/h_a) (11), where x, y, w and h denote the centre coordinates, width and height of the box, and x, x_a and x* (and analogously for y, w and h) refer to the predicted box, the anchor box and the ground-truth box, respectively.</ns0:figDesc></ns0:figure>
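The parameterisation in Equations (4)-(11) can be written as a small helper; the following is a minimal, self-contained sketch (names are illustrative) that encodes a box relative to an anchor exactly as above.

```python
# Bounding-box encoding per Equations (4)-(11): offsets t relative to an anchor.
import math

def encode(box, anchor):
    """box and anchor are (center_x, center_y, width, height) tuples."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

# t  = encode(predicted_box,    anchor)   -> (t_x, t_y, t_w, t_h)
# t* = encode(ground_truth_box, anchor)   -> (t*_x, t*_y, t*_w, t*_h)
```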
</ns0:body>
" | "Editor’s Comment:
The authors should improve the description of the neural network in the revised version of the manuscript.
Author Response:
Thank you for giving us the opportunity to submit a revised version (minor corrections) of our manuscript titled 'A Real-Time Approach of diagnosing Rice Leaf Disease Using Deep Learning-based Faster R-CNN Framework'. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers. Specifically, more details on the convolutional neural network are now discussed in the subsection “Model Architecture with Faster R-CNN”. Moreover, additional mathematical equations, with the required explanation, have been added in the subsection “Model Architecture with Faster R-CNN (The training processes and loss function)”. In addition, we have included a brief discussion of Figure 4, focused on the CNN and RoI pooling in the Faster R-CNN architecture, in the same subsection.
Here is a point-by-point response to the reviewers’ comments and concerns.
Reviewer’s Comment (Reviewer 2):
Reviewer’s Comment-1
Basic reporting:
This is a well-written paper containing interesting results which merit publication. For the benefit of the reader, however, certain statements require further justification. There are given below.
1. In the Model architecture with Faster R-CNN part, more mathematical inference can help readers understand the network.
2. More detail can be shown about the architecture of CNN and ROI in Figure 4.
Author Response-1
We appreciate the concern raised by the reviewer. In addition to the above comments, more mathematical equations have been added in the subsection “Model Architecture with Faster R-CNN (The training processes and loss function)”. Moreover, we have included more discussion about the architecture of CNN and ROI on the Figures 4 in the subsection “Model Architecture with Faster R-CNN (The training processes and loss function)”.
Reviewer’s Comment-2
Experimental design:
no comment
Author Response-2
-
Reviewer’s Comment-3
Validity of the findings:
no comment
Author Response-3
-
Reviewer’s Comment-4
Comments for the author:
no comment
Author Response-4
-
Reviewer’s Comment (Reviewer 1):
Reviewer’s Comment-1
Basic reporting:
The authors have improved the clarity and the discussions raised showing sufficient knowledge in the field background.
Author Response-1
-
Reviewer’s Comment-2
Experimental design:
Authors have further shown satisfactory and detailed discussion of all the methodology steps in the proposed Faster R-CNN.
Author Response-2
-
Reviewer’s Comment-3
Validity of the findings:
Authors have shown from the comparison table of the experimental results, that the performance of Faster R-CNN framework was effective in the real-time detection of Rice Leaf Diseases when compared with some of the state-of-the-art methods.
Author Response-3
-
Reviewer’s Comment-4
Comments for the author:
Authors have carefully addressed all the issues raised in previous comments.
Author Response-4
-
" | Here is a paper. Please give your review comments after reading it. |
18 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Software product lines (SPL) enable development teams to fully address systematic reuse of shared assets in order to deliver a family of similar software products. Mobile applications are an obvious candidate for employing an SPL approach. This paper presents our research outcomes, based on empirical data from an industry-level development project. Two development teams were confronted with the same set of functionalities to be delivered through a family of native mobile applications for Android and iOS.</ns0:p><ns0:p>Methods. Empirical data was gathered before, during and after a year of full-time development. The data demonstrate the impact of an SPL approach by comparing SPL and non-SPL multiple-edition development. One family of products (Android apps) was developed using an SPL approach, while the other (iOS apps), functionally the same, was developed without employing an SPL approach. The project generated a volume of raw and aggregated empirical data to support our research questions.</ns0:p><ns0:p>Results. The paper reports a positive impact of an SPL approach on product quality (internal and external) and on feature output per week. As the data show, it also increases the delivery of functionalities (240% in 6 more editions), while investing the same amount of effort needed for single-edition development. As a result of the system-supported separation of development and production code, developers had high confidence in further development. On the other hand, the second team delivered fewer new functionalities, only two new application editions, and lower software quality than the team that managed multi-edition development by employing an SPL approach.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Reuse is one of the fundamental disciplines in software engineering. It plays an important role in the development of new systems and in the maintenance of existing ones, and it becomes especially important when several versions of the software are developed simultaneously. Usually, software does not result in a single version or edition specially tailored to certain customers. Even when, at the declarative level, there is only one version of the software, diversity still appears in the form of a test version, a production version, and so on; software can also be tailored to a specific set of hardware. Regardless of the domain, reuse plays a crucial role in successfully managing a set of similar software editions and versions. Software product lines (SPL) are an approach to reuse, employed when a family of products shares several common functionalities. Given the scarce publicly available empirical evidence, software architects have to rely primarily on theoretical benefits and their own lessons learned while practicing the SPL approach to reuse. In this paper we present our research outcomes, gathered during one year of development of mobile applications for two mobile platforms, Android and iOS, sharing a common set of functionalities. The mobile applications are part of a larger project which also included backend cloud solutions, a web portal, a media streaming server and tablet applications. Several editions of the Android applications were managed with the introduction and implementation of the Software Product Line (SPL) approach, while the iOS applications were managed with more traditional methods of reuse, e.g. branches in the version management system, sharing the same codebase but compiling it several times, using compiler directives, runtime checking, etc. The Android and iOS development teams shared the same set of functionalities to be developed and were given the same time to finish the implementation. This industry-based setup gave us the opportunity to explore and share interesting pre-, mid- and post-development empirical data, compiled into research observations on the implications of the SPL approach. The rest of the paper is organized as follows. The chapter 'Related Work' gives a general overview of state-of-the-art approaches, emphasizing the SPL approach, and outlines related work that addresses the same research. In the next chapter, the research method is discussed in detail: the research questions are presented, together with the project setup, methods, tools and SPL realisation. Chapter 4 outlines the domain in which the SPL approach was exercised. The chapter 'Results' provides details on the empirical data used to address the research questions. The paper continues with 'Discussion', providing an interpretation of the empirical data and focusing on comparing the Android and iOS products and the velocity of the teams; the differences are discussed together with long-term implications and both the positive and negative lessons learned. The paper finishes with 'Conclusion', which summarizes the most important findings of our research.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Related work</ns0:head><ns0:p>The software product lines (SPL) approach to reuse in the software engineering area has been discussed and published for several years. It was introduced in the Software Engineering Institute <ns0:ref type='bibr' target='#b14'>(Northrop, 2002)</ns0:ref> and proved to be an adequate solution to reuse in special cases, when several software products share a majority of functionalities, while only a fraction of functionalities are edition-specific. The foundation book 'Software Product Lines: Practices and Patterns' from Clements et.al dates to 2001 <ns0:ref type='bibr' target='#b2'>(Clements, Northrop, 2001)</ns0:ref>. According to the original SPL idea, development efforts are directed towards developing core assets, while product development is a process of aligning core assets into final products. Management activities (including analysis and design) are shared among all products. Northrop <ns0:ref type='bibr' target='#b14'>(Northrop, 2002</ns0:ref>) also proposes several patterns and their variants, to be used for SPL-based development. The SPL approach is explained in detail by the Software Engineering Institute (SEI, 2020). They define software product lines as follows: A software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. As explained by Northrop (SEI, 2020) SPL in addition to the existing mechanisms of reuse allow other levels of reuse -reuse at the level of larger software pieces. Besides reusing technical building blocks, these also include reusing procedures and rules, associated with the software. They include single analytics, planning, and management of software development. The SPL approach could be implemented when some of the following issues occur as a result of the complexity of the software (SEI, 2020):</ns0:p><ns0:p> we develop the same functionality for a variety of products and/or customers,  the same change should be made in a number of different software products,  the same functionality should behave differently depending on the final product,  certain functionality can no longer be maintained, and so the customer has to move to a newer version of the software,  we cannot estimate the cost of transferring certain features to different software,  certain basic infrastructure changes lead to unpredictable behaviour of dependent products,  the majority of effort is put into maintenance, and not the development of new functionalities. The additional costs of the SPL approach are also clearly stated: architecture, building blocks and individual tests should include the possibility of variability, while business plans must be made for multiple products, not just one. The long term claimed contributions of SPL are as follows (SEI, 2020): up to 10x improved productivity, up to 10x improved quality, joint development costs reduced by up to 60%, shortened time-to-market by up to 98% and the possibility of moving to new markets is measured in months, not in years. SPL positive effects could, however, only have been observed if SPL approaches were used correctly and in appropriate software development projects. Muthig et.al. <ns0:ref type='bibr'>(Muthig, John, Anastasopoulos, Forster, Doerr, Schmid, 2004</ns0:ref>) lists several possibilities of misusing the SPL approach. 
These include (Muthig, John, Anastasopoulos, Forster, Doerr, Schmid, 2004) general purpose reuse, using classical reuse techniques such as component-based reuse, having configurable software behaviour and managing versions of the same software. According to many authors, an important aspect in the SPL approach is managing variabilities. Cavalcanti et.al. <ns0:ref type='bibr' target='#b1'>(Cavalcanti, Machado, Anselmo, 2013)</ns0:ref> define the SPL-approach as a tool to effectively cope with variabilities. The authors address three types of variabilities:</ns0:p><ns0:p> Functionality presence: If the functionality is present in all the lines and in all with the same realization, such functionality may be realized in the most general common building block.</ns0:p><ns0:p> The lack of functionality: the functionality is not present in particular lines. In the case that the functionality is required in only one line, the functionality may be realized in the line itself, otherwise it is necessary to introduce a specific building block.</ns0:p><ns0:p> A different realization: the functionality is available, but the realization will be different in different product lines. A different realization can be realized in the line, unless the same feature can be found in multiple lines -in this case, it is reasonable to introduce a new building block, which is a specialization of the existing one. Clements <ns0:ref type='bibr' target='#b3'>(Clements, 2005)</ns0:ref> explains that the technical realization of variabilities is based on already established and well-known concepts in software engineering, including component level reuse, design pattern employment, developing plug-ins, using parameterization, configuration with deployment descriptors and others. SPL approach specifics for modern mobile platforms and the specifics of mobile application development have been little published so far. However, some authors have published papers in this area. Muthig et.al. <ns0:ref type='bibr'>(Muthig, John, Anastasopoulos, Forster, Doerr, Schmid, 2004</ns0:ref>) published a report on The Go Phone Case Study. It is an in-depth report on the practical proof-of-concept development of mobile applications for several platforms; including SPL-specific analysis and design. Usman et.al. <ns0:ref type='bibr' target='#b10'>(Usman, Iqbal, Uzair Khan, 2017)</ns0:ref> shows two case studies employing SPL approach as well. Authors base their case studies on problem of developing and maintaining multiple native variants of mobile applications to support different mobile operating systems, devices and varying application functional requirements. Their approach includes their own tool (Moppet) to automates their approach of model-driven generating mobile applications. Authors deal with three types of variations in mobile applications: variation due to operation systems and their versions, software and hardware capabilities of mobile devices, and functionalities offered by the mobile application. Similarly, Marinho et.al. (Marinho, Fabiana, Andrade, Werner, Werner, Maia, Rocha, Aguiar, 2012) discusses the use of an SPL approach in mobile development for several hardware and context situations. They presented Android applications that resulted from the MobiLine project as well as the approach used to build them. The SPL employment resulted in 57 applications that share 100% common mobility-related functionalities, 87% common message exchange-related functionalities and 8% context-related functionalities. 
For modelling and building applications they proposed and used their MobiLine development approach. The authors only reported on the lessons learned from technical and project management areas. Alves et.al. <ns0:ref type='bibr' target='#b0'>(Alves, Camara, Alves, 2008)</ns0:ref> presented successful SPL employment in the area of mobile games. They observed an interesting fact -the SPL approach has been used several times in the past, but they did not name it SPL. It emerged naturally. At the same time, they reveal the need for standards or standard approaches in terms of establishing a common SPL architecturewhich would simplify SPL-based development dramatically. Quinton et.al. <ns0:ref type='bibr' target='#b15'>(Quinton, Mosser, Parra, Duchien, 2011)</ns0:ref> reported on the problem of addressing variabilities, while designing mobile applications. The SPL approach is defined by two independent dimensions: mobile device variety and mobile platforms. Their model-driven approach, supported by the Applitude tool, enables the creation of Android, iOS and Windows Phone mobile applications, while considering variabilities. The variabilities are addressed with merging core assets, while the platform dimension is supported by metamodel in Applitude. Dürschmid et.al. <ns0:ref type='bibr' target='#b4'>(Dürschmid, Trapp, Döllner, 2017)</ns0:ref> acknowledges the SPL approach in paper 'Towards architectural styles for Android app software product lines' to be complex in Android development due to inflexible operating system framework. This is why they propose several techniques to achieve appropriate SPL architectures. They include activity extensions, activity connectors, dynamic preference entries, decoupled definition of domain-specific behavior via configuration files, feature model using Android resources. Using their approach, authors manage to demonstrate the benefits via 90% of code reuse in their case study. As we will demonstrate in this paper, our real-life production project goes even beyond techniques mentioned before (we also employ design patterns etc.) and demonstrate benefits not only via code reuse, but also via increased productivity and code quality. As demonstrated in systematic literature reviews by El-Sharkawy et.al. (El-Sharkawy, Yamagishi-Eichler, Schmid, 2019) and Chacón-Luna et.al. (Chacón-Luna, Gutiérrez, Galindo, Benavides, 2020), SPL has gained momentum lately and the interest in empirical data on SPL has increased (Chacón-Luna, Gutiérrez, Galindo, Benavides 2020). In addition to this, Software Product Lines Conference (SPLC, 2020) is organised annually, regularly disseminating the progress of this research domain. We see our papers' role as an important report of industry-scale development with shoulder-to-shoulder comparison of parallel development of the same application -one team with, and one team without SPL approach. Another indicator of SPL approach gaining popularity in the mobile development is also the fact, that in 2019, Android API and Android Studio have a full support for 'product flavours', which is Google's term for SPL -creating different variants of an app (Android Developers, 2020). Product flavours allow the developers to specify different features and device requirements as well as use specific source sets for each flavour, while still using shared code and assets where possible. Each build variant represents a different version of an app built from a single project (Android Developers, 2020). 
When building the app, only the source sets relevant to the selected build variant are included in the resulting end-user package, while all the others are left out.</ns0:p></ns0:div>
<ns0:div><ns0:p>Google's approach to SPL is similar to the approach presented in this paper. However, our approach, as demonstrated later in the paper, is based on projects, libraries and proven design patterns, and avoids compile-time separation of product lines. In addition, even app resources (such as graphics, multilanguage translations, etc.) are addressed by the variability management presented in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>3</ns0:head></ns0:div>
<ns0:div><ns0:head>Research method</ns0:head><ns0:p>The 24alife project (presented in section 4) includes two mobile development teams -Android and iOS. Application families are developed from single requirement-design body in separate development teams, resulting in native applications for Android and iOS.</ns0:p><ns0:p>Presented circumstances enabled us to design a research which would quantify development teams design decisions on how to manage product families. The empirical data results from a process, illustrated in figure 1.</ns0:p><ns0:p>To verify whether development teams, separately working on mobile applications for Android and iOS are comparable, a pre-development survey was executed, capturing the developers' experience and perceived knowledge. We designed our questionnaire based on the practices set forth Chen et.al. (Chen, Chen, Ma, Zhou, Zhou, Xu, 2018). We asked them to enter their perceived level of knowledge of programming languages and provide a number of years for their professional experience. Since the knowledge self-assessment can be biased and subjective, the years of experience criterion was added in order to objectify participant's experiences. The development team for the iOS application managed their multiple editions of mobile applications manually. The Android development team did the same during the weeks 1-33.</ns0:p><ns0:p>From week 34, the Android development team switched to an SPL-based approach. In week 55, our research ended. During development weeks 1-55, we continuously measured internal quality metrics and source code size. It resulted in empirical data, used to verify if we can compare iOS and Android source code in the first place. Secondly, if and what is the impact of week 33s' decision on Android source code and further delivery of functionalities. We measured external quality via analysing test reposts, generated by the quality assurance team. In addition to measuring source code size, application size was measured by counting functionalities, offered to users by application editions, which is also the case in function point analysis, which is one of the standard metrics for determining the size of the software product <ns0:ref type='bibr' target='#b20'>(Albrecht, 1979)</ns0:ref>. The source code size (Lines of Code metric) was continuously monitored during the development not only to provide evidence on ability to compare iOS and Android application, but also to support possible differences in delivered functionalities in the second half of the development. Deviations in size-based metrics would signal that the introduction of the SPL approach was not the only change in the development process.</ns0:p><ns0:p>Post-development gathering of empirical data consists of a questionnaire to capture developers' perception of their decision to manage multiple editions. It also included gathering and interpreting empirical metrics from the configuration management system, bug tracking system and release logs. Post-development data gathering aims is to show if introducing the SPL PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54018:1:1:NEW 4 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>approach returned the investment in terms of changed delivery times and possible quality changes.</ns0:p><ns0:p>Based on the presented research method and compiled empirical data, we would like to conclude while answering the research questions below:</ns0:p><ns0:p> RQ1: Is the source code size comparable between Android and iOS applications, while having the same set of requirements?</ns0:p><ns0:p> RQ2: What are the impacts of introducing a software product lines (SPL) approach to a mobile development?</ns0:p><ns0:p>o RQ2.1: What are the impacts of the SPL approach to application growth? o RQ2.2: What are the impacts of the SPL approach to application quality? o RQ2.3: What are the impacts of the SPL approach to developers' confidence in frequent releases? The presented set of research questions was carefully selected to check: a) Whether we can compare Android and iOS applications on source code level, while having the same set of functionalities and investing the same effort amount; b) Internal and external quality of both applications with respect to selected multiple editions approach; c) Delivered application editions and core, optional and alternative functionalities with respect to selected multiple editions approach; d) Developers perception on the selected multiple editions approach. In addition to two questionnaires (pre-and post-development questionnaire), our research relies on software metrics, that support answers on research questions: </ns0:p></ns0:div>
<ns0:div><ns0:head>4</ns0:head><ns0:p>Our approach towards several mobile application editions 24alife is the ecosystem of information solutions, oriented towards an increasing quality of life. This multidisciplinary project includes medical, sports, psychological and nutritional aspects as well as the combined view of an individual through targeted support in detecting, monitoring and eliminating the negative effects of stress. It is intended for individuals in strengthening and upgrading health reserves and, as such, focuses on a healthy lifestyle.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54018:1:1:NEW 4 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Mobile applications (Android, iOS) are used as a supplement to the primary, web-based, user interface. Their main role is to track and guide sports activities (such as jogging, cycling, fitness training, etc.), to do daily measurements (heart rate, weight, blood pressure etc.), motivate users, offer guidance for portal-created personal programmes, etc. In addition to simple, practical suggestions, analysis and progress indications, they are also tasked with exchanging data with the cloud. There are several publicly available editions of mobile application, such as free and payable version. In addition to this, some editions are available only to project teams and partners and special customers. At the moment, all components support 6 languages and the imperial and metric system. The mobile development teams were combined in week 1. Weeks 1 to 3 were dedicated for preparations, reviewing the initial product backlog, early prototyping, deciding on architecture design, preparing user interface wireframes and designs. Functionality-driven development, i.e. the first iteration, started in week 4. The development practices and process mechanics were organized according to Scrum development method <ns0:ref type='bibr' target='#b22'>(Sutherland et al., 2014</ns0:ref>) by practicing planning, daily and demo meetings, retrospectives, honest sprint commitment via planning poker and others. Both development teams (iOS and Android) consisted of three experienced developers. Both teams share the same scrum master, product owner, graphics designer and quality assurance team.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Managing several product editions</ns0:head><ns0:p>In order to manage several editions efficiently, during requirements gathering, designing and testing, functionalities were collected in a multi-dimensional table. Functionalities were not only listed, but also described in terms of which edition functionality was available and if and what specialities were required for a particular functionality in a particular edition. This is how development team ended with functionalities written in several categories: common (all editions), optional (only in selected editions) and alternative (edition-specific implementation of the same functionality). Such approach enabled development teams to reuse common requirements, design and testing, which is also one of the foundations in order to establish SPL approach (SEI, 2020). In week 33, when multi-edition development was a clear statement from the customer, Android development team decided to invest extra effort to prepare the production-ready appropriate SPL architecture, libraries and projects. iOS developers, on the other hand, decided to create several editions with a straightforward manual solution -preserving a single development project with compiler directives and run-time switches, finally manually building the desired edition while providing an appropriate set of configurations. iOS development teams' approach is the best described as 'ifdef hell', reported by several authors, e.g. <ns0:ref type='bibr' target='#b6'>(Feigenspan, 2013)</ns0:ref>. A Set of core Android mobile development assets and available product lines is presented in Figure <ns0:ref type='figure'>2</ns0:ref>. Figure <ns0:ref type='figure'>2</ns0:ref> shows available assets (components implemented as Android libraries), from which 7+1(Core Module) are fully functional Android applications, combined from other assets. Figure <ns0:ref type='figure'>2</ns0:ref> does not capture variabilities in functionalities (which can be seen in Table <ns0:ref type='table'>1</ns0:ref>), rather it shows available components. A set of functionalities is present in particular application edition by appropriate library in edition. Functionality absence in achieved by not including the library. The alternative implementation is achieved by including library and overriding (a part) of its implementation by employing appropriate design pattern. A code-generation approach to introduce common, optional or alternative functionalities was not employed in the presented SPL architecture. No additional source code was automatically generated in order to support SPL approach. The '24alife Core Module' is the Android library (at the same time a fully running Android application). It realizes functionalities that are common to all editions (product lines). The 'Core Module' also contains common architecture of mobile applications and the variability points of additional functionalities (see figure <ns0:ref type='figure'>3</ns0:ref>). Applications within the product line are built into the final products with standard build tools, available in Android Studio. Core product assets (Android libraries) contain, in addition to the source code, also XML-written user interfaces, graphics, multilingual messages, dependant libraries, static content, etc. The library structure is shown in Figure <ns0:ref type='figure'>3</ns0:ref>. Android development team implemented variabilities using industry-proven best practices and approaches. 
These include the use of object-oriented design, proven design patterns, extensions, and component parameterization. Design patterns are used heavily, especially: factory, abstract factory, factory method, bridge, bean, adapter and others (Gamma, Helm, Johnson, Vlissides, 1998). As shown in Figure <ns0:ref type='figure'>3</ns0:ref>, Android developers created a common SPL architecture as a set of concrete and abstract classes. Their primary role is to handle the orchestration of newly introduced functionalities from other libraries via prepared extension points. Component-based development is heavily used in the presented SPL realisation. All components are managed (created, initiated, cleaned etc.) via the Component Manager class. The Component contains the implementation of predefined interfaces and specific interfaces with their implementation as well. Operations, such as preparing the database structure, upgrading the database, exchanging data with the cloud (synchronizing) and others are all part of a component. This is how certain editions (product lines) without a certain component, will not even have database structure for unsupported functionalities. The component manager is also responsible for component-to-component communication (The Component Manager would provide a reference to a component in the same product line). The Configuration class constitutes another concept in the presented architecture. Configuration, provided by the Core Module, includes general purpose configurations, common to all product lines. In a particular edition (the product line), developers would typically extend the Configuration class in order to do necessary adoptions or overriding (e.g. introduce new operations to be run when the mobile application starts, add new synchronization points, turn off some components, set data for accessing a particular cloud -whether it be production or test deployment). This is how the inclusion of a specialized functionality in the individual line is, technically, achieved using several approaches:  preparation of the extension point in the core module (in terms of components or inside components using factory method or abstract factory design patterns),  using inheritance and adding new methods and/or method invocations in the product line,  using an abstract factory pattern, which combines the functionality of the new line and its own user interface. The exclusion of unwanted features is achieved mainly through inheritance and the exclusion of unwanted features (such as not downloading programs for free products), as well as with the parameterization of the basic building blocks. However, in most cases, exclusion was not necessary, since product line-specific functionalities were introduced in the product lines themselves (e.g. 24alife PRO library). Changing behavior (e.g. the demo edition expires one month after construction), are achieved also with inheritance and/or by employing appropriate design patterns, such as: a bridge, factory method, or builder. Based on presented technical SPL architecture, 7 application editions were managed. Table <ns0:ref type='table'>1</ns0:ref> summarizes functionality-based differences between them. E.g. final 'Alpha' edition is based on 'Core Module', having 45 common functionalities, 9 additional functionalities are implemented only in 'Alpha'. Which results in total of 54 functionalities. In addition to this, 5 functionalities in 'Alpha' are adopted to different behaviour. 
This results in 14 edition-specific functionality implementations for 'Alpha' (optional + alternative), which is a 26% difference from its base edition, the 'Core Module' in the case of 'Alpha'. Differences in terms of functionalities for the other editions are shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div>
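<ns0:p>The following minimal Java sketch is intended only to make the variability mechanics described above more concrete. It is not the actual 24alife source code (which cannot be published); the class and method names (Component, ComponentManager, CoreConfiguration, DemoConfiguration) are illustrative assumptions.</ns0:p>
```java
// Illustrative sketch only - hypothetical names, not the actual 24alife code.
import java.util.ArrayList;
import java.util.List;

/** Contract that every optional/alternative functionality library implements. */
interface Component {
    void prepareDatabase();   // e.g. create tables only if the component is present
    void synchronize();       // e.g. exchange data with the cloud
}

/** Core-module configuration, common to all product lines (editions). */
class CoreConfiguration {
    List<Component> createComponents() {          // extension point (factory method)
        return new ArrayList<>();                 // the core registers its common components here
    }
    boolean isExpired() { return false; }         // default: editions never expire
}

/** Edition-specific override, e.g. for a Demo product line. */
class DemoConfiguration extends CoreConfiguration {
    private final long builtAt = System.currentTimeMillis();

    @Override
    List<Component> createComponents() {
        List<Component> components = super.createComponents();
        // optional functionality is included simply by registering its library's component;
        // an absent functionality is never registered, so no database structure is created for it
        return components;
    }

    @Override
    boolean isExpired() {                         // changed behaviour: demo expires after ~1 month
        return System.currentTimeMillis() - builtAt > 30L * 24 * 60 * 60 * 1000;
    }
}

/** Creates, initializes and mediates the components of the running edition. */
class ComponentManager {
    private final List<Component> components;

    ComponentManager(CoreConfiguration configuration) {
        components = configuration.createComponents();
        components.forEach(Component::prepareDatabase);
    }

    void synchronizeAll() {
        components.forEach(Component::synchronize);
    }
}
```
<ns0:p>Read this way, excluding a functionality simply means that an edition's configuration never registers the corresponding component, which matches the statement above that editions without a component do not even get its database structure.</ns0:p>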
<ns0:div><ns0:head n='5'>Results</ns0:head><ns0:p>In order to answer the research questions, we outline the most important empirical research data. This includes an in-depth analysis of version control logs, source code metrics providing comparative quantitative and qualitative data, queries of bug-management and production logs, and an analysis of the developer questionnaires. Several tools were used to capture, prepare and aggregate the raw data, including the Subversion CLI, the CLOC tool, Javancss, XClarify, ProjectCodeMeter and shell scripts for batch processing and data aggregation.</ns0:p></ns0:div>
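<ns0:p>The shell scripts themselves are not reproduced in this paper. Purely as an illustration of the batch idea (checking out each observed weekly revision and measuring it with the same tool), a sketch along the following lines could be used; it assumes a Subversion working copy and the cloc tool on the PATH, and the revision numbers are placeholders rather than the project's actual revisions.</ns0:p>
```java
// Illustrative only: weekly LOC snapshot loop; revision numbers are placeholders.
import java.io.IOException;

public class WeeklyLocSnapshot {
    public static void main(String[] args) throws IOException, InterruptedException {
        int[] weeklyRevisions = {120, 245, 371};                  // hypothetical: one revision per observed week
        for (int revision : weeklyRevisions) {
            run("svn", "update", "-r", String.valueOf(revision)); // roll the working copy back to that week
            run("cloc", "--quiet", "--csv", ".");                 // count LOC per language for that snapshot
        }
    }

    private static void run(String... command) throws IOException, InterruptedException {
        new ProcessBuilder(command).inheritIO().start().waitFor();
    }
}
```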
<ns0:div><ns0:head>5.1 Developer profiles</ns0:head><ns0:p>Based on the pre-development questionnaire, the mobile developers' profile was as follows:</ns0:p><ns0:p>• Android developers' average perceived development skills: 2.3/5, • Android developers' average perceived Android Studio skills: 4/5, • iOS developers' average perceived development skills: 2/5, • iOS developers' average perceived XCode skills: 3/5, • Android developers' average experience in developing mobile applications: 4 years, • iOS developers' average experience in developing mobile applications: 3.3 years.</ns0:p></ns0:div>
<ns0:div><ns0:head>5.2 Edition and functionality growth</ns0:head><ns0:p>During the research presented in this paper, we observed 55 weeks of development. By the end of this period, the final released version of the mobile application was 3.0. Versions 1.0, 1.1 and 2.0 were released in a two-edition manner (free, pro), while version 2.1 and later were required in several editions (including demo, alpha, test, blackberry-optimised free and blackberry-optimised pro). Please see Table <ns0:ref type='table'>2</ns0:ref> for the growth in terms of new functionalities. Please also see Table 1 for a complete functionality count per edition. Besides functionality growth, edition growth is also important data for our research. Figure <ns0:ref type='figure'>4</ns0:ref> shows the number of released products over project time (editions and versions combined) for iOS and Android. Since the Android developers used an SPL approach, the chart provides additional insight into core SPL asset numbers over time. The chart clearly shows only version-based growth until week 41. After week 33, the Android application becomes a core asset, while the iOS application stays monolithic. Please note that Figure <ns0:ref type='figure'>4</ns0:ref> does not capture internally available editions. The chart includes 7 different Android editions across several versions and 2 iOS editions across several versions.</ns0:p></ns0:div>
<ns0:div><ns0:head>5.3 Source code size</ns0:head><ns0:p>The product's quantitative metrics are as follows. One of the commonly used metrics is LOC (Lines Of Code). Figure <ns0:ref type='figure'>5</ns0:ref> displays how LOC (without blanks and comments) changed over time for the Android and iOS products. In both, an important segment of the code is written in XML (examples would be user interfaces, navigation rules, animations, etc.), which is why we also show this distribution in Figure <ns0:ref type='figure'>5</ns0:ref>. In the case of Android, the chart captures LOC for the entire Android projects (all editions, all core SPL assets). The chart includes the initial iOS application as well as the Free and Pro iOS editions. The internally available experimental iOS HRV/Corporate edition is not included, since it is a separate branch on its own. Please note that week 4 was the first official iteration, while weeks 1-3 were preparations and prototyping. This is why the Subversion repository codebase starts with week 4. While LOC can give concrete insight into software size, we also include the number of classes for both development projects in Figure <ns0:ref type='figure'>6</ns0:ref> in order to give more accurate insight into code distribution. The measured codebase is the same as in the chart in Figure <ns0:ref type='figure'>5</ns0:ref> -all Android editions and versions with core SPL assets, and the initial, Free and Pro iOS editions. We measured LOC values for both iOS and Android with the same tool (cloc), so that the same rules would apply, thus making the results comparable. We also used the ProjectCodeMeter tool to measure the final state of the Subversion repository for both projects, as well as the final single-edition project for both projects. This also makes the results directly comparable.</ns0:p></ns0:div>
<ns0:div><ns0:head>5.4 Source code quality</ns0:head><ns0:p>In addition to size-oriented metrics, we also applied quality-oriented source code metrics. Since we are interested in internal quality change, measuring the mid- and final versions of the source code is sufficient. The results produced by the ProjectCodeMeter tool are summarized in Table <ns0:ref type='table'>3</ns0:ref>. The Code to Comment Ratio (CCR) captures the number of logical LOC per comment line. Code Structure Modularity (CSM) aggregates the modularity of the source code (values below 100 indicate low modularity, values above 100 indicate fragmented code). Logic Density (LD) assesses the number of lines needed to capture a certain amount of logic.</ns0:p></ns0:div>
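<ns0:p>For readability, the way we read the CCR values in Table 3 (our restatement of the description above, not ProjectCodeMeter's published formula) is:</ns0:p>
$$\mathrm{CCR} = \frac{\text{logical lines of code}}{\text{comment lines}},$$
<ns0:p>so a lower CCR indicates relatively more comments per unit of code, while CSM and LD are tool-specific indices interpreted as described above.</ns0:p>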
<ns0:div><ns0:head>5.5 Product quality</ns0:head><ns0:p>During the research, we also measured the product's external quality. The quantity of reported failures was used as a comparable measure. The quality assurance team used the following classification of failures (based on IEEE standard 1044-2009):</ns0:p><ns0:p>• errors (behaviour that differs from what is specified in the software requirements specification, or unexpected crashes), • inconsistencies (not really errors, but disturbing quality failures -e.g. inappropriate input checks, or displaying values in inaccurate measures, such as the length of a run in metres instead of kilometres) and • imperfections (mostly visual quality failures, e.g. using wrong colours, or a screen appearing incorrect when rotated). We preserved this classification in Figures <ns0:ref type='figure'>7 and 8</ns0:ref>. Please note that all failures were addressed (fixed or closed as false failures) during the development. This is why the charts do not capture the frequency of closing failures. The testing efforts were the same for both platforms, so the results are comparable.</ns0:p></ns0:div>
<ns0:div><ns0:head>5.6 Post-development survey</ns0:head><ns0:p>After the development, we finalized the gathering of empirical data by executing a post-development survey among the developers. The answers were as follows:</ns0:p><ns0:p>• The development was time-framed to 8 hours per day. We asked developers for their assessment of how many hours they spent daily on development (allowing for the fact that they might have done extra work at home). iOS developers reported that they spent 9 hours a day on the project, while Android developers reported working on average 8.7 hours daily.</ns0:p><ns0:p>• We asked developers how they liked the implemented mechanism for managing several editions. On a scale of 1-5 (1 -do not like it at all; 3 -neutral opinion; 5 -extremely like it) they responded: iOS developers: 2.7, Android developers: 4.7.</ns0:p><ns0:p>• We asked developers about the fear they had in cases where they would have to change some random code in the project -did they worry that they would cause some failures with it? On a scale of 1-5 (1 -no fear at all; 5 -very afraid) they responded: iOS developers: 3.3, Android developers: 1.7.</ns0:p><ns0:p>• We asked developers how they would welcome the need to establish a new edition of the mobile application. On a scale of 1-5 (1 -no problem; 5 -possible only with high investment) they responded: iOS developers: 4, Android developers: 2. Finally, developers had the opportunity to express their positive and negative experiences with managing several editions. The answers include: • An Android developer highlighted an event as a positive experience. A requirement was stated for the fully functional Pro edition to also be available as a 'Demo' edition. Based on Pro, the Demo edition included a special logo and functionality for expiration within 1 month, with the possibility of buying the Pro edition on the Google Play market. The edition was prepared and turned over to production, as claimed, in just 3 hours. • As a negative aspect, an iOS developer highlighted the fast development and the many requirements changing already implemented functionalities. Changing and maintaining the code was problematic, he explained.</ns0:p><ns0:p>• An iOS developer suggested that, instead of having many editions of the same application, it would be better (in terms of development) to have several smaller applications with limited functionalities.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>Discussion</ns0:head><ns0:p>In this chapter we provide answers to the research questions presented in the 'Research method' chapter: RQ1: Is the source code size comparable between Android and iOS applications, while having the same set of requirements? RQ2: What are the impacts of introducing a software product lines (SPL) approach to mobile development? RQ2.1: What are the impacts of the SPL approach to application growth? RQ2.2: What are the impacts of the SPL approach to application quality? RQ2.3: What are the impacts of the SPL approach to developers' confidence in frequent releases? Based on the empirical data from the previous chapter, we also provide further discussion of the implications that arise from the data. Based on the gathered data, let us answer the fundamental research questions.</ns0:p></ns0:div>
<ns0:div><ns0:head>6.1</ns0:head><ns0:p>RQ1: Is the source code size comparable between Android and iOS applications, while having the same set of requirements?</ns0:p><ns0:p>The final codebase for the Android products included 47,259 Java LOC and 32,828 XML LOC in 390 classes (see Figures <ns0:ref type='figure'>5 and 6</ns0:ref>). The final codebase for the iOS products included 50,739 Objective C LOC and 22,744 XML LOC in 290 classes (see Figures <ns0:ref type='figure'>5, 6</ns0:ref>). Based on the CLOC tool, the final iOS codebase was larger by 7% (Java vs. Objective C); the combined (XML + programming language) LOC is smaller in the case of iOS by 8%. With regard to the LLOC measure (Logical LOC), the final Android LLOC was larger by 7% (see Figure <ns0:ref type='figure'>5</ns0:ref>), based on the ProjectCodeMeter tool. The class count was larger in the final Android codebase by 26% (see Figure <ns0:ref type='figure'>6</ns0:ref>). The complete analysed iOS codebase includes only the Pro and Free editions. On the other hand, the analysed Android codebase also includes additional editions (Alpha, Test, Demo, BB Pro, BB Free). This is why the Android codebase actually includes 10 more functionalities (see Table <ns0:ref type='table'>1</ns0:ref>) than the iOS codebase (53 functionalities in Free and Pro), which is 16% more. Taking this into account, the effective (normalized to functionalities) LOC difference between iOS and Android is 9% and the LLOC difference is 8% (iOS is higher). The effective class count (normalized to functionalities) difference is 14% (Android is higher). The class difference is easily justified: the Android codebase includes the SPL architecture, which is rich in terms of classes. There are also 13 alternative functionalities, where the implementation heavily depends on design patterns and inheritance (which results in introducing new classes -see Chapter 4). To compare product sizes regardless of the SPL approach, we can compare the codebases at the point of releasing the last single-edition product. This was done in week 33 for Android and week 37 for iOS (see Table <ns0:ref type='table'>3</ns0:ref>). Android LOC at that point was 49,664, while iOS LOC was 50,418; the difference is as low as 1.5%. The class count for Android was 277, while the iOS codebase had 238 classes. Please note that this difference is 14%, which is the same as the final difference normalized to functionalities. Based on the presented calculations, we can confidently claim that having the same set of functionalities, implemented within our project boundaries and rules, results in a comparable codebase size for both Android and iOS. We therefore answer research question 1 in the affirmative.</ns0:p></ns0:div>
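<ns0:p>As a reading aid, the functionality-normalized LOC figure quoted above can be reproduced from the reported numbers; the calculation below is our reconstruction (total LOC divided by the number of implemented functionalities), not a formula taken verbatim from the original analysis.</ns0:p>
$$\frac{50{,}739 + 22{,}744}{53} \approx 1{,}386 \;\text{LOC per functionality (iOS)}, \qquad \frac{47{,}259 + 32{,}828}{63} \approx 1{,}271 \;\text{LOC per functionality (Android)},$$
$$\frac{1{,}386}{1{,}271} \approx 1.09, \;\text{i.e. iOS is higher by roughly } 9\%.$$
<ns0:p>The same per-functionality normalization applied to the class counts (290/53 versus 390/63) gives roughly the reported 14% difference in the opposite direction (Android higher).</ns0:p>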
<ns0:div><ns0:head>6.2</ns0:head><ns0:p>RQ2.1: What are the impacts of the SPL approach to application growth?</ns0:p><ns0:p>Observing the data presented in Figure <ns0:ref type='figure'>5</ns0:ref>, we can see that LOC rises at an almost constant rate of circa 1,400 LOC (pure code, blanks and comments omitted) per week for both the Android and iOS projects. If we analyse the LOC-per-week coefficient at release weeks, or before and after introducing more editions, the data does not show any significant change in LOC velocity. This clearly indicates that the development teams continued to invest unchanged development effort. Based on the data presented in Figure <ns0:ref type='figure'>6</ns0:ref>, we can see an almost constant classes-per-week coefficient of 6. However, in the case of Android, the class introduction velocity rises to 9 per week after introducing the SPL approach. On the other hand, in the case of iOS, the several-edition approach resulted in dropping some classes (21). The implications of the quantity metrics are as follows: the developers' output in terms of LOC is obviously not affected by multi-edition development. Since the development was functionality-based, this means that the only driver of LOC velocity was functionalities. However, the structure of the created program code changes if SPL is applied (148 Java LOC per class before SPL, 98 Java LOC per class with the SPL approach in our case). On the other hand, we can observe even more Objective C LOC per class after ad-hoc multi-edition development in the iOS project: from 153 Objective C LOC per class before multi-edition development to as high as 189 LOC per class (in week 50), with a final coefficient of 170 LOC per class. The improvement in LOC per class in the iOS project in the last 5 weeks is a result of refactoring. The positive effect on code structure after employing the SPL approach is also captured in the CSM measurement (see Table <ns0:ref type='table'>3</ns0:ref>). Based on the presented calculations we can conclude that multi-edition development does not affect development output velocity in terms of LOC, but it does affect the product's static structure: SPL-based development in a positive way, and ad-hoc multi-edition development in a rather negative manner. However, the SPL approach largely affects velocity in terms of functionality and released editions. Based on Tables <ns0:ref type='table'>1 and 2</ns0:ref>, we can see that the delivered functionalities and editions per given time frame were comparable in the single-edition period for both the Android and iOS development teams. 33 functionalities in single-edition software were delivered in 39 weeks by the iOS development team and in 37 weeks by the Android development team, which is 0.9 functionalities per week. With the same amount of effort input (see the post-development survey), this translates into 1.7 new functionalities per week in 2 editions for the iOS development team, and 2.2 new functionalities per week plus 1 adapted functionality per week across 7 different editions for the Android development team. Combining public releases with available editions (see Figure <ns0:ref type='figure'>6</ns0:ref>), we can see that after the switch to multi-edition development, the Android development team delivered 13 new editions and versions of software (0.93 per week), while iOS delivered 6 (1 per 2 weeks). Based on the presented data, we can answer research question 2.1 as follows. In our case, the SPL approach results in 126% higher functionality-based velocity (or as much as 240% higher, compared to single-edition development). At the same time, the SPL approach enabled developers to adapt an additional 100% of new functionalities across several editions with the same effort level.</ns0:p></ns0:div>
<ns0:div><ns0:head>6.3</ns0:head><ns0:p>RQ2.2: What are the impacts of the SPL approach to application quality?</ns0:p><ns0:p>The internal code quality comparison of the products, in terms of single-edition versus multi-edition development and Android versus iOS development, is shown in Table <ns0:ref type='table'>3</ns0:ref>. In the case of iOS, moving from single-edition to multi-edition development did not change Objective C code quality, which is an expected observation (managing editions was done mainly with compiler directives -see Chapter 4). Modularity improves by 6%, logic density is affected by 5% in a negative manner, and the commenting ratio stays the same. In the case of Android, Java code quality also did not change between single-edition and multi-edition development. Modularity changed by 5% (towards fragmented code), which is expected as a result of the SPL implementation (see Chapter 4). The LLOC needed to capture certain logic increased by 32% (from 45 to 66), which also makes sense: SPL-related code (architecture, optional and alternative functionality management) does not capture a lot of business logic. However, even after this change, logic density is lower by 19% compared to the Objective C code. The data shows that internal code quality did not change significantly. This is a reasonable outcome: the same development skills and effort were used during the whole project. The SPL-enabled multi-edition approach affects code in terms of additional code fragmentation, while the ad-hoc approach does not have this effect, which is also an expected result. However, Figures <ns0:ref type='figure'>7 and 8</ns0:ref> demonstrate the SPL impact on external quality. As implied by the charts shown in Figures <ns0:ref type='figure'>7 and 8</ns0:ref>, there are three distinct periods during the project: single-edition development (weeks 1-33), the first intensive multi-edition development (weeks 33-37), and stable multi-edition development.</ns0:p><ns0:p>In the case of Android development, we can observe an almost constant rise in reported failures (5.5 per week; 6.4 if we remove the first 4 weeks of prototyping). During the introduction of the SPL approach, the failure count rose from 160 to 240, which is 11.4 per week. In the third period, the final failure count rose from 240 to 405; the constant failures-per-week rate in this period is 9.1. The final failure percentages, according to the classification, are as follows: 64% errors, 16% inconsistencies, 20% imperfections. An important observation from Figure <ns0:ref type='figure'>7</ns0:ref> (Android failures) is this: the constant weekly failure rate in the single-edition development was followed by a rise in failures during the introduction of the SPL approach. During the stable SPL-enabled multi-edition development, the weekly reported failure rate is linear and stable again. The rate rose from 6.4 to 9.1 (30%), which is reasonable when considering not only the additional functionalities, but also 6 additional editions (from an external point of view, completely separate products) with 10 edition-specific and 13 edition-adapted functionalities. The failure increase is significantly smaller than the feature and edition increases. The failure rate per edition on a weekly basis in the last period is as low as 1.3. A similar situation is seen in the case of reported failures in iOS development (6.5 per week in the first period; 7.8 if we remove the first 4 weeks). During the introduction of the multi-edition approach, the failure count rose from 195 to 280, which is 12 per week. In the third period, the final total failure count rose from 280 to 631; the constant failures-per-week rate is 25. We can see that the weekly failure rate in this period is not linear; 25 is the average rate during stable multi-edition development.
If we split this period in half, the first half has a rate of 18, while the second half has a rate of 33. The final failure percentages are 64% errors, 18% inconsistencies, 18% imperfections. An important observation from Figure <ns0:ref type='figure'>8</ns0:ref> (iOS failures) is that the constant weekly failure rate in single-edition development is followed by an increase in failures during the introduction of the multi-edition approach. In the period of stable multi-edition development, the weekly failure rate is not linear. The average weekly rate in this period rose by a factor of 3.2 (from 7.8 to 25). With ad-hoc multi-edition development, this is more failures per edition (25 per week for two editions -12.5 failures per week per edition) than in the single-edition development (7.8 per edition per week). The ad-hoc multi-edition approach failed in terms of raising, or at least maintaining, the external quality level. Based on the presented data, let us answer research question 2.2. In our case, internal quality is not affected by introducing the SPL approach. Since the SPL approach promotes faster development (see Section 6.2) and the failure rate stays roughly the same, the number of failures per functionality drops. This is how the SPL approach enhances external quality. However, using non-SPL multi-edition development results, in our case, in reduced external quality (reported failures rose by a factor of 3.2).</ns0:p></ns0:div>
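<ns0:p>For clarity, the per-edition comparison in the last period follows directly from the weekly failure rates and edition counts reported above (our restatement of those numbers):</ns0:p>
$$\text{Android (SPL): } \frac{9.1 \;\text{failures/week}}{7 \;\text{editions}} \approx 1.3, \qquad \text{iOS (ad-hoc): } \frac{25 \;\text{failures/week}}{2 \;\text{editions}} = 12.5.$$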
<ns0:div><ns0:head>6.4</ns0:head><ns0:p>RQ2.3: What are the impacts of the SPL approach to developers' confidence in frequent releases?</ns0:p><ns0:p>Both the post-development survey and the post-development data analysis revealed that developers are more confident using the SPL approach than using a multi-edition approach designed and crafted on their own. In the case of SPL-supported multi-edition development, developers created a complex software architecture and maintained it while observing strict rules of reuse. They did not have an opportunity to create any shortcut or workaround solution in their approach, as the other development team had. Multi-edition development without an SPL approach relied on developer-chosen and freely implemented approaches, such as compiler directives, runtime conditions, etc. The post-development survey revealed that developers in the SPL-supported development team liked their multi-edition development approach more: 4.7 / 5, compared with only 2.7 / 5 in the case of the iOS developers. The SPL-supported development team was also more confident in maintaining and developing new code in production software (the fear of introducing failures while changing code is as low as 1.7 / 5 in the case of Android developers and as high as 3.3 / 5 in the case of iOS developers). As a consequence, iOS developers would not be happy with demands for a new edition of the application (4.0 / 5), while Android developers would welcome them (2.0 / 5). Although surprising, we believe that the survey results have a foundation in source code organisation: in the case of the SPL approach, the source code of different editions is physically separated. Common functionalities are reused by merging separate source code projects into the final products. Therefore, developers can be sure that certain new code or code changes will not be included in a certain product edition. On the other hand, ad-hoc multi-edition development has a common code base, where all source code is processed when creating any edition of the final product. Software components are therefore more coupled and less cohesive. As a result, as also shown by the post-development survey, developers are less confident in creating new code or changing existing code. Higher developer confidence in creating new source code and maintaining existing code was also shown in the post-development data analysis. The failure rate is considerably higher when the SPL approach is not used, and the failure introduction velocity is also not linear. This shows how, with the ad-hoc multi-edition approach, fixing failures in some cases results in introducing new ones. This finally answers our research question 2.3: the SPL approach has a positive impact on developers' confidence in delivering releases (and, as a consequence, new functionalities) frequently.</ns0:p></ns0:div>
<ns0:div><ns0:head>6.5 Key findings and limitations</ns0:head><ns0:p>Based on the presented results, visualizations and in-depth analysis, let us summarize the main outcomes while answering the research questions. Using our research setup, methods and results, we showed that: • having the same set of functionalities results in a comparable codebase size for both Android and iOS; • the SPL approach results in 126% higher functionality-based velocity (240% higher, compared to single-edition development); • the SPL approach enabled developers to adapt an additional 100% of new functionalities across several editions with the same effort level; • internal quality is not affected by introducing the SPL approach; • the SPL approach enhances external quality;</ns0:p><ns0:p>• managing several editions using a non-SPL approach reduces external quality;</ns0:p><ns0:p>• the SPL approach has a positive impact on developers' confidence in delivering new functionalities and releases frequently. The presented research method, results and discussion also include limitations which represent possible threats to validity. In an ideal world, one would design the presented experiment by developing the same application for the same platform twice -with and without an SPL approach to manage several editions. This would make it possible to eliminate every doubt caused by developing for two different operating systems with two different programming languages. However, we designed the research method, gathered the results and interpreted them with the aim of minimizing any doubt in our conclusions. The main measure to address this threat lies in answering research question RQ1. In addition, we believe that, since we scientifically investigated a large, real-world industry project, the validity of the results is higher than in any laboratory-set experimental environment. A question also remains whether we would have ended up with the same results and conclusions if, in week 33, the iOS development team had chosen to implement the SPL and the Android developers had continued with a manual approach to managing multiple editions. We addressed this threat by monitoring the internal quality of the source code and the invested effort in terms of delivered LOC. Since internal quality and LOC velocity remain the same after week 33 for both development teams, this indicates that the only difference between the products lies in the SPL-enabled architecture of the Android application.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>Conclusions</ns0:head><ns0:p>This paper highlighted our research on two development teams that created production-ready mobile application families for two different platforms (Android and iOS). The families share the same set of functionalities and were developed with the same effort input. Both families share 85% of their functionalities as common; the other functionalities are optional or alternative. One product family was developed with the SPL approach to manage reuse, while the other product family was developed with more traditional and ad-hoc reuse techniques (a single codebase, no special architecture to manage variabilities, employing compiler directives, etc.). As shown in this paper, this was the only difference in approach between the development of the two functionally equal software families.</ns0:p><ns0:p>The results of this research show that the two product families not only share the same set of functionalities and effort rate, but, surprisingly, product growth and end product size were also the same. Comparing multi-edition with single-edition development, we showed that development velocity, in terms of code quantity, did not change. However, code structure improved after introducing the SPL approach. The SPL approach also had positive implications for feature-based output: after the introduction of the SPL approach, functionality output per week increased by 240%, which is 124% higher than in the case of iOS development with a non-SPL approach to multi-edition development. At the same time, employing the SPL approach resulted in 6 new editions, while the non-SPL approach resulted in only one. An important aspect directly impacted by the SPL approach is also software quality. With the SPL approach, the failure introduction velocity did not increase, but remained linear. After dividing failures across all editions, failure rates fell. On the other hand, we showed how the failure introduction rate in the case of the non-SPL approach not only increased but was also no longer linear. In our case, the failure count, divided by editions, is even higher than in the single-edition development. The statement derived from this observation is that doing multi-edition development without an SPL approach will have a very negative impact on software quality. On the other hand, the SPL approach has a very positive impact on software quality. Our research revealed that the SPL approach enables development teams to produce more functionalities in several product editions with the same effort as they would use while working on a single-edition product. Not only productivity, but, even more importantly, software quality rises. 
Developers' confidence in maintaining and developing new code is also higher when using the SPL-based product structure.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>Gathering research empirical data during the development process</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>a) Source code size and class-based metrics (Lines of Code, Logical Lines of Code, Number of Classes) in combination with product size metrics (Number of Editions, Number of Functionalities) support RQ1; b) Source code size-based metrics support RQ2.1 as an implicit control of the effort invested in the development; c) Product size metrics support RQ2.1; d) Source code internal quality metrics (Code to Comment Ratio, Logic Density, Code Structure Modularity) support RQ2.2; e) Product quality metrics (Number of reported errors, imperfections and inconsistencies) support RQ2.2;</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,262.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,412.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,347.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,315.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,317.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,345.00' type='bitmap' /></ns0:figure>
</ns0:body>
" | "
A product quality impacts of a mobile software product line - an empirical study
Rebuttal Letter
Dear Editor, dear reviewers,
Thank you for investing your valuable time and effort in reviewing our manuscript and providing us with your concerns and improvement suggestions. This gives us an opportunity to clarify our standpoints and improve the manuscript.
We reviewed your comments and carefully addressed all of the open issues. We appreciate your contribution, which enabled us to raise the quality of the manuscript.
We have prepared an improved version of the manuscript; in addition, we provide a detailed log of the changes and our justifications, which can be found below as an appendix to this letter.
We believe the manuscript is now not only suitable for publication in PeerJ, but also, more importantly, of added value for PeerJ readers.
Maribor, 22. 12. 2020 Luka Pavlič, Tina Beranič, Marjan Heričko.
Change log and answers
Response: #01 (for editor, reviewer 1 and reviewer 2)
Your comment:
Please include more data about the process followed to develop the software product line and justify choosing the RQs as the metrics to measure the benefits of such SPL. Furthermore, please add more details in your experiment's description because now it is impossible to conduct a replication study due to the lack of information provided. The contribution of this work should also be clarified.
Our response:
Thank you for your comment. This comment, and the comments that follow from both reviewers, showed us that the context of our research should be clarified. Let us briefly explain our motivation and the nature of our research, which will also serve as a response to your comments.
The 24alife mobile development was, from the beginning, not planned as an experimental setting to verify the software product lines approach. A client decided to develop a mobile application for the Android and iOS operating systems, both using a native approach (Android using the Java programming language, iOS using the Objective C programming language). Hybrid development was not an option from the client's point of view, in order to lower the risk of introducing workarounds (Bluetooth connections, heavy dependence on the GPS signal, a lot of video content, etc.). Two agile development teams of the same size, one for Android and another for iOS, were given the same set of requirements during the development. Both teams were supported by the same graphical designer, and the product owner was also the same person for both teams. So, effectively, the goal was for them to end up with two identical native mobile applications with the same set of functionalities and the same look – the only planned difference was the expected behaviour, aligned with the iOS and Android design guidelines. The context is now clarified in the manuscript (see lines 280-288 of the revised manuscript).
As demonstrated in the manuscript (original manuscript line 210, Figure 1), both teams initially managed multiple editions of their applications (daily build, test version) in an ad-hoc, manual manner. The breaking point, as described in the manuscript (original manuscript, line 264, Figure 1), was at week 33, with a clear demand from the client to provide the application in a freely available (“Free”) and a paid (“Pro”) edition. Since the iOS development team was confident that their established techniques for managing several editions would continue to prove adequate, they did not change the design of their application. As described in the manuscript (original manuscript, lines 266-270), it includes conditional compilation, several build targets, manual edition building, etc. On the other hand, the Android development team decided to change the architecture of the application slightly in order to become edition-friendly by implementing an SPL approach. Based on your suggestion, we included more details on the development process and the technical solution to the SPL – see revised manuscript lines 338-363 and 308-317.
As explained in the manuscript, this was also the only difference between the Android and iOS development – two teams of the same size, a comparable background in the target platform, the same amount of effort, the same set of required functionalities, and a common graphical design. The context gave us a rare opportunity to scientifically verify whether and how an SPL approach impacts the development. It is not an everyday opportunity to have two full-time development teams working on the same application for two different platforms over a 55-week time span. Therefore, we, the authors, also addressed the scientific contribution, since we believe that reporting clear and sound results would greatly benefit the software engineering scientific community by giving insight into this real-world setting, which is in practice impossible to fully replicate in a laboratory environment. In addition to the existing data (functionalities, editions, SPL architecture and complete metrics logs are published online), we included more detailed insights (see revised manuscript lines 280-288, 308-317, 338-363) for anyone who would like to replicate the complete research. However, please understand that the client, beyond the functionality list, editions, metric data and similar, did not allow us to publish some assets in complete detail (e.g. the complete requirements, the complete SPL architecture, etc.), since they are part of a production system and in some cases could expose business secrets. A deeper look into the details will, as explained in the manuscript (section “associated data”), be available upon request.
The explained setting leads us to the justification of our research question selection. Our biggest concern was that, having the context (functionalities, effort, development skills, etc.) identical in both teams, we would be dealing with two variables - one being the approach to managing multiple editions, and the other the target platform (with a different development language). Since we did not find any research comparing codebases across different programming languages/platforms for the same set of requirements, we started with RQ1, which in essence asks: does the same set of requirements result in a comparable source code size for Objective C- and Java-based mobile applications for iOS and Android? The intention of RQ1 is therefore to check whether we can even compare the Android and iOS products at the source code level. This is why we rely on measuring source code (LOC and LLOC – code and descriptors separately) and classes (number of classes) – both complete numbers and numbers normalized to editions – to answer this research question. As advised, we include additional justification and explanation of the selected research questions in lines 243-250 of the revised manuscript.
Research question RQ2 and its three sub-questions are intended to compare outcomes in the second half of development (weeks 33-55), where the Android developers introduced their SPL architecture. We were interested in application growth, internal/external quality and developers' confidence. To answer those questions, appropriate metrics were chosen. Possible changes in size: LOC velocity, class-introduction velocity, functionality-based velocity, edition velocity. Possible changes in internal quality: modularity, logic density, comment-to-code ratio. Additional measurements were also performed on the Android source code (e.g. cyclomatic complexity), but not on the Objective C code, which is why we did not report on them; the reason lies in the lack of appropriate tools that would enable us to observe both Java and Objective C code with more metrics. We believe that the selected set of metrics justifies our conclusions. External quality metrics are based on reported failures (complete numbers; velocity, normalized to functionalities and editions), classified as errors, imperfections and inconsistencies (as proposed by IEEE 1044). Based on your suggestion, we added this explanation to the revised manuscript, lines 251-262.
Thank you for stressing that we need to clarify the contributions of our research. This is why we a) outlined the main contributions in bold in the Discussion section and b) introduced subsection 6.5 - Key findings and limitations, clearly stating our findings (see revised manuscript, lines 666-697). In addition, this subsection also clarifies some possible threats to validity, as raised by the reviewers.
Manuscript changes:
• Figure 2 renamed: Components (Android libraries) in Android software product line
• Revised Manuscript 224-227: LOC is constantly monitored
• Revised Manuscript 246-253: RQ justification
• Revised Manuscript 254-265: Metrics selection
• Revised Manuscript 267: Typo in title
• Revised Manuscript 282-290: Development process is described
• Revised Manuscript 298-300: Statement on common requirements, design and quality assurance
• Revised Manuscript 308-318: Figure 2 detailed description
• Revised Manuscript 334-367: Additional explanation on SPL development process and SPL technology
• Revised Manuscript 526: bold
• Revised Manuscript 534-537: LOC velocity stays the same --> effort stays the same
• Revised Manuscript 569: bold
• Revised Manuscript 625: bold
• Revised Manuscript 627: bold
• Revised Manuscript 663: bold
• Revised Manuscript 667-698: New subsection - 6.5 Key findings and limitations
• Revised Manuscript 797: new reference
• Revised Manuscript document-wide changes: et.al. in references changed to all authors
Response: #02 (reviewer 1)
Your comment:
The paper is technically sound and well-written. There is a typo in section four’s title “An project’s…” (line 239).
The background and literature review provided is wide and enough to understand the context in which this work is framed. Also, the paper is well organized.
Our response:
Thank you for your acknowledgement. We changed the title.
Manuscript changes:
• Revised Manuscript 267: Typo in title
Response: #03 (reviewer 1)
Your comment:
The experiment’s design is interesting, however, I think a section for the SPL development process (the development of core assets, variability points, etc.) with deeper details would be beneficial to understand both the approach and the experiment’s results.
Our response:
Thank you for your concern. We deliberately did not initially go into much technical detail on the Android SPL architecture. However, we demonstrated the core ideas in the manuscript. Based on your concern, we have now included additional text (see revised manuscript lines 338-363 and 308-317) to clarify how the SPL is technically achieved.
Manuscript changes:
• Revised Manuscript 308-318: Figure 2 detailed description
• Revised Manuscript 334-364: Additional explanation on SPL development process and SPL technology
Response: #04 (reviewer 1)
Your comment:
The results are well discussed and all the data supporting them provided, however, I have a few questions that need to be justified in the paper:
- Why the Android app as a SPL? Could the findings be different if the SPL was developed for the iOS app? Why?
- Is there an estimate on how much time, resources and effort the change of methodology from manual development to a SPL consumed?
Our response:
1. Thank you for raising this one. We would appreciate it if you could also take a look at our response #01, where we explain the context in which the Java-based Android app ended up having a proper SPL architecture, while the Objective C-based iOS app did not. Additional explanation can now also be found in the revised manuscript, lines 300-306. As we demonstrate throughout the manuscript, both codebases show comparable source code size and internal quality metrics. Interestingly, internal quality metrics and source code growth seem not to be affected by introducing the SPL architecture in the Android app (see sections 6.1 and 6.3). This leads us to the conclusion and our strong belief that it would not have made any difference if the SPL had been introduced in Objective C rather than Java. However, thank you for raising this doubt – this is why we have now extended the threats to validity and the discussion, clearly stating the above doubt and our interpretation. Please see the revised document, lines 691-697.
2. We did not directly measure the effort and source code size needed to implement the SPL in the Android team. All metrics for the Android app from week 33 to week 55 therefore include the SPL architecture and its construction effort as well. One can see in Figures 5 and 6 that LOC and class numbers rise consistently even after week 33 in the Android team. The delivered functionality also does not seem to be affected by the effort invested in implementing the SPL. Obviously, the effort to introduce the SPL architecture was negligible compared to the long-term development effort. However, we agree it would be interesting to investigate this real-world scenario more deeply in order to report the actual cost in terms of invested effort. Thank you. On the other hand, what is important is that the effort of introducing the SPL returned its investment well, which is obvious when we compare the delivered editions and functionalities of the Android team and the iOS team after week 33.
Manuscript changes:
• Revised Manuscript 667-698: New subsection - 6.5 Key findings and limitations
Response: #05 (reviewer 1)
Your comment:
The paper provides an interesting experiment regarding the introduction of a SPL within the mobile applications’ domain. A visual example of the workflow followed regarding the application engineering process would be appreciated. I would like to know if the authors plan to carry out usability experiments with both apps to test if the SPL introduction had any impact for the user.
Our response:
Thank you for your comment. Apart from the bird's-eye-view development visualization (see Figure 1), we did not initially include the whole development process, since it was a by-the-book Scrum setting with its standard mechanics. However, we have now clearly stated this and given an appropriate reference to standard Scrum-based software delivery. Please see the revised document, lines 280-288.
Although we would be more than happy to help the customer with usability testing, it has not happened so far. However, as explained in our response #01 (a single product owner, a single graphical design, incorporating the expected behaviour for the target platform), both teams delivered identical applications, so we do not expect to find any deviations in terms of usability. It is true, though, that after week 55 the Android app boasted more editions and functionalities.
Manuscript changes:
• Revised Manuscript 282-290: Development process is described
• Revised Manuscript 797: new reference
Response: #06 (reviewer 2)
Your comment:
1) Why the decision of making a comparison using two different operating systems and programming languages? The results may be biased by the important differences between these two systems. For me, it would have more sense to develop the SPL-based and the non-SPL versions of the system in the same operating system, e.g. Android. In this case, it would be much easier to compare the results and to identify the benefits of using an SPL. Even if the Android project were using both approaches in different periods of the project the results are difficult to be compared.
Our response:
Thank you for your comment. We agree absolutely and completely share your standpoint. However, please keep in mind that our manuscript is about an industry-level, 55-week, two-team, full-time development of production applications. Therefore, it was not feasible to develop the same application twice for the same platform. In addition, during the literature review, we did not find such a setting (excluding laboratory-based projects). Regardless, we believe that performing research on such a real-life industrial development project, with two teams working on the same set of functionalities – one with and one without an SPL approach – can compensate through rich participants and settings that are unencumbered by the experiment. For a more detailed explanation, we invite you to take a look at our response #01, where we explain our belief that reporting research in such a setting can be a valuable asset for the whole mobile engineering community. This is also why we invested a lot of effort during the research in determining whether the Android app and the iOS app can be compared in the first place (covered by RQ1). As the data shows, both apps are comparable from the point of view of size-based metrics and internal quality metrics.
We also completely share your thoughts on comparing the SPL and non-SPL approaches if one had done so in different time spans. This is also why we believe that our experiment design has a really strong component precisely in observing the parallel development of both teams. We also demonstrate comparable development experience in the target platform via the initial developers' questionnaire.
Based on your comment, we now clearly state and defend the threat to validity that you mentioned – please see the revised document, starting at line 683.
Manuscript changes:
• Revised Manuscript 667-698: New subsection - 6.5 Key findings and limitations
Response: #07 (reviewer 2)
Your comment:
2) I do not completely understand how in RQ1 the lines of code can be comparable. Even if the application is the same, the operating system and the programming language are different and therefore this has to be also taken into account in order to do the comparison. Once again this RQ would have much more sense if the comparison were between the same applications, being developed with and without a SPL approach.
Our response:
Thank you for your comment. Even before we started the experiment, it was clear to us that our experiment design “holds or fails” exactly on the point that you mention. If the size-based metrics were not comparable, we would not have been able to compare the quality-based metrics either. From this standpoint, we also raised our eyebrows while performing the initial data inquiry, which showed us almost perfectly aligned LOC on both platforms, for the same set of functionalities, by two different teams. Since it is consistently comparable throughout the 55-week span, it is definitely not a coincidence. The only difference is that the Android app shows relatively more LOC in descriptors (XML), while the iOS app shows relatively more LOC in the programming language (Objective C). Even when we count classes, the products are a match. Please see Figures 5 and 6.
However, please keep in mind that comparing the sizes of the products acts as a kind of control variable in our experiment. One could argue that the increased functionality and edition delivery in the SPL-based product might have been a result of investing more effort in this team, while the other team did not do their best in weeks 33-55. Having data on internal product growth demonstrates that the effort invested by both teams stayed the same throughout the whole project time span. So, the difference in functionality and edition delivery was clearly the result of having or not having a proper SPL architecture inside the product.
Thank you for raising this one – we have now incorporated additional explanation. Please check revised manuscript lines 221-226 and 251-262.
Manuscript changes:
• Revised Manuscript 224-229: LOC is constantly monitored
• Revised Manuscript 254-265: Metrics selection
Response: #08 (reviewer 2)
Your comment:
3) There is not enough information in the paper about the system under development. The description of the system is too informal. A more formal and detailed description of the system requirements, design, etc... would be needed.
Our response:
Thank you for your comment. We again refer to our response #01. Please understand that the client, beyond the functionality list, editions, metric data and similar, did not allow us to publish some assets (e.g. the complete requirements, the complete SPL architecture, etc.), since they are part of a production system and in some cases could expose business secrets. However, a deeper look into the details will, as explained in the manuscript (section “associated data”), be available upon request.
Since you raised the concern, we also share your view that additional technical explanation would be beneficial. This is why we now include additional details that clarify the introduced SPL mechanics. Please see revised manuscript lines 308-317 and 338-363.
Manuscript changes:
• Revised Manuscript 308-318: Figure 2 detailed description
• Revised Manuscript 338-364: Additional explanation on SPL development process and SPL technology
Response: #09(reviewer 2)
Your comment:
4) Figure 2 is not a representation of an SPL. Which notation are you using? Which is the mandatory/optional part? Which are the constraints? I mean an SPL implies certain phases that are not considered in the paper or at least not documented. For instance, there is a differentiation between the Problem Space and the Solution Space. In the Problem Space the variability is modeled using traditional a feature model. This feature model includes optional features, mandatory features, groups and constraints. From this feature model, the number of different valid products can be counted and configurations for each product are automatically generated. In the solution space the code of each feature is developed and connected somehow with the features in the feature model. In case you are using another approach, this should be well documented in the paper.
Our response:
Thank you for your comment. We can now see that the figure label might be misleading, which is why we renamed it. As you already stated in your previous comment, we described the technical details in a more informal way. The figure basically captures a bird's-eye view of the reusable assets that are combined into several editions, as demonstrated in Figure 3. Figure 2 shows that the Android development team established 7 editions (alpha, test, free, pro, demo, BB-tailored free, BB-tailored pro) while having 5 separate assets (libraries), of which one (maps) provides 3 alternative implementations.
You are correct that the figure does not capture the functionality/variability dimension (which is demonstrated in Table 1), but rather the implementation components. We chose not to use UML for Figure 2 in order to give a more visual representation of the components in the application. Based on your comment we extended the description part of the figure, which we renamed. At the same time, we also included details on the SPL implementation: the presence of a functionality is achieved by including a component in an edition; its absence is achieved by not including the component. An alternative implementation is achieved by including the component and overriding (a part of) its implementation by employing appropriate standard OO design patterns.
Since an SPL is a conceptual approach to reuse, which does not imply any particular implementation, we share your thought that it would make sense to document this aspect more in the manuscript. This is why we extended it (in the revised manuscript please see lines 308-317 and 338-363) with these descriptions.
Manuscript changes:
• Figure 2 renamed: Components (Android libraries) in Android software product line
• Revised Manuscript 308-318: Figure 2 detailed description
• Revised Manuscript 337-364: Additional explanation on SPL development process and SPL technology
Response: #10 (reviewer 2)
Your comment:
The findings would need to be better organized and better explained.
Organization. There are too much textual information when it would be more readable to have several tables organizing the information.
Explanation. I do not completely understand how the reasoning about the LoC reveals that there is a benefit of using an SPL. I have the same impression with the answer to the other RQs. I do not completely see how the analyzed results help to conclude that the use of a SPL has benefits.
Our response:
Thank you for your input. The PeerJ template forces author to be more text-oriented, which is obvious in Discussion section of our manuscript. Based on your suggestion, we now a) outline main contributions in bold in Discussion section and b) introduced a subsection 6.5 - Key findings and limitations with clearly stating our findings (see revised manuscript, lines 666-697).
Regarding the LOC metric, we agree - you are completely right. The LOC metric itself does not reveal the benefits of an SPL. This is also what we demonstrate in our manuscript: LOC velocity did not change after introducing the SPL. We are happy that we share the same standpoint on this point as well. Please see our response #07, where we explained the role of measuring the size of both products during the whole project timespan. Size metrics were also used so that we could check whether the delivered functionalities and editions were a result of increased/decreased effort of the development teams. Since the size of both products rises constantly throughout the project, the delivered functionality and edition count can only be a consequence of a proper SPL-enabled architecture. This is exactly what we demonstrate with our research - the ability to deliver more (and better) functionalities as a result of having an SPL-enabled architecture. We have now included this explicit description in the manuscript as well - see lines 221-226 in the revised manuscript.
Manuscript changes:
• Revised Manuscript 224-229: LOC is constantly monitored
• Revised Manuscript 526: bold
• Revised Manuscript 534-537: LOC velocity stays the same --> effort stays the same
• Revised Manuscript 569: bold
• Revised Manuscript 625: bold
• Revised Manuscript 627: bold
• Revised Manuscript 663: bold
• Revised Manuscript 669-698: New subsection - 6.5 Key findings and limitations
Response: #11 (reviewer 2)
Your comment:
- There is a lack of information about how the SPL has been defined. What does authors understand by applying an SPL to their project? I think that this should be clarified because Figures 2 and 3 are not a representation of a traditional SPL. Thus, authors need to make clearer the approach they have followed to define their software product line.
Our response:
Thank you for expressing your concern. We are more than happy to elaborate further on this matter. Having common requirements, design, and testing activities for the whole product family is now explained in lines 297-299 of the revised manuscript. The technical approach to handling common, optional and alternative functionalities is now explained in detail in lines 308-317 and 338-363. The realization of the SPL is completely based on standard OO design patterns. Although we believe the technical realization of the SPL is not a decisive factor in our research, we share your standpoint that it is of great added value for the readers. A complete UML diagram of the architecture (in addition to the part shown in Figure 3, which captures the essence of the architecture) is unfortunately not something we are allowed to share, but we will be more than happy to show, discuss and exchange experience with every curious reader upon request.
Manuscript changes:
• Revised Manuscript 298-300: Statement on common requirements, design and quality assurance
• Revised Manuscript 308-318: Figure 2 detailed description
• Revised Manuscript 334-364: Additional explanation on SPL development process and SPL technology
Response: #12 (reviewer 2)
Your comment:
- The criteria used to quantify the benefits of using a SPL need to be reviewed. For instance, in some implementations of a SPL the number of lines of code can be considerably increased but this is because the automatic generation of code for the different products introduces some benefits that compensate for the increment in the lines of code. Since here it is not clear how the SPL has been developed it is very difficult to reason about the benefit regarding the number of line of code. The same can be said for the other RQs.
Our response:
Thank you for stating this. We have to admit that we did not think about this issue while writing, and you are completely right: a LOC-based discussion would have been of questionable validity in the case of source-code generation. This is why, in addition to the technical approaches to the SPL, we now clearly state that code generation was not used as part of the SPL (see lines 314-317 of the revised manuscript). Source code was generated only by IDE wizards (new Activity, etc.), where a few lines of code are generated to form an asset skeleton. No SPL-related source code generation was used. This is why using LOC-based metrics is not questionable.
We once again thank you for expressing your concerns and suggestions to improve our manuscript. We hope that we have improved the manuscript accordingly. Should any issues remain open, we will be more than happy to improve it further.
Manuscript changes:
• Revised Manuscript 315-318: A statement on code generation
" | Here is a paper. Please give your review comments after reading it. |
19 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Software product lines (SPL) enable development teams to fully address systematic reuse of shared assets in order to deliver a family of similar software products. Mobile applications are an obvious candidate for employing an SPL approach. This paper presents our research outcomes, based on empirical data from an industry-level development project. Two development teams were confronted with the same set of functionalities to be delivered through a family of native mobile applications for Android and iOS.</ns0:p><ns0:p>Methods. Empirical data was gathered before, during and after a year of full-time development. The data demonstrate the impact of an SPL approach by comparing SPL and non-SPL multiple-edition development. One family of products (Android apps) was developed using an SPL approach, while another (iOS apps), functionally the same, was developed without employing an SPL approach. The project generated a volume of raw and aggregated empirical data to support our research questions.</ns0:p><ns0:p>Results. The paper reports a positive impact of an SPL approach on product quality (internal and external) and feature output per week. As the data shows, it also increases the delivery of functionalities (240% in 6 more editions) while investing the same amount of effort needed for single-edition development. As a result of the system-supported separation of development and production code, developers had high confidence in further development. On the other hand, the second team delivered fewer new functionalities, only two new application editions, and lower software quality than the team that managed multi-edition development by employing an SPL approach.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Reuse is one of the fundamental disciplines in software engineering. It plays an important role in the development of new systems and in the maintenance of existing ones. It is an especially important concept when several versions of the software are developed simultaneously. Usually, software does not result in a single version or edition specially tailored to certain customers. Diversity also appears when, at the declarative level, there is only one version of the software, i.e. a test version, a production version, etc. Furthermore, software can be tailored to a specific set of hardware. Regardless of the domain, reuse plays a crucial role in successfully managing a set of similar software editions and versions. The software product line (SPL) is an approach to reuse employed when a family of products shares several common functionalities. Since publicly available empirical evidence is rare, software architects have to rely primarily on theoretical benefits and their own lessons learned when practicing the SPL approach to reuse. In this paper we present our research outcomes, gathered during one year of development of mobile applications for two mobile platforms, Android and iOS, sharing a common set of functionalities. The mobile applications are part of a larger project which also includes backend cloud solutions, a web portal, a media streaming server and tablet applications. The editions of the mobile applications for the Android platform were managed with the introduction and implementation of the software product line (SPL) approach, while the applications for the iOS platform were managed with more traditional methods of reuse, e.g. branches in the version management system, sharing the same codebase but compiling it several times, using compiler directives, runtime checking, etc. The Android and iOS development teams shared the same set of functionalities that had to be developed, and they were given the same time to finish the implementation. This industry-based setup gave us the opportunity to explore and share interesting pre-, mid- and post-development empirical data, compiled into research observations on the implications of the SPL approach. The rest of the paper is organized as follows. The chapter 'Related Work' gives a general overview of state-of-the-art approaches, emphasizing the SPL approach, and outlines related work that addresses the same research topic. In the next chapter, the research method is discussed in detail: the research questions are presented, together with the project setup, methods, tools and SPL realisation. Chapter 4 outlines the domain in which the SPL approach was exercised. The chapter 'Results' provides details on the empirical data used to address the research questions. The paper continues with 'Discussion', providing an interpretation of the empirical data and focusing on comparing the Android and iOS products and the velocity of the teams. The differences are discussed together with long-term implications and both the positive and negative lessons learned. The paper finishes with 'Conclusions', which summarizes the most important findings of our research.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Related work</ns0:head><ns0:p>The software product lines (SPL) approach to reuse in the software engineering area has been discussed and published for several years. It was introduced by the Software Engineering Institute (Northrop, 2002) and proved to be an adequate solution to reuse in the special case where several software products share the majority of their functionalities, while only a fraction of the functionalities are edition-specific. The foundational book 'Software Product Lines: Practices and Patterns' by Clements et al. dates back to 2001 (Clements, Northrop, 2001). According to the original SPL idea, development efforts are directed towards developing core assets, while product development is a process of aligning core assets into final products. Management activities (including analysis and design) are shared among all products. Northrop (Northrop, 2002) also proposes several patterns and their variants to be used for SPL-based development. The SPL approach is explained in detail by the Software Engineering Institute (SEI, 2020). They define software product lines as follows: a software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. As explained by Northrop (SEI, 2020), the SPL approach adds another level of reuse to the existing mechanisms - reuse at the level of larger software pieces. Besides reusing technical building blocks, this also includes reusing the procedures and rules associated with the software: shared analysis, planning, and management of software development. The SPL approach could be implemented when some of the following issues occur as a result of the complexity of the software (SEI, 2020):</ns0:p><ns0:p>• we develop the same functionality for a variety of products and/or customers,
• the same change should be made in a number of different software products,
• the same functionality should behave differently depending on the final product,
• certain functionality can no longer be maintained, so the customer has to move to a newer version of the software,
• we cannot estimate the cost of transferring certain features to different software,
• certain basic infrastructure changes lead to unpredictable behaviour of dependent products,
• the majority of effort is put into maintenance, and not into the development of new functionalities.</ns0:p><ns0:p>The additional costs of the SPL approach are also clearly stated: the architecture, building blocks and individual tests should include the possibility of variability, while business plans must be made for multiple products, not just one. The claimed long-term contributions of SPL are as follows (SEI, 2020): up to 10x improved productivity, up to 10x improved quality, joint development costs reduced by up to 60%, time-to-market shortened by up to 98%, and the possibility of moving to new markets measured in months, not years. The positive effects of SPL can, however, only be observed if SPL approaches are used correctly and in appropriate software development projects. Muthig et al. (Muthig, John, Anastasopoulos, Forster, Doerr, Schmid, 2004) list several possibilities of misusing the SPL approach.
These include (Muthig, John, Anastasopoulos, Forster, Doerr, Schmid, 2004): general-purpose reuse, using classical reuse techniques such as component-based reuse, having configurable software behaviour, and managing versions of the same software. According to many authors, an important aspect of the SPL approach is managing variabilities. Cavalcanti et al. (Cavalcanti, Machado, Anselmo, 2013) define the SPL approach as a tool to effectively cope with variabilities. The authors address three types of variabilities:</ns0:p><ns0:p>• Functionality presence: the functionality is present in all the lines with the same realization; such functionality may be realized in the most general common building block.</ns0:p><ns0:p>• Lack of functionality: the functionality is not present in particular lines. If the functionality is required in only one line, it may be realized in the line itself; otherwise it is necessary to introduce a specific building block.</ns0:p><ns0:p>• A different realization: the functionality is available, but its realization differs between product lines. A different realization can be implemented in the line itself, unless the same feature can be found in multiple lines - in this case, it is reasonable to introduce a new building block which is a specialization of the existing one.</ns0:p><ns0:p>Clements (Clements, 2005) explains that the technical realization of variabilities is based on already established and well-known concepts in software engineering, including component-level reuse, employing design patterns, developing plug-ins, using parameterization, configuration with deployment descriptors, and others. Little has been published so far on SPL specifics for modern mobile platforms and the specifics of mobile application development. However, some authors have published papers in this area. Muthig et al. (Muthig, John, Anastasopoulos, Forster, Doerr, Schmid, 2004) published a report on the Go Phone case study, an in-depth report on the practical proof-of-concept development of mobile applications for several platforms, including SPL-specific analysis and design. Usman et al. (Usman, Iqbal, Uzair Khan, 2017) also present two case studies employing an SPL approach. The authors base their case studies on the problem of developing and maintaining multiple native variants of mobile applications to support different mobile operating systems, devices and varying functional requirements. Their approach includes their own tool (Moppet) to automate their model-driven generation of mobile applications. The authors deal with three types of variations in mobile applications: variations due to operating systems and their versions, software and hardware capabilities of mobile devices, and functionalities offered by the mobile application. Similarly, Marinho et al. (Marinho, Fabiana, Andrade, Werner, Werner, Maia, Rocha, Aguiar, 2012) discuss the use of an SPL approach in mobile development for several hardware and context situations. They presented the Android applications that resulted from the MobiLine project as well as the approach used to build them. The SPL employment resulted in 57 applications that share 100% of mobility-related functionalities, 87% of message exchange-related functionalities and 8% of context-related functionalities.
For modelling and building the applications, they proposed and used their MobiLine development approach. The authors reported only on the lessons learned in the technical and project management areas. Alves et al. (Alves, Camara, Alves, 2008) presented successful SPL employment in the area of mobile games. They observed an interesting fact: the SPL approach had been used several times in the past without being named SPL - it emerged naturally. At the same time, they reveal the need for standards or standard approaches for establishing a common SPL architecture, which would simplify SPL-based development dramatically. Quinton et al. (Quinton, Mosser, Parra, Duchien, 2011) reported on the problem of addressing variabilities while designing mobile applications. Their SPL approach is defined by two independent dimensions: mobile device variety and mobile platforms. Their model-driven approach, supported by the Applitude tool, enables the creation of Android, iOS and Windows Phone mobile applications while considering variabilities. The variabilities are addressed by merging core assets, while the platform dimension is supported by a metamodel in Applitude. Dürschmid et al. (Dürschmid, Trapp, Döllner, 2017), in the paper 'Towards architectural styles for Android app software product lines', acknowledge that the SPL approach is complex in Android development due to the inflexible operating system framework. This is why they propose several techniques to achieve appropriate SPL architectures, including activity extensions, activity connectors, dynamic preference entries, decoupled definition of domain-specific behaviour via configuration files, and a feature model using Android resources. Using their approach, the authors demonstrate the benefits via 90% code reuse in their case study. As we will demonstrate in this paper, our real-life production project goes even beyond the techniques mentioned above (we also employ design patterns, etc.) and demonstrates benefits not only via code reuse, but also via increased productivity and code quality. As demonstrated in the systematic literature reviews by El-Sharkawy et al. (El-Sharkawy, Yamagishi-Eichler, Schmid, 2019) and Chacón-Luna et al. (Chacón-Luna, Gutiérrez, Galindo, Benavides, 2020), SPL has gained momentum lately and the interest in empirical data on SPL has increased (Chacón-Luna, Gutiérrez, Galindo, Benavides, 2020). In addition, the Software Product Lines Conference (SPLC, 2020) is organised annually, regularly disseminating the progress of this research domain. We see our paper's role as an important report on industry-scale development with a shoulder-to-shoulder comparison of parallel development of the same application - one team with, and one team without, an SPL approach. Another indicator of the SPL approach gaining popularity in mobile development is the fact that, in 2019, the Android API and Android Studio gained full support for 'product flavours', which is Google's term for SPL - creating different variants of an app (Android Developers, 2020). Product flavours allow developers to specify different features and device requirements as well as to use specific source sets for each flavour, while still using shared code and assets where possible. Each build variant represents a different version of an app built from a single project (Android Developers, 2020).
When building the app, only the source sets relevant to the selected build variant are included in the resulting end-user package, while all the others are left out.</ns0:p></ns0:div>
<ns0:div><ns0:p>Google's approach to SPL is similar to the approach presented in this paper. However, our approach, as demonstrated later in the paper, is based on projects, libraries and proven design patterns, and avoids compile-time separation of product lines. In addition, even app resources (such as graphics, multilanguage translations, etc.) are addressed by the variability management presented in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Research method</ns0:head><ns0:p>24alife is an ecosystem of information solutions oriented towards increasing quality of life. This multidisciplinary project includes medical, sports, psychological and nutritional aspects as well as a combined view of an individual through targeted support in detecting, monitoring and eliminating the negative effects of stress. It is intended for individuals strengthening and upgrading their health reserves and, as such, focuses on a healthy lifestyle. Mobile applications (Android, iOS) are used as a supplement to the primary, web-based user interface. Their main role is to track and guide sports activities (such as jogging, cycling, fitness training, etc.), to take daily measurements (heart rate, weight, blood pressure, etc.), to motivate users, to offer guidance for portal-created personal programmes, etc. In addition to simple, practical suggestions, analysis and progress indications, they are also tasked with exchanging data with the cloud. There are several publicly available editions of the mobile application, such as a free and a payable version. In addition, some editions are available only to project teams, partners and special customers. At the moment, all components support 6 languages and both the imperial and metric systems. The 24alife project includes two mobile development teams - Android and iOS. The application families are developed from a single requirement-design body in separate development teams, resulting in native applications for Android and iOS. Two agile development teams of the same size, one for Android and another for iOS, were given the same set of requirements during the development. Both teams were supported by the same graphics designer, and the product owner was also the same person for both teams. Effectively, the goal was to deliver two identical native mobile applications with the same set of functionalities and the same look. The only planned difference was the expected behaviour, aligned with the iOS and Android design guidelines, respectively. These circumstances enabled us to design research that would quantify the development teams' design decisions on how to manage product families. The empirical data results from the process illustrated in Figure 1. To verify whether the development teams, separately working on mobile applications for Android and iOS, are comparable, a pre-development survey was executed, capturing the developers' experience and perceived knowledge. We designed our questionnaire based on the practices set forth by Chen et al. (Chen, Chen, Ma, Zhou, Zhou, Xu, 2018). We asked developers to enter their perceived level of knowledge of programming languages and to provide the number of years of their professional experience. Since knowledge self-assessment can be biased and subjective, the years-of-experience criterion was added in order to objectify participants' experience.</ns0:p><ns0:p>The development team for the iOS application managed their multiple editions of mobile applications manually. The Android development team did the same during weeks 1-33. From week 34, the Android development team switched to an SPL-based approach. In week 55, our research ended. During development weeks 1-55, we continuously measured internal quality metrics and source code size. This resulted in empirical data, used first to verify whether we can compare iOS and Android source code at all.
Secondly, it was used to assess whether and how the week-33 decision affected the Android source code and the further delivery of functionalities. We measured external quality by analysing the test reports generated by the quality assurance team. In addition to measuring source code size, application size was measured by counting the functionalities offered to users by the application editions, as is also the case in function point analysis, one of the standard metrics for determining the size of a software product (Albrecht, 1979). The source code size (LOC - Lines of Code metric) was continuously monitored during the development, not only to provide evidence on the ability to compare the iOS and Android applications, but also to support possible differences in delivered functionalities in the second half of the development. In the first half of the development project, the LOC metric was primarily used to verify whether we can compare the products (development effort and delivered functionalities were the same in both development teams). In the second half of the project, LOC was primarily a control variable to see whether changes in delivered functionalities were a result of possible changes in a particular team's effort. Deviations in size-based metrics would signal that the introduction of the SPL approach was not the only change in the development process.</ns0:p><ns0:p>Post-development gathering of empirical data consisted of a questionnaire to capture the developers' perception of their decision on how to manage multiple editions. It also included gathering and interpreting empirical metrics from the configuration management system, the bug tracking system and the release logs. The aim of post-development data gathering was to show whether introducing the SPL approach returned the investment in terms of changed delivery times and possible quality changes.</ns0:p><ns0:p>Based on the presented research method and compiled empirical data, we would like to conclude by answering the research questions below:</ns0:p><ns0:p>• RQ1: Is the source code size comparable between Android and iOS applications, while having the same set of requirements?</ns0:p><ns0:p>• RQ2: What are the impacts of introducing a software product line (SPL) approach to mobile development?</ns0:p><ns0:p>o RQ2.1: What are the impacts of the SPL approach on application growth?</ns0:p><ns0:p>o RQ2.2: What are the impacts of the SPL approach on application quality?</ns0:p><ns0:p>o RQ2.3: What are the impacts of the SPL approach on developers' confidence in frequent releases?</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>Our approach towards several mobile application editions</ns0:head><ns0:p>The 24alife project (see section 3) includes two mobile development teams (Android and iOS), driven by the same requirement-design body and resulting in native applications for Android and iOS.</ns0:p><ns0:p>The mobile development teams were assembled in week 1. Weeks 1 to 3 were dedicated to preparations: reviewing the initial product backlog, early prototyping, deciding on the architecture design, and preparing user interface wireframes and designs. Functionality-driven development, i.e. the first iteration, started in week 4. The development practices and process mechanics were organized according to the Scrum development method (Sutherland et al., 2014), by practicing planning, daily and demo meetings, retrospectives, honest sprint commitment via planning poker, and others. Both development teams (iOS and Android) consisted of three experienced developers. Both teams shared the same scrum master, product owner, graphics designer and quality assurance team.</ns0:p><ns0:p>In order to manage several editions efficiently during requirements gathering, design and testing, the functionalities were collected in a multi-dimensional table. Functionalities were not only listed, but also described in terms of the editions in which each functionality was available, and whether and which specialities were required for a particular functionality in a particular edition. In this way, the development teams ended up with functionalities grouped into several categories: common (all editions), optional (only in selected editions) and alternative (edition-specific implementation of the same functionality). Such an approach enabled the development teams to reuse common requirements, design and testing, which is also one of the foundations for establishing an SPL approach (SEI, 2020).</ns0:p><ns0:p>Both teams initially managed multiple editions of their applications (daily build, test version) in a manual manner. The breaking point (see Figure 1) came at week 33 with a clear client demand to provide the application in freely available ('Free') and payable ('Pro') editions. Since the iOS development team was confident that their established techniques for managing several editions would continue to prove adequate, they did not change the design of their application (preserving a single development project with compiler directives and run-time switches, and finally manually building the desired edition with an appropriate set of configurations). The iOS development team's approach is best described as the 'ifdef hell' reported by several authors, e.g. (Feigenspan, 2013).</ns0:p><ns0:p>On the other hand, the Android development team decided to invest extra effort to prepare a production-ready SPL architecture, libraries and projects. The set of core Android mobile development assets and available product lines is presented in Figure 2. Figure 2 shows the available assets (components implemented as Android libraries), of which 7+1 (the Core Module) are fully functional Android applications combined from other assets. Figure 2 does not capture variabilities in functionalities (which can be seen in Table 1); rather, it shows the available components.
A set of functionalities is present in a particular application edition when the appropriate library is included in that edition. The absence of a functionality is achieved by not including the library. An alternative implementation is achieved by including the library and overriding (a part of) its implementation by employing appropriate design patterns. A code-generation approach to introducing common, optional or alternative functionalities was not employed in the presented SPL architecture; no additional source code was automatically generated in order to support the SPL approach.</ns0:p><ns0:p>The '24alife Core Module' is an Android library (and at the same time a fully running Android application). It realizes the functionalities that are common to all editions (product lines). The 'Core Module' also contains the common architecture of the mobile applications and the variability points for additional functionalities (see Figure 3). Applications within the product line are built into the final products with the standard build tools available in Android Studio. Core product assets (Android libraries) contain, in addition to the source code, XML-written user interfaces, graphics, multilingual messages, dependent libraries, static content, etc. The library structure is shown in Figure 3. Through the variability points, individual product lines can adjust the Core Module (e.g. define operations to be run when the mobile application starts, add new synchronization points, turn off some components, or set the data for accessing a particular cloud deployment - whether production or test). The inclusion of a specialized functionality in an individual line is thus, technically, achieved using several approaches:</ns0:p><ns0:p>• preparation of an extension point in the Core Module (at the level of components, or inside components using the factory method or abstract factory design patterns),</ns0:p><ns0:p>• using inheritance and adding new methods and/or method invocations in the product line,</ns0:p><ns0:p>• using an abstract factory pattern, which combines the functionality of the new line with its own user interface.</ns0:p><ns0:p>The exclusion of unwanted features (such as not downloading personal programmes in free products) is achieved mainly through inheritance, as well as through the parameterization of the basic building blocks. However, in most cases exclusion was not necessary, since product line-specific functionalities were introduced in the product lines themselves (e.g. the 24alife PRO library). Changes in behaviour (e.g. the demo edition expires one month after it is built) are also achieved with inheritance and/or by employing appropriate design patterns, such as bridge, factory method, or builder. Based on the presented technical SPL architecture, 7 application editions were managed. Table 1 summarizes the functionality-based differences between them. For example, the final 'Alpha' edition is based on the 'Core Module' with its 45 common functionalities, while 9 additional functionalities are implemented only in 'Alpha', resulting in a total of 54 functionalities. In addition, 5 functionalities in 'Alpha' are adapted to behave differently, which yields 14 'Alpha'-specific functionality implementations (optional + alternative). That is a 26% difference from its base edition, which in the case of 'Alpha' is the 'Core Module'. The differences in terms of functionalities for the other editions are shown in Table 1.</ns0:p></ns0:div>
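To illustrate the variability points described above, the following minimal sketch shows, in plain Java, a factory-method extension point prepared in the Core Module and overridden in one product line to change behaviour (the one-month expiration of the demo edition). The class and method names are hypothetical and heavily simplified; they are not the actual 24alife classes, and the real architecture (Figure 3) is considerably richer.

```java
// Illustrative sketch only - hypothetical names, not the actual 24alife classes.
import java.util.Calendar;
import java.util.Date;

// Core Module library: common start-up logic with a factory-method variability point.
abstract class CoreStartupController {

    public interface ExpirationPolicy {
        boolean isExpired(Date now);
    }

    // Common start-up sequence shared by all editions.
    public final void onApplicationStart() {
        if (createExpirationPolicy().isExpired(new Date())) {
            return; // e.g. redirect the user to the store instead of starting the app
        }
        // ... run common initialisation, synchronisation points, etc.
    }

    // Factory method (variability point): product lines may override it.
    protected ExpirationPolicy createExpirationPolicy() {
        return now -> false; // ordinary editions never expire
    }
}

// Demo product line (separate library): includes the Core Module and overrides only the variability point.
class DemoStartupController extends CoreStartupController {

    private final Date builtAt;

    DemoStartupController(Date builtAt) {
        this.builtAt = builtAt;
    }

    @Override
    protected ExpirationPolicy createExpirationPolicy() {
        return now -> {
            Calendar limit = Calendar.getInstance();
            limit.setTime(builtAt);
            limit.add(Calendar.MONTH, 1); // demo edition expires one month after being built
            return now.after(limit.getTime());
        };
    }
}
```

The absence of a functionality needs no counterpart in code at all: an edition that does not include a given library simply never references its controllers, which is exactly the library-inclusion mechanism described above.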
<ns0:div><ns0:head n='5'>Results</ns0:head><ns0:p>In order to answer the research questions, we will outline the most important empirical research data. This includes an in-depth analysis of version control logs, source code metrics for comparative quantitative and qualitative data, querying bug-management logs and production logs and analysing developer questionnaires. Several tools were used in order to capture, prepare and aggregate raw data, including Subversion CLI, CLOC tool, Javancss, XClarify, ProjectCodeMeter and shell scripts for batch processing and data aggregation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Developer profiles</ns0:head><ns0:p>Based on the pre-development questionnaire, the mobile developers' profiles were as follows:</ns0:p><ns0:p>• Android developers' average perceived development skills: 2.3/5,</ns0:p><ns0:p>• Android developers' average perceived Android Studio skills: 4/5,</ns0:p><ns0:p>• iOS developers' average perceived development skills: 2/5,</ns0:p><ns0:p>• iOS developers' average perceived XCode skills: 3/5,</ns0:p><ns0:p>• Android developers' average experience in developing mobile applications: 4 years,</ns0:p><ns0:p>• iOS developers' average experience in developing mobile applications: 3.3 years.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Edition and functionality growth</ns0:head><ns0:p>During the research presented in this paper, we observed 55 weeks of development (see Figure 1). During this time, the final version of the released mobile application was 3.0. Versions 1.0, 1.1, and 2.0 were released in a two-edition manner (free, pro), while version 2.1 and later were required in several editions (including demo, alpha, test, blackberry-optimised free and blackberry-optimised pro). Please see Table 2 for the growth in terms of new functionalities, and Table 1 for the complete functionality count per particular edition. Besides functionality growth, edition growth is also important data for our research. Figure 4 shows the number of released products during the project time (editions and versions combined) for iOS and Android. Since the Android developers used an SPL approach, the chart in Figure 4 provides additional insight into the number of core SPL assets over time. The chart clearly shows only version-based growth until week 41. After week 33, the Android application becomes a core asset, and the iOS application stays monolithic. Please note that Figure 4 does not capture internally available editions. The chart includes 7 different Android editions across several versions and 2 iOS editions across several versions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Source code size</ns0:head><ns0:p>The products' quantitative metrics are as follows. One of the commonly used metrics is LOC (Lines of Code). Figure 5 displays how LOC (without blanks and comments) changed over time for the Android and iOS products. In both, an important segment of the code is written in XML (examples would be user interfaces, navigation rules, animations, etc.), which is why we also show this distribution in Figure 5. In the case of Android, the chart captures LOC for the entire Android projects (all editions, all core SPL assets). The chart includes the iOS initial application
as well as the Free and Pro iOS editions. The internally available experimental iOS HRV/Corporate edition is not included, since it is a separate branch on its own. Please note that week 4 was the first official iteration, while weeks 1-3 were devoted to preparations and prototyping. This is why the Subversion repository codebase starts at week 4, and the charts in Figures 5 and 6 also start at week 4. While LOC can give concrete insight into software size, we included the number of classes for both development projects in Figure 6 in order to give more accurate insight into code distribution. The measured codebase is the same as in the chart in Figure 5 - all Android editions and versions with core SPL assets, and the iOS initial, Free and Pro editions. We measured LOC values for both iOS and Android with the same tool (cloc), so that the same rules would apply, thus making the results comparable. We also used the ProjectCodeMeter tool to measure the final state of the Subversion repository for both projects as well as the final single-edition project for both projects. This also makes the results directly comparable.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4'>Source code quality</ns0:head><ns0:p>In addition to size-oriented metrics, we also applied quality-oriented source code metrics. Since we are interested in the change of internal quality, measuring the mid- and final-version source code is sufficient. The results produced by the ProjectCodeMeter tool are summarized in Table 3. Code to Comment Ratio (CCR) captures logical LOC per comment line. Code Structure Modularity (CSM) aggregates the modularity of the source code (values below 100 indicate low modularity, values above 100 indicate fragmented code). Logic Density (LD) assesses the number of lines needed to capture a certain amount of logic. As Table 3 shows, the CCR value stays the same (19) until the end of the project for the iOS application. On the other hand, we can see a significant improvement (from 9 to 15) in week 55 in the case of the Android application. Code structure (the CSM metric) is reduced in the case of the iOS application source code (152 to 143). It is, however, the opposite in the case of the Android application source code - the CSM value increased (145 to 153). Logic density (the LD metric) increased in both cases: for the Android application from 45 to 66 and for the iOS application from 77 to 88.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5'>Product quality</ns0:head><ns0:p>During the research, we also measured the products' external quality. The quantity of reported failures was used as a comparable measure. The quality assurance team used the following classification of failures (based on the IEEE 1044-2009 standard):</ns0:p><ns0:p>• errors (behaviour that differs from what is specified in the software requirements specifications, or unexpected crashes),</ns0:p><ns0:p>• inconsistencies (not really errors, but disturbing quality failures - e.g. inappropriate input checks, or displaying values in inaccurate measures, such as the length of a run in metres instead of kilometres) and</ns0:p><ns0:p>• imperfections (mostly visual quality failures, e.g. using wrong colours, or a screen appearing incorrect when rotated).</ns0:p>
<ns0:p>We preserved this classification in Figures 7 and 8 as well. Please note that all failures were addressed (fixed or closed as false failures) during the development; this is why the charts in Figures 7 and 8 do not capture the frequency of closing failures. The testing efforts were the same for both platforms, so the results are comparable. As the chart in Figure 7 shows, we can observe an almost constant rise of reported failures (5.5 per week; 6.4 if we remove the first 4 weeks of prototyping) in the case of Android development. During the introduction of the SPL approach, the failure count rose from 160 to 240, which is 11.4 per week. In the third period, the final failure count rose from 240 to 405; the constant failures-per-week rate in this period is 9.1. The final failure percentages, according to the classification, are as follows: 64% errors, 16% inconsistencies, 20% imperfections. A similar situation is reported in Figure 8 for the reported failures in iOS development (6.5 per week in the first period; 7.8 if we remove the first 4 weeks). During the introduction of the multi-edition approach, the failure count rose from 195 to 280, which is 12 per week. In the third period, the final total failure count rose from 280 to 631; the constant failures-per-week rate is 25. We can see that the weekly failure rate in this period is not linear; 25 is the average rate during stable multi-edition development. If we split this period in half, the first half has a rate of 18, while the second half has a rate of 33. The final failure percentages are 64% errors, 18% inconsistencies, 18% imperfections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.6'>Post-development survey</ns0:head><ns0:p>After the development, we finalized the gathering of empirical data by executing a post-development survey among the developers. The answers were as follows:</ns0:p><ns0:p>• The development was time-framed to 8 hours per day. We asked developers for their assessment of how many hours they spent daily on development (allowing for the fact that they might have done extra work at home). iOS developers reported that they spent 9 hours a day on the project; Android developers reported working on average 8.7 hours daily.</ns0:p><ns0:p>• We asked developers how they liked the implemented mechanism for managing several editions. On a scale of 1-5 (1 - do not like it at all; 3 - neutral opinion; 5 - extremely like it) they responded: iOS developers: 2.7, Android developers: 4.7.</ns0:p><ns0:p>• We asked developers about the fear they had in cases where they would have to change some random code in a project - did they worry that they would cause some failures with it? On a scale of 1-5 (1 - no fear at all; 5 - very afraid) they responded: iOS developers: 3.3, Android developers: 1.7.</ns0:p>
<ns0:p>• An Android developer highlighted one event as a positive experience. A requirement was stated for a fully functional Pro edition that would be available as a 'Demo' edition. Based on Pro, the Demo edition included a special logo and a functionality for expiring within 1 month, with the possibility of buying the Pro edition on the Google Play market. The edition was prepared and turned over to production, as claimed, in just 3 hours.</ns0:p><ns0:p>• As a negative aspect, an iOS developer highlighted the fast development and the many requirements changing already implemented functionalities. Changing and maintaining the code was problematic, he explained.</ns0:p><ns0:p>• An iOS developer suggested that, instead of having many editions of the same application, it would be better (in terms of development) to have several smaller applications with limited functionalities.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>Discussion</ns0:head><ns0:p>In this chapter we provide answers to the research questions presented in the 'Research method' chapter:</ns0:p><ns0:p>• RQ1: Is the source code size comparable between Android and iOS applications, while having the same set of requirements?</ns0:p><ns0:p>• RQ2: What are the impacts of introducing a software product line (SPL) approach to mobile development?</ns0:p><ns0:p>o RQ2.1: What are the impacts of the SPL approach on application growth?</ns0:p><ns0:p>o RQ2.2: What are the impacts of the SPL approach on application quality?</ns0:p><ns0:p>o RQ2.3: What are the impacts of the SPL approach on developers' confidence in frequent releases?</ns0:p><ns0:p>Based on the empirical data from the previous chapter, we also provide further discussion on the implications that arise from the data. Based on the gathered data, let us answer the fundamental research questions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>RQ1: Is the source code size comparable between Android and iOS applications, while having the same set of requirements?</ns0:head><ns0:p>The final codebase for the Android products included 47,259 Java LOC and 32,828 XML LOC, in 390 classes (see Figures 5 and 6). The final codebase for the iOS products included 50,739 Objective-C LOC and 22,744 XML LOC, in 290 classes (see Figures 5 and 6). Based on the CLOC tool, the final iOS codebase was larger by 7% (Java vs. Objective-C); the combined (XML + programming language) LOC is smaller in the case of iOS by 8%. With regard to the LLOC measure (Logical LOC), the final Android LLOC was larger by 7% (see Figure 5), based on the ProjectCodeMeter tool. The class count was larger in the final Android codebase by 26% (see Figure 6). The complete analysed iOS codebase includes only the Pro and Free editions. On the other hand, the analysed Android codebase also includes additional editions (Alpha, Test, Demo, BB Pro, BB Free). This is why the Android codebase actually includes 10 more functionalities (see Table 1) compared to the iOS codebase functionalities (53 functionalities in Free and Pro), which is 16%. Taking this into account, the effective (normalized to functionalities) LOC difference between iOS and Android is 9% and the LLOC difference is 8% (iOS is higher). The effective class count (normalized to functionalities) difference is 14% (Android is higher). The class difference is easily justified: the Android codebase includes the SPL architecture, which is rich in terms of classes. There are also 13 alternative functionalities, where the implementation heavily depends on design patterns and inheritance (which results in introducing new classes - see chapter 4). To compare product sizes regardless of the SPL approach, we can easily compare the codebases at the point of releasing the last single-edition product. This was done in week 33 for Android and week 37 for iOS (see Table 3). The Android LOC at that point was 49,664, while the iOS LOC was 50,418; the difference is as low as 1.5%. The class count for Android was 277, while the iOS codebase had 238 classes. Please note that this difference is 14%, which is the same as the final difference normalized to functionalities. Based on the presented calculations, we can confidently claim that implementing the same set of functionalities within our project boundaries and rules results in a comparable codebase size for both Android and iOS. This is how we answer research question 1 in the affirmative.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>RQ2.1: What are the impacts of the SPL approach on application growth?</ns0:head><ns0:p>Observing the data presented in Figure 5, we can see that LOC rises at an almost constant rate of circa 1,400 LOC (pure code, blanks and comments omitted) per week for both the Android and iOS projects. If we analyse the LOC-per-week coefficient at release weeks, or before and after introducing more editions, the data does not show any significant change in LOC velocity. This clearly indicates that the development teams continued to invest unchanged development effort. Based on the data presented in Figure 6, we can see an almost constant classes-per-week coefficient of 6. However, in the case of Android, the class introduction velocity rises to 9 per week after introducing the SPL approach. On the other hand, in the case of iOS, the several-edition approach resulted in dropping some classes (21). The implications of the quantity metrics are as follows: the developers' output in terms of LOC is obviously not affected by multi-edition development. Since the development was functionality-based, this means that the only driver of LOC velocity was the functionalities. However, the structure of the created program code changes if SPL is applied (148 Java LOC per class before SPL, 98 Java LOC per class with the SPL approach in our case). On the other hand, we can observe even more Objective-C LOC per class after ad-hoc multi-edition development in the iOS project. The change in code structure after employing the SPL approach is also captured by the CSM measurement (see Table 3). Based on the presented calculations we can conclude that multi-edition development does not affect development output velocity in terms of LOC, but it does affect the product's static structure: SPL-based development in a positive way, ad-hoc multi-edition development in a rather negative manner. However, the SPL approach largely affects velocity in terms of functionalities and released editions. Based on Tables 1 and 2, we can see that the delivered functionalities and editions per given time frame were comparable in the single-edition period for both the Android and iOS development teams: 33 functionalities in single-edition software were delivered in 39 weeks by the iOS development team and in 37 weeks by the Android development team, which is 0.9 functionality per week. With the same amount of effort input (see the post-development survey), this translates into 1.7 new functionalities per week in 2 editions for the iOS development team, and 2.2 new functionalities per week plus 1 adapted functionality per week across 7 different editions for the Android development team. Combining public releases with available editions (see Figure 6), we can see that after switching to multi-edition development the Android development team delivered 13 new editions and versions of the software (0.93 per week), while iOS delivered 6 (1 per 2 weeks). Based on the presented data, we can answer research question 2.1 as follows. In our case, the SPL approach results in 126% higher functionality-based velocity (or as much as 240% higher, compared to single-edition development). At the same time, the SPL approach enabled developers to adapt an additional 100% of new functionalities across several editions with the same effort level.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3'>RQ2.2: What are the impacts of the SPL approach on application quality?</ns0:head><ns0:p>An internal code quality comparison of the products, in terms of single-edition versus multi-edition development and Android versus iOS development, is shown in Table 3. In the case of iOS, single-edition and multi-edition development do not change the Objective-C code quality, which is an expected observation (managing editions was done mainly with compiler directives - see Chapter 4). Modularity improves by 6%, logic density is affected by 5% in a negative manner, and the commenting ratio stays the same. In the case of Android, the Java code quality also did not change between single-edition and multi-edition development. Modularity changed by 5% (towards fragmented code), which is expected as a result of the SPL implementation (see Chapter 4). The LLOC needed to capture certain logic increased by 32% (from 45 to 66), which also makes sense: SPL-related code (architecture, management of optional and alternative functionalities) does not capture a lot of business logic. However, logic density is, even after changing considerably, lower by 19% compared to the Objective-C code. The data shows that internal code quality did not change substantially. This is a reasonable outcome: the same development skills and effort were used during the whole project. The SPL-enabled multi-edition approach affects the code in terms of considerable additional code fragmentation, while the ad-hoc approach does not have this effect, which is also an expected result. However, Figures 7 and 8 demonstrate the SPL impact on external quality. As implied by the charts shown in Figures 7 and 8, there are three distinctive periods during the project: single-edition development (weeks 1-33), the first intensive multi-edition development (weeks 33-37), and stable multi-edition development. As explained in section 5.5 and shown in Figure 7, in the Android application the constant failures-per-week rate dropped from 11.4 to 9.1 in the SPL-enabled period, resulting in 405 failures found throughout the development project. An important observation from Figure 7 (Android failures) is this: the constant weekly failure rate in the single-edition development was followed by a rise in failures during the introduction of the SPL approach. During the stable SPL-enabled multi-edition development, the weekly reported failure rate is linear and stable again. The rate rose from 6.4 to 9.1 (30%), which is reasonable when considering not only the additional functionalities, but also 6 additional editions (from an external point of view, completely separate products) with 10 edition-specific and 13 edition-adapted functionalities. The failure increase is significantly smaller than the feature and edition increases. The failure rate per edition on a weekly basis in the last period is as low as 1.3. As explained in section 5.5 and shown in Figure 8, in the iOS application the constant failures-per-week rate is elevated from 7.8 to 12 in the multi-edition period, resulting in 631 failures found throughout the development project. An important observation from Figure 8 (iOS failures) is that the constant weekly failure rate in single-edition development is followed by an increase in failures during the introduction of the multi-edition approach.
In the period of stable multi-edition development, the weekly failure rate is not linear. The average weekly rate in this period rose by a factor of 3.2 (from 7.8 to 25). With ad-hoc multi-edition development this amounts to more failures per edition (25 per week for two editions, i.e. 12.5 failures per week per edition) than in the single-edition development (7.8 per edition per week). The ad-hoc multi-edition approach failed in terms of raising, or at least maintaining, the external quality level. Based on the presented data, let us answer research question 2.2. In our case, internal quality is not affected by introducing the SPL approach. Since the SPL approach promotes faster development (see chapter 6.2) and the failure rate stays the same, the number of failures per functionality drops. This is how the SPL approach enhances external quality. However, using non-SPL multiple-edition development resulted, in our case, in reduced external quality (reported failures rose by a factor of 3.2).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>RQ2.3: What are the impacts of the SPL approach on developers' confidence in frequent releases?</ns0:head><ns0:p>Both the post-development survey and the post-development data analysis revealed that developers are more confident using the SPL approach than using a multi-edition approach designed and crafted on their own. In the case of SPL-supported multi-edition development, developers created a complex software architecture and maintained it while observing strict rules of reuse.
They did not have an opportunity to create any shortcut or workaround solution in their approach, as the other development team did. Multi-edition development without an SPL approach relied on developer-chosen and freely implemented approaches, such as compiler directives, runtime conditions, etc. The post-development survey therefore revealed that the developers in the SPL-supported development team liked the multi-edition development approach more: 4.7/5, compared to only 2.7/5 in the case of the iOS developers. The SPL-supported development team was also more confident in maintaining and developing new code in production software (the fear of introducing failures while changing code is as low as 1.7/5 in the case of the Android developers and as high as 3.3/5 in the case of the iOS developers). As a consequence, iOS developers would not be happy with demands for a new edition of the application (4.0/5), while Android developers would welcome them (2.0/5). Although surprising, we believe that the survey results have a foundation in the source code organisation: in the case of the SPL approach, the source code of different editions is physically separated. Common functionalities are reused by merging separate source code projects into final products. It is therefore clear to developers that certain new code or code changes will not be included in a certain product edition. On the other hand, ad-hoc multi-edition development has a common code base, where all source code is processed when creating any edition of the final product. Software components are therefore more coupled and less cohesive. As a result, as also shown by the post-development survey, developers are less confident in creating new code or changing existing code. Higher developer confidence in creating new source code and maintaining existing code was also shown in the post-development data analysis. The failure rate is considerably higher when the SPL approach is not used, and the failure introduction velocity is not linear. This shows how fixing failures in some cases results in introducing new ones with the ad-hoc multi-edition approach. This finally answers our research question 2.3: the SPL approach has a positive impact on developers' confidence in delivering releases (and, as a consequence, new functionalities) frequently.</ns0:p></ns0:div>
<ns0:div><ns0:head>6.5</ns0:head><ns0:p>Key findings and limitations Based on the presented results, visualizations and in-depth analysis, let us summarize the main outcomes while answering the research questions. Using our research setup, methods and results, we showed that: • Having the same set of functionalities results in a comparable codebase size for both Android and iOS; • The SPL approach results in 126% higher functionality-based velocity (240% higher compared to single-edition development); • The SPL approach enabled developers to adopt an additional 100% of new functionalities across several editions with the same effort level; • Internal quality is not affected by introducing the SPL approach;</ns0:p></ns0:div>
<ns0:div><ns0:p>• The SPL approach enhances external quality;</ns0:p><ns0:p>• Managing several editions using a non-SPL approach reduces external quality;</ns0:p><ns0:p>• The SPL approach has a positive impact on developers' confidence in delivering new functionalities and releases frequently.</ns0:p><ns0:p>The presented research method, results and discussion also include limitations which represent possible threats to validity. In an ideal world, one would design the presented experiment so that the same application is developed for the same platform twice - with and without an SPL approach to manage several editions. This would make it possible to eliminate every doubt caused by developing for two different operating systems with two different programming languages. However, we designed the research method, gathered the results and interpreted them with the aim of minimizing any doubt about our conclusions. The main measure to address this threat lies in answering research question RQ1. In addition, we believe that because we scientifically investigated a large, real-world industry project, the validity of the results is higher than in any laboratory-set experimental environment. A question also remains whether we would reach the same results and conclusions if, in week 33, the iOS development team had chosen to implement the SPL and the Android developers had continued with the manual approach to managing multiple editions. We addressed this threat by monitoring the internal quality of the source code and the invested effort in terms of delivered LOC. Since internal quality and LOC velocity remain the same after week 33 for both development teams, this indicates that the only difference between the products lies in the SPL-enabled architecture of the Android application. During the research we used the LOC metric carefully and systematically in both development teams in order to minimize the risk that comes with a possible disconnection between the LOC metric and the delivered functionalities.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>Conclusions</ns0:head><ns0:p>This paper highlighted our research following two development teams that created production-ready mobile application families for two different platforms (Android, iOS). The families share the same set of functionalities and were built with the same effort input. Both families share 85% of their functionalities as common; the other functionalities are optional or alternative. One product family was developed with the SPL approach to manage reuse, while the other product family was developed with more traditional and ad-hoc reuse techniques (single codebase, no special architecture to manage variabilities, employing compiler directives, etc.). As shown in this paper, this was the only difference in approach between the development of the two functionally equal software families.</ns0:p><ns0:p>The results of this research show that the two product families not only share the same set of functionalities and effort rate but, surprisingly, product growth and end-product size were also the same. Comparing multi-edition with single-edition development, we showed that development velocity, in terms of code quantity, did not change. However, code structure improved after introducing the SPL approach. The SPL approach also had positive implications for feature-based output: after the introduction of the SPL approach, functionality output per week increased by 240%, which is 124% higher than in the case of iOS development with a non-SPL approach to multi-edition development. At the same time, employing the SPL approach resulted in 6 new editions, while the non-SPL approach resulted in only one. An important aspect, directly impacted by the SPL approach, is also software quality. With the SPL approach, the failure introduction velocity did not increase, but remained linear. After dividing failures across all editions, failure rates fell. On the other hand, we showed how the failure introduction rate in the case of the non-SPL approach not only increased but was also no longer linear. In our case, the failure count, divided by editions, is even higher than in the single-edition development. The statement derived from this observation is that doing multi-edition development without an SPL approach has a very bad impact in terms of software quality. On the other hand, the SPL approach has a very positive impact on software quality. Our research revealed that the SPL approach enables development teams to produce more functionalities in several product editions with the same effort as they would use while working on a single-edition product. Not only productivity but, even more importantly, software quality rises. Developers' confidence in maintaining and developing new code is also higher when using the SPL-based product structure.</ns0:p><ns0:p>An even more thought-provoking finding is that when development teams have to manage multiple editions, doing it manually can have severe consequences in terms of a drop in quality and reduced developers' confidence when introducing new functionalities or upgrading existing ones.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label>1</ns0:label><ns0:figDesc>RQ2: What are the impacts of introducing a software product line (SPL) approach to mobile development? o RQ2.1: What are the impacts of the SPL approach to application growth? o RQ2.2: What are the impacts of the SPL approach to application quality? 
o RQ2.3: What are the impacts of the SPL approach to developers' confidence in frequent releases? The presented set of research questions was carefully selected to check: a) Whether we can compare Android and iOS applications on the source code level, while having the same set of functionalities and investing the same amount of effort; b) The internal and external quality of both applications with respect to the selected multiple-editions approach; c) The delivered application editions and the core, optional and alternative functionalities with respect to the selected multiple-editions approach; d) The developers' perception of the selected multiple-editions approach. In addition to two questionnaires (pre- and post-development questionnaire), our research relies on software metrics that support the answers to the research questions: a) Source code size and class-based metrics (Lines of Code, Logical Lines of Code, Number of Classes) in combination with product size metrics (Number of Editions, Number of Functionalities) support RQ1; b) Source code size-based metrics support RQ2.1 as an implicit control of the effort invested in the development; c) Product size metrics support RQ2.1; d) Source code internal quality metrics (Code to Comment Ratio, Logic Density, Code Structure Modularity) support RQ2.2; e) Product quality metrics (Number of reported errors, imperfections and inconsistencies) support RQ2.2;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Android development team implemented variabilities using industry-proven best practices and approaches. These include the use of object-oriented design, proven design patterns, extensions, and component parameterization. Design patterns are used heavily, especially: factory, abstract factory, factory method, bridge, bean, adapter and others<ns0:ref type='bibr' target='#b6'>(Gamma, Helm, Johnson, Vlissides, 1998)</ns0:ref>. As shown in Figure3, Android developers created a common SPL architecture as a set of concrete and abstract classes. Their primary role is to handle the orchestration of newly introduced functionalities from other libraries via prepared extension points. Component-based development is heavily used in the presented SPL realisation. All components are managed (created, initiated, cleaned etc.) via the Component Manager class. The Component contains the implementation of predefined interfaces and specific interfaces with their implementation as well. Operations, such as preparing the database structure, upgrading the database, exchanging data with the cloud (synchronizing) and others are all part of a component. This is how certain editions (product lines) without a certain component, will not even have database structure for unsupported functionalities. The component manager is also responsible for component-to-component communication (The Component Manager would provide a reference to a component in the same product line). The Configuration class constitutes another concept in the presented architecture. Configuration, provided by the Core Module, includes general purpose configurations, common to all product lines. In a particular edition (the product line), developers would typically extend the Configuration class in order to do necessary adoptions or overriding (e.g. introduce new</ns0:figDesc></ns0:figure>
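The Component Manager, Component and Configuration concepts described in the figure caption above can be illustrated with a short sketch. The paper's actual implementation is Java-based Android code; the Python below is only a schematic, language-neutral illustration of the described component/extension-point idea, and all class, method and feature names are our own assumptions.

```python
# Schematic sketch of the described SPL architecture: a Component Manager creates and
# orchestrates only the components enabled for a given edition (product line), and an
# edition-specific Configuration extends/overrides the core configuration.
class Component:
    """Predefined interface every functionality component implements."""
    def prepare_database(self): ...
    def synchronize(self): ...

class CoreConfiguration:
    features = {"core"}                                            # common to all product lines

class ProEditionConfiguration(CoreConfiguration):
    features = CoreConfiguration.features | {"reporting", "sync"}  # edition-specific additions

class ComponentManager:
    def __init__(self, configuration, registry):
        # Only components enabled for this edition are created, so an edition without a
        # component does not even get the database structure for that functionality.
        self.components = {name: cls() for name, cls in registry.items()
                           if name in configuration.features}
        for component in self.components.values():
            component.prepare_database()

    def get(self, name):
        # Component-to-component communication is mediated by the manager.
        return self.components.get(name)

# Illustrative usage: the same registry, two different editions.
registry = {"core": Component, "reporting": Component, "sync": Component}
basic_app = ComponentManager(CoreConfiguration(), registry)   # only the core component
pro_app = ComponentManager(ProEditionConfiguration(), registry)
print(sorted(basic_app.components), sorted(pro_app.components))
```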
<ns0:figure xml:id='fig_3'><ns0:head>3. 3 ,</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Android developers: 1.7. We asked developers how they would welcome the need to establish a new edition of a mobile application. On a scale of 1-5 (1 - no problem; 5 - possible only with high investment) they responded: iOS developers: 4, Android developers: 2. Finally, developers had the opportunity to express their positive and negative experience with managing several editions. The answers include:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>From 153 Objective C LOC per class before multi-edition development, to as high as 189 LOC per class (in week 50); the final coefficient is 170 LOC per class. The improvement in terms of LOC per class in the iOS project in the last 5 weeks is a result of refactoring.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,262.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,412.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,347.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,315.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,317.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,345.00' type='bitmap' /></ns0:figure>
</ns0:body>
" | "
A product quality impacts of a mobile software product line - an empirical study
Rebuttal Letter
Dear Editor, dear reviewers,
Thank you for getting back to us with a positive response to our major changes in the manuscript, and for investing more of your valuable time and effort in providing us with your final improvement suggestions. We are more than happy to address them.
We reviewed your comments and carefully addressed all of your proposals.
In addition to our final manuscript, we are also providing a detailed log of the changes along with our answers below.
Reviewer 2 proposes that we include both the questionnaire and the answers as annexes. Please note that during the formality checks, in addition to the native-language versions, we also provided the questionnaire and the responses translated into English as part of the Associated data.
We believe that the manuscript is now, after we addressed all the issues that were raised in reviews, not only suitable for publication in PeerJ, but also, more importantly, an added value for PeerJ readers.
Maribor, 8. 2. 2021 Luka Pavlič, Tina Beranič, Marjan Heričko.
Change log and answers
Response: #2.01 (reviewer 2)
Your comment:
Response #01 and #06. Comparison between and Android and IoS applications.
The response given by the authors to this comment made me understand much better the motivation and value of the work. However, I think that the authors explained this motivation much better in their responses #01 and #06 than in the paper itself. In the paper, this description is split between section 3 and section 4 and I think it should be all explained in the same section.
Recommendation to improve the paper: check again how the work is motivated in the paper and try to move some of the discussion in responses #01 and #06 to the paper. The information is in the paper but slightly reduced and organized and explained in a different way and I think it is much more clear in their response. The research method discusses the two mobile applications but they are not introduced yet (they are introduced at the beginning of section 4). I would move the description before section 4.1 to section 3. The reader needs to understand the context of what has been developed before describing the research method.
Our response:
Thank you for your comment. We are happy that the efforts we made during the major revision, which resulted in 3 additional manuscript pages and new subsections, were fruitful in terms of the understandability of our manuscript. We agree that our response #1 (also #6) was quite extensive. We did our best to include it in the manuscript. We believed that we had included everything from the comment in the manuscript, but the text from our response is spread throughout the whole paper, as recorded in the manuscript changes log. It is, however, possible, as you mentioned, that we did not include the whole of our answers in the paper. Thank you for raising this. We have now checked the paper, especially the motivation part, and our responses side by side to include everything. The additions are (please see the change log for complete changes):
• Complete context description (Two agile development teams…) is now included in section 3.
• Detailed steps, that lead from manual editions to SPL in Android is included from our response to section 4 now.
We also agree with you that it would make sense to reveal the context of the research in the beginning of section 3 instead of in section 4. This is why we moved the description in beginning of section 3 to be an intro to a research method. As a result, we had to restructure section 4 also – we merged subsection 4.1 (Managing several product editions) to a single section 4 (Our approach towards several mobile application editions).
We believe that now everything from our answers is in the manuscript. Context is also described centrally in section 3 – thank you for this proposal.
Manuscript changes:
• Revised Manuscript 201-214: Context moved from section 4 to section 3,
• Revised Manuscript 218-226: Additional context description from reviewer response #1 and #6,
• Revised Manuscript 292: Deleted context; moved from section 4 to section 3,
• Revised Manuscript 313-323: Additional context description from reviewer response #1 and #6 (reason to introduce SPL support),
• Revised Manuscript 334: Subsection 4.1 heading removed.
Response: #2.02 (reviewer 2)
Your comment:
Response #7. Line of Codes comparison.
I kind of understand why the authors monitor the line of codes in both projects, but still do not completely understand how the number of line of codes, by themselves, can provide authors with such amount of information: whether products are comparable, that functionalities are the same, that using an SPL does not imply additional changes. For instance, it may easily happen, based on my experience, that for the same functionalities the number of lines of codes may simply be completely different and that doesn´t mean that the two systems offer different functionalities.
Our response:
Thank you for your comment regarding the 16 additional manuscript lines that explain the role of the LOC metric. Since we have a strong background not only in research but also in industrial development, we can only agree with your note on possible threats to validity. In addition to the extensive elaboration of the LOC role in the manuscript, done during the major revision, we also extended the Research method section with an additional statement:
In the first half of the development project, LOC metric was primarily used to verify if we can compare products (development effort and delivered functionalities were the same in both development teams). In the second half of the project, LOC was primarily a control variable to see if changes in delivered functionalities was a result of possible changes in particular team’s effort.
We additionally also clearly articulated the concern in Limitations section:
During the research we have used LOC metric carefully and systematically in both development teams in order to minimize the risk that comes with possible disconnection between LOC metric and delivered functionalities.
Manuscript changes:
• Revised Manuscript 248-252: Additional statement to clarify LOC,
• Revised Manuscript 727-729: Additional statement in Key findings and limitations.
Response: #2.03 (reviewer 2)
Your comment:
Response #9. SPL representation.
Well, I have to say that I disagree that the SPL is 'something conceptual and does not imply an implementation'. It is not only a concept, and it is not only to have a set of components that are added/removed from a system. Software Product Line Engineering is much more than this and provides clear phases of how to develop a software product line and clear artifacts that have to be defined/implemented in each phase, etc.
Said that I understand that authors have better explain now the mechanism that Android developers used to implement the variation points in the SPL.
Our response:
Thank you for your comment. We are happy that our explanation in additional 37 manuscript lines and improved figure is recognized as an important added value in explaining the nature of used SPL approach.
Response: #2.04 (reviewer 2)
Your comment:
Comment to Response #10. Better organize the findings.
I was not aware that the journal requested more text than tables or graphical elements. In any case, in my opinion, it is important that the results are presented and discussed in a format that helps to understand the conclusions. I still consider that the discussion about the obtained results is difficult to follow and it is very easy to get lost in the details.
Recommendation: Better explain some sections. For instance, in sections 5.4 and 5.5 authors merely refer to the table or figures, but an explanation of the results and the meaning of that results are missed. In general, when a table or figure is included this improves the readability so I completely agree with them, but they need to be referenced and explained in the text.
Our response:
Thank you for the comment. We are happy that you acknowledge our improvement in the discussion, made during the major review changes (conclusion reorganization, new subsection “Key findings”, bold emphasizes etc.). After re-reading we also understand why it is even after these changes sometimes hard to follow.
This is why we implemented your suggestions into the manuscript. They include:
• Results section (5 - 5.2, 5.3, 5.4 and 5.5 ) is referencing figures and tables even more strictly.
• In section 5.4 we now state and summarize the result data, that is found in referenced Table 3. We also double checked, that section 6.3 does not repeat the data heavily.
• We extended section 5.5 with clearly stating and summarizing data from Figures 7 and 8. The data was, however, stated and discussed in conclusions section 6.3 – which gave us an opportunity to shorten this subsection and reference the results to Results subsection 5.3. This is how Conclusion section is now shorter and more focused; all the results are gathered in section 5.
Manuscript changes:
• Revised Manuscript 410, 416, 434-435 – figure/table references,
• Revised Manuscript 452-457 – table 3 explained, result data are shown and described,
• Revised Manuscript 474-487 – figure references, result data are shown and described (moved from section 6),
• Revised Manuscript 633-635: Result data is moved to section 5, reference is made,
• Revised Manuscript 644-646: Result data is moved to section 5, reference is made.
" | Here is a paper. Please give your review comments after reading it. |
20 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As a promising next-generation network architecture, named data networking (NDN) supports name-based routing and in-network caching to retrieve content in an efficient, fast, and reliable manner. However, there are vulnerabilities in the NDN architecture that an attacker can exploit to perform a content poisoning attack (CPA). The poisoned data pollutes the in-network caches and, consequently, isolates the legitimate content in the network. There are many state-of-the-art mitigation strategies for the content poisoning attack, but with the advent of these schemes a new attack surface, resembling an interest flooding attack, has emerged. This attack surface needs to be reduced to make content poisoning attack mitigation schemes more effective, secure, and robust. This article's main contribution is the addition of a security mechanism to the CPA mitigation scheme which blocks the malicious face of compromised consumers by monitoring the Cache-Miss Ratio values and the Queue Capacity at the Edge Routers.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Named Data Networking (NDN) is a well-known and well-researched architecture for the next generation of the Internet, based on a data-centric approach. While the legacy network is based on a host-centric system, the NDN architecture has changed the Internet's communication model altogether (Jacobson et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>). It allows the distribution of data that can be acquired from any content router from the network. A content provider can produce the data in advance and place it as auxiliary storage that can be accessed by any consumer anytime, even if the producer gets offline. A producer does not have to be online, and a consumer does not have to be connected to the producer to fetch the data; Instead, the Consumer can acquire data through in-networking caches. While NDN increases content availability in the network via in-network caching, the integrity of content becomes critical, given NDN's nature (Tarkoma et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>). Hence, NDN opens several security-related issues that are not relevant to the legacy network communication. It includes some new types of data integrity attacks where a malicious or compromised node provides a corrupted copy of the content. These issues are often ignored in NDN-related communication and caching mechanisms and are our main focus in the paper.</ns0:p><ns0:p>One of the most critical attack vectors in NDN is the Content Poisoning Attack. The attacker compromises the Content Router(CR), and this compromised CR sends a reply to the legit request with totally bogus or corrupted content. This poisoned content pollutes the in-network caches of intermediate NDN routers and thus deprives the consumers of the requested content's legitimate copy. Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> proposed a comprehensive scheme to mitigate Content Poisoning Attack (CPA). A special interest packet is generated by the Consumer, which contains the hash of the poisoned data. This article is all about the identification and mitigation of security flaws that can be exploited by the attacker during this CPA mitigation process.</ns0:p><ns0:p>The research problem lies in the CPA mitigation scheme proposed by Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. A consumer PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>with malicious intent can flood the network with the Interest packet containing the hash digest of a legit or un-poisoned data. This hash is stored in its exclude filter field. During CPA mitigation, this packet can flood the network, which will enable multipath forwarding and on-demand verification of hash at the router. This flooding attack can severely affect the throughput of the network or even cause a denial of service for other legitimate consumers. So it's essential to mitigate and add this additional security feature along with CPA mitigation <ns0:ref type='bibr'>(Qureshi and</ns0:ref> Anjum <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>).</ns0:p><ns0:p>In this article, we proposed a scheme to detect the flooding attack generated by the compromised Consumer. A satisfaction test is performed to check if the excluded interest packet is non-existent in the cache or a legit packet. If the cache miss ratio (of the excluded interest packet) reaches the threshold value, it is considered an attack. A lightweight parameter is added to the Content Store data structure, which stores cache miss counter value. This value is compared with the specified threshold value. When the cache miss counter reaches near that threshold value, an event is raised that blocks the incoming malicious face. Also, in our scheme, we made the threshold value adaptable. At first initial threshold value is calculated by taking the total buffer size and divided it by the verification rate. The proposed idea is that when cache miss ratio avg crosses 50%, and queue capacity saturates, the threshold value is reduced to half. This process continues until the value is thrashed to one.</ns0:p><ns0:p>The articles' main contribution is the addition of a security feature that fills up the attack surface that can be exploited by the malicious Consumer. Our contributions are:</ns0:p><ns0:p>• Adjustment of the threshold value dynamically by monitoring the cache-miss ratio value and queue capacity.</ns0:p><ns0:p>• Detection and mitigation of the flooding attack of special interest packets generated while mitigating the content poisoning attack.</ns0:p><ns0:p>Further, this article is organized into five sections; the second section emphasizes the literature review and related work. The third section is the proposed approach, and in the fourth section, experiments and results are highlighted along with the conclusion in the fifth section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>RELATED WORK</ns0:head><ns0:p>Any network's primary goal is to share web content, including photographs, texts, and videos.</ns0:p><ns0:p>Implementing security standards and goals such as confidentiality, integrity, and accessibility can ensure robust and flawless communication. Privacy guarantees that only the approved individual shall access the data. Integrity means that the receiver's received data must be similar to the one sent by the sender. Availability ensures that network infrastructure should be available for an authorized user whenever he needs the service (Wein et al. <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>). Kumar et al. <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> and Hassanein and Zulkernine <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> explained some of the most common attacks within the existing TCP / IP model such as Denial of Service (DoS) attack, Distributed Denial of Service (DDoS) attack, eavesdropping (snooping), masquerading, TCP Replay Attack, Man in the Middle Attack, repudiation, and traffic analysis attack. These legacy attacks are not possible in NDN because of the absence of the host, but with the advent of this new architecture, some new attack surfaces has emerged which need to be addressed and it is an active research area.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>NDN's Data-Centric Security and Security Issues in NDN</ns0:head><ns0:p>At the network layer of NDN, data-centric security is mandated via a digital signature on each data packet. A digital signature is added by the content provider (producer) to every data packet associating the data to the packet name when data is being generated. Authentication can be performed by the consumer on the data packet by verifying the signature using the Content Providers' public key. This authentication can be performed even if the data is retrieved from some other entity other than the content provider, <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>.</ns0:p><ns0:p>Zhang et al. <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> stated that If a content providers' public key is not distributed or the consumer has no information about this public key, in that case, the data producer places the signing key name into the specific field of the data packet. It is known as the KeyLocator field. A consumer can acquire a public key by following this field of KeyLocator and can retrieve it just like a normal attack, eavesdropping (snooping), masquerading, TCP Replay Attack, Man in the Middle Attack, repudiation, and traffic analysis attack. In a modification attack, the attacker does not only compromise the confidentiality of the data by accessing it but also compromises the integrity of the data by trying to alter it. However, this attack is not possible in NDN, because each piece of data is signed by the publisher, which the consumer can verify. However, if the router itself is compromised and alters the data packet, then a corrupted data packet may be sent to the consumer. Consumers after receiving the publishers' public key can validate this corrupted data.</ns0:p><ns0:p>In a masquerading attack, the attacker masks his identity and impersonate to be another person so he/she can acquire some useful information about that person. However, this attack is also not possible in NDN because every piece of data chunk is signed by the publisher using his/her private key. In a replay attack, the attacker performs Man in the Middle attack and tries to get a copy of the message from the sender, then after modifying the message and he/she sends it to the receiver. The recipient assumes that the actual sender has forwarded the message but in fact, it is the modified message from the attacker with malicious intent. This type of attack is also not possible in NDN because the interest packet is identified by the name and for the uniqueness of the namespace in the network, a nonce is used. When the same interest packet reaches the router (with the same name and nonce), the router assumes the packet is duplicate and it is replayed; it will, therefore, be purged from the PIT table. NDN, therefore, protects itself at the network layer level from the replay attack. In NDN architecture, some inherent security features protect us from some of the legacy security attacks by default but still there are some emerging security concerns in this new architecture that needs to be addressed. Security, privacy, and access control are the three major domains that need to be covered in NDN architecture. Several attacks are possible in NDN such as Content Poisoning attack, Content pollution attack, Naming Attack, and Denial of Service attack. 
Privacy concerns can be classified into five categories: content privacy, signature privacy, client privacy, name privacy, and cache privacy <ns0:ref type='bibr' target='#b4'>[5,</ns0:ref> <ns0:ref type='bibr' target='#b9'>10,</ns0:ref> <ns0:ref type='bibr' target='#b8'>9,</ns0:ref> <ns0:ref type='bibr' target='#b6'>7]</ns0:ref>.</ns0:p><ns0:p>In access control, the mechanisms that need to be addressed are content encryption, content attributes, clients' identity, and authorized sessions.</ns0:p></ns0:div>
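To make the per-packet, data-centric security model described in this subsection concrete, the following minimal Python sketch signs a data packet (name plus content) with a producer key and verifies it on the consumer side. The packet layout, the field names, and the use of the cryptography library's Ed25519 primitives are illustrative assumptions on our part, not part of any NDN implementation.

```python
# Minimal illustration of NDN-style data-centric security: the producer binds the content
# to its name with a digital signature; any consumer holding the producer's public key
# (e.g. located via the KeyLocator field) can verify the data packet, regardless of which
# cache or router it was retrieved from.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

producer_key = Ed25519PrivateKey.generate()
public_key = producer_key.public_key()           # distributed via KeyLocator / trust model

def make_data_packet(name: str, content: bytes) -> dict:
    signed_portion = name.encode() + content     # signature covers name + content
    return {"name": name, "content": content,
            "signature": producer_key.sign(signed_portion),
            "key_locator": "/producer/KEY/1"}     # hypothetical key name

def verify_data_packet(packet: dict) -> bool:
    try:
        public_key.verify(packet["signature"], packet["name"].encode() + packet["content"])
        return True
    except InvalidSignature:                      # corrupted or poisoned content
        return False

pkt = make_data_packet("/videos/match/highlights/seg0", b"...payload...")
print(verify_data_packet(pkt))                    # True
pkt["content"] = b"tampered"                      # a compromised router altering the data
print(verify_data_packet(pkt))                    # False
```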
<ns0:div><ns0:head n='1.2'>Attack Types in NDN</ns0:head><ns0:p>In NDN there are four main types of security threats that are briefly discussed in the coming sections, <ns0:ref type='bibr' target='#b5'>[6,</ns0:ref><ns0:ref type='bibr' target='#b10'>11]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.1'>Flooding Attack of Interest Packet</ns0:head><ns0:p>Benmoussa et al. <ns0:ref type='bibr' target='#b11'>[12,</ns0:ref> <ns0:ref type='bibr' target='#b12'>13]</ns0:ref> explained in detail the effects of an interest flooding attack, in which an attacker depletes network resources by flooding the network with large batches of interest packets. The PIT, the network bandwidth, and the availability of producer resources to legitimate users are compromised by this attack. The attack consumes NDN resources and thereby restricts legitimate users from accessing them.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.2'>Cache Pollution Attack</ns0:head><ns0:p>Wang et al. <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> discussed the anatomy of the cache pollution attack, in which the attacker attempts to fill the NDN router's cache with unwanted content by requesting data packets that are unpopular and not in demand. As a result, the effectiveness of the NDN router's cache decreases, and the cache hit ratio for the interest packets of legitimate users drops. This increases latency and reduces the throughput of the network.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.3'>Cache Privacy Attack</ns0:head><ns0:p>During an assault on cache privacy, the attacker wants to figure out whether or not the sensitive data has been accessed recently. A newly accessed item lies in the routers' cache and the requester gets a quick response of these types of data. The intruder compiles a list of content that is vulnerable to privacy and asks them one by one to know whether it is cached or not by noticing the delay in retrieving the content. If the content is identified, the attacker can conclude that a user or a group of users has recently accessed the content. The adversary will know the user's access pattern using this technique. The content type that is accessed and other related information will also be vulnerable to privacy. One of the most crucial attack vector in NDN is Content Poisoning Attack. In CPA, the attacker compromises the router, and this malicious router sends a reply to the legitimate request with totally bogus or corrupted content. These contents of intermediate routers that are involved in NDN communication are stored in CS. This poisoned content spreads when other legitimate consumers request the same content. Content in NDN are of three types, i.e., legit contents, fake or poisonous contents, and corrupted contents. A valid signature of valid content is generated through the private key of a legit publisher. Similarly, a valid signature of fake content can also be generated with any private key that is not associated with the publisher's namespace. Whereas the corrupted content does not have a valid signature. (Ullah et al. <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>) In a Content Poisoning Attack, an attacker takes over a router and replies to incoming interests with corrupted content.</ns0:p><ns0:p>Wu et al. <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> explained that if a consumer requests this corrupted content, it will spread this malicious content on intermediate routers' content stores. It will result in the spreading of this poisonous content all over the network. This verification is usually performed by Consumers who use the content verification process using the content's signature. In NDN, every router can verify the arriving contents on its own, but this verification at line speed takes resources, and it is impractical. P.Gasti et al. <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref> described two ways through which a content poisoning attack can be carried out. The first way is that the attacker compromises the routers, spreading the poisoned content while satisfying the requested interest packets. The second way is that poisoned content is distributed via compromised publishers. Compromised publishers can anticipate the Data that will be in high demand, e.g., highlight a famous football match, and create malicious content. So in this way, a compromised producer or router can reply with a malicious data packet against a legitimate interest packet.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3'>CPA Detection and Mitigation Approaches</ns0:head><ns0:p>Content Poisoning Attack can be detected and mitigated through two major approaches, Collaborative Signature Verification and Consumer Dependent approach. The former method is those in which NDN routers collaborate to verify the content's signature. The latter method uses extra fields in the Interest and Data packets or uses clients' feedback.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3.1'>Mitigation of CPA using Consumer Dependent Approach</ns0:head><ns0:p>As per NDN specification, a consumer verifies all the signatures of the requested data packets. So a feedback-based approach is used to verify the content at the router (P.Gasti et al. <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>).</ns0:p><ns0:p>This approach is the extended version of the NVF technique, as discussed in the previous section.</ns0:p><ns0:p>However, this approach has some new challenges, such as there is no trust relationship between the router and the consumers. Consumers can also be compromised, and in this way, false feedback can consume network resources. Ghali et al. <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> proposed a technique for content ranking calculation and stored in the exclude field of the interest packet, and the range of the values are between 0 and 1. New content is ranked 1, which gets downgraded if rated by consumers and included in the excluded field of the Consumer. This approach is somewhat similar to the technique mentioned in P.Gasti et al. <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>, so it has the same limitations. Ghali et al. <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> highlighted some of the NDN architecture vulnerabilities, such as the PPKD field and name's digest are not the essential components of the Interest packet. Also, no such trust model is adopted unanimously by the Consumer's applications to fetch the content's hash securely.</ns0:p><ns0:p>Based on these vulnerabilities, a technique is proposed, which enables an IKB rule to ensure trust.</ns0:p><ns0:p>According to this rule, the Interest packet must include the producer's (content publisher's) public key. It is also implied that producers should also have the public key in the Data Packets' KeyLocator field. Its implication on the router is that it should calculate the hash of the public key of the content received and compare it with the PPKD field against its PIT entry. Upon mismatch, the content is discarded but otherwise verified. Upon successful verification, content is forwarded and stored in the content store of that particular router. Yue et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> stated that IKB implication for consumers is that it has to acquire and validate the content provider's public key before initiating the Interest packet for that specific data packet. Trust model can be acquired using three approaches: public keys of the content provider should be installed in the client application, the second one is the universal key name service, and the third one is global Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>search-based service. Also, to reduce the core routers' workload, the author has proposed that an Interest Key Binding check on the data packet should be performed at the edge routers. In contrast, core routers should perform this check probabilistically. The cons of this approach are that it is assumed that verifying the router is trusted, but it can verify the bogus IKB to be correct if it is malicious. So this scheme lacks scalability and has overhead. DiBenedetto and Papadopoulos <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> proposed an approach in which consumers, upon verification failure, send a report packet, which will act as feedback to the other entities of the NDN Network.</ns0:p><ns0:p>When consumers detect a poisoned content, a special interest packet is generated by the network stack, and the information regarding the poisoned content is stored in this special report packet.</ns0:p><ns0:p>When the router receives this special interest packet, it acts as one of the two proposed mitigation options that the author proposed. One is Immediate Failover, and the second one is Probe First.</ns0:p><ns0:p>In the first approach, the malicious face is marked with a low priority value for the future. And in the probe first technique, the node, upon receiving the special interest packet known as report packet, stops forwarding the interest packets of the namespaces on which the attack is underway. Also, that particular node informs their next-hop routers about this malicious namespace.</ns0:p><ns0:p>Nguyen et al. <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> explained three vulnerabilities in NDN architecture; the first one is unregistered remote provider, then multicast forwarding and the last one is the best route forwarding. The first vulnerability is that the interest packet can be satisfied with any data packet received from any of the faces. Therefore, a malicious producer can induce malicious content and satisfy it before it gets satisfied by the legit producer. In NDN, faces are registered in the FIB table's corresponding values, so while doing multicast forwarding, the interest packet is forwarded to all these faces. So, it is quite possible that malicious producers can satisfy the interest packet with its malicious content. A router ignores a similar interest packet in the best route forwarding with the same name and selectors but different nonce when received during the suppression interval of retransmission. The interest received after this interval shall be transferred via the next lowest possible cost; thus, an interest packet can be satisfied with a malicious producer's poisoned contents.</ns0:p><ns0:p>Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> proposed a comprehensive system to mitigate CPA, and this article is all about identifying security flaws and proposing a mitigation strategy to address this flaw in this system.</ns0:p><ns0:p>In the following sections, this base system is elaborated in detail. This system is comprised of three phases. First is the route building phase, then there is a normal content retrieval phase, and the last one is the recovery phase in chase content poisoning. It is required that NDN routers should enable name-key-based forwarding to forward interest towards registered content sources, and to specify legitimate content sources, every route advertisement should be authenticated with a trust management system. 
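The interest-key binding idea discussed above, where a PPKD field carrying the hash of the producer's public key is checked against the key returned in the Data packet's KeyLocator, can be sketched as follows. The field names and helper functions are illustrative assumptions, not code from the cited papers.

```python
# Sketch of an IKB/PPKD check at a router: the hash of the public key carried in the
# returned Data packet's KeyLocator must equal the PPKD recorded from the Interest
# (and kept in the PIT entry); otherwise the content is treated as poisoned and dropped.
import hashlib

def ppkd(public_key_bytes: bytes) -> str:
    return hashlib.sha256(public_key_bytes).hexdigest()

def ikb_check(pit_entry: dict, data_packet: dict) -> bool:
    """pit_entry['ppkd'] was stored when the Interest was forwarded."""
    return pit_entry["ppkd"] == ppkd(data_packet["key_locator_key"])

pit_entry = {"name": "/news/today", "ppkd": ppkd(b"producer-public-key-bytes")}
good = {"name": "/news/today", "key_locator_key": b"producer-public-key-bytes"}
fake = {"name": "/news/today", "key_locator_key": b"attacker-public-key-bytes"}
print(ikb_check(pit_entry, good))   # True  -> verify signature, cache and forward
print(ikb_check(pit_entry, fake))   # False -> discard the poisoned Data
```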
If content poisoning occurs on intermediate routers, a 'Multipath Forwarding based Inband Probe' mechanism is performed. In this approach, an interest packet with an exclude filter (carrying the hash of the poisoned content) is reissued and forwarded via alternate paths. When this packet reaches a particular router, it enables on-demand signature verification of cached contents. The cached content is verified against the malicious payload referenced in the interest packet's exclude filter, or against the Data packet that is returned and matched with the reissued interest packet. There are two benefits of this approach: first, with multipath forwarding there is a good chance that consumers will acquire the legitimate content, while legitimate content can be restored on the intermediate routers via the alternative forwarding options. This way, poisoned contents are purged and, for future requests, legitimate contents are returned from the routers' caches. Thus it increases the overall throughput of the network.</ns0:p></ns0:div>
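The router-side handling of such a reissued probe can be sketched in a few lines. This is our own illustrative reconstruction of the behaviour described above (content-store layout, function names and the stubbed signature check are assumptions), not the authors' implementation.

```python
# Sketch: a router receiving a reissued Interest with an exclude filter verifies the
# suspected cached entry on demand, purges it if poisoning is confirmed, and forwards
# the probe over the alternative faces so a clean copy can be fetched and re-cached.
import hashlib

def content_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def handle_reissued_interest(content_store: dict, interest: dict, alternate_faces: list):
    cached = content_store.get(interest["name"])
    if cached is not None and content_digest(cached["payload"]) == interest["exclude_digest"]:
        # On-demand verification of the suspected entry (signature check stubbed here).
        if not cached.get("signature_valid", False):
            del content_store[interest["name"]]        # purge the poisoned copy
    # Multipath forwarding: probe the other registered upstream faces for a clean copy.
    return [(face, interest) for face in alternate_faces]

cs = {"/videos/clip": {"payload": b"bad", "signature_valid": False}}
probe = {"name": "/videos/clip", "exclude_digest": content_digest(b"bad")}
print(handle_reissued_interest(cs, probe, ["face-2", "face-3"]), cs)   # poisoned entry purged
```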
<ns0:div><ns0:head n='1.4'>Comparisons of CPA Mitigation Approaches</ns0:head><ns0:p>Table 1 is a summarized view of the CPA mitigation approaches discussed in the previous sections.</ns0:p><ns0:p>Based on the analysis of existing techniques and work to detect and mitigate CPA, there is still a need to sort out some challenges while developing a CPA mitigation strategy (Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>).</ns0:p><ns0:p>Energy management in routers is an important issue. Gao et al. <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> evaluated that CPA and caching issues can consume a considerable amount of the routers' energy, which can add instability to the whole system. Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> have implemented a robust and efficient mechanism to mitigate the CPA. In Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, we have identified vulnerabilities in the content poisoning attack mitigation schemes discussed in the previous sections of this article. In the following section, we explore how these vulnerabilities can be exploited and a mitigation strategy is proposed, which is
In case a cache miss (of the excluded interest packet) occurs, and the ratio reaches near the threshold value, i.e., it is set by Manuscript to be reviewed Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>PROPOSED APPROACH</ns0:head><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:p>the operator, it is considered an attack. On-demand verification at the router is not enabled unless there is a cache hit for the excluded interest packet; this reduces the overhead of content verification at the router for each data packet. In the case of a cache miss, the excluded interest packet is discarded. Still, if a consumer with malicious intent floods the edge router with fake interest packets carrying the exclude filter, it will degrade that particular edge router's performance.</ns0:p><ns0:p>The NDN-router service manager, especially at the edge of the network in the consumer domain, maintains and monitors these statistics. Upon hitting the threshold value, the router considers the sender a malicious consumer and drops future reissued interests with the exclude filter coming from this face. This is done temporarily, and the face is delisted at the discretion of the network operator. A new lightweight parameter is added to the CS data structure to retain the cache miss counter of invalid reissued Interest packets with the exclude filter field. This value is compared with the threshold value. A bird's-eye view of the proposed mechanism is shown in the block diagram in figure <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>.</ns0:p><ns0:p>We have introduced a block of the proposed approach. A reissued Interest packet that causes several cache misses and hits the specified threshold value triggers an event that blocks the malicious face. On the next iteration, reissued packets from the malicious consumer face are blocked. In Algorithm 4 (Figure <ns0:ref type='figure' target='#fig_9'>3</ns0:ref>) the PPKD, ContentName, nonce, incoming face, exclude filter field value, threshold value and cache miss counter value are passed as arguments. At statement 1, a hash comparison is performed; if the result is a cache miss, the cache miss counter value is incremented. If that value reaches the threshold value, an event is triggered to block that specific malicious incoming face. If the result is a cache hit, the normal NDN communication process commences. This approach helps network operators set the threshold value automatically during a special interest packet flooding attack by a malicious consumer. It aims to select the threshold value in an automated fashion based on statistical monitoring of the buffer capacity and the cache miss ratio. In this approach, network management software continuously monitors the cache miss ratio and buffer capacity when a special interest packet is initiated. When the cache miss ratio average over a period of time results in a buffer overflow, the threshold value is halved. This process continues until the threshold value becomes 1. The mechanism is elaborated in Algorithm 5 (Figure <ns0:ref type='figure' target='#fig_10'>4</ns0:ref>). At this stage, the incoming face causing the flooding attack is blocked until a particular timeout.</ns0:p><ns0:p>InitTH = QueueSize / VerificationRate (1)</ns0:p><ns0:formula xml:id='formula_0'>Cache_Miss_Ratio = CM / (CM + CH)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>The network management software continuously monitors the Cache_Miss_Ratio and the buffer size of the queue.</ns0:p></ns0:div>
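A compact sketch of the per-face bookkeeping outlined in Algorithm 4 is given below. It is our own illustrative reconstruction (data-structure and function names are assumptions), not the authors' simulator code.

```python
# Illustrative reconstruction of Algorithm 4: on a reissued Interest with an exclude
# filter, a cache miss increments a per-face counter; reaching the threshold blocks the
# incoming (malicious) face, while a cache hit lets normal CPA mitigation proceed.
from collections import defaultdict

class EdgeRouter:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.cache_miss_counter = defaultdict(int)   # one counter per incoming face
        self.blocked_faces = set()
        self.content_store = {}                      # PPKD/content hash -> cached data

    def on_reissued_interest(self, face_id: str, exclude_digest: str) -> str:
        if face_id in self.blocked_faces:
            return "dropped (face blocked)"
        if exclude_digest in self.content_store:          # cache hit: genuine poison report
            return "cache hit: verify on demand, purge if poisoned, multipath forward"
        self.cache_miss_counter[face_id] += 1             # cache miss: suspicious probe
        if self.cache_miss_counter[face_id] >= self.threshold:
            self.blocked_faces.add(face_id)               # raise the blocking event
            return "face blocked"
        return "discarded (cache miss)"

router = EdgeRouter(threshold=3)
for _ in range(4):
    print(router.on_reissued_interest("consumer-face-1", "hash-of-unpoisoned-data"))
```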
<ns0:div><ns0:head n='2.2'>Benefits of Dynamic Threshold Values over Static Threshold Values</ns0:head><ns0:p>The mitigation of the flooding attack of special interest packets works with two approaches. The first uses a static threshold value, which is set by the network operators during the router's initial configuration. The second is the dynamic approach, in which the threshold value is adjusted adaptively by monitoring the queue size and the cache miss ratio value.</ns0:p></ns0:div>
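The adaptive variant (Algorithm 5 together with equations (1) and (2) above) can be sketched as a small monitor that halves the threshold whenever the reissued-interest queue saturates while the average cache-miss ratio exceeds 50%, down to a floor of 1. Again, this is an illustrative sketch under our own naming assumptions.

```python
# Illustrative monitor for the dynamic threshold: InitTH = QueueSize / VerificationRate,
# halved (down to 1) whenever the cache-miss ratio average exceeds 50% and the
# reissued-interest queue saturates; at threshold 1 the offending face is blocked quickly.
class ThresholdMonitor:
    def __init__(self, queue_size: int, verification_rate: int):
        self.queue_size = queue_size
        self.threshold = max(1, queue_size // verification_rate)    # equation (1)
        self.cache_misses = 0
        self.cache_hits = 0

    def record(self, hit: bool):
        if hit:
            self.cache_hits += 1
        else:
            self.cache_misses += 1

    def cache_miss_ratio(self) -> float:
        total = self.cache_misses + self.cache_hits
        return self.cache_misses / total if total else 0.0          # equation (2)

    def adjust(self, queue_occupancy: int) -> int:
        if queue_occupancy >= self.queue_size and self.cache_miss_ratio() > 0.5:
            self.threshold = max(1, self.threshold // 2)            # multiplicative decrease
        return self.threshold
```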
<ns0:div><ns0:head n='3'>EXPERIMENTAL RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Simulation Environment</ns0:head><ns0:p>For proof of concept and to run this scenario, a custom-built NDN Simulator is developed in C# language in Visual Studio 2019. The network parameters used in simulation scenarios are mentioned in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. In scenario 1, our simulations' network topology consists of two routes from the Consumer to the producer. Two paths routes that are used in this scenario are 0-1-2-4-6-7-8 and 0-1-3-5-7-8; these paths are between the consumer and a producer (Spring et al. <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>). In this scenario, it is evident that consumers with malicious intent can flood the network with unwanted interest packet with excluded field occupied by the non-malicious or legit payload. If not mitigated at the edge router, all the routers will enable the on-demand verification, and this way, router performance will degrade with time. This problem can be mitigated by enabling a mechanism at edge routers of NDN and setting a threshold value that if it hits this value, block that interface through which these malicious excluded interest packets are coming. This way, the rest of the network will be safe from acquiring this malicious packet from consumers, and ultimately the performance of the intermediate routers will not be degraded. So to handle this issue Network Manager at NDN Edge Router enables this mechanism in which malicious interest packet with exclude field is dropped in case of a cache miss, and upon hitting the threshold value, the interface from which these excluded interest packets are received is blocked and added to the delist data structure.</ns0:p><ns0:p>The timeout to get out of this delist data structure is at the desecration of the network operator. In scenario 2, our simulations' network topology consists of two routes from two consumers (i.e., Consumer 1 and Consumer 2) of the same domain to the producer via Router 8 (edge router).</ns0:p><ns0:p>The routes that are used in this scenario is 0-1-2-4-6-7-8 and 0-1-3-5-7-8; these paths are between the consumer and a producer (Spring et al. <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>) The main thing to note in this scenario is that Consumer 1 and Consumer 2 are in the same domain. Router 8, the virtual queue for Incoming Reissued Interests, is shared between these two consumers. The Queuing mechanism used in this scenario is FIFO. There are two consumers with malicious intent in this scenario and can flood the network with unwanted interest packet with excluded field occupied by the non-malicious or legit payload. If not mitigated at the edge router, the virtual queues will be fully occupied for the legit reissued interest packet, and consequently, packets will drop. This problem can be mitigated by enabling a mechanism at edge routers of NDN and setting a threshold value that if it hits this value, block that interface through which these malicious excluded interest packets arrive. This way, the rest of the network will be safe from acquiring this malicious packet from consumers, and ultimately the performance of the intermediate routers will not be degraded. In this experiment as shown in Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref>, we have calculated the cache miss ratio of the interest packet containing the exclude filter and compared it with the Queue Length. 
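The two forwarding paths used in the simulated topology (0-1-2-4-6-7-8 and 0-1-3-5-7-8) can be expressed in a few lines for readers who want to reproduce a similar setup. The paper's own simulator is a custom C# program, so this NetworkX snippet is only an equivalent illustration, not the authors' code.

```python
# Rebuild the simulated topology: consumer at node 0, producer at node 8, with two
# alternative paths through the intermediate routers (used for multipath forwarding).
import networkx as nx

topology = nx.Graph()
nx.add_path(topology, [0, 1, 2, 4, 6, 7, 8])   # path 1: 0-1-2-4-6-7-8
nx.add_path(topology, [0, 1, 3, 5, 7, 8])      # path 2: 0-1-3-5-7-8

print(sorted(nx.all_simple_paths(topology, source=0, target=8)))
```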
Upon flooding the router with fake interest packet, the verification process takes time, and meanwhile, the queue of interest packets will start increasing. After every second, 25% fake packet will drop, and 75% will be added to the queue. Initially, no threshold value is set. After some time, congestion at the router's incoming interest packet queue will occur, resulting in a drop of other future packets at this router. In the second experiment as shown in Figure <ns0:ref type='figure' target='#fig_15'>6</ns0:ref>, our proposed scheme is enabled at the edge routers in Network Management software. After several cache misses and upon hitting the threshold value to 3 according to the simulation settings, it'll block the incoming face of the Consumer, and further, no more interest packets will be received from this malicious consumer face. After hitting the specified threshold value, the face is blocked and fake packets begins to drop from the queue. At 12 seconds the queue will be completely empty and router is no more saturated.</ns0:p></ns0:div>
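The queue dynamics reported for these two experiments can be approximated with a toy discrete-time loop: each second a batch of fake packets arrives, the router verifies (and drops) packets at a slower rate, and, when the proposed scheme is enabled, the malicious face is blocked once its cache-miss count reaches the threshold of 3. The arrival and verification numbers below are illustrative placeholders, not the exact simulation parameters.

```python
# Toy reproduction of the described behaviour: without blocking, the queue of fake
# reissued Interests keeps growing because arrivals outpace verification; with the
# proposed scheme, the face is blocked after 3 cache misses and the queue drains.
def simulate(seconds, arrivals, verified, threshold=None):
    queue, misses, blocked, history = 0, 0, False, []
    for _ in range(seconds):
        if not blocked:
            queue += arrivals                      # fake packets enqueued this second
            misses += 1                            # each batch causes cache misses
            if threshold is not None and misses >= threshold:
                blocked = True                     # malicious face blocked
        queue = max(0, queue - verified)           # on-demand verification drains slowly
        history.append(queue)
    return history

print(simulate(12, arrivals=40, verified=10))               # queue keeps growing
print(simulate(12, arrivals=40, verified=10, threshold=3))  # drains to empty by second 12
```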
<ns0:div><ns0:head n='3.3'>Experiments and Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.3'>Experiment 3 (Scenario 2): With No Threshold Values</ns0:head><ns0:p>In the third experiment, shown in Figure <ns0:ref type='figure' target='#fig_17'>7</ns0:ref>, Consumer 1 starts flooding the network with fake interest packets carrying the exclude filter; the queue begins to saturate because the verification rate is lower than the flooding rate. Initially no threshold value is set, and at the 9th second congestion occurs at the router's incoming interest packet queue, which results in the drop of further packets at this router. It is evident in the experiment that as the Cache_Miss_Ratio increases, the queue size increases, because the flooding rate is greater than the verification rate. A cache miss also has a penalty on the router's processor, which increases the processing overhead. In the graph (Figure <ns0:ref type='figure'>9</ns0:ref>), the initial threshold value is set to the buffer size divided by the packet verification rate.</ns0:p><ns0:p>At the 7th second the queue is filled up to 100 percent. At this stage, new packets start to drop. Here, the system should act prudently and reduce the threshold value to half of its current value; if flooding continues, the threshold value is halved again, as shown in Figure <ns0:ref type='figure' target='#fig_20'>10</ns0:ref>, and so on until the value is reduced to 1. At this stage, the incoming face is blocked, as the traffic is considered an attack. The queue is then no longer saturated, and memory is available for other interest packets to be processed. If the flooding attack continues, we multiplicatively decrease the threshold value by another half (Figure <ns0:ref type='figure'>9</ns0:ref> shows the behaviour without the dynamic threshold value). This mechanism continues against that particular flooding malicious face until the threshold value reaches 1. At this stage, that particular face is blocked and considered a malicious face. The face remains blocked until a timeout whose value is at the network operator's discretion. The simulation scenario is depicted in the corresponding figure. At first, the queue occupied only a small fraction of the total capacity, which is the desired result. In the subsequent seconds, the malicious packets entered the router and the queue capacity started to saturate; the proposed solution then got activated and blocked the malicious face when the cache miss counter reached the threshold value. We can see that, in our simulation environment, after the 3rd second malicious packets no longer entered the router queue, the throughput of the normal interest packets started to rise, and other factors such as processing overhead and queue capacity ratio returned to the normal working range. This scenario is depicted in Figure <ns0:ref type='figure' target='#fig_22'>12</ns0:ref>.</ns0:p><ns0:p>The 2000 malicious packets that were bombarded were all detected and dropped successfully by our system, so its accuracy proved to be 100%. The 1000 legitimate special Interest packets were processed and no packet was dropped.</ns0:p></ns0:div>
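The dynamic-threshold behaviour discussed above (Figures 9 and 10) boils down to a simple control rule: start from the buffer size divided by the verification rate, halve the threshold whenever the cache-miss ratio stays high while the queue saturates, and block the face once the threshold reaches 1. The Python sketch below only illustrates that rule under assumed per-second samples; the identifiers are ours, not the simulator's.

# Sketch of the multiplicative-decrease rule behind the dynamic threshold.
BUFFER_SIZE = 1000   # queue capacity used in Experiments 3 and 4 (Table 3)
VERIFY_RATE = 25     # packets verified per second

def adapt_threshold(samples):
    """samples: per-second (cache_miss_ratio, queue_occupancy_ratio) measurements."""
    threshold = BUFFER_SIZE // VERIFY_RATE          # initial threshold (40 here)
    for miss_ratio, occupancy in samples:
        flooding = miss_ratio > 0.5 and occupancy >= 1.0
        if flooding and threshold > 1:
            threshold = max(1, threshold // 2)      # halve under sustained flooding
        if flooding and threshold == 1:
            return "block_face"                     # treat the incoming face as malicious
    return threshold

# Sustained flooding drives the threshold 40 -> 20 -> 10 -> 5 -> 2 -> 1 -> face blocked.
print(adapt_threshold([(0.9, 1.0)] * 8))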
<ns0:div><ns0:head n='3.6'>Qualitative Comparison</ns0:head><ns0:p>Two major categories are defined for mitigating content poisoning attacks: one uses a Collaborative Approach and the other a Consumer Dependent Approach. Our proposed mechanism belongs to the consumer dependent category, so we compare the security and performance aspects of our approach with the other approaches that fall in the Consumer Dependent category of CPA mitigation. Please refer to Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> for the qualitative comparison.</ns0:p><ns0:p>DiBenedetto et al. in <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> used an evasion scheme called 'Immediate Failover' to mitigate a content poisoning attack; this scheme reduces the rating of the next-hop node that brought back the malicious content for future interest packets. The scheme has a flaw: it also blocks legitimate producers that are located adjacent to the malicious producer. Moreover, the Report packet generated by the consumer is a heavyweight packet that carries a complete payload along with cryptographic keys. During the evasion process, the signature is verified on every node, which has a large performance impact on the whole NDN system.</ns0:p><ns0:p>Similarly, Gasti et al. in <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref> and Ghali et al. in <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> also use a neighbouring verification feedback scheme which may involve the consumer as well. If a consumer is compromised, this approach induces new challenges: there is no trust relationship between the router and the consumers, so a compromised consumer can generate false feedback that consumes network resources and disrupts normal NDN operations.</ns0:p><ns0:p>In contrast, our special interest packet is very lightweight, as it contains only the hash of the bogus data, which is verified against the PPKD value held in the content store. Moreover, only one verification is required at the router, since the PPKD in the content store is simply compared with the hash carried in the special interest packet.</ns0:p><ns0:p>Finally, our scheme also handles the malicious interest packet with an exclude filter field: it is dropped in case of a cache miss, since the reported malicious data is not present in the content store, and the cache-miss counter kept against the face is incremented. The interface from which these excluded interest packets are received is blocked once the counter hits the threshold value, and this face is added to the delist data structure.</ns0:p><ns0:p>Thus our proposed approach covers most of the attack surfaces, which the other schemes fail to achieve. Also, compared to the approach by DiBenedetto et al. in <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>, our special interest packet is lightweight and keeps the mitigation process lightly loaded over the network.</ns0:p><ns0:p>Only a PPKD value is added to the NDN stack.</ns0:p></ns0:div>
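To illustrate why the comparison above calls the proposed reporting packet lightweight, the router-side check can be reduced to one hash comparison against the Content Store plus a per-face counter and a delist structure. The Python sketch below is a simplified assumption of that logic, not the authors' implementation; the content names and hash strings are made up.

# Simplified router-side handling of a special (exclude-filter) interest packet.
THRESHOLD = 3
miss_counter = {}    # per-face cache-miss counters
delist = set()       # temporarily blocked faces (freed after an operator-defined timeout)

def handle_special_interest(face, name, excluded_hash, content_store):
    if face in delist:
        return "dropped: face is blocked"
    if content_store.get(name) == excluded_hash:
        del content_store[name]                          # cache hit: purge the reported poisoned entry
        return "poisoned entry purged, interest forwarded upstream"
    miss_counter[face] = miss_counter.get(face, 0) + 1   # cache miss: count it against the face
    if miss_counter[face] >= THRESHOLD:
        delist.add(face)
        return "dropped: threshold reached, face added to the delist"
    return "dropped: cache miss"

cs = {"/video/seg1": "hash-of-poisoned-data"}
print(handle_special_interest("face-7", "/video/seg1", "hash-of-legit-data", cs))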
<ns0:div><ns0:head n='4'>CONCLUSION AND FUTURE DIRECTION</ns0:head><ns0:p>The main contribution of this work is to devise a mechanism that identifies and prevents compromised consumers from flooding the network with the special Interest packets that are generated during the mitigation process of a Content Poisoning Attack. The compromised consumers place the hash of un-poisoned content in the exclude filter field of the special interest packet, which causes a cache miss at the edge router. A bombardment of these special Interest packets tremendously increases the processing overhead on the NDN router; the cost is in terms of cache-miss penalty and verification overhead. The queue capacity of the NDN router also gets saturated. Consequently, the legitimate requests from other consumers get dropped or face substantial delays. We also observed the damaging effect of multiple malicious consumers flooding the edge router, which was likewise handled well by the proposed technique. After the implementation of our scheme in the Network Service Manager at the NDN edge router, the malicious face is blocked when the cache-miss ratio value reaches the specified threshold value. We have also made the threshold value dynamic by adjusting the initial threshold according to the cache-miss ratio and queue capacity values.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>2 / 19 PeerJ</ns0:head><ns0:label>219</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021) Manuscript to be reviewed Computer Science data packet. Kumar et al. [6] explained some of the most common attacks within the existing TCP / IP model such as Denial of Service (DoS) attack, Distributed Denial of Service (DDoS)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>3 / 19 PeerJ</ns0:head><ns0:label>319</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021) Manuscript to be reviewed Computer Science 1.2.4 Content Poisoning Attack</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>4 / 19 PeerJ</ns0:head><ns0:label>419</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>5 / 19 PeerJ</ns0:head><ns0:label>519</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>2. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Introduction The name-key based forwarding and Multipath forwarding based Inband probe is very comprehensive scheme for mitigation of the CPA. It fills most of the attack surfaces regarding the Content Poisoning Attack. However, with the advent of the NDN architecture's structural changes, it has induced a new attack vector that can be exploited by the adversary. With this attack, the whole system can collapse. So it is very crucial to highlight this aspect. One of the important attack vectors that have emerged with this technique is the flooding of the reissued 6/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Detection of Flooding Attack during Content Poisoning Attack Mitigation</ns0:figDesc><ns0:graphic coords='8,158.28,63.78,380.49,310.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>The block diagram of the flooding scenario is elaborated in Figure1. The block diagram depicts the scenario of flooding attack of Interest Packet with excluded filter. The first block shows that the Consumer generates the normal Interest packet. Then a decision is taken in the next block that whether it is a normal Interest packet or an Interest packet with an excluded filter field. In case it is a normal Interest packet, it is directed towards normal NDN operations; otherwise, it is passed to the next module of On-Demand Signature Verification. Here signature verification 7/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021) Manuscript to be reviewed Computer Science is performed against PPKD in the Content Store. If validation fails, this packet is discarded;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Mitigation of Flooding Attack</ns0:figDesc><ns0:graphic coords='9,141.73,169.72,413.58,390.58' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>8 / 19 PeerJ</ns0:head><ns0:label>819</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Detection and Mitigation of Flooding Attack of Reissued Interest Packet</ns0:figDesc><ns0:graphic coords='10,141.73,94.90,413.59,580.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Dynamic Threshold Value</ns0:figDesc><ns0:graphic coords='11,141.73,337.19,413.59,294.34' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>10 / 19 PeerJ</ns0:head><ns0:label>1019</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>11 / 19 PeerJ</ns0:head><ns0:label>1119</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021) Manuscript to be reviewed Computer Science 3.2.2 Scenario 2: Two Malicious Consumers</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>3. 3 . 1</ns0:head><ns0:label>31</ns0:label><ns0:figDesc>Experiment 1 (Scenario 1): With No Threshold Values</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Flooding Attack With No Threshold Value And With 1 Malicious Consumer</ns0:figDesc><ns0:graphic coords='13,172.75,377.99,351.54,162.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Flooding Attack With Threshold Value And With 1 Malicious Consumer</ns0:figDesc><ns0:graphic coords='14,172.75,63.78,351.55,179.69' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>3. 3 . 4</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Experiment 4 (Scenario 2): With Threshold ValuesIn the fourth experiment as shown in Figure8, our proposed scheme is enabled at the edge routers in Network Management software. Upon cache miss threshold value reaches 3, it'll block the incoming face of the consumer1 after three failed verification at 4th second. Further, no interest will be received from this malicious consumer face. At 6th second, Malicious consumer 2 starts to saturate the queue which will, and similarly, after three failed attempts, this face gets blocked as well, and queues start to thrashed after both of the malicious consumer faces are blocked.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Flooding Attack With No Threshold Value And With 2 Malicious Consumers</ns0:figDesc><ns0:graphic coords='14,172.75,447.72,351.55,163.33' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Flooding Attack With Threshold Value And With 2 Malicious Consumers</ns0:figDesc><ns0:graphic coords='15,172.75,63.78,351.55,163.33' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>3. 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Quantitative Analysis 3.5.1 Effectiveness and Accuracy of Proposed Solution by Comparing the Throughput of the Normal Special Interest Packets</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. With Dynamic Threshold Value</ns0:figDesc><ns0:graphic coords='16,172.75,63.78,351.55,222.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Throughput of the Normal Special Interest Packets In Flooding Attack Scenario</ns0:figDesc><ns0:graphic coords='16,172.75,464.93,351.55,175.60' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Throughput of the Normal Special Interest Packets In Flooding Attack Scenario With proposed Mitigation Strategy</ns0:figDesc><ns0:graphic coords='17,172.75,63.78,351.55,170.03' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>CPA Detection and Mitigation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Ref P.Gasti et al. [17] Ghali et al. [18] Ghali et al. [19] Nam et al. [23] Kim et al. [24] Wu et al. [16] Kim et al. [25] DiBenedetto and Papadopoulos [21] Hu et al. [3]</ns0:cell><ns0:cell>NDN Node Consumer, Router Consumer Router Router Router Consumer, Router Router Consumer, Router Consumer, Router</ns0:cell><ns0:cell>Detection Signature, PPKD Signature PPKD and Signature Signature Signature Signature Signature in case of cache-hit Signature PPKD and Signature</ns0:cell><ns0:cell>Mitigation SSCIC, DSCIC Content Ranking Interest Key Bind-ing SLRU Extension SLRU Extension Reputation Based Forwarding SLRU Extension Modified Forward-ing Strategy Name-key based for-warding and mul-tipath forwarding based Inband probe</ns0:cell><ns0:cell>Overhead Verification of Random Signatures Content Ranking Calcula-tion Signature Verification Signature Verification Signature Verification Signature Verification Signature Verification Complete Bogus Packet in Reissued Interest Packet Signature Verification (Hash matching is fast due to PPKD entry)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>the main research area of this article.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Vulnerabilities in CPA Mitigation Schemes (Compromised Consumers can flood Routers)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Gasti et al [17] Ghali et al [18] DiBenedetto and Papadopoulos [21] Hu et al. [3]</ns0:cell><ns0:cell>Checked by Consumer Router Consumer First Consumer and and then router First Consumer and then Router</ns0:cell><ns0:cell>Proposed Solution SSCIC & DSCIC Content Ranking Algorithm Modifying Forward-ing Strategy Name-Key Based Forwarding and Multipath Forward-ing Based Inband Probe</ns0:cell><ns0:cell>Energy Efficient Yes No -Overhead of calculating the con-tent ratings No -uses Complete bogus packet in re-port Yes -Only use a PPKD extra field and use bogus/cor-rupted data hash in of interest packet excluded filter field</ns0:cell><ns0:cell>Security Features Cannot detect cor-rupted content Do not handle ma-licious consumer in case it reports false content rating. Only handles the malicious consumer identity but do not handle the corrupted data Can prevent poison-ing of content by generating special interest packets.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Simulation Parameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter Request Rate Max Queue Length Verification of Interest Packet No. of Malicious Consumers Threshold Value</ns0:cell><ns0:cell>Default Value 100 Interests/second/Consumer (Interest with Exclude Parameter) 500 (Experiment 1 and Experiment 2) 1000 (Experiment 3 and Experiment 4) 25 Interest/second 1 (Experiment 1 and Experiment 2) 2 (Experiment 3 and Experiment 4) x</ns0:cell></ns0:row><ns0:row><ns0:cell>3.2 Network Topology 3.2.1 Scenario 1: One Malicious Consumer</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>In the first scenario, 2000 malicious interest packets are bombarded by one compromised consumer. 1000 Normal Interest Packets were also induced in the system by a legitimate consumer in the same domain. In this scenario, no</ns0:figDesc><ns0:table /><ns0:note>14/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Simulation Parameters For Effectiveness of Proposed Approach</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter Request Rate Interest Packet Max Queue Length Verification of Interest Packet Number of Malicious Special Interest Packets Number of Normal Special Interest Packets Number of Malicious Consumers Threshold Value</ns0:cell><ns0:cell>Default Value 100 Interests/second/Consumer (Special Interest Packets) 500 25 Interest/second 2000 pkts 1000 pkts 1 x</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Qualitative Comparison</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category Malicious Content Detection Rate Legitimate Content Retrieval Rate Reporting Packet Size Trust Anchor Verification Overhead Compromised Consumer De-tection Bogus Report Packet Detec-tion</ns0:cell><ns0:cell>Proposed Approach DiBenedetto et al Very High High Very High Very Low Lightweight (Hash) Heavyweight (Com-plete Bogus Packet) Yes Yes Very Low Very High Yes Yes Yes Partial</ns0:cell><ns0:cell>Gasti et al. High N/A N/A No Very High No No</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>An improvement in this technique can be done by incorporating Quality of Service solutions in NDN Routers. Multiple Virtual queues for special Interest packets can be maintained in NDN Routers to handle the flooding of these packets. Different queuing disciplines and algorithms like Adaptive Virtual Queue (AVQ), Credit-Based Fair Queuing, Weighted Fair Queuing, Quick Fair Queueing, andClass-Based Queuing can be tested to augment our approach. Also, traffic shaping and rate control mechanism can be used to hold back the malicious face.</ns0:figDesc><ns0:table /><ns0:note>17/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:1:2:NEW 21 Jan 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Date: 17th January, 2021
Dear Prof. Dr Vicente Alarcon-Aquino,
Greetings!
We thank you and the reviewers for the valuable and generous comments/feedback on the manuscript.
We have addressed all the comments point-by-point in the following pages.
We believe that the manuscript is now suitable for publication in PeerJ.
Dr. Nadeem Anjum
Senior Member, IEEE
Assistant Professor, Department of Computer Science,
Capital University of Science and Technology
On behalf of all authors.
Reviewer 1
Basic reporting
1.1 Abstract needs to be revised.
We have revised the abstract (please see page 1).
1.2 References are not sufficient
We have incorporated a few more references to make them sufficient (please see References 8, 9, 10, 11, 12, 13 and 14).
1.3 Figure 6. Mitigation of Flooding Attack-There is some ambiguity in the figure
We have removed the ambiguity by changing the first block text from “Interest Packet Generated” to “Generation of Consumers’ Packet”. Please see Figure 2 on page 8.
1.4 The article should include sufficient introduction and background to demonstrate how the work fits into the broader field of knowledge. Relevant prior literature should be appropriately referenced.
We have modified the text comprehensively in order to demonstrate how the work fits into the broader field of knowledge. All prior literature is appropriately referenced (Please see Page 2-3, Section 1.1 and 1.2, Line 87 - 150)
Experimental design
1.5 The investigation must have been conducted rigorously and to a high technical standard.
We have put great effort into discovering the special interest packet flooding attack during the CPA mitigation phase. We have performed several experiments to prove that there is a flooding-attack vulnerability within the CPA mitigation phase. We have then proposed a mitigation strategy and also proved its effectiveness by performing two new experiments using different simulation scenarios. (please see Section 3.5 and Section 3.6, Line 427-484)
1.6 The proposed method should be well discussed.
First we have given a brief introduction to the CPA mitigation scheme. We have identified the vulnerability in the CPA mitigation scheme and expressed it using a flow diagram. Its mitigation strategy is then discussed and also expressed as a flow diagram and an algorithm. In addition, we have elaborated the algorithm showing the mitigation strategy (please see Page No. 10, Line No. 328-333).
1.7 Comparison with the existing approaches need to be highlighted
We have performed a more detailed qualitative and quantitative comparison (please see Page 17, Table 5)
Validity of the findings
1.8 conclusions need to be revised.
We have revised the conclusions (please see Page.17, Line No. 486-505)
1.9 Using the mitigation technique mentioned in this paper, the Network Service Manager at NDN Edge router can enable a mechanism in which upon reaching a certain threshold value; it blocks that interface temporarily from which the excluded interest packets are being generated---Revise this statement
We have revised this statement (please see Page 17, Line No. 496-498)
Comments for the Author
1.10 There are many state-of-the-art mitigation strategies for the content poisoning attack, but some new attack-surfaces have emerged with these schemes’ advent. Mention what are some attacks.
We have mentioned the Interest Flooding Attack which has emerged with these schemes’ advent (please see Page 1, Line No. 17-19).
1.11 Clearly mention what is the outcome of the proposed method, Revise the abstract
We have clearly mentioned the outcomes of the proposed method. Also, the abstract has been modified as per reviewer’s comments. (Please see page 1)
1.12 Making the threshold value dynamic, which was initially set by the network operator statically? But in many networks, it is set dynamically –Comment
The statement is rephrased to: “Adjustment of the threshold value dynamically by monitoring the cache-miss ratio value and queue capacity” (please see page 2, Line No. 67-68).
1.13 CPA Detection and Mitigation Approaches-need more discussion, Comparisons of CPA Mitigation Approaches- give the summary in a tabular form. Readers will easily understand
We have provided more discussions on the detection and mitigation approaches (Please see page 6, Table 2).
1.14 In Figure 3. Detection of Flooding Attack during Content Poisoning Attack Mitigation-interest packet generated and after again there is a block Interest Packet or Excluded Interest Packet Discard Interest Packet with Exclude Filter and Exit-what is the role of this block.
The first block depicts that the consumer generates a packet. Then, in the next block, the decision is taken whether it is a normal interest packet or a special interest packet. We have removed the ambiguity and modified the figure accordingly (please see page 7, figure 1).
1.15 There is no clear discussion on EXPERIMENTAL RESULTS – How the proposed method is superior to other methods. There is no comparison with other existing approaches. When there is no good comparison how can authors claim the method works fine?
We have performed a more detailed qualitative and quantitative comparison with existing approached (please see Page 14-17, Section 3.5, 3.6 and Table 5).
1.16 There is no discussion on vulnerabilities.
We have added the discussion on vulnerabilities (please see Page 2, Section 1.1 and Table 2).
1.17 What are the various attack surfaces and how can they be resolved.
Discuss various types of attacks.
We have added the discussion on various attacks (please see Page. 3, Section 1.2)
1.18 Clearly mention the difference between the manual threshold and dynamic threshold. Give in a tabular form how can the proposed methods overcome the drawbacks of the manual method.
We have clearly mentioned the difference between the manual threshold and the dynamic threshold. The mitigation of the flooding attack of special interest packets works with two approaches: the first uses static threshold values, which are set by network operators during the router's initial configuration; the second is the dynamic approach, in which the threshold value is adjusted automatically by monitoring the queue size and the cache-miss ratio value (please see Page 11, Section 2.2, Line No. 338-342).
1.19 The threshold value is kept as 3. What does it mean by 3 and how can it be restricted.
Threshold value is just a configuration setting for the network operators. We have selected the value 3 as a simulation parameter and the proposed scheme is orthogonal to this configuration setting.
1.20 The number of References needs to be increased
We have increased the number of references to comprehensively cover the literature (Please see References 8, 9, 10, 11, 12, 13 and 14).
Reviewer 2
Basic reporting
2.1 Authors' main contribution is to provide an additional security solution to face the content poisoning attack (CPA) which will prevent malicious consumers from flooding the network with unwanted reporting packet in the context of named data networking.
We agree with the reviewer.
Experimental design
2.2 Overall, the idea is clear and simple. Yet, the threshold adaptability should be related to many other metrics like the cache size and the history of the requested data (can adopt a trust establishment solution for instance).
We thank the reviewer for the kind words. To make it clearer: maintaining the history of the requested data can hamper the performance of the NDN router and cannot be done using in-line comparisons. The cache size is monitored in parallel with the cache-miss ratio values, which produced promising results. (Please see page 13, Section 3.4)
Validity of the findings
2.3 My main concern regarding the authors findings is related to the validation environment. All NDN research community agreed on NDNSim over NS3 because it has all NDN modules already implemented. Why you did not use it?!
We agree with reviewer’s comment regarding the use of NDNSim. However, we have implemented all the requisite functionalities that were necessary to prove our idea. Logically and theoretically, we believe that changing the simulator will have no significant effect on the results and findings presented in the paper.
Comments for the Author
2.4 Figures quality should be improved.
We agree with the reviewer’s comment. We have incorporated improved figures in the manuscript (please see Fig. 1-12)
2.5 Please also include this work in the comparative table, so that we can see its position compared to what already exists.
We agree with the reviewer’s comment and have added Table 5 in the manuscript.
2.6 I also recommend citing the following research works:
https://doi.org/10.1109/ICCCN.2019.8847146
https://doi.org/10.1016/j.future.2020.01.043
We thank the reviewer for identification of the relevant research work. We have cited both of the research papers. (Please see page 3 line 131).
" | Here is a paper. Please give your review comments after reading it. |
21 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As a promising next-generation network architecture, named data networking (NDN) supports name-based routing and in-network caching to retrieve content in an efficient, fast, and reliable manner. Most studies on NDN have proposed innovative and efficient caching mechanisms and the retrieval of content via efficient routing. However, very few studies have addressed the vulnerabilities in the NDN architecture that a malicious node can exploit to perform a content poisoning attack (CPA). Such an attack can pollute the in-network caches, disrupt the routing of content, and consequently isolate the legitimate content in the network. In the past, several efforts have been made to propose mitigation strategies for the content poisoning attack, but to the best of our knowledge, no specific work has been done to address an emerging attack-surface in NDN, which we call an interest flooding attack. Handling this attack-surface can potentially make content poisoning attack mitigation schemes more effective, secure, and robust. Hence, in this article, we propose the addition of a security mechanism to the CPA mitigation scheme, i.e., Name-Key Based Forwarding and Multipath Forwarding Based Inband Probe, in which we block the malicious face of compromised consumers by monitoring the cache-miss ratio values and the queue capacity at the edge routers. The malicious face is blocked when the cache-miss ratio hits the threshold value, which is adjusted dynamically by monitoring the cache-miss ratio and queue capacity values. The experimental results show that we succeed in mitigating the vulnerability of the CPA mitigation scheme by detecting and blocking the flooding interface, at the cost of very little verification overhead at the NDN routers.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Named Data Networking (NDN) is a well-known and well-researched architecture for the next generation of the Internet, based on a data-centric approach. While the legacy network is based on a host-centric model, the NDN architecture changes the Internet's communication model altogether (Jacobson et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>). It allows the distribution of data that can be acquired from any content router in the network. A content provider can produce the data in advance and place it in auxiliary storage that can be accessed by any consumer at any time, even if the producer goes offline. A producer does not have to be online, and a consumer does not have to be connected to the producer to fetch the data; instead, the Consumer can acquire data through in-network caches. While NDN increases content availability in the network via in-network caching, the integrity of content becomes critical, given NDN's nature (Tarkoma et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>). Hence, NDN opens several security-related issues that are not relevant to legacy network communication. These include new types of data integrity attacks in which a malicious or compromised node provides a corrupted copy of the content. Such issues are often ignored in NDN-related communication and caching mechanisms and are our main focus in this paper. In a content poisoning attack, the attacker compromises the Content Router (CR), and this compromised CR replies to a legitimate request with totally bogus or corrupted content. This poisoned content pollutes the in-network caches of intermediate NDN routers and thus deprives the consumers of a legitimate copy of the requested content. Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> proposed a comprehensive scheme to mitigate the Content Poisoning Attack (CPA), in which a special interest packet containing the hash of the poisoned data is generated by the Consumer. This article is about the identification and mitigation of security flaws that can be exploited by an attacker during this CPA mitigation process.</ns0:p><ns0:p>The research problem lies in the CPA mitigation scheme proposed by Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. A consumer with malicious intent can flood the network with Interest packets containing the hash digest of legitimate, un-poisoned data, stored in the exclude filter field. During CPA mitigation, such packets can flood the network, which enables multipath forwarding and on-demand verification of the hash at the router. This flooding attack can severely affect the throughput of the network or even cause a denial of service for other legitimate consumers. It is therefore essential to mitigate it and add this additional security feature alongside CPA mitigation <ns0:ref type='bibr'>(Qureshi and</ns0:ref> Anjum <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>).</ns0:p><ns0:p>In this article, we propose a scheme to detect the flooding attack generated by a compromised Consumer. A satisfaction test is performed to check whether the excluded interest packet refers to content that is non-existent in the cache or to a legitimate entry. If the cache miss ratio (of the excluded interest packets) reaches the threshold value, it is considered an attack. A lightweight parameter that stores the cache miss counter value is added to the Content Store data structure. This value is compared with the specified threshold value. 
When the cache miss counter reaches that threshold value, an event is raised that blocks the incoming malicious face. In our scheme we also made the threshold value adaptable. The initial threshold value is calculated by taking the total buffer size and dividing it by the verification rate; the idea is that when the average cache miss ratio crosses 50% and the queue capacity saturates, the threshold value is reduced by half. This process continues until the value is reduced to one.</ns0:p><ns0:p>The article's main contribution is the addition of a security feature that closes the attack surface that can be exploited by a malicious Consumer. Our contributions are:</ns0:p><ns0:p>• Adjustment of the threshold value dynamically by monitoring the cache-miss ratio value and queue capacity.</ns0:p><ns0:p>• Detection and mitigation of the flooding attack of special interest packets generated while mitigating the content poisoning attack.</ns0:p><ns0:p>Further, this article is organized into five sections: the second section covers the literature review and related work, the third section presents the proposed approach, the fourth section highlights the experiments and results, and the fifth section concludes the article.</ns0:p></ns0:div>
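As a concrete reading of the adaptive threshold rule described in this introduction, the initial value can be computed as the total buffer size divided by the verification rate and then halved whenever the average cache-miss ratio stays above 50% while the queue is saturated. The short Python sketch below only illustrates that arithmetic with assumed numbers; it is not code from the paper.

# Illustrative arithmetic for the adaptive threshold (assumed values).
buffer_size = 1000          # total virtual-queue capacity in packets
verification_rate = 25      # packets the router can verify per second
threshold = buffer_size // verification_rate   # initial threshold = 40

cache_miss_ratio_avg = 0.8  # measured over the monitoring window
queue_saturated = True

while cache_miss_ratio_avg > 0.5 and queue_saturated and threshold > 1:
    threshold //= 2         # halve under sustained flooding: 40 -> 20 -> 10 -> 5 -> 2 -> 1
    # in the real scheme both indicators are re-sampled each round; here they stay fixed,
    # so the loop simply runs until the value is thrashed to one
print("final threshold:", threshold, "(incoming face blocked)" if threshold == 1 else "")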
<ns0:div><ns0:head n='1'>RELATED WORK</ns0:head><ns0:p>Any network's primary goal is to share web content, including photographs, texts, and videos.</ns0:p><ns0:p>Implementing security standards and goals such as confidentiality, integrity, and accessibility can ensure robust and flawless communication. Privacy guarantees that only the approved individual shall access the data. Integrity means that the receiver's received data must be similar to the one sent by the sender. Availability ensures that network infrastructure should be available for an authorized user whenever he needs the service (Wein et al. <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>). Kumar et al. <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> and Hassanein and Zulkernine <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> explained some of the most common attacks within the existing TCP / IP model such as Denial of Service (DoS) attack, Distributed Denial of Service (DDoS) attack, eavesdropping (snooping), masquerading, TCP Replay Attack, Man in the Middle Attack, repudiation, and traffic analysis attack. These legacy attacks are not possible in NDN because of the absence of the host, but with the advent of this new architecture, some new attack surfaces have emerged which need to be addressed and it is an active research area. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>associating the data to the packet name when data is being generated. Authentication can be performed by the consumer on the data packet by verifying the signature using the Content Providers' public key. This authentication can be performed even if the data is retrieved from some other entity other than the content provider, <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>.</ns0:p><ns0:p>Zhang et al. <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> stated that If a content providers' public key is not distributed or the consumer has no information about this public key, in that case, the data producer places the signing key name into the specific field of the data packet. It is known as the KeyLocator field. A consumer can acquire a public key by following this field of KeyLocator and can retrieve it just like a normal data packet. Kumar et al. <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> explained some of the most common attacks within the existing TCP / IP model such as Denial of Service (DoS) attack, Distributed Denial of Service (DDoS) attack, eavesdropping (snooping), masquerading, TCP Replay Attack, Man in the Middle Attack, repudiation, and traffic analysis attack. In a modification attack, the attacker does not only compromise the confidentiality of the data by accessing it but also compromises the integrity of the data by trying to alter it. However, this attack is not possible in NDN, because each piece of data is signed by the publisher, which the consumer can verify. However, if the router itself is compromised and alters the data packet, then a corrupted data packet may be sent to the consumer. Consumers after receiving the publishers' public key can validate this corrupted data.</ns0:p><ns0:p>In a masquerading attack, the attacker masks his identity and impersonates to be another person so he/she can acquire some useful information about that person. However, this attack is also not possible in NDN because every piece of data chunk is signed by the publisher using his/her private key. In a replay attack, the attacker performs Man in the Middle attack and tries to get a copy of the message from the sender, then after modifying the message and he/she sends it to the receiver. The recipient assumes that the actual sender has forwarded the message but in fact, it is the modified message from the attacker with malicious intent. This type of attack is also not possible in NDN because the interest packet is identified by the name and for the uniqueness of the namespace in the network, a nonce is used. When the same interest packet reaches the router (with the same name and nonce), the router assumes the packet is duplicate and it is replayed; it will, therefore, be purged from the PIT table. NDN, therefore, protects itself at the network layer level from the replay attack. In NDN architecture, some inherent security features protect us from some of the legacy security attacks by default but still there are some emerging security concerns in this new architecture that needs to be addressed. Security, privacy, and access control are the three major domains that need to be covered in NDN architecture. Several attacks are possible in NDN such as Content Poisoning attack, Content pollution attack, Naming Attack, and Denial of Service attack. 
Privacy can be classified into five categories: content privacy, signature privacy, client privacy, name privacy, and cache privacy <ns0:ref type='bibr' target='#b4'>[5,</ns0:ref><ns0:ref type='bibr' target='#b9'>10,</ns0:ref><ns0:ref type='bibr' target='#b8'>9,</ns0:ref><ns0:ref type='bibr' target='#b6'>7]</ns0:ref>.</ns0:p><ns0:p>In access control, the mechanisms that need to be addressed are content encryption, content attributes, clients' identity, and authorized sessions.</ns0:p></ns0:div>
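A minimal sketch of the nonce-based replay protection mentioned above: if the PIT is keyed by (name, nonce), a repeated pair is treated as a duplicate and ignored. The structure and names below are illustrative assumptions, not an NDN forwarder implementation.

# Illustrative duplicate-interest (replay) check using a PIT keyed by (name, nonce).
pit = set()   # pending-interest entries seen by this router

def accept_interest(name, nonce):
    key = (name, nonce)
    if key in pit:
        return False          # same name and nonce: treated as a replayed duplicate and dropped
    pit.add(key)              # otherwise record it and forward as usual
    return True

print(accept_interest("/news/today", 0x1A2B))   # True: new interest
print(accept_interest("/news/today", 0x1A2B))   # False: replayed interest is ignored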
<ns0:div><ns0:head n='1.2'>Attack Types in NDN</ns0:head><ns0:p>In NDN there are four main types of security threats, which are briefly discussed in the following sections; their effects on the security goals are summarized in Table 1 <ns0:ref type='bibr' target='#b5'>[6,</ns0:ref><ns0:ref type='bibr' target='#b10'>11]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.1'>Flooding Attack of Interest Packet</ns0:head><ns0:p>Benmoussa et al. <ns0:ref type='bibr' target='#b11'>[12,</ns0:ref><ns0:ref type='bibr' target='#b12'>13]</ns0:ref> explained in detail the effects of an interest flooding attack, in which an attacker depletes the network resources by flooding the network with large batches of interest packets. The PIT, the network bandwidth, and the availability of producer resources to legitimate users are compromised by this attack. In short, the attack consumes NDN resources and thereby restricts legitimate users from accessing them.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.2'>Cache Pollution Attack</ns0:head><ns0:p>Wang et al. <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> discussed the anatomy of the cache pollution attack: the attacker attempts to fill the NDN router's cache with unwanted content by requesting data packets that are unpopular and not in demand. As a result, the usefulness of the NDN router's cache decreases, and the cache hit ratio for the interest packets of legitimate users thrashes. This increases the latency and reduces the throughput of the network.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.3'>Cache Privacy Attack</ns0:head><ns0:p>In an attack on cache privacy, the attacker wants to figure out whether or not sensitive data has been accessed recently. A recently accessed item lies in the router's cache, so requests for such data receive a quick response. The intruder compiles a list of privacy-sensitive content and requests the items one by one, deciding whether each is cached by observing the delay in retrieving the content. If the content is identified as cached, the attacker can conclude that a user or a group of users has recently accessed it. Using this technique, the adversary learns the users' access patterns, and the type of content accessed and other related information are also exposed.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.4'>Content Poisoning Attack</ns0:head><ns0:p>One of the most crucial attack vectors in NDN is the Content Poisoning Attack. In CPA, the attacker compromises the router, and this malicious router sends a reply to the legitimate request with totally bogus or corrupted content. The contents of intermediate routers that are involved in NDN communication are stored in CS. This poisoned content spreads when other legitimate consumers request the same content. Content in NDN is of three types, i.e., legit contents, fake or poisonous contents, and corrupted contents. A valid signature of valid content is generated through the private key of a legit publisher. Similarly, a valid signature of fake content can also be generated with any private key that is not associated with the publisher's namespace. Whereas the corrupted content does not have a valid signature. (Ullah et al. <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>) In a Content Poisoning Attack, an attacker takes over a router and replies to incoming interests with corrupted content.</ns0:p><ns0:p>Wu et al. <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> explained that if a consumer requests this corrupted content, it will spread this malicious content on intermediate routers' content stores. It will result in the spreading of this poisonous content all over the network. This verification is usually performed by Consumers who use the content verification process using the content's signature. In NDN, every router can verify the arriving contents on its own, but this verification at line speed takes resources, and it is impractical. P.Gasti et al. <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref> described two ways through which a content poisoning attack can be carried out. The first way is that the attacker compromises the routers, spreading the poisoned content while satisfying the requested interest packets. The second way is that poisoned content is distributed via compromised publishers. Compromised publishers can anticipate the Data that will be in high demand, e.g., highlight a famous football match, and create malicious content. So in this way, a compromised producer or router can reply with a malicious data packet against a legitimate interest packet. As per NDN specification, a consumer verifies all the signatures of the requested data packets.</ns0:p><ns0:p>So a feedback-based approach is used to verify the content at the router (P.Gasti et al. <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>). Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This approach is the extended version of the NVF technique, as discussed in the previous section.</ns0:p><ns0:p>However, this approach has some new challenges, such as there is no trust relationship between the router and the consumers. Consumers can also be compromised, and in this way, false feedback can consume network resources. Ghali et al. <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> proposed a technique for content ranking calculation and stored in the exclude field of the interest packet, and the range of the values are between 0 and 1. New content is ranked 1, which gets downgraded if rated by consumers and included in the excluded field of the Consumer. This approach is somewhat similar to the technique mentioned in P.Gasti et al. <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>, so it has the same limitations. Ghali et al. <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> highlighted some of the NDN architecture vulnerabilities, such as the PPKD field and name's digest are not the essential components of the Interest packet. Also, no such trust model is adopted unanimously by the Consumer's applications to fetch the content's hash securely.</ns0:p><ns0:p>Based on these vulnerabilities, a technique is proposed, which enables an IKB rule to ensure trust.</ns0:p><ns0:p>According to this rule, the Interest packet must include the producer's (content publisher's)</ns0:p><ns0:p>public key. It is also implied that producers should also have the public key in the Data Packets'</ns0:p><ns0:p>KeyLocator field. Its implication on the router is that it should calculate the hash of the public key of the content received and compare it with the PPKD field against its PIT entry. Upon mismatch, the content is discarded but otherwise verified. Upon successful verification, content is forwarded and stored in the content store of that particular router. Yue et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> stated that IKB implication for consumers is that it has to acquire and validate the content provider's public key before initiating the Interest packet for that specific data packet. Trust model can be acquired using three approaches: public keys of the content provider should be installed in the client application, the second one is the universal key name service, and the third one is global search-based service. Also, to reduce the core routers' workload, the author has proposed that an Interest Key Binding check on the data packet should be performed at the edge routers. In contrast, core routers should perform this check probabilistically. The cons of this approach are that it is assumed that verifying the router is trusted, but it can verify the bogus IKB to be correct if it is malicious. So this scheme lacks scalability and has overhead.</ns0:p><ns0:p>DiBenedetto and Papadopoulos <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> proposed an approach in which consumers, upon verification failure, send a report packet, which will act as feedback to the other entities of the NDN Network.</ns0:p><ns0:p>When consumers detect a poisoned content, a special interest packet is generated by the network stack, and the information regarding the poisoned content is stored in this special report packet.</ns0:p><ns0:p>When the router receives this special interest packet, it acts as one of the two proposed mitigation options that the author proposed. 
One is Immediate Failover, and the second one is Probe First.</ns0:p><ns0:p>In the first approach, the malicious face is marked with a low priority value for the future. And in the probe first technique, the node, upon receiving the special interest packet known as report packet, stops forwarding the interest packets of the namespaces on which the attack is underway.</ns0:p><ns0:p>Also, that particular node informs their next-hop routers about this malicious namespace.</ns0:p><ns0:p>Nguyen et al. <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> explained three vulnerabilities in NDN architecture; the first one is unregistered remote provider, then multicast forwarding and the last one is the best route forwarding. The first vulnerability is that the interest packet can be satisfied with any data packet received from any of the faces. Therefore, a malicious producer can induce malicious content and satisfy it before it gets satisfied by the legit producer. In NDN, faces are registered in the FIB table's corresponding values, so while doing multicast forwarding, the interest packet is forwarded to all these faces. So, the malicious producers can satisfy the interest packet with its malicious content.</ns0:p><ns0:p>A router ignores a similar interest packet in the best route forwarding with the same name and selectors but different nonce when received during the suppression interval of retransmission.</ns0:p><ns0:p>The interest received after this interval shall be transferred via the next lowest possible cost;</ns0:p><ns0:p>thus, an interest packet can be satisfied with a malicious producer's poisoned contents.</ns0:p><ns0:p>Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> proposed a comprehensive system to mitigate CPA, and this article is all about identifying security flaws and proposing a mitigation strategy to address this flaw in this system.</ns0:p><ns0:p>In the following sections, this base system is elaborated in detail. This system is comprised of three phases. First is the route building phase, then there is a normal content retrieval phase, and the last one is the recovery phase in chase content poisoning. a need to sort out some challenges while developing a CPA mitigation strategy (Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>).</ns0:p><ns0:p>Energy management in routers is an important issue. Gao et al. <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> evaluated that CPA and caching issues can consume a considerable amount of routers' energy, which can add instability to the whole system. In Hu et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> have implemented a robust and efficient mechanism to mitigate the CPA. In Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, we have identified vulnerabilities in content poisoning attack mitigation schemes discussed in previous sections of this article. In the following section, we explore how these vulnerabilities can be exploited and a mitigation strategy is proposed, which is the main research area of this article. During the CPA, the reissued Interest Packet by the consumer stores the hash of the poisoned data in the excluded filter field but a compromised Consumer can also store the hash of and un-poisoned data has in the same field. Consequently, It will result in a cache miss. The ondemand signature verification at the router will also be enabled during this process, consuming a lot of router processing power. 
When a consumer with malicious intent bombards these excluded Interest packets, although they get discarded at the next router upon verification, it'll drastically increase the router's processing overhead. Other legitimate consumers will face a denial of service from this router. This attack vector should be taken into account, and a mitigation strategy should be devised for such attacks. This way, the process of CPA mitigation will be severely affected. The block diagram of the flooding scenario is elaborated in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The block diagram depicts the scenario of the flooding attack of Interest Packet with excluded filter. The first block shows that the Consumer generates the normal Interest packet. Then a decision is taken in the next block that whether it is a normal Interest packet or an Interest packet with an excluded filter field. In case it is a normal Interest packet, it is directed towards normal NDN operations; otherwise, it is passed to the next module of On-Demand Signature Verification. Here signature verification is performed against PPKD in the Content Store. If validation fails, this packet is discarded; otherwise, it gets purged from the router's CS. In the case of poisoned Data is found in the CS, the normal process is initiated, and content poisoning mitigation will commence. When a consumer is compromised, and it starts flooding the NDN Network with the excluded filter enabled Interest packet, it will trigger On-Demand Signature verification for each bogus packet, and the next NDN router will get saturated. The queue will be occupied, and after a while, there will be less space for the legitimate excluded filter Interest packet. It will hamper the CPA Mitigation mechanism badly. So this scenario is considered an attack and needs mitigation. Manuscript to be reviewed In this article, a reactive approach is proposed to mitigate this attack. A virtual queue is utilized in NDN Routers for incoming reissued Interest packets from the consumers. FIFO (FCFS) queue is shared among all the incoming faces for reissued interest packets. It is a temporary place holder for these packets until they get verified. The allotted memory for the transmitting packets should be different from the one used for caching. If the same CS is used to transmit packets and data chunk, then the CS will be congested with the data chunks that are waiting to be satisfied by the pending Interest packets. To prevent the Malicious Consumer from sending a fake 'excluded Interest packet,' a satisfaction test is performed to check if the excluded interest packet is non-existent in the cache or a legit packet in the cache. In case a cache miss (of the excluded interest packet) occurs, and the ratio reaches near the threshold value, i.e., it is set by the operator, it is considered an attack. On-demand verification at the router is not enabled unless there is a cache hit of the excluded interest packet; this will reduce the overhead of content verification at each data packet's router. However, in case of a cache miss, this excluded interest packet is discarded. Still, if a consumer with malicious intent floods the edge router with the fake interest packet with the excludes filter, it'll degrade that particular edge router's performance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>PROPOSED APPROACH</ns0:head><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The NDN-router service manager at the NDN Router, especially at the edge of the network in the consumer domain, maintains the stats and looks at them. The router will drop the future reissued interests coming from this face with the excluded data packet as it is considered a malicious consumer upon hitting the threshold value. It will be done temporarily and delisted at the discretion of the network operator. A new lightweight parameter is added in the CS Data Structure to retain the cache miss counter of invalid reissued Interest packet with excluded filter field. This value is compared with the threshold value. The Block diagram can show the birds-eye view of this proposed mechanism in figure <ns0:ref type='figure' target='#fig_9'>2</ns0:ref>.</ns0:p><ns0:p>We have introduced a block of the proposed approach. The reissued Interest packet upon several caches misses, and hitting the specified threshold value will trigger an event and blocks Manuscript to be reviewed this malicious face. On the next iteration, this reissued packet from the malicious consumer face will be blocked. In algorithm 4 (Figure <ns0:ref type='figure' target='#fig_11'>3</ns0:ref>) PPKD, ContentName, nonce, incoming face, excluded filter field value, Threshold value, and cache miss counter value is passed as an argument. At statement 1, the hash comparison is performed and if the result is a cache miss then the cache miss counter value gets incremented. If that value reaches the threshold value then the event to block that specific malicious incoming face gets triggered. In case the result is a cache hit then the normal NDN communication process will commence.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
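The per-face blocking step attributed above to Algorithm 4 can be sketched as follows. This is only an illustrative reading of the description, not the authors' code: the PPKD, ContentName and nonce arguments are omitted for brevity, the threshold of 3 mirrors the simulation setting reported later, and the temporary block duration is an assumption (the paper leaves the delisting timeout to the network operator).

import time
from dataclasses import dataclass, field

@dataclass
class RouterState:
    content_store: set = field(default_factory=set)    # hashes of cached data chunks
    miss_counter: dict = field(default_factory=dict)    # face_id -> consecutive cache misses
    blocked_until: dict = field(default_factory=dict)   # face_id -> unblock timestamp

def handle_reissued_interest(router, face_id, excluded_hash,
                             threshold=3, block_seconds=60.0):
    """Illustrative detection-and-blocking step for one reissued Interest packet."""
    now = time.time()
    if router.blocked_until.get(face_id, 0.0) > now:
        return 'dropped-blocked-face'              # face is still delisted
    if excluded_hash in router.content_store:
        router.miss_counter[face_id] = 0           # cache hit: normal NDN processing continues
        return 'normal-ndn-processing'
    # Cache miss: increment the lightweight per-face counter kept alongside the CS.
    router.miss_counter[face_id] = router.miss_counter.get(face_id, 0) + 1
    if router.miss_counter[face_id] >= threshold:
        router.blocked_until[face_id] = now + block_seconds   # block the malicious face
        return 'face-blocked'
    return 'interest-discarded'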
<ns0:div><ns0:head n='2.1.3'>Dynamic Threshold Value</ns0:head><ns0:p>This approach helps the Network Operators set up the threshold value automatically during the special interest packet flooding attack by a malicious consumer. This approach aims to select the threshold value in an automated fashion based on the statistical monitoring of buffer capacity and cache miss ratio. In this approach, a Network Management software continuously monitors the cache miss ratio and buffer capacity when a special interest packet is initiated. When the cache miss ratio average over a while results in a buffer overflow, the threshold value is thrashed to half. This process continues unless the threshold value becomes 1. This mechanism is elaborated in Algorithm 5 (Figure <ns0:ref type='figure' target='#fig_12'>4</ns0:ref>). At this stage, the incoming face causing the flooding attack will get blocked till the particular timeout. Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Network Management Software will continuously monitor the Cache_Miss_Ratio and Buffer Size of the queue.</ns0:p></ns0:div>
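A minimal sketch of the dynamic-threshold idea described above (Algorithm 5) is given below. It is an illustration under assumptions rather than the authors' algorithm: the cache-miss-ratio limit used to decide that an overflow is attack-driven is an assumed constant, and the monitoring loop is reduced to a single function called once per observation.

def adjust_threshold(threshold, queue_len, queue_capacity, cache_miss_ratio,
                     miss_ratio_limit=0.5):
    """One monitoring step of the dynamic-threshold mechanism (illustrative only).

    When a sustained cache-miss ratio coincides with a full buffer, the threshold
    is halved; once it reaches 1 the offending face is blocked until timeout.
    The miss_ratio_limit constant is an assumption, not a value from the paper.
    """
    buffer_overflow = queue_len >= queue_capacity
    if buffer_overflow and cache_miss_ratio >= miss_ratio_limit:
        threshold = max(1, threshold // 2)      # multiplicative decrease
    block_face = (threshold == 1)
    return threshold, block_face

# Usage illustration: a persistent flood halves the threshold 8 -> 4 -> 2 -> 1,
# at which point the incoming face is blocked until the operator-defined timeout.
th = 8
for step in range(5):
    th, block = adjust_threshold(th, queue_len=500, queue_capacity=500, cache_miss_ratio=0.9)
    print(step, th, block)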
<ns0:div><ns0:head n='2.2'>Benefits of Dynamic Threshold Values over Static Threshold Values</ns0:head><ns0:p>The mitigation of the flooding attack of special Interest packets relies on two approaches. The first uses a static threshold value, which is set by the network operator during the initial configuration of the router. The second is the dynamic approach, in which the threshold value is adjusted adaptively by monitoring the queue size and the cache-miss ratio.</ns0:p></ns0:div>

<ns0:div><ns0:head n='3'>EXPERIMENTAL RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Simulation Environment</ns0:head><ns0:p>As a proof of concept and to run this scenario, a custom-built NDN simulator was developed in C# using Visual Studio 2019. The network parameters used in the simulation scenarios are listed in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Network Topology</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.2.1'>Scenario 1: One Malicious Consumer</ns0:head><ns0:p>In scenario 1, our simulations' network topology consists of two routes from the Consumer to the producer. Two paths routes that are used in this scenario are 0-1-2-4-6-7-8 and 0-1-3-5-7-8; these paths are between the consumer and a producer (Spring et al. <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>). In this scenario, it is evident that consumers with malicious intent can flood the network with unwanted interest packets with excluded fields occupied by the non-malicious or legit payload. If not mitigated at the edge router, all the routers will enable the on-demand verification, and this way, router performance will degrade with time. This problem can be mitigated by enabling a mechanism at edge routers of NDN and setting a threshold value that if it hits this value, block that interface through which Manuscript to be reviewed</ns0:p><ns0:p>Computer Science dropped in case of a cache miss, and upon hitting the threshold value, the interface from which these excluded interest packets are received is blocked and added to the delist data structure.</ns0:p><ns0:p>The timeout to get out of this delist data structure is at the desecration of the network operator.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.2'>Scenario 2: Two Malicious Consumers</ns0:head><ns0:p>In scenario 2, our simulations' network topology consists of two routes from two consumers (i.e., Consumer 1 and Consumer 2) of the same domain to the producer via Router 8 (edge router).</ns0:p><ns0:p>The routes that are used in this scenario is 0-1-2-4-6-7-8 and 0-1-3-5-7-8; these paths are between the consumer and a producer (Spring et al. <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>) The main thing to note in this scenario is that Consumer 1 and Consumer 2 are in the same domain. Router 8, the virtual queue for Incoming Reissued Interests, is shared between these two consumers. The Queuing mechanism used in this scenario is FIFO. There are two consumers with malicious intent in this scenario and can flood the network with unwanted interest packets with excluded fields occupied by the non-malicious or legit payload. If not mitigated at the edge router, the virtual queues will be fully occupied for the legit reissued interest packet, and consequently, packets will drop. This problem can be mitigated by enabling a mechanism at edge routers of NDN and setting a threshold value that if it hits this value, block that interface through which these malicious excluded interest packets arrive. This way, the rest of the network will be safe from acquiring this malicious packet from consumers, and ultimately the performance of the intermediate routers will not be degraded. In this experiment as shown in Figure <ns0:ref type='figure'>5</ns0:ref>, we have calculated the cache miss ratio of the interest packet containing the exclude filter and compared it with the Queue Length. Upon flooding the router with a fake interest packet, the verification process takes time, and meanwhile, the queue of interest packets will start increasing. After every second, 25% fake packet will drop, and 75% will be added to the queue. Initially, no threshold value is set. After some time, congestion at the router's incoming interest packet queue will occur, resulting in a drop of other future packets at this router.</ns0:p></ns0:div>
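As a rough back-of-the-envelope check of the per-second dynamic described above (about 25% of the fake packets verified and dropped, 75% queued), the short sketch below simulates queue occupancy using Table 4-style parameters for a single malicious consumer: an assumed arrival rate of 100 reissued Interests per second, a verification rate of 25 packets per second and a 500-slot queue. It only illustrates the growth dynamics and does not reproduce the authors' simulator.

def queue_occupancy_over_time(arrival_rate=100, verification_rate=25,
                              queue_capacity=500, seconds=10):
    """Per-second queue occupancy when the flooding rate outpaces verification."""
    occupancy, history = 0, []
    for t in range(1, seconds + 1):
        # Verified (and discarded) packets leave the queue, new fake packets join it.
        occupancy = min(queue_capacity, max(0, occupancy + arrival_rate - verification_rate))
        history.append((t, occupancy))
    return history

for t, q in queue_occupancy_over_time():
    print(f'second {t}: {q}/500 slots occupied')
# With a net growth of 75 packets/s the 500-slot queue is full after roughly 7 seconds,
# which is the order of magnitude of the congestion described in the experiments.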
<ns0:div><ns0:head n='3.3'>Experiments and Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.2'>Experiment 2 (Scenario 1): With Threshold Values</ns0:head><ns0:p>In the second experiment as shown in Figure <ns0:ref type='figure'>6</ns0:ref>, our proposed scheme is enabled at the edge routers in Network Management software. After several cache misses and upon hitting the threshold value to 3 according to the simulation settings, it'll block the incoming face of the Consumer, and further, no more interest packets will be received from this malicious consumer face. After hitting the specified threshold value, the face is blocked and fake packets begin to drop from the queue. At 12 seconds the queue will be empty and the router is no more saturated. Manuscript to be reviewed In the third experiment as shown in Figure <ns0:ref type='figure'>7</ns0:ref>, Consumer 1 starts flooding the network with fake interest packets with the excluded filter; the queue will begin to saturate as the verification rate is slow as compared to the flooding rate. In the 6th second, Consumer 2 also starts to flood the network; consequently, the queue begins to saturate linearly.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Initially, no threshold value is set, and at the 9th-second congestion at the router's incoming interest packet queue will occur which will result in a drop of other future packets at this router.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.4'>Experiment 4 (Scenario 2): With Threshold Values</ns0:head><ns0:p>In the fourth experiment as shown in Figure <ns0:ref type='figure'>8</ns0:ref>, our proposed scheme is enabled at the edge routers in Network Management software. Upon cache miss threshold value reaches 3, it'll block the incoming face of the consumer1 after three failed verification at 4th second. Further, no interest will be received from this malicious consumer face. At 6th second, Malicious consumer 2 starts to saturate the queue which will, and similarly, after three failed attempts, this face gets blocked as well, and queues start to thrashed after both of the malicious consumer faces are blocked.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Dynamic Threshold Value</ns0:head><ns0:p>It is evident in the experiment that with the increase in the Cache_Miss_Ratio, the Queue size will increase because the flooding rate is greater than the verification rate. Also, Cache_Miss has penality on the processor of the router, which can increase the processing overhead. In the graph Manuscript to be reviewed At 7th second the queue is filled up to 100 percent. At this stage, the new packets will start to drop. Here, the system should act prudently and reduce the threshold value to half of the current value, and if flooding continues threshold value is reduced to half as shown in Figure <ns0:ref type='figure' target='#fig_7'>10</ns0:ref>, and so on till the value is reduced to 1. At this stage, the incoming face is blocked as it is considered as an attack. The queue will not be saturated, and memory will be available for other interest packets to get processed. If the flooding attack continues, we will multiplicatively decrease the threshold value to another half. This mechanism will continue against that particular flooding malicious face until the threshold value reaches 1. At this stage, that particular face will be blocked and considered as a malicious face. The face will be blocked until the timeout, whose value will be at the network operator's discretion.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='3.5'>Effectiveness and Accuracy of Proposed Solution by Comparing the Throughput of the Normal Special Interest Packets</ns0:head><ns0:p>The simulation scenario is depicted in Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science bombarded by one compromised consumer. 1000 Normal Interest Packets were also induced in the system by a legitimate consumer in the same domain. In this scenario, our proposed solution is placed and activated inside the NDN Router service manager. The maximum throughput of a particular face is 100 bps. The throughput of the normal interest packets was up to 90% of the total capacity which is the desired result. But in the subsequent seconds, the Malicious packets entered the router. Queue capacity started to saturate, then the proposed solution gets activated and blocks the malicious face when the cache miss counter value reached the threshold value. Then we can see that according to our simulation environment after 3rd-second malicious packets didn't enter the router queue and throughput of the normal interest packet will start to raise and other factors like processing overhead and queue capacity ratio get into the normal working range. This scenario is depicted in Figure <ns0:ref type='figure' target='#fig_18'>12</ns0:ref>. 2000 Malicious Packets bombarded were detected and dropped successfully by our system. System accuracy proved to be 100%. Also, 1000 legitimate special Interest packets were processed and no packet was dropped.</ns0:p><ns0:p>Comparison of throughput, queue capacity and processing overhead during the CPA special interest packet flooding attack and that of our proposed approach is summarized in Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref>. <ns0:ref type='figure' target='#fig_7'>11</ns0:ref> and in Figure <ns0:ref type='figure' target='#fig_18'>12</ns0:ref> which is summarized in Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref>. It is evident from the experiments that during the Special Interest Packet flooding attack, our proposed approach showed promising results in terms of throughput, queue capacity and processing overhead.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CONCLUSION AND FUTURE DIRECTION</ns0:head><ns0:p>The main contribution of this work is to devise a mechanism that identifies and prevents the compromised consumers from flooding the network with special Interest packets that are generated during the mitigation process of the Content Poisoning Attack. The compromised consumers place the hash of an un-poisoned content in the excluded filter field of the special interest packet which causes cache miss at the edge router. Owing to the bombardment of these special Interest packets, it'll tremendously increase the processing overhead on the NDN Router.</ns0:p><ns0:p>The cost is in terms of Cache-Miss penalty and verification overhead. Also, the queue capacity of the NDN Router gets saturated. Consequently, the legitimate requests from the other consumers get dropped or face a substantial amount of delays. We also observed the damaging effect of multiple malicious consumers flooding the edge router which was also well handled by using the proposed technique. After the implementation of our scheme in the Network Service manager at the NDN Edge Router, the malicious face will be blocked when the cache-miss ratio value reaches the specified threshold value. We also have made the threshold value dynamic by adjusting Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>Class-Based Queuing can be tested to augment our approach. Also, traffic shaping and rate-control mechanisms can be used to hold back the malicious face.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>1. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>NDN's Data-Centric Security and Security Issues in NDN At the network layer of NDN, data-centric security is mandated via a digital signature on each data packet. A digital signature is added by the content provider (producer) to every data packet.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>2. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Introduction The name-key based forwarding and Multipath forwarding based Inband probe is a very comprehensive scheme for mitigation of the CPA. It fills most of the attack surfaces regarding the Content Poisoning Attack. However, with the advent of the NDN architecture's structural changes, it has induced a new attack vector that can be exploited by the adversary. With this attack, the whole system can collapse. So it is very crucial to highlight this aspect. One of the</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>important attack vectors that have emerged with this technique is the flooding of the reissued Interest Packet containing the excluded filter field. It is the leading research contribution of this article. A consumer with malicious intent can flood the network with interest containing the hash digest of legit or unpoisoned data in its exclude field, which can flood the network and enable multipath. It can harm the throughput of the network or even can cause DDoS. Based on the research gap mention in the previous section, this article has formulated the following research questions: What will be the mechanism to detect the attack initiated by consumers with malicious intent? What will be the parameters that will mitigate the malicious consumers' reissued interest packet flooding attack? So it's essential to mitigate and add this additional security feature in this CPA Mitigation technique 2.1.1 Detection of Malicious Consumer Interest Packet with Excluded Filter Field</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Detection of Flooding Attack during Content Poisoning Attack Mitigation</ns0:figDesc><ns0:graphic coords='9,158.28,63.78,380.49,310.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Mitigation of Flooding Attack</ns0:figDesc><ns0:graphic coords='10,141.73,63.78,413.58,390.58' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label>1</ns0:label><ns0:figDesc>InitTH = QueueSize / VerificationRate (1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Detection and Mitigation of Flooding Attack of Reissued Interest Packet</ns0:figDesc><ns0:graphic coords='11,141.73,90.25,413.57,589.58' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Dynamic Threshold Value</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.58,296.14' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>3.3.1</ns0:head><ns0:label>31</ns0:label><ns0:figDesc>Experiment 1 (Scenario 1): With No Threshold Values</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Flooding Attack With No Threshold Value And With 1 Malicious Consumer</ns0:figDesc><ns0:graphic coords='14,172.75,63.78,351.54,163.10' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Flooding Attack With No Threshold Value And With 2 Malicious Consumers</ns0:figDesc><ns0:graphic coords='15,141.73,63.78,413.57,191.54' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. Without Dynamic Threshold Value</ns0:figDesc><ns0:graphic coords='16,141.73,63.78,413.58,262.38' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Throughput of the Normal Special Interest Packets In Flooding Attack Scenario With proposed Mitigation Strategy</ns0:figDesc><ns0:graphic coords='17,141.73,451.53,413.57,200.31' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>the initial threshold according to cache-miss ratio and queue capacity values. An improvement in this technique can be done by incorporating Quality of Service solutions in NDN Routers. Multiple Virtual queues for special Interest packets can be maintained in NDN Routers to handle the flooding of these packets. Different queuing disciplines and algorithms like Adaptive Virtual Queue (AVQ), Credit-Based Fair Queuing, Weighted Fair Queuing, Quick Fair Queueing, and</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>NDN Attack Types</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Attack Type</ns0:cell><ns0:cell>Adversary</ns0:cell><ns0:cell>Victim</ns0:cell><ns0:cell>Compromised Security Goal</ns0:cell><ns0:cell>NDN Element Involved in Attack</ns0:cell></ns0:row><ns0:row><ns0:cell>Flooding Attack of Interest Packet</ns0:cell><ns0:cell>Consumer</ns0:cell><ns0:cell>Consumer/Router/Producer</ns0:cell><ns0:cell>Availability</ns0:cell><ns0:cell>PIT</ns0:cell></ns0:row><ns0:row><ns0:cell>Cache Pollution Attack</ns0:cell><ns0:cell>Consumer</ns0:cell><ns0:cell>Consumer</ns0:cell><ns0:cell>Availability</ns0:cell><ns0:cell>CS</ns0:cell></ns0:row><ns0:row><ns0:cell>Privacy Attack</ns0:cell><ns0:cell>Consumer</ns0:cell><ns0:cell>Consumer</ns0:cell><ns0:cell>Confidentiality</ns0:cell><ns0:cell>CS</ns0:cell></ns0:row><ns0:row><ns0:cell>Content Poisoning Attack</ns0:cell><ns0:cell>Producer or Router</ns0:cell><ns0:cell>Consumer/Router</ns0:cell><ns0:cell>Integrity/Availability</ns0:cell><ns0:cell>CS</ns0:cell></ns0:row></ns0:table><ns0:note>1.3 CPA Detection and Mitigation Approaches: Content Poisoning Attack can be detected and mitigated through two major approaches, Collaborative Signature Verification and the Consumer Dependent approach. The former comprises methods in which NDN routers collaborate to verify the content's signature. The latter uses extra fields in the Interest and Data packets or relies on clients' feedback. 1.3.1 Mitigation of CPA using Consumer Dependent Approach</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Verification of cached content is performed between the malicious payload included in the interest packets' exclude filter or in the Data Packet that is returned and gets matched with the reissued interest packet. There are two benefits of this approach; first, with multipath forwarding, there is a great chance that consumers will acquire the legitimate content while legitimate content can be restored on the intermediate router via alternative forwarding options. This way, poisoned contents will be purged, and for future requests, legitimate contents will be returned from the routers' cache. Thus it'll increase the overall throughput of the network.1.4 Comparisons of CPA Mitigation Approaches2 is a summarized view of the CPA Mitigation approaches, as discussed in previous sections:Based on the analysis of existing techniques and work to detect and mitigate CPA, there is still CPA Detection and Mitigation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Ref P.Gasti et al. [17] Ghali et al. [18] Ghali et al. [19] Nam et al. [23] Kim et al. [24] Wu et al. [16] Kim et al. [25] DiBenedetto and Papadopoulos [21] Hu et al. [3]</ns0:cell><ns0:cell>NDN Node Consumer, Router Consumer Router Router Router Consumer, Router Router Consumer, Router Consumer, Router</ns0:cell><ns0:cell>Detection Signature, PPKD Signature PPKD and Signature Signature Signature Signature Signature in case of cache-hit Signature PPKD and Signature</ns0:cell><ns0:cell>Mitigation SSCIC, DSCIC Content Ranking Interest Key Bind-ing SLRU Extension SLRU Extension Reputation Based Forwarding SLRU Extension Modified Forward-ing Strategy Name-key based for-warding and mul-tipath forwarding based Inband probe</ns0:cell><ns0:cell>Overhead Verification of Random Signatures Content Ranking Calcula-tion Signature Verification Signature Verification Signature Verification Signature Verification Signature Verification Complete Bogus Packet in Reissued Interest Packet Signature Verification (Hash matching is fast due to PPKD entry)</ns0:cell></ns0:row></ns0:table><ns0:note>It is required that NDN routers should enable name-key-based forwarding to forward interest towards registered content sources, and to specify legitimate content sources, every route advertisement should be authenticated 5/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:2:0:CHECK 12 Feb 2021)Manuscript to be reviewedComputer Sciencewith a trust management system. If content poisoning occurs on intermediate routers, then a mechanism of 'Multipath Forwarding based Inband Probe' is performed. In this approach, an interest packet with an exclude filter (poisoned content) is reissued and forwarded via alternate paths. When this packet reaches a particular router, it enables cached contents' on-demand signature verification.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Vulnerabilities in CPA Mitigation Schemes (Compromised Consumers can flood Routers)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference Gasti et al [17] Ghali et al [18] DiBenedetto and Papadopoulos [21] Hu et al. [3]</ns0:cell><ns0:cell>Checked by Consumer Router Consumer First Consumer and and then router First Consumer and then Router</ns0:cell><ns0:cell>Proposed Solution SSCIC & DSCIC Content Ranking Algorithm Modifying Forward-ing Strategy Name-Key Based Forwarding and Multipath Forward-ing Based Inband Probe</ns0:cell><ns0:cell>Energy Efficient Yes No -Overhead of calculating the con-tent ratings No -uses Complete bogus packet in re-port Yes -Only use a PPKD extra field and use bogus/cor-rupted data hash in of interest packet excluded filter field</ns0:cell><ns0:cell>Security Features Cannot detect cor-rupted content Do not handle ma-licious consumer in case it reports false content rating. Only handles the malicious consumer identity but do not handle the corrupted data Can prevent poison-ing of content by generating special interest packets.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Simulation Parameters interest packets are coming. This way, the rest of the network will be safe from acquiring this malicious packet from consumers, and ultimately the performance of the intermediate routers will not be degraded. So to handle this issue Network Manager at NDN Edge Router enables this mechanism in which malicious interest packet with exclude field is</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter Request Rate Max Queue Length Verification of Interest Packet No. of Malicious Consumers Threshold Value</ns0:cell><ns0:cell>Default Value 100 Interests/second/Consumer (Interest with Exclude Parameter) 500 (Experiment 1 and Experiment 2) 1000 (Experiment 3 and Experiment 4) 25 Interest/second 1 (Experiment 1 and Experiment 2) 2 (Experiment 3 and Experiment 4) x</ns0:cell></ns0:row><ns0:row><ns0:cell>these malicious excluded</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>. In the first scenario, 2000 malicious interest packets are bombarded by one compromised consumer. 1000 Normal Interest Packets were also induced in the system by a legitimate consumer in the same domain. In this scenario, no 14/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54463:2:0:CHECK 12 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Simulation Parameters For Effectiveness of Proposed Approach</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter Request Rate Interest Packet Max Queue Length Verification of Interest Packet Number of Malicious Special Interest Packets Number of Normal Special Interest Packets Number of Malicious Consumers Threshold Value</ns0:cell><ns0:cell>Default Value 100 Interests/second/Consumer (Special Interest Packets) 500 25 Interest/second 2000 pkts 1000 pkts 1 x</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 11. Throughput of the Normal Special Interest Packets In Flooding Attack Scenario</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of Throughput, Queue Capacity and Processing Overhead during Special Interest Packet Flooding Attack</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category Min Throughput of Normal Interest Packet Max Queue Occupation Reporting Packet Size Trust Anchor Max Processing Overhead Compromised Consumer Detection Bogus Report Packet Detection</ns0:cell><ns0:cell>Proposed Approach 65% 55% Lightweight (Sha256 Hash -32 Bytes) Yes 53% Yes Yes</ns0:cell><ns0:cell>DiBenedetto et al[21] 12% 95% Heavyweight (Com-plete Packet) Yes 93% Yes Partial</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Effectiveness and efficacy of proposed solution by comparing the throughput of interest packets and queue capacity is elaborated in experiments as in Figure</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Date: 12th February, 2021
Dear Prof. Dr Vicente Alarcon-Aquino,
Greetings!
We thank you and the reviewers for the valuable and generous comments/feedback on the manuscript.
We have addressed all the comments point-by-point in the following pages.
We believe that the manuscript is now suitable for publication in PeerJ.
Dr. Nadeem Anjum
Senior Member, IEEE
Assistant Professor, Department of Computer Science,
Capital University of Science and Technology
On behalf of all authors.
Reviewer 1
Basic reporting
1.1 The abstract needs to be revised. What is the takeaway from the paper that should clearly be mentioned? This article’s main contribution is the addition of a security mechanism in the CPA mitigation. This statement is not sufficient. Need to support with data.
We have thoroughly revised the abstract (please see page 1).
1.2 Check grammar and typo mistakes in the paper.
Grammar and typographical mistakes have been fixed.
1.3 Mention various types of attacks in tabular form
A table of the main attack types has been added (please see Table 1, page 4).
Experimental design
1.4 Table 5. Qualitative Comparison-- check the column Proposed Approach better support with values instead of mentioning Very High, high
We have addressed this issue; please see Table 6 on page 17.
The validity of the findings
1.5 Table 5. Qualitative Comparison -give reference numbers
this table should support with data and the comparisons should be in graphs i.e pictorial representation
We have addressed this issue; please see Figure 11 and Figure 12 on page 16 and Table 6 on page 17.
Comments for the author
1.6 Check the paper against the grammatical and typo mistakes
We have fixed the grammatical and typographical mistakes.
1.7 Visibility of the figures need to be enhanced
The visibility of the figures has been improved.
1.8 Check the numbering of tables and figures
The numbering of tables and figures is in order.
" | Here is a paper. Please give your review comments after reading it. |
22 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the current age of overwhelming information and massive production of textual data on the Web, Event Detection has become an increasingly important task in various application domains. Several research branches have been developed to tackle the problem from different perspectives, including Natural Language Processing and Big Data analysis, with the goal of providing valuable resources to support decision-making in a wide variety of fields. In this paper, we propose a real-time domain-specific clustering-based eventdetection approach that integrates textual information coming, on one hand, from traditional newswires and, on the other hand, from microblogging platforms. The goal of the implemented pipeline is twofold: (i) providing insights to the user about the relevant events that are reported in the press on a daily basis; (ii) alerting the user about potentially important and impactful events, referred to as hot events, for some specific tasks or domains of interest. The algorithm identifies clusters of related news stories published by globally renowned press sources, which guarantee authoritative, noise-free information about current affairs; subsequently, the content extracted from microblogs is associated to the clusters in order to gain an assessment of the relevance of the event in the public opinion. To identify the events of a day d we create the lexicon by looking at news articles and stock data of previous days up to d</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div> <ns0:ref type='bibr' target='#b14'>(Dou et al., 2012)</ns0:ref><ns0:p>. In this sense, Event Detection aims to discover contents published on the Web that report on the same current topic, organize them in meaningful groups and provide insights, based on properties extracted automatically from the data <ns0:ref type='bibr' target='#b2'>(Allan et al., 1998b;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hu et al., 2017)</ns0:ref>. It represents a valuable resource to create awareness and support decision making in various domains of application, including epidemics <ns0:ref type='bibr' target='#b4'>(Aramaki et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b51'>Rosa et al., 2020)</ns0:ref>, earthquakes <ns0:ref type='bibr' target='#b54'>(Sakaki et al., 2010)</ns0:ref>, social events <ns0:ref type='bibr' target='#b46'>(Petkos et al., 2012)</ns0:ref> and economy (see Section 2.4), among others. In some cases, the scope of the event detection task is not limited to arranging the contents and providing analytics, but constitutes the basis for further algorithmic processing, like for example the development of automatic trading strategies in financial applications <ns0:ref type='bibr' target='#b19'>(Gilbert and Karahalios, 2010;</ns0:ref><ns0:ref type='bibr' target='#b52'>Ruiz et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b34'>Makrehchi et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Given the importance of Event Detection, an increasing number of researchers have focused their attention on this problem since the late 1990s, building on the theoretic foundations of Information Retrieval and, later on, taking advantage of the discoveries of Natural Language Processing, Text Mining and Big Data processing. Early works mainly based their approaches on traditional news stories as they started being digitalized <ns0:ref type='bibr' target='#b2'>(Allan et al., 1998b;</ns0:ref><ns0:ref type='bibr' target='#b31'>Lam et al., 2001;</ns0:ref><ns0:ref type='bibr' target='#b29'>Kumaran and Allan, 2004)</ns0:ref>, while social media platforms like Twitter 1 and Stocktwits 2 have become the dominant data source in the last decade <ns0:ref type='bibr' target='#b20'>(Hasan et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Atefeh and Khreich, 2015)</ns0:ref>. However, it has been demonstrated by <ns0:ref type='bibr' target='#b48'>Petrovic et al. (2013)</ns0:ref> that Twitter still cannot replace traditional newswire providers when considering the coverage and the timeliness of breaking news. In fact, this study shows that, while Twitter has a better coverage of minor events ignored by other media, traditional newswire sources often report events before users on the social network. Another disadvantage of microblogs is that they contain a considerable amount of noise, such as irregular syntax, misspellings and non-standard use of the language, let alone the increasing phenomenon of fake news, which makes it difficult to extract valuable information <ns0:ref type='bibr' target='#b28'>(Kaufmann and Kalita, 2010;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ajao et al., 2018)</ns0:ref>. 
In light of this, a promising line of research has provided evidence that combining multiple sources of information allows to mitigate the flaws and exploit the advantages of each medium, thus improving the quality of the event detection task <ns0:ref type='bibr' target='#b41'>(Musaev et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b46'>Petkos et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b60'>Thapen et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Inspired by these findings, we developed a domain-specific clustering-based event-detection method that exploits the integration of traditional news articles and Stocktwits messages (which from now on will be referred to as tweets, for simplicity) to identify real-world events and to generate alerts for highly relevant events on a daily basis. The main intuition behind the integration of traditional press and social media is that, even though the former represents an authoritative and noise-free source which is convenient to mine to get qualitative information, it fails, taken alone, to provide insights about the entity or the resonance of the events. On the contrary, microblogs contain a considerable amount of noisy and unreliable content, but have the advantage of reflecting the impact that events have on public opinion.</ns0:p><ns0:p>Because of this, we decided to exploit traditional news articles to construct a qualitative basis for our event-detection approach and to integrate the social media data on top of that, in order to get a quantitative measure.</ns0:p><ns0:p>The proposed approach, which will be described in full detail in Section 3, is defined as domainspecific because it collects news from the same sphere of interest (e.g. economy, politics, sports) and represents these documents focusing on the words that are most relevant for that field. However, the approach can be applied to various domains with minimum modifications. For example, if we are interested in identifying events that brought happiness or sadness to people, one might use social media text elements instead of news and a sentiment index indicator created on the same interval time of the social text to associate each social post to its sentiment level. Thus the lexicon would consist of tokens used within social media posts and weighted depending on their sentiment indicators. Besides, please note that our approach performs real-time event detection as it is supposed to identify events of the day d without any knowledge of the future. In particular, it creates the lexicon by looking at news articles and stock data of previous days up to d − 1 without looking at the future. In this paper, we present the implementation of the pipeline that we designed specifically for the financial domain, which is a field where Event Detection has had one of its most promising applications (see Section 2.4). Our motivation derives from the intuition offered by several works in the financial literature that, drawing inspiration from the Adaptive Market Hypothesis <ns0:ref type='bibr' target='#b33'>(Lo, 2004)</ns0:ref>, show that public news have an impact on the stock markets, explaining a part of the return variance <ns0:ref type='bibr' target='#b8'>(Boudoukh et al., 2019)</ns0:ref>. 
This justifies the need for automatic tools that can support companies, traders and all the other actors involved in the market, providing an at-a-glance visualization of acquisitions, stock splits, dividend announcements and other relevant economic events <ns0:ref type='bibr' target='#b22'>(Hogenboom et al., 2013)</ns0:ref>.</ns0:p><ns0:p>We validated our approach through an experimental evaluation based, on one hand, on the Dow Jones' Data, News and Analytics dataset 3 , which contains news articles delivered by globally renown sources, and, on the other hand, on a set of messages collected from Stocktwits, a microblogging platform inspired by Twitter, where users posts short messages related to stock markets and trading. The events that constitute our ground truth for the alert generation algorithm were selected based on the stock price time series of the Standard & Poor's 500 Index (S&P 500), following the intuition that relevant economic events lead to significant movements of the market. Our qualitative and quantitative analysis shows that the proposed method is able to extract meaningful, separable clusters, which correspond to real-world events. Furthermore, the alert generation algorithm detects hot events with high accuracy, proving the effectiveness of the integration of news articles and tweets.</ns0:p><ns0:p>The contributions of our work can be summarized as follows:</ns0:p><ns0:p>• we propose a novel approach to represent news documents exploiting a domain-specific lexicon created ad-hoc using the technique we have introduced in <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>, where the lexicon has been generated on a different dataset;</ns0:p><ns0:p>• we design an original clustering-based event-detection approach that integrates news documents and tweets;</ns0:p><ns0:p>• we show the effectiveness of our method by means of an experimental evaluation performed on real-world datasets;</ns0:p><ns0:p>• we offer a visual inspection of the output obtained on a selected number of real-world events, including the Brexit Referendum, the U.S.-China trade war and the recent outbreak of the Covid-19 pandemic.</ns0:p><ns0:p>The reminder of this paper is organized as follows. Section 2 offers a thorough overview of the background research on Event Detection, analyzing works that deal with different kinds of media and application fields. The proposed approach is described in full detail in Section 3. The datasets and the methodology we have carried out for the evaluation are described in Section 4 while the obtained results are illustrated in Section 5. Finally, Section 6 contains general conclusions about this work and future lines of research.</ns0:p></ns0:div>
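Since the ground-truth events are selected from the S&P 500 price series on the intuition that relevant events move the market, a minimal sketch of that idea is shown below. It is only an illustration under assumptions: the 2% daily-return threshold, the use of plain close-to-close returns and the sample prices are all made up for the example, while the exact selection criterion used in the paper is described in the evaluation section.

def hot_event_days(close_prices, threshold=0.02):
    """Flag days whose absolute close-to-close return exceeds `threshold`.

    `close_prices` is a list of (date, close) pairs ordered by date;
    the 2% threshold is an illustrative assumption.
    """
    flagged = []
    for (prev_date, prev_close), (date, close) in zip(close_prices, close_prices[1:]):
        daily_return = (close - prev_close) / prev_close
        if abs(daily_return) >= threshold:
            flagged.append((date, daily_return))
    return flagged

# Example with made-up prices: only the drop on day3 (about -3.5%) is flagged.
sample = [('day1', 100.0), ('day2', 100.5), ('day3', 97.0)]
print(hot_event_days(sample))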
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>The origins of Event Detection can be traced back to 1998, when a joint effort between the Defense Advanced Research Projects Agency (DARPA), the University of Massachusetts, Carnegie Mellon University and Dragon Systems aimed to define the problem within the wider field of Topic Detection and Tracking (TDT) and proposed an approach based on broadcast news stories that paved the way for new research on the field <ns0:ref type='bibr'>(Allan et al., 1998b,a;</ns0:ref><ns0:ref type='bibr'>Yand et al., 1998)</ns0:ref>. Since then, a considerable variety of algorithms have been proposed to tackle the problem, gradually taking advantage of the remarkable advances in Text Mining and Natural Language Processing. Most interestingly, the birth of social media platforms like Facebook, Twitter and Stocktwits in mid 2000s and their increasing popularity, together with the birth of the new era of Big Data <ns0:ref type='bibr' target='#b36'>(Marx, 2013)</ns0:ref>, led to a widening of the range of data that could be exploited to detect real-world events. To note that it is common to employ lexicons for news representation for the financial domain. Within our previous work <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>, we defined a strategy to generate industry-specific lexicons from news documents with the goal of dynamically capturing the correlation between words and stock price fluctuations. This has been then employed to solve a binary classification task with the goal of predicting the magnitude of future price changes for individual companies. Conversely, in this work we leverage the same technique to represent a new dataset and to solve a different problem, event detection.</ns0:p><ns0:p>In the following, we will hereby illustrate the previous research carried out in Event Detection, grouping it according to the type of source employed in the analysis -basically newswires, social media Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>and an integration of heterogeneous data. Because the approach presented in this paper can be applied to different domains, our overview of related works will also cover a variety of fields, including health, security, sports and many others. However, we will conclude the overview by focusing on the financial sphere, since this is the specific domain within which our approach was developed and validated.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Newswires-based</ns0:head><ns0:p>The first type of data that has been explored in this field consists of traditional newswires and press releases, which, however, still have a primary role even in present research. Early works typically rely on tf-idf features to represent the documents in a Vector Space <ns0:ref type='bibr' target='#b55'>(Salton et al., 1975;</ns0:ref><ns0:ref type='bibr' target='#b32'>Li et al., 2005)</ns0:ref> or <ns0:ref type='bibr'>Bag-of-Words (Zhang et al., 2010)</ns0:ref>. Modification of these classic methods were proposed in order to enhance the representation by means of contextual information <ns0:ref type='bibr' target='#b31'>(Lam et al., 2001)</ns0:ref>, lexical features <ns0:ref type='bibr' target='#b58'>(Stokes and Carthy, 2001)</ns0:ref>, named entities <ns0:ref type='bibr' target='#b29'>(Kumaran and Allan, 2004)</ns0:ref>, topic models <ns0:ref type='bibr'>(Yang et al., 2002)</ns0:ref> and, in more recent work, word-embeddings <ns0:ref type='bibr' target='#b24'>(Hu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Kusner et al., 2015)</ns0:ref>. The most common approaches for the detection task are based on clustering, text classification or a combination of these <ns0:ref type='bibr' target='#b5'>(Atefeh and Khreich, 2015)</ns0:ref>.</ns0:p><ns0:p>Going into more detail, authors in <ns0:ref type='bibr' target='#b24'>(Hu et al., 2017)</ns0:ref> exploit word-embeddings to overcome the downsides of tf-idf representation, namely sparsity and high dimensionality. On top of this, they build an adaptive online clustering algorithm that leads to an improvement in both efficiency and accuracy.</ns0:p><ns0:p>Similarly, authors in <ns0:ref type='bibr'>(Zhou et al., 2018)</ns0:ref> enhance the tf-idf model by integrating the Jaccard Similarity coefficient, word-embeddings and temporal aspects of published news, with the goal of spotting hot events. Others <ns0:ref type='bibr' target='#b38'>(Mele et al., 2019)</ns0:ref> propose an algorithm to detect, track and predict events from multiple news streams, taking into account the publishing patterns of different sources and their timeliness in reporting the breaking news. They use a Hidden Markov Model <ns0:ref type='bibr' target='#b6'>(Beal et al., 2002)</ns0:ref> to represent current events and, subsequently, to predict facts that will be popular in the next time slice.</ns0:p><ns0:p>The main contribution of the proposed approach with respect to this line of research is the fact that, in our algorithm, the representation of the events extracted from news articles is enriched by the information mined on social media sources. In this way, we obtain a multifaceted perspective of events. Furthermore, another innovation regards the method employed to represent the textual data. In particular, our pipeline includes the creation of an ad-hoc lexical resource, which detects the words that are most relevant for a specific domain. During the construction of the vector representation of documents, only the word-embeddings of the selected terms are included, as described in full detail in Sections 3.2 and 3.3.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Social Media-based</ns0:head><ns0:p>Since the development of social media platforms and microblogging websites, a big share of the researchers' interest has been aimed at mining these sources of information for a more dynamic and multifaceted inspection of events. Among these platforms, the case of Twitter definitely stands out, becoming a de facto standard domain for Event Detection <ns0:ref type='bibr' target='#b48'>(Petrovic et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b53'>Saeed et al., 2019)</ns0:ref>. A thorough survey by <ns0:ref type='bibr' target='#b20'>Hasan et al. (2018)</ns0:ref>, focused on Twitter-based approaches, suggests that this research branch can be split into three main categories: (i) methods that exploit properties in a tweet's keywords;</ns0:p><ns0:p>(ii) methods that rely on probabilistic topic models; (iii) clustering-based methods.</ns0:p><ns0:p>For the first group, it is worth mentioning TwitInfo <ns0:ref type='bibr'>(Marcus et al., 2011), TwitterMonitor (Mathioudakis and</ns0:ref><ns0:ref type='bibr' target='#b37'>Koudas, 2010)</ns0:ref> and EnBlogue <ns0:ref type='bibr' target='#b3'>(Alvanaki et al., 2011)</ns0:ref>, which identify real-time trends on Twitter and allow the final user to browse large collections of messages, providing contextual information about tweets, visualizations and meaningful insights that describe the identified topics. <ns0:ref type='bibr' target='#b57'>Stilo and Velardi (2016)</ns0:ref> include temporal factors in their analysis in order to cope with the limited context of Twitter messages. <ns0:ref type='bibr' target='#b63'>Weng and Lee (2011)</ns0:ref> propose an approach that builds signals for individual words by applying wavelet analysis <ns0:ref type='bibr' target='#b26'>(Kaiser, 2010)</ns0:ref> on the frequency-based raw signals of the words; this method is able to spot the most relevant words and finally cluster them to form events. Among the works that employ probabilistic topic models to represent tweets in a latent space, TwiCal <ns0:ref type='bibr' target='#b49'>(Ritter et al., 2012)</ns0:ref> is an open-domain event-extraction framework that identifies significant events based on a multitude of features including, but not limited to, contextual, dictionary and orthographic features.</ns0:p><ns0:p>TopicSketch <ns0:ref type='bibr' target='#b64'>(Xie et al., 2016)</ns0:ref> is a system that identifies bursty topics from live tweet streams in an efficient way, by tracking the occurrence of word pairs and triples in small 'sketches' of data. Zhou et al.</ns0:p><ns0:p>(2015) devise a lexicon-based approach to spot tweets that are event-related and, based on these tweets, extract a structured representation of events by means of an unsupervised Bayesian model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As for clustering-based approaches, <ns0:ref type='bibr' target='#b47'>Petrović et al. (2010)</ns0:ref> propose a time-efficient way to determine the novelty of a new tweet appearing in a live stream; novel tweets represent new stories and, therefore, will be assigned to newly created clusters, which are later ranked according to the number of unique user posts and the entropy information. The approach by <ns0:ref type='bibr' target='#b7'>Becker et al. (2011)</ns0:ref> groups tweets into semantically related clusters and then exploits a series of cluster properties (including temporal, social and topical features) in order to discriminate between real-world events and non-events messages. Analogously, <ns0:ref type='bibr' target='#b27'>Kaleel and Abhari (2015)</ns0:ref> employ a locality-sensitive-hashing scheme to extract clusters from the Twitter stream; the exploration of the clusters, which takes into account size, time and geolocation, leads to the identification of significant real-world events.</ns0:p><ns0:p>As already mentioned, the novelty of our approach with respect to these works is that social media data is not considered on its own, but in conjunction with news articles, in order to obtain a more insightful representation of events.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Integration of heterogeneous data</ns0:head><ns0:p>As stated in the Introduction section, several works in the literature suggest that, in many scenarios, an integration of different kinds of sources is necessary to improve the effectiveness of the event-detection algorithm, as far as both timeliness and coverage are concerned <ns0:ref type='bibr' target='#b48'>(Petrovic et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b41'>Musaev et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b46'>Petkos et al., 2012)</ns0:ref>. As a consequence, a promising research branch has grown based on this principle.</ns0:p><ns0:p>One interesting example is represented by the work by <ns0:ref type='bibr' target='#b45'>Osborne et al. (2012)</ns0:ref>, which aims to mitigate the spuriousness intrinsic to Twitter messages by means of information from Wikipedia. The latter is used as a filter to discard large numbers of noisy tweets, thus refining the representation of the extracted events. <ns0:ref type='bibr' target='#b60'>Thapen et al. (2016)</ns0:ref> propose a methodology to automatically spot outbreaks of illness from spikes of activity in real-time Twitter streams. A summary of these events is provided to the user with the goal of creating situational awareness; this is achieved by presenting the most significant tweets and by linking them with relevant news, which are searched on the Web based on term occurrences. <ns0:ref type='bibr' target='#b46'>Petkos et al. (2012)</ns0:ref> develop a novel multimodal clustering algorithm to explore multimedia items extracted from several social media platforms, with the purpose of detecting social events. The authors suggest that the proposed approach can be extended to any scenario which requires the usage of multimodal data. In <ns0:ref type='bibr' target='#b10'>(Consoli et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b11'>(Consoli et al., , 2020) )</ns0:ref> the authors present some novel optimization strategies for the quartet method of hierarchical clustering, a methodology popular in the context of biological phylogenesis construction by integration and clustering of different heterogeneous data.</ns0:p><ns0:p>Our approach differs from other works in this category in the way news and tweets are juxtaposed.</ns0:p><ns0:p>In fact, the information extracted from news articles constitutes the basis of our event-detection algorithm, while the processing of tweets is implemented on top of that, with the goal of corroborating that information.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Event detection in Finance</ns0:head><ns0:p>Event detection, Natural Language Processing and Sentiment Analysis have been widely applied in the financial sphere to provide more and more insightful tools for supporting decision making <ns0:ref type='bibr' target='#b65'>(Xing et al., 2018)</ns0:ref>. Some works have pushed the research as far as correlating the information about the events with the movement of the stock prices, with the goal of predicting future returns and developing trading strategies. <ns0:ref type='bibr' target='#b21'>Heston and Sinha (2017)</ns0:ref> study in which way the sentiment and the aggregation of the news affect the time horizon of the stock return predictability. In particular, through a neural network-based method, they show that daily news can forecast returns within one or two days, while aggregating news over one week provides predictability for up to 13 weeks. Moreover, the authors produce evidence that positive news stories increase stock returns quickly, while negative stories have a long delayed reaction. <ns0:ref type='bibr' target='#b56'>Schumaker and Chen (2009)</ns0:ref> combine news textual data and S&P 500 price time-series to estimate a discrete stock price twenty minutes after a news article was released, using Support Vector Machines <ns0:ref type='bibr' target='#b59'>(Suykens and Vandewalle, 1999)</ns0:ref>. <ns0:ref type='bibr' target='#b13'>Ding et al. (2015)</ns0:ref> extract a structured representation of events from financial news, relying on the Open Information Extraction tool developed by <ns0:ref type='bibr'>Yates et al. (2007)</ns0:ref>, and subsequently train a neural tensor network to learn event embeddings; this dense vector representation is then fed into a deep learning model to predict short-term and long-term stock price movements on S&P 500.</ns0:p><ns0:p>As far as social media-based approaches are concerned, <ns0:ref type='bibr' target='#b12'>Daniel et al. (2017)</ns0:ref> carry out an analysis of the content published on Twitter about the thirty companies that compose the Dow Jones Average. In particular, the authors start by detecting and discarding noisy tweets that might distort the information about relevant financial events; in the next steps, they perform a sentiment analysis on the valuable tweets Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and correlate them with the behavior of the stock market. Authors in <ns0:ref type='bibr' target='#b61'>(Tsapeli et al., 2017)</ns0:ref> apply a bursty topic detection method on a stream of tweets related to finance or politics and, then, employ a classifier to identify significant events that influence the volatility of Greek and Spanish stock markets. Events are represented as feature vectors that encompass a rich variety of information, including their semantics and meta data. Starting from the same motivations, <ns0:ref type='bibr' target='#b34'>Makrehchi et al. (2013)</ns0:ref> collect a set of tweets related to companies of the S&P 500 index and label them based on the price movement of the corresponding stock.</ns0:p><ns0:p>Then, they train a model on this set to make predictions on the labels of future tweets and, on top, create trading strategies that prove to give significant returns compared to baseline methods.</ns0:p><ns0:p>Another branch in financial event detection is focused on the extraction of potentially useful information, like events, from news and social media, that can represent a valuable resource for further algorithmic processing or for human-in-the-loop decision making. The Semantics-Based Pipeline for Economic Event Detection (SPEED) <ns0:ref type='bibr' target='#b22'>(Hogenboom et al., 2013)</ns0:ref> aims to extract financial events from news articles and annotate them with meta-data with an efficiency that allows real-time use. This is achieved through the integration of several resources, including ontologies, named entities and word disambiguators, and constitute a feedback loop which fosters future reuse of acquired knowledge in the event detection process. <ns0:ref type='bibr' target='#b25'>Jacobs et al. (2018)</ns0:ref> tackle the task of economic event detection by means of a supervised data-driven approach. They define the problem as a sentence level multilabel classification task, where the goal is to automatically assign the presence of a set of pre-determined economic event categories in a sentence of a news article. Following the same intuition, Ein-Dor et al. ( <ns0:ref type='formula'>2019</ns0:ref>) develop a supervised learning approach for identifying events related to a given company. For this purpose, the authors train a sentence-level classifier, which leverages labels automatically extracted from relevant Wikipedia sections. <ns0:ref type='bibr' target='#b23'>Hogenboom et al. (2015)</ns0:ref> measured the effects of various news events on stock prices. They retrieved 2010 and 2011 ticker data and news events for different equities and identified the irregular events. Finally, they cleaned the ticker data of rare event-generated noise and obtained a dataset with a more accurate representation of the expected returns distribution.</ns0:p><ns0:p>Moreover, <ns0:ref type='bibr' target='#b42'>Nuij et al. (2014)</ns0:ref> presented a framework for automatic exploitation of news in stock trading strategies where events were extracted from news messages presented in free text without annotations. It turned out that the news variable was often included in the optimal trading rules, indicating the added value of news for predictive purposes.</ns0:p><ns0:p>The innovation that we bring with respect to the literature consists, first of all, in the integration of different sources to obtain richer representations of events. 
Second, we propose a method to estimate the resonance of an event based on the activity observed on social media platforms, and we leverage this measure to provide warnings to the final user. Last but not least, our method has been deployed for the real-time detection of financial events; in the evaluation reported here we applied it to historical data, but without ever using information that lies in the future of the day under analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PROPOSED APPROACH</ns0:head><ns0:p>The problem that we set out to tackle in this work is twofold. In the first place, we want to identify groups of news stories related to real-world events of a specific domain, on a daily basis. More precisely, given a day d and a look-back period of n days, our approach aims to extract k semantically related clusters made of text documents published by newswires providers during the n days before d. The parameter k is automatically estimated from the data so that it reflects the real events actually taking place in the best possible way. Each cluster is described by a set of properties, including relevant headlines and keywords, that are semantically correlated with the event represented by the cluster.</ns0:p><ns0:p>Secondly, we intend to tackle the problem of understanding whether a highly relevant event is taking place on a given day. Such an event is defined as hot and is associated with an increased amount of content published on a microblogging platform about that topic in the same time interval.</ns0:p><ns0:p>The main ideas underlying our proposed approach are the following:</ns0:p><ns0:p>• detecting the words that are more significant for the context under analysis can lead to more effective domain-aware representations of documents;</ns0:p><ns0:p>• clustering techniques allow to identify and distinguish events reported in news stories;</ns0:p><ns0:p>• the integration of social media data and news stories is key to spot hot events that are potentially noteworthy for the context under analysis.</ns0:p></ns0:div>
<ns0:div><ns0:p>In the following sections, we will describe the implementation of the algorithm that we designed for a specific scenario, namely the financial field. However, we would like to point out that our proposal can be generalized to any sphere of interest with minimum modifications, concerning mainly the filter applied to the news corpus and the numeric feedback used to assign a score to words in the lexicon generation phase.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Overall Architecture</ns0:head><ns0:p>The proposed algorithm is outlined in the pipeline in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, which is repeated for each single day d on which the event-detection task is executed. The first step consists of the generation of a dynamic, context-specific lexicon, which includes the list of words that have proven to have the biggest impact on the market in a given period before d <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>. This resource is computed by combining two different data sources: on the one hand, words are extracted from financial news published in a time interval that typically ranges from 2 to 4 weeks previous to d. On the other hand, the stock price time-series of the chosen market is used to assign numeric scores to the words appearing in the press releases.</ns0:p><ns0:p>In the following we give some formal notation to illustrate how this step corresponds to performing a marginal screening <ns0:ref type='bibr' target='#b18'>(Genovese et al., 2012)</ns0:ref>, a form of variable selection which is proven to be more efficient than the Lasso while retaining good statistical accuracy. Let us assume that in the period [d − ℓ, d − 1] the algorithm collects N articles, where a portion of them contains the term j. Then</ns0:p><ns0:formula xml:id='formula_0'>f(j) = (1/N) ∑_{1≤k≤N} X_k(j) · ∆_d(k),</ns0:formula><ns0:p>where X_k(j) is a dummy variable indicating whether term j appears in article k and ∆_d(k) is the return on the day d for article k. In this form, f(j) is the slope of a cross-article regression of</ns0:p><ns0:formula xml:id='formula_1'>∆_d = (∆_d(1), . . . , ∆_d(N))</ns0:formula><ns0:p>on the dummy variable X(j) = (X_1(j), . . . , X_N(j)). More precisely, the f(j) are the coefficients of a marginal regression. Sorting them by decreasing score and selecting those whose values are above (below) some specified threshold is equivalent to taking the first n and the last n. Moreover, in our lexicon construction, if S is the index set of positive and negative words (those corresponding to high or low stock variations), and Ŝ = { j : f(j) ≥ t_+ or f(j) ≤ t_− }, under certain conditions Prob(Ŝ = S) → 1 as N and the number of terms go to infinity. This corresponds to the sure screening property <ns0:ref type='bibr' target='#b16'>(Fan and Lv, 2008)</ns0:ref>.</ns0:p><ns0:p>Once the specialized lexicon is obtained, it is applied as a filter on the news documents, so that only the terms that appear in the lexicon are retained. Subsequently, a document-embedding representation of each news story is constructed by computing the average of the word-embeddings of its filtered words.</ns0:p><ns0:p>After the news-modeling stage, the document-embeddings are fed to an agglomerative clustering algorithm, which returns a list of labels, indicating the cluster to which each specific observation belongs, and a variable number of centroids. Intuitively, each cluster should correspond to the event discussed in the news contained in it, while the cluster centroid serves as a high-level discriminating representation of the event. The previous output is later refined through an operation of outlier removal, whose goal is to find and discard those documents whose assignment to their cluster is weak. Once the spurious data have been cleaned out from the clusters, a series of properties are extracted from each group of news, both for illustrative and for evaluation purposes.
This information includes the titles of the articles, the percentage of positive and negative words (associated to high or low stock price variations, as described in the next paragraph), and the list of the most relevant words for the cluster, assessed through a tf-idf -based method.</ns0:p><ns0:p>At this point of the pipeline, the integration between news stories and social media data takes place.</ns0:p><ns0:p>The idea here is, first, to find tweets that are semantically correlated to some group of news and, second, to detect if an event reported in the news has a wide resonance on the social media platform. More specifically, the tweets relevant for the market under analysis published on the most recent day of the time interval are collected and then represented with the same embedding-based method previously employed for the news. The assignment task consists of attaching every tweet to the closest news-cluster, according to a similarity measure calculated between the tweet-embedding and each news-centroid, as long as this distance is smaller than a defined tweet distance threshold; otherwise, the tweet is discarded.</ns0:p><ns0:p>The last step in the event-detection pipeline is the alert generation. This happens when the percentage of the assigned tweets w.r.t the overall number of tweets published on the most recent day of the time interval is bigger than a given alert threshold. In fact, this suggests that a considerable number of people on the social media platform are discussing some events reported in the news.</ns0:p></ns0:div>
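To make the marginal-screening view of the lexicon concrete, the following minimal Python sketch (illustrative only, not the code used in our experiments; all variable and function names are our own choices) computes the scores f(j) from a binary article-term matrix and a vector of next-day returns, and selects the index set Ŝ given two thresholds.

```python
# Illustrative sketch of the marginal screening described above (not the code
# used in the experiments). X is the binary article-term matrix, delta the
# vector of next-day returns associated to the articles.
import numpy as np

def marginal_screening(X: np.ndarray, delta: np.ndarray, t_pos: float, t_neg: float):
    """Return the scores f(j) and the index set S_hat of selected terms."""
    N = X.shape[0]
    # f(j) = 1/N * sum_k X_k(j) * delta_d(k): slope of the cross-article regression.
    f = (X * delta[:, None]).sum(axis=0) / N
    s_hat = np.where((f >= t_pos) | (f <= t_neg))[0]
    return f, s_hat

# Toy usage: 5 articles, 4 candidate terms, returns in the order of +/-1%.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5, 4)).astype(float)
delta = rng.normal(0.0, 0.01, size=5)
scores, selected_terms = marginal_screening(X, delta, t_pos=0.002, t_neg=-0.002)
```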
<ns0:div><ns0:head n='3.2'>Lexicon Generation</ns0:head><ns0:p>The lexicon generation stage leverages the method that we proposed in <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020</ns0:ref>), which we hereby set out to illustrate for the sake of completeness. From a general perspective, the goal of the lexicon generation is to select the set of words that are most relevant for a specific domain in a given time interval. In order to be able to capture the impact of events that occur day by day (and thus the effect of new words that show up in news articles reporting such events), we perform the lexicon creation in a dynamic way, repeating its generation every day. For these reasons, we define the lexicons generated by our approach as time-aware and domain-specific.</ns0:p><ns0:p>If we apply this concept to the financial sphere, the relevance of a word can be estimated by observing the effect that it has on the market after the delivery of the news stories containing this word. In this sense, the resulting lexicons will capture potential correlations between words that appear in news stories and stock price movements: terms that are consistently followed by significant positive (negative) variations will receive a high (low) score, while terms that are followed by negligible or arbitrary variations will tend to have a score close to 0. Going into more detail, for each day we collect all the news that are relevant for the S&P 500 Index published during the time frame [d − ℓ, d − 1] (with ℓ ≥ 1). More precisely, we select all news with at least one mention of Standard & Poor (or strictly related keywords like SP500 and SPX). For each news article in this set, we extracted the text, consisting of the title, the snippet and the full body of the article, and then we performed some standard pre-processing techniques on it, such as stop-words removal (using that of Stanford CoreNLP 4 ), stemming and tokenization (the last two using NLTK 5 ). In addition, we removed from the corpus all the words that appeared too frequently and too infrequently, according to given tolerance thresholds. In our case, we filtered out all the words that appear in more than 90% of the documents or in less than 10 documents (both thresholds were set experimentally). Subsequently, we construct a document-term matrix, in which each row corresponds to a news article and date and each column corresponds to a term, as obtained after the pre-processing. In the next step, we iterate over the rows of the matrix and, for each of them, we assign to each of its terms a value equal to the stock price variation registered on the day after the article was published, defined as:</ns0:p><ns0:formula xml:id='formula_2'>∆ d ′ = close d ′ − close (d ′ −1) close (d ′ −1) ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where d</ns0:p><ns0:formula xml:id='formula_3'>′ ∈ [d − ℓ, d − 1]</ns0:formula><ns0:p>is the day after the publication of the underlying article, and close d ′ is the price of the stock at the closing time of the market on day d ′ . Finally, each column is averaged (counting only non-zero entries), thus obtaining a list of terms, each associated to a score given by the average of the values assigned to them. We sort the terms by decreasing scores and select the first n and the last n.</ns0:p><ns0:p>These are the ones associated to higher price variations, respectively positive and negative, and represent the time-aware, domain-specific lexicon that will be exploited for the news modeling phase. 
</ns0:p></ns0:div>
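As a complement to the description above, the sketch below shows one possible implementation of the daily lexicon construction. It assumes a pandas DataFrame `news` with columns `date` and `text` and a price Series `close` indexed by calendar day (both are assumptions for illustration), and it follows the 20th/80th percentile cut reported in Section 4.2; it is not the exact code of our system.

```python
# Sketch of the time-aware lexicon generation (illustrative; data layout assumed).
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

def build_lexicon(news: pd.DataFrame, close: pd.Series,
                  low_pct: float = 20, high_pct: float = 80) -> dict:
    # Document-term matrix after frequency filtering; stemming and a richer
    # stop-word list could be plugged in through the `tokenizer` argument.
    vectorizer = CountVectorizer(lowercase=True, stop_words='english',
                                 max_df=0.9, min_df=10, binary=True)
    X = vectorizer.fit_transform(news['text'])            # (N articles x V terms)

    # Eq. (1): close-to-close variation on the day after each article.
    day = pd.to_datetime(news['date'])
    delta = (close.reindex(day + pd.Timedelta(days=1)).to_numpy()
             - close.reindex(day).to_numpy()) / close.reindex(day).to_numpy()
    delta = np.nan_to_num(delta)                          # non-trading days ignored in this sketch

    # Average the variation over the articles containing each term
    # (i.e., counting only non-zero entries of the column).
    occurrences = np.asarray(X.sum(axis=0)).ravel()
    scores = np.asarray(X.multiply(delta[:, None]).sum(axis=0)).ravel() \
             / np.maximum(occurrences, 1)

    terms = np.asarray(vectorizer.get_feature_names_out())  # get_feature_names() on older scikit-learn
    lo, hi = np.percentile(scores, [low_pct, high_pct])
    return {'negative': set(terms[scores <= lo]), 'positive': set(terms[scores >= hi])}
```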
<ns0:div><ns0:head n='3.3'>Feature Engineering</ns0:head><ns0:p>The aim of the news modeling phase is to obtain a representation of the news documents in a vector space, such that it captures its semantics and it is convenient for the subsequent cluster analysis. This must be achieved by characterizing each article through the words that are more relevant for the specific domain, ignoring the words that represent noise or that, in any case, do not provide valuable information. The two main resources that are exploited in this stage are the lexicons described in the previous Section 3.2 and a word-embedding model, which assigns a dense vector representation to words <ns0:ref type='bibr' target='#b39'>(Mikolov et al., 2013)</ns0:ref>. The latter can be obtained by training the model on the text corpus under analysis or by loading a pre-trained model previously fit on an independent corpus.</ns0:p><ns0:p>First of all, each news article undergoes a series of standard text pre-processing operations, namely tokenization, conversion to lower case and stopwords removal. Subsequently, the words of each document are filtered by means of the lexicon produced on the day of the publication of the news, so that only the words that appear in the lexicon are retained. Finally, the word-embeddings of the filtered words are extracted and their average is computed to obtain the news-embedding. The embedding representation of news documents obtained in the previous step is the input to the clustering algorithm (Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>), whose goal is to split the articles in semantically-correlated groups.</ns0:p></ns0:div>
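The following fragment sketches the document-embedding computation under the assumption that a gensim KeyedVectors model (e.g., the pre-trained Google News word2vec vectors) is available; the loading path and the names are illustrative, not part of our released code.

```python
# Sketch of the news-modeling step (illustrative). `wv` is a gensim KeyedVectors
# model, `lexicon` the set of positive and negative lexicon words.
import numpy as np
from gensim.utils import simple_preprocess

def embed_document(text, lexicon, wv):
    """Average of the word-embeddings of the tokens retained by the lexicon."""
    tokens = simple_preprocess(text)                      # lower-casing + tokenization
    kept = [t for t in tokens if t in lexicon and t in wv.key_to_index]  # wv.vocab on gensim 3.x
    if not kept:
        return None                                       # no lexicon word in this document
    return np.mean([wv[t] for t in kept], axis=0)         # e.g. a 300-d news-embedding

# Possible usage (file name is an assumption):
# from gensim.models import KeyedVectors
# wv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
# vec = embed_document(title + ' ' + body, lexicon['positive'] | lexicon['negative'], wv)
```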
<ns0:div><ns0:head n='3.4'>News clustering</ns0:head><ns0:p>Ideally, each cluster corresponds to a real-world event.</ns0:p><ns0:p>For this purpose, we employ the agglomerative clustering algorithm. The decision mainly arises from a comparison with other standard techniques, which in this specific scenario do not prove as effective at separating the input data (see Section 4 for a detailed comparative analysis). The agglomerative clustering is a method pertaining to the family of hierarchical algorithms, which build nested clusters by merging or splitting them successively <ns0:ref type='bibr' target='#b50'>(Rokach and Maimon, 2005;</ns0:ref><ns0:ref type='bibr' target='#b40'>Murtagh, 1983;</ns0:ref><ns0:ref type='bibr'>Zhao et al., 2005)</ns0:ref>. More specifically, the agglomerative algorithm follows a bottom-up approach: at the beginning, each sample represents a cluster on its own, and clusters are successively merged together according to a linkage criterion. In this study, the choice fell on the average linkage criterion, which minimizes the average of the distances between all observations of pairs of clusters, while the affinity used to compute the linkage is the cosine distance, the most commonly employed metric when dealing with text documents.</ns0:p><ns0:p>An important aspect to take into account is the number of clusters k that the algorithm extracts. This can be set as a parameter of the agglomerative method, but finding the most suitable k a priori is not trivial. Above all, using a fixed k for all days would in most cases lead to a misshaped approximation of reality, because the number of events taking place around the world naturally varies enormously from day to day and in different periods of the year. For this reason, a technique known as the silhouette maximization method is used to find the ideal value of k in a dynamic manner. The silhouette coefficient is a metric used to evaluate the performance of a clustering algorithm when a ground truth is not available. It ranges from -1 to 1, where higher scores relate to models with better defined clusters, and it is defined for each sample by the following formula:</ns0:p><ns0:formula>silhouette = (b − a) / max(a, b),</ns0:formula><ns0:p>where a is the mean distance between a sample and all other points in the same class and b is the mean distance between a sample and all other points in the next nearest cluster. A global score for the</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>whole model can be easily computed as the average of all the scores computed on the single samples. In fact, the average silhouette coefficient is the metric that guides us in the choice of the best number of clusters k on each day on which the event-detection pipeline is executed. The agglomerative clustering algorithm is run with k values ranging from 2 to 10 and the silhouette score is computed on the output for every k. The value of k which led to the highest silhouette is selected.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the output of a small instance of the silhouette maximization method applied on a set of news collected in one week. The output of the agglomerative algorithm is simply a sequence of labels, which indicate the cluster to which each specific observation belongs. The method by itself does not return any centroid, as this notion is not employed in any step of its procedure. However, the next phases in the event-detection pipeline require also a centroid for each cluster (i.e., a vector obtained through a combination of the samples in the cluster, typically the mean or median). For this reason, we manually construct a series of centroids, computed as the median of the document-embeddings contained in the respective cluster. In this scenario, the median is a more suited choice compared to the mean, because it is less sensitive to noise and outliers.</ns0:p><ns0:p>The resulting centroids, which are vectors of the same length of the document-embeddings, serve as high-level discriminating representations of the corresponding events.</ns0:p></ns0:div>
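A compact sketch of this step is given below; it relies on scikit-learn's AgglomerativeClustering and silhouette_score and mirrors the k = 2..10 search and the median centroids described above. It is illustrative code under these assumptions, not our exact implementation.

```python
# Sketch of the silhouette-maximization clustering (illustrative).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_news(embeddings: np.ndarray, k_range=range(2, 11)):
    """Return cluster labels and median centroids for the k with the best silhouette."""
    best_labels, best_score = None, -1.0
    for k in k_range:
        model = AgglomerativeClustering(n_clusters=k, linkage='average',
                                        affinity='cosine')  # parameter named `metric` in recent scikit-learn
        labels = model.fit_predict(embeddings)
        score = silhouette_score(embeddings, labels, metric='cosine')
        if score > best_score:
            best_labels, best_score = labels, score
    # Agglomerative clustering returns no centroids: build them as the median of
    # the document-embeddings of each cluster (more robust to noise than the mean).
    centroids = {c: np.median(embeddings[best_labels == c], axis=0)
                 for c in np.unique(best_labels)}
    return best_labels, centroids
```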
<ns0:div><ns0:head n='3.5'>Outlier Removal</ns0:head><ns0:p>Not necessarily all the articles published by press sources report events currently taking place: for example, in some cases they might refer to anniversaries of past happenings or they might discuss current affairs from a general perspective, including more than one event. This can cause noise in the formation of the clusters and, to some extent, can negatively influence the features of the centroid. For this reason, it is recommendable to detect and remove the outlier documents within each cluster (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>). Intuitively, these are the observations on which the clustering algorithm was least effective.</ns0:p><ns0:p>Again, the silhouette coefficient (this time in its per-sample version) is used to spot the documents that were poorly clusterized: those with lower silhouette scores are typically the ones that lie on the border between two or more groups, causing a higher uncertainty in the clustering task. This is not enough, though: in fact, there might be samples that, even if they are not located on a border, have a weak correlation with the other articles of the same cluster: these are typically the documents that lie further away from the centroid of the group to which they belong. Therefore, the noise-reduction task that we designed exploits two different metrics in order to detect the outliers: the per-sample silhouette coefficient and the cosine distance from the centroid. First of all, the samples are sorted in decreasing order according to these two metrics, respectively, thus obtaining two different rankings. Then, cutoff threshold is defined on each ranking, by picking a percentile value computed on each of the two lists, respectively (typically somewhere between the 10th and the 30th). Finally, all the samples whose scores Manuscript to be reviewed</ns0:p><ns0:p>Computer Science are below the cutoff threshold in one of the two rankings are marked as outliers and suppressed. It is straightforward to note that choosing higher percentiles to set the cutoff threshold will make the algorithm more selective, in the sense that it will consider more documents as outliers. In rare extreme cases, this might lead to the total suppression of one of more clusters, if these already contained few samples in the first place and were not compact.</ns0:p><ns0:p>At this point, the new centroids of the affected clusters need to be computed, to account for the elimination of some of the documents. Similarly to what was done before, each centroid is obtained as the median of the document-embeddings that remain in the cluster after the outlier removal.</ns0:p><ns0:p>An example of clustering and outlier removal can be observed in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>, which presents a bidimensional visualization of the clusters obtained from the financial news published on the U.S. press on the week before the Brexit referendum, an event that we will use along the paper that took place on the 23rd of June 2016. Figure <ns0:ref type='figure' target='#fig_9'>5a</ns0:ref> shows the original clusters including all documents. In Figure <ns0:ref type='figure' target='#fig_9'>5b</ns0:ref> the outliers are marked in black. For this example, the 30th percentile was used as the cut-off threshold, in order to make the effects of the algorithm more visible. Centroids are indicated by '+' marks, in the same color of the respective cluster.</ns0:p></ns0:div>
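The outlier-removal step can be sketched as follows (illustrative code; the 15th percentile is the cutoff used in our experiments, and `centroids` is assumed to be a dictionary mapping cluster ids to centroid vectors):

```python
# Sketch of the two-criteria outlier removal (illustrative).
import numpy as np
from scipy.spatial.distance import cosine
from sklearn.metrics import silhouette_samples

def remove_outliers(embeddings, labels, centroids, percentile: float = 15.0):
    labels = np.asarray(labels)
    sil = silhouette_samples(embeddings, labels, metric='cosine')
    dist = np.array([cosine(x, centroids[c]) for x, c in zip(embeddings, labels)])
    sil_cut = np.percentile(sil, percentile)              # weakest-silhouette tail
    dist_cut = np.percentile(dist, 100 - percentile)      # furthest-from-centroid tail
    # A document is an outlier if it falls below the cutoff in either ranking.
    keep = (sil >= sil_cut) & (dist <= dist_cut)
    # Recompute the median centroid of every cluster that still has documents.
    new_centroids = {c: np.median(embeddings[(labels == c) & keep], axis=0)
                     for c in np.unique(labels[keep])}
    return keep, new_centroids
```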
<ns0:div><ns0:head n='3.6'>Relevant Words Extraction</ns0:head><ns0:p>There are several properties that can be extracted from each cluster to provide insightful information to the user. At the same time, they can serve as a useful means to perform a qualitative evaluation of the clusters, as they allow judging at first sight if the clusters are meaningful and coherent. These properties include the titles and snippets of the news articles, the time-span covered by the cluster, the percentage of positive and negative words from the specialized lexicon and the list of relevant words. Hereby we focus our attention on the latter (Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>), as the other ones are trivial to extract.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>At first, all the news articles included in the current time interval are fed to a tf-idf model, regardless of their cluster. The features used to fit the model are the words included in the specialized lexicon, so this is equivalent to filtering the documents' words with the lexicon. The output of the model is a sequence of vectors, one for each document, where the values represent the relevance of the corresponding words for the document. At this point, the tf-idf vectors are grouped up according to the cluster to which the respective documents have been assigned. Then the average of the vectors is computed for each group, thus obtaining a unique score for each feature for each cluster, indicating the relevance of that word for that cluster. Finally, it is sufficient to rank the features and select the top n to get the list of the most relevant words for each group of news.</ns0:p><ns0:p>Tables <ns0:ref type='table' target='#tab_1'>1 and 2</ns0:ref> show the instance of the 3 most relevant headlines and the lists of the 10 most relevant words, respectively, for the clusters obtained on the day of the Brexit referendum. It is clear from the news titles that cluster#0 effectively captures this event, while the others are more focused on different aspects of business and finance (cluster#1 deals with global finance, cluster#2 with stock markets, cluster#3 with volatility, cluster#4 with the Federal Reserve system). The fact that several mentions of Brexit also appear in the headlines of the other clusters is attributable to the huge impact of the British referendum on many spheres of economy around the world. Not surprisingly, also the titles of the Brexit-cluster are characterized by the financial jargon, since the whole set of news on which the event-detection task is performed was selected by this specific field of interest. For the same reason, the variety of semantic fields involved in the lists of relevant words is not so wide between clusters. Noticeably though, these lists reflect quite accurately the content of headlines of Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. </ns0:p></ns0:div>
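An illustrative sketch of the relevant-words extraction is reported below; the tf-idf model is restricted to the lexicon vocabulary and averaged per cluster, as described above (function and variable names are our own illustrative choices).

```python
# Sketch of the tf-idf-based relevant-words extraction (illustrative).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def relevant_words(texts, labels, lexicon, top_n: int = 10):
    labels = np.asarray(labels)
    vectorizer = TfidfVectorizer(vocabulary=sorted(lexicon))   # features = lexicon words only
    tfidf = vectorizer.fit_transform(texts)                    # (documents x lexicon terms)
    terms = np.asarray(vectorizer.get_feature_names_out())     # get_feature_names() on older scikit-learn
    top_words = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        cluster_scores = np.asarray(tfidf[idx].mean(axis=0)).ravel()
        top_words[c] = terms[np.argsort(cluster_scores)[::-1][:top_n]].tolist()
    return top_words
```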
<ns0:div><ns0:head n='3.7'>Tweet Assignment</ns0:head><ns0:p>The goal of this phase is to enrich each cluster of news with a group of tweets that are semantically correlated with the event associated to the cluster (Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref>). First of all, we collect from Stocktwits all the tweets relevant to the market under analysis, published on the most recent day of the time interval used for the event-detection task 6 . The duplicate tweets are removed in order to avoid the negative influence of spam. A vector representation is constructed for every tweet with the same method used for the news articles: the punctuation is removed, the text is tokenized, the words are filtered with the specialized lexicons and the average of the embeddings of the remaining words is computed. Subsequently, the actual assignment takes place. Each tweet-embedding is compared to each newscluster centroid using the cosine similarity measure. The tweet is attached to the closest cluster only if this distance is smaller than a fixed tweet distance threshold; otherwise, the tweet is considered as noise and is not associated to any cluster.</ns0:p><ns0:p>An example of tweet assignment can be observed in Table <ns0:ref type='table'>3</ns0:ref>, which presents the lists of the 3 most relevant tweets for the clusters obtained on the day of the Brexit referendum. Most importantly, the content of these tweets is totally coherent with the titles reported in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. This means that the association of tweets to news-clusters was successful. It is noteworthy that even URLs, in case they contain meaningful keywords, can provide precious information for the semantic representation of the tweets and for the assignment task. This can be observed in the URLs of the first two tweets of cluster#0, which contain direct references to the Brexit referendum.</ns0:p></ns0:div>
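In practice, the assignment reduces to a nearest-centroid search with a rejection threshold, as in the following sketch (illustrative; 0.5 is the tweet distance threshold adopted in our experiments):

```python
# Sketch of the tweet-assignment step (illustrative).
from scipy.spatial.distance import cosine

def assign_tweets(tweet_vecs, centroids, distance_threshold: float = 0.5):
    """tweet_vecs: list of tweet-embeddings (None when no lexicon word survives);
    centroids: {cluster_id: centroid}. Returns {tweet_index: cluster_id}."""
    assignment = {}
    for i, vec in enumerate(tweet_vecs):
        if vec is None:
            continue                                     # tweet treated as noise
        cluster, dist = min(((c, cosine(vec, centroid)) for c, centroid in centroids.items()),
                            key=lambda pair: pair[1])
        if dist < distance_threshold:
            assignment[i] = cluster                      # otherwise the tweet is discarded
    return assignment
```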
<ns0:div><ns0:head n='3.8'>Alert Generation</ns0:head><ns0:p>The last step in the pipeline consists of the detection of the hot events: these are facts that not only have been reported in the news, but are also widely discussed on the social media platforms. The amount of 6 This can be easily done by using the cashtag functionality, i.e. by searching for tweets that contain the symbol $ followed by the market code.</ns0:p></ns0:div>
<ns0:div><ns0:p>No tweets assigned.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. List of the 3 most relevant tweets (i.e. closest to the respective centroid) for each of the 5 clusters obtained from the news collected in the week before the Brexit referendum.</ns0:p><ns0:p>content produced on the Internet about a certain episode is an insightful indicator of the entity of that episode and its consequences. For example, a remarkable popularity of a certain event among the users of Stocktwits is likely to translate into a potential impact on the market, since this website deals mainly with business and finance. Hence the importance of generating alerts that make the investor or trader aware of factors that they should take into account before operating on the market.</ns0:p><ns0:p>This task exploits the tweets-cluster assignment produced in the previous step and simply checks if the percentage of assigned tweets (among all clusters) with respect to the overall number of tweets published on the most recent day of the time interval (thus including also the discarded tweets) is above a fixed alert threshold. If this is true, an alert is generated.</ns0:p><ns0:p>The plot in Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> shows the percentage of assigned tweets between the 10th of June 2016 and the 10th of July 2016. As expected, a peak is observed on the 23rd of June, day of the Brexit referendum, and an alert is generated.</ns0:p></ns0:div>
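In code, the alert rule amounts to a single ratio test, e.g. (illustrative sketch; 3% is the alert threshold used in our experiments):

```python
# Sketch of the alert-generation rule (illustrative).
def generate_alert(assignment: dict, n_published_tweets: int,
                   alert_threshold: float = 0.03) -> bool:
    """True when the share of tweets attached to some news-cluster, with respect
    to all tweets published on the most recent day, exceeds the alert threshold."""
    if n_published_tweets == 0:
        return False
    return len(assignment) / n_published_tweets >= alert_threshold

# e.g. generate_alert(assign_tweets(tweet_vecs, centroids), len(tweet_vecs))
```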
<ns0:div><ns0:head n='4'>EXPERIMENTAL SETTINGS</ns0:head><ns0:p>In this section we will illustrate the datasets we have employed within our study and the methodology we have followed for the experimental evaluation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Datasets</ns0:head></ns0:div>
<ns0:div><ns0:head>Dow Jones DNA</ns0:head><ns0:p>The Dow Jones 'Data, News and Analytics' dataset 7 provides documents from more than 33,000 globally renowned newspapers, including e.g. The Wall Street Journal, the Dow Jones Newswires and The Washington Post. The publications are both in print and online format and cover a wide variety of topics, such as finance, business, current affairs and lifestyle. The delivery frequency ranges from ultra-low latency newswires to daily, weekly, or monthly editions. For every article in the dataset, the headline, the snippet and the full body are available. Furthermore, every item is enriched with a set of metadata providing information about the source, the time and place of the publication, the relevant companies and the topics, among others.</ns0:p><ns0:p>Content usage rights vary based on the specific content, API, or feed combination. These rights include the display for human consumption or text mining for machine consumption and the content retention period.</ns0:p></ns0:div>
<ns0:div><ns0:head>Stocktwits data</ns0:head><ns0:p>Stocktwits 8 is a social media platform designed for sharing ideas between investors, traders, and entrepreneurs. It was founded in 2008 and currently counts over two million registered community members and millions of monthly visitors. Inspired by Twitter, it allows users to share and explore streams of short messages with a maximum 140 characters, organized around tickers referring to specific stocks and financial securities in general. This is achieved through the use of cashtags, which consists of the symbol '$' followed by the code of a financial security (e.g., '$AAPL', '$FB').</ns0:p><ns0:p>The dataset that we employed in our study contains the entire stream of tweets about S&P 500 published between June 2016 and March 2020. These messages were downloaded by means of the official API 9 , selecting only the ones that contained the cashtag '$SPX', which corresponds to the aforementioned stock. The whole obtained collection contains 283,473 tweets.</ns0:p><ns0:p>Beside the full text of the tweet, every item in the dataset comes with a set of metadata, including the exact time of the publication, the number of 'likes' (positive reactions by other users) received by the tweet, the sentiment score associated with the content and the number of the author's followers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Standard & Poor's time-series</ns0:head><ns0:p>Another fundamental data source exploited in our analysis consists of the stock price time series of the Standard & Poor's 500 Index, which measures the market performance of 500 large companies listed on stock exchanges in the United States. Companies are weighted in the index in proportion to their market value. The 10 largest companies in the index account for 26% of the market capitalization of the index.</ns0:p><ns0:p>These are, in order of weighting, Apple Inc., Microsoft, Amazon.com, Alphabet Inc., Facebook, Johnson & Johnson, Berkshire Hathaway, Visa Inc., Procter & Gamble and JPMorgan Chase.</ns0:p><ns0:p>The dataset that we used for our evaluation was collected at a daily frequency and includes the following information:</ns0:p><ns0:p>• open: price of the stock at the opening time of the market;</ns0:p><ns0:p>• close: price of the stock at the closing time of the market;</ns0:p><ns0:p>• high: maximum price reached by the stock during the day;</ns0:p><ns0:p>• low: minimum price reached by the stock during the day;</ns0:p><ns0:p>• volume: number of operations performed on the stock during the day.</ns0:p><ns0:p>The aforementioned indicators are collected in an aggregated way, taking into account the values recorded for all companies included in the index.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Methodology and Settings</ns0:head><ns0:p>The goal of the experimental framework that we designed to evaluate the proposed approach is twofold: on the one hand, we wish to verify that the clustering algorithm, supported by the news-modeling method and the outlier removal, is effective at separating the news stories according to their content and, thus, at providing an insightful way to inspect events. On the other hand, we want to assess the accuracy of the alert-generation algorithm, in particular to confirm that there is a correlation between hot events spotted by our approach and remarkable real-world events. For our purposes, we performed the following set of experiments: i) comparison of different clustering techniques; ii) event-detection qualitative evaluation;</ns0:p><ns0:p>iii) alert-generation assessment.</ns0:p><ns0:p>Assessing the performance of an event-detection task is a daunting task, and getting a thorough quantitative evaluation is not trivial as well. This is partly due to a certain degree of subjectivity implied in the definition of an event, even more when we consider it with respect to a very specific context. In fact, an important occurrence such as a terrorist attack taking place in Europe, which is relevant in an absolute sense, might not be perceived as a relevant event in the sphere of U.S. finance. Moreover, to the best of our knowledge, universally recognized benchmarks of financial events are not available. For these reasons, in order to limit the subjectivity of the choice, we decided to select a list of events in a deterministic way, based on the weekly variations of the S&P 500 Index (more details on the selection method will be given in Section 5). Intuitively, we follow the assumption that important financial events are commonly associated with significant reactions of the stock markets, as suggested by the plots in Manuscript to be reviewed</ns0:p><ns0:p>Computer Science and clustering performance metrics, were based on the popular scikit-learn library 10 ; the K-Means, K-Medoids, K-Medians algorithms used for comparison were implemented through the Pyclustering library 11 ; Natural Language Toolkit 12 and gensim 13 libraries were exploited for text pre-processing. As far as word-embeddings are concerned, we relied on the pre-computed word2vec model based on <ns0:ref type='bibr' target='#b39'>(Mikolov et al., 2013)</ns0:ref>, trained on part of a Google News dataset composed of about 100 billion words. The model contains 300-dimensional vectors for 3 million words and phrases 14 . Throughout the experiments presented in the paper, the parameters were set as follows (if not specified otherwise): each daily lexicon was created on a set of news documents collected from a time window of 4 weeks, excluding all stopwords and terms that appeared in more than 90% or less than 10 documents, and the final lexicon consists of the words below the 20th and above the 80th percentiles of the ranking. The look-back window to collect the news documents to be clusterized on each day is 7 days. The cutoff threshold for the outlier removal stage is set to the 15th percentile. The tweet distance threshold for the tweet-assignment task is set to 0.5; the alert threshold is set to 3%. All the values of these parameters were carefully selected experimentally.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>RESULTS</ns0:head><ns0:p>In this section we will show the results we have obtained. In particular we will show the results related to the clustering algorithm, those related to three specific events, and those related to the alert-generation algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Clustering performance evaluation</ns0:head><ns0:p>The first aspect we investigate is the choice of the clustering algorithm. As mentioned, the average Silhouette Coefficient is a standard metric to evaluate the goodness of a set of clusters. However, since it plays a role in the very construction of the clusters, we need some additional counter-checks to make the assessment more robust and less skewed. For this reason, we decided to include three more indicators in our evaluation (an illustrative sketch of the first and third indicator is given after this list):</ns0:p><ns0:p>• Dunn Index: similarly to the Silhouette Coefficient, it is a standard metric used to assess the performance of a clustering method when the ground truth is not available. Higher values indicate better clustering, and it is defined as:</ns0:p><ns0:formula xml:id='formula_4'>Dunn Index = min_{1≤i≤c} min_{i≤j≤c, i≠j} δ(X_i, X_j) / max_{1≤k≤c} {∆(X_k)},</ns0:formula><ns0:p>where c is the total number of clusters, δ(X_i, X_j) is the intercluster distance between clusters X_i and X_j and ∆(X_k) is the intracluster distance within cluster X_k.</ns0:p><ns0:p>• Number of extracted clusters: this is also a useful indicator to evaluate the quality of a set of clusters, as higher values typically suggest a better separability of the data.</ns0:p><ns0:p>• Overlapping between the clusters' relevant words: it is estimated by computing the Jaccard Index 15 between the lists of top-10 relevant words for each pair of clusters, and by averaging the results. A small average overlapping signifies that news documents belonging to different groups discuss different topics and, therefore, that the articles were properly split according to their semantic content.</ns0:p></ns0:div>
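The Dunn Index and the average relevant-words overlapping can be computed as in the following sketch (illustrative; it assumes at least two clusters and the same embeddings and labels used above):

```python
# Sketch of the auxiliary clustering indicators (illustrative; assumes k >= 2).
import numpy as np
from itertools import combinations
from scipy.spatial.distance import cdist

def dunn_index(embeddings, labels):
    labels = np.asarray(labels)
    clusters = [embeddings[labels == c] for c in np.unique(labels)]
    # delta(X_i, X_j): minimum pairwise cosine distance between two clusters.
    inter = min(cdist(a, b, metric='cosine').min() for a, b in combinations(clusters, 2))
    # Delta(X_k): maximum intracluster (diameter) cosine distance.
    intra = max(cdist(c, c, metric='cosine').max() for c in clusters)
    return inter / intra

def average_word_overlap(top_words: dict) -> float:
    """top_words: {cluster_id: list of top-10 relevant words}. Mean pairwise Jaccard index."""
    scores = [len(set(a) & set(b)) / len(set(a) | set(b))
              for a, b in combinations(top_words.values(), 2)]
    return float(np.mean(scores)) if scores else 0.0
```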
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We used these metrics and the Silhouette Coefficient to compare four different techniques, namely</ns0:p><ns0:p>Agglomerative clustering, K-Means, K-Medians and K-Medoids. These algorithms were executed on the same instances of the data selected for our evaluation, on each day of the time interval, using a look-back window of 1 week. Figure <ns0:ref type='figure' target='#fig_16'>12</ns0:ref> shows the outcome of this experiment, indicating that Agglomerative is the algorithm that leads to better performance in terms of Silhouette, Dunn Index and most remarkably in the number of extracted clusters, while the overlapping of relevant words does not differ much. Please consider that the metrics were computed only after the outlier removal phase, which is responsible for an improvement of approximately 50% of both Silhouette and Dunn Index.</ns0:p></ns0:div>
<ns0:div><ns0:head>Event-detection evaluation</ns0:head><ns0:p>The results presented in the previous section, although obtained through an unsupervised validation, prove by themselves the effectiveness of the clustering algorithm at detecting events from a set of news. For illustration purposes, in this section we will carry out a qualitative analysis of the output of the clustering algorithm, focusing our attention on three specific events:</ns0:p><ns0:p>• The 2016 United States Presidential Elections (8th November 2016);</ns0:p><ns0:p>• The escalation of the U.S.-China trade war (9th May 2019);</ns0:p><ns0:p>• The outbreak of the Covid-19 pandemic (28th January 2020).</ns0:p><ns0:p>The reader notices that these three events are well-known world wide and there was no need to agree on those days. We invite the reader to refer to Section 3 for an analysis of the Brexit referendum in June 2016.</ns0:p><ns0:p>For each event, we picked one date among the most significant ones: the 8th of November is the actual date of the 2016 U.S. elections, which brought to the victory of Donald Trump 16 ; the 9th of May 2019 is a few days after Trump threatened to raise tariffs on China and just one day before U.S. actually increased tariffs from 10% to 25% 17 ; the 28th of January is the first day on which the total number of confirmed cases of Covid-19 worldwide surpassed the one-thousand threshold, passing from 793 to 1, 786 with a dramatic 125% daily change 18 . We applied the event-detection algorithm on the news published in the previous week, not including that date.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_17'>13</ns0:ref> we illustrate the results of the event-detection for each tested clustering algorithm. We remind that the Agglomerative clustering outperforms the others (as it can also be seen from the plots), and, therefore, we will focus our analysis on it (subfigures a, b and c).</ns0:p><ns0:p>From the 2D visualizations presented in Figure <ns0:ref type='figure' target='#fig_17'>13</ns0:ref>, it can be seen that the points representing the news documents properly group up to form clusters. Interestingly enough, Figure <ns0:ref type='figure' target='#fig_17'>13c</ns0:ref> shows a strong polarization of the news, with only two clusters obtained. This is probably ascribable to the epochal impact of the Covid-19 outbreak, that drew a considerable part of the attention of the media, with many other topics left uncovered in the press. The average Silhouette Coefficient is decidedly above 0 for all three case studies (0.28, 0.27 and 0.36, respectively), indicating a satisfactory performance of the Agglomerative algorithm. These results are confirmed by the lists of relevant words (Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>), relevant news headlines (Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>) and relevant tweets (Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>), which accurately reflect the semantics of the events (these last have been generated from the agglomerative clustering output only).</ns0:p></ns0:div>
<ns0:div><ns0:head>Alert-generation evaluation</ns0:head><ns0:p>As mentioned in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, the Alert-generation is the last step of the proposed pipeline and is performed on top of the clustering results and the tweets assignment to the generated clusters. The accuracy of the alert-generation algorithm can be gauged in terms of how many hot events it is able to spot in a given ground truth. As mentioned in Section 4.2, we selected the ground truth for our evaluation by looking at the weekly variations of the S&P 500 Index. More in detail, for every day d between June 2016 and March 2020 we compute the variation, in absolute value, between the close price of d and the close price of d + 7 (i.e., 7 days after d). This quantity is formally defined as:</ns0:p><ns0:formula xml:id='formula_5'>∆_d = |close_(d+7) − close_d| / close_d<ns0:label>(2)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The days d for which ∆ d > 0.02 are marked as event days. The threshold of 0.02, which corresponds to a 2% weekly variation, is set experimentally and leads to approximately 15% of days being marked as an event day. Consecutive event days are aggregated to form events, which are thus defined as contiguous intervals, delimited by a start date and an end date. In order to avoid splitting intervals that actually refer to the same real event, we ignore interruptions of up to 3 days in the chain of consecutive event and <ns0:ref type='bibr'>(start: 2018-01-12, end: 2018-01-15)</ns0:ref>. We assess the recall of the alert-generation algorithm using the following method: for each (start date, end date) pair, we check if the algorithm produces at least one alert within that interval. In the positive case, the event is considered as spotted. The final recall is</ns0:p><ns0:p>given by the number of spotted events out of the total number of events. On the other hand, to calculate the precision we iterate over the daily alerts generated by the algorithms. If an alert lies within an event interval defined by a (start date, end date) pair, then it is considered a hit; otherwise, it is considered a false alarm. The final precision is obtained as the number of hits out of the overall number of alerts. The F-score is computed with the standard formula: 2 • precision • recall precision + recall . We have repeated the experiment for different values of alert threshold, in a range between 1 and 5, with higher values producing less alerts and thus making the algorithm more selective. Not surprisingly, recall scores become lower as the threshold is increased, while precision follows the opposite tendency, as shown in Figure <ns0:ref type='figure' target='#fig_19'>14</ns0:ref>. The reader notes that we have considered precision, recall and F-score for each of the four clustering algorithms, although the discussion below targets the results obtained with the Agglomerative clustering only. This is a well-known phenomenon in Machine Learning evaluation, commonly referred to as trade-off between precision and recall. However, it is remarkable that, with the lowest threshold, our algorithm is able to identify almost all the events listed in the ground truth, while keeping the number of false alarms relatively small (the precision is above 0.5). It is worth noting that, in this specific application field, recall can be considered more important than precision: in fact, for a trader who relies on the alert-generation algorithm to make informed decisions, receiving some false alarms is arguably a lesser evil than missing relevant warnings about events that actually take place. In order to study further this phenomenon, we manually inspected several clusters that led to a false alarm, in order to understand which kinds of events cause this behavior. In many cases, we observed events like, e.g., quarterly earnings reports, that generate a big 'hype' among Stocktwits users, but usually do not produce a proportional impact on the stock price. Furthermore, we calculated the percentage of days marked with an alert out of the whole period on which the algorithm was executed. 
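The construction of the ground truth and the scoring of the alerts can be sketched as follows (illustrative code; `close` is assumed to be a pandas Series indexed at daily calendar frequency, so that a shift of 7 positions corresponds to d + 7, and `alert_days` is the list of days on which the algorithm raised an alert):

```python
# Sketch of the evaluation protocol (illustrative).
import pandas as pd

def event_intervals(close: pd.Series, threshold: float = 0.02, max_gap: int = 3):
    """Mark event days via Eq. (2) and merge them into (start, end) intervals,
    ignoring interruptions of up to `max_gap` days."""
    delta = (close.shift(-7) - close).abs() / close
    event_days = sorted(delta[delta > threshold].index)
    intervals = []
    for day in event_days:
        if intervals and (day - intervals[-1][1]).days <= max_gap + 1:
            intervals[-1][1] = day                       # extend the current event
        else:
            intervals.append([day, day])                 # open a new event
    return [(start, end) for start, end in intervals]

def precision_recall(alert_days, intervals):
    hits = [a for a in alert_days if any(s <= a <= e for s, e in intervals)]
    spotted = sum(1 for s, e in intervals if any(s <= a <= e for a in alert_days))
    precision = len(hits) / len(alert_days) if alert_days else 0.0
    recall = spotted / len(intervals) if intervals else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_score
```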
Figure <ns0:ref type='figure' target='#fig_20'>15</ns0:ref> demonstrates that, for each of the employed clustering algorithms, even with the lower thresholds, the probability of receiving an alert is still reasonably low, confirming that the algorithm is well-aimed.</ns0:p><ns0:p>An interesting finding is that, in several cases, the alert is produced with a delay of several days after the actual event took place. This can be partly ascribed to the asynchronism between newswires providers and social media <ns0:ref type='bibr' target='#b47'>(Petrović et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b44'>Osborne and Dredze, 2014)</ns0:ref>. In addition to this, in our specific application it is important to take into account the latency between the event itself and its effects on the market. In fact, an event might appear in financial news and on financial-related media only after some time, when its economical consequences manifest themselves. This was the case, for example, for the Covid-19 emergency: American investors, consumers and market in general basically ignored news of the virus outbreak in January 2020, even though several hundred cases had already been reported and Wuhan City (first main site of the virus) had already been quarantined by the Chinese authorities. Nevertheless, starting from February the virus became the main concern of the financial-related press and media and the stock market suffered a violent plunge towards the end of the month due to Covid-19 fears 19 .</ns0:p><ns0:p>To conclude the visual inspection of the events discussed in the previous section, Figure <ns0:ref type='figure' target='#fig_21'>16</ns0:ref> illustrates the plot of the percentage of total assigned tweets, whose behaviour determines the generation of the alerts (as explained in Section 3.8). It is straightforward to notice that the curves reach their peaks in correspondence of the date of the event for all three case studies, further confirming the sensitivity of our approach. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>In this work, we proposed an event-detection approach tailored for financial applications that leverages the integration of traditional newswires and social media data in order to extract information about real-world events, on a daily basis. Starting from a specialized domain-aware representation of textual data obtained through ad-hoc lexicons and word-embeddings, our pipeline is able to identify groups of semantically related news by means of a hierarchical clustering algorithm. An outlier-removal module refines the clusters by discarding misclassified documents, so that a noise-free, meaningful representation of events can be computed. At this point, the news clusters are enriched by data coming from social media, with the goal of estimating the impact of events on public opinions. Despite the defined tweet distance threshold that should avoid this case, it might happen that tweets containing different information are associated with the same cluster triggering the presence of a hot event that would correspond to a false positive.</ns0:p><ns0:p>Although this condition has not occurred within our experiments, we will investigate it further in future works we are headed. Finally, by monitoring the activity on social media platforms, an alert-generation algorithm produces warnings to the final users, in order to make them aware of potentially relevant events that have a big resonance on the Internet. The reader notices that to identify the events of a day d our proposed approach generates the lexicon out of news articles and stock data information of previous days up to d − 1 without looking at the future. This makes our approach suitable for real-time event detection.</ns0:p><ns0:p>One of the advantages of the proposed approach is that, although it is domain-specific, it can be easily extended to various application fields with minimum modifications. However, in this work we described the specific pipeline and experimental framework that we implemented for the financial sphere. More in detail, word2vec models can be trained ad-hoc on text corpora in other languages, as the algorithm itself is not language-dependent. As an example, libraries such as spaCy 20 provide pre-trained word-embedding models in 15 languages. Although the approach is scalable and does not have high computational times, each of its steps can be run on different machines by exploiting its pipeline architecture. Also, big data frameworks such as Apache Spark, Apache Hadoop, ElasticSearch can be leveraged and managed by cloud systems (e.g. Amazon AWS) to further make the approach faster especially if the input dataset grows exponentially.</ns0:p><ns0:p>We validated our approach by means of a qualitative and quantitative evaluation, based on Dow Jones' Data, News and Analytics dataset, a stream of data collected from the Stocktwits platform and the stock price time series of the S&P 500 Index. Our experiments show that the approach is effective at identifying clusters of news that correspond to relevant real-world events and at extracting meaningful properties about the associated topic. Besides, the alert-generation algorithm produces warning about hot events with a satisfactory accuracy, covering the majority of financial events taking place in the real world and keeping the number of false alarms relatively small. 
An added value of our evaluation is given by the visual inspection of a selected number of significant real-world events, starting from the Brexit Referendum and reaching until the recent outbreak of the Covid-19 pandemic in early 2020.</ns0:p><ns0:p>One of the applications we envision is the creation of a set of financial indicators that can help improving the accuracy of existing robo-advisory and robo-trading systems. The idea is that identified hot events should be associated to high stock variations and this information might be leveraged to further tune financial forecasting systems.</ns0:p><ns0:p>In the future, we intend to carry out a deeper inspection of the temporal aspects related to the eventdetection task. In particular, we want to gain a better understanding of the effect produced by parameters such as the size of the time windows used for the lexicon creation or for the clustering algorithm. Together with this, we will evaluate the timeliness of the alert-generation algorithm, with the goal of reducing the delay of the generated warnings with respect to the actual starting moment of the events. Another aspect that deserves further investigation is the method used to represent social media data in a vector space. Specifically, we intend to refine the representation of tweets by applying pre-processing techniques that are required by the specificity of the language forms commonly employed by users on Internet platforms. These methods include the assessment of the veracity and reliability of the published content and the detection of slang, grammatical mistakes, misspellings and abbreviations. Last but not least, we would like to take full advantage of the power and benefit that Semantic Web technologies bring: as Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>big data frameworks previously mentioned should address potential computational or scalability problems we might encounter.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>3</ns0:head><ns0:label /><ns0:figDesc>https://developer.dowjones.com/site/global/home/index.gsp 3/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:1:1:NEW 5 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Overall architecture of the proposed approach.</ns0:figDesc><ns0:graphic coords='9,141.73,63.77,416.34,141.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>4</ns0:head><ns0:label /><ns0:figDesc>https://tinyurl.com/yygyo6wk 5 https://www.nltk.org/</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Flowchart of the clustering algorithm.</ns0:figDesc><ns0:graphic coords='10,158.28,265.85,380.47,99.21' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>silhouette = (b − a) / max(a, b), where a is the mean distance between a sample and all other points in the same class and b is the mean distance between a sample and all other points in the next nearest cluster. A global score for the clustering is then obtained by averaging the per-sample silhouette values.</ns0:figDesc></ns0:figure>
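Given the per-sample silhouette definition above, the global score of a clustering is its average over all documents, and the number of clusters can be chosen by maximizing it. The sketch below assumes scikit-learn's agglomerative clustering (the algorithm retained in the paper's comparison) and an illustrative candidate range for k; it is not the authors' exact code.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def choose_k_by_silhouette(X, k_min=2, k_max=10):
    """Return (k, score, labels) for the clustering that maximizes the mean silhouette."""
    best = (None, -1.0, None)
    for k in range(k_min, min(k_max, len(X) - 1) + 1):
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
        score = silhouette_score(X, labels)  # mean of (b - a) / max(a, b) over documents
        if score > best[1]:
            best = (k, score, labels)
    return best

# Toy usage: 40 news embeddings of dimension 300 (the word2vec dimension used in the paper)
X = np.random.rand(40, 300)
k, score, labels = choose_k_by_silhouette(X)
```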
<ns0:figure xml:id='fig_6'><ns0:head>( a )Figure 3 .</ns0:head><ns0:label>a3</ns0:label><ns0:figDesc>Figure 3. Illustration of the silhouette maximization method. For space reasons, only the output with 4, 5 and 6 clusters is shown (Figures 3a, 3b and 3c, respectively). In this case, the algorithm would choose the number of clusters k = 5, which is the value that leads to the highest silhouette score (0.27 against 0.24 in the other two settings). The bi-dimensional visualization of news clusters is obtained by means of t-SNE, a tool to visualize high-dimensional data (van der Maaten and Hinton, 2008), which reduces the dimension of embeddings from 300 to 2. Every point represents a news item in the 2D space and each color represents a different cluster.</ns0:figDesc><ns0:graphic coords='11,141.73,160.13,132.34,97.22' type='bitmap' /></ns0:figure>
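The 2D cluster views of Figure 3 can be reproduced with t-SNE as sketched below; the perplexity and random seed are illustrative choices and not values reported in the paper.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_clusters_2d(X, labels, perplexity=10, seed=0):
    """Project 300-dimensional news embeddings to 2D with t-SNE and color points by cluster."""
    coords = TSNE(n_components=2, perplexity=perplexity, random_state=seed).fit_transform(X)
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap='tab10', s=25)
    plt.title('t-SNE view of the news clusters')
    plt.show()
```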
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Flowchart of the outlier removal procedure.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.54,117.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Illustration of the outlier removal method on the news published in the week before the Brexit referendum. Figure 5a shows the original clusters including all documents. In Figure 5b the outliers are marked in black. For this example, the 30th percentile was used as the cut-off threshold, in order to make the effects of the algorithm more visible. Centroids are indicated by '+' marks, in the same color as the respective cluster.</ns0:figDesc><ns0:graphic coords='12,141.73,380.80,202.66,150.47' type='bitmap' /></ns0:figure>
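A minimal sketch of the percentile-based outlier removal illustrated in Figure 5: documents whose per-sample silhouette falls below the chosen percentile are discarded. Computing the cut-off globally rather than per cluster is an assumption made here for simplicity.

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def remove_outliers(X, labels, percentile=30):
    """Return a boolean mask keeping only the documents whose silhouette value
    is above the cut-off percentile (30 in the example of Figure 5)."""
    sil = silhouette_samples(X, labels)      # per-document (b - a) / max(a, b)
    cutoff = np.percentile(sil, percentile)  # global cut-off over all documents
    return sil > cutoff
```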
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Flowchart of the algorithm used to extract the most relevant words from each cluster.</ns0:figDesc><ns0:graphic coords='13,144.28,315.13,408.48,79.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Flowchart of the tweet assignment task.</ns0:figDesc><ns0:graphic coords='14,153.86,361.27,389.34,130.14' type='bitmap' /></ns0:figure>
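The tweet assignment task of Figure 7 can be sketched as follows: each tweet embedding is compared with the news-cluster centroids and assigned to the closest one only if it lies within a maximum distance, otherwise it stays unassigned. The cosine distance and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def assign_tweet(tweet_vec, centroids, max_distance=0.5):
    """Assign a tweet embedding to the closest cluster centroid, or return None
    if even the closest centroid is farther than max_distance."""
    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    distances = [cosine_distance(tweet_vec, c) for c in centroids]
    best = int(np.argmin(distances))
    return best if distances[best] <= max_distance else None
```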
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Plot of the percentage of assigned tweets (among all clusters) with respect to the overall number of published tweets, for each day in the interval between the 10th of June 2016 and the 10th of July 2016. The red markers indicate the generated alerts, while the red horizontal line represents the alert threshold.</ns0:figDesc><ns0:graphic coords='15,150.97,479.86,395.10,168.90' type='bitmap' /></ns0:figure>
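The alert rule visualized in Figure 8 can be expressed as a simple daily check on the share of assigned tweets; the threshold value below is illustrative.

```python
def generate_alerts(assigned_per_day, published_per_day, alert_threshold=0.2):
    """Return the days on which the share of tweets assigned to news clusters
    exceeds the alert threshold (the horizontal line in Figure 8)."""
    alerts = []
    for day, published in sorted(published_per_day.items()):
        if published == 0:
            continue
        share = assigned_per_day.get(day, 0) / published
        if share > alert_threshold:
            alerts.append((day, share))
    return alerts
```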
<ns0:figure xml:id='fig_13'><ns0:head>Figures 9, 10 and 11</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figures 9, 10 and 11 show the correlation between the weekly variations of the S&P 500 stock price and relevant events taking place in the U.S. and in the rest of the world.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Correlation between the weekly variations of the S&P 500 stock price and relevant events taking place in the U.S. and worldwide in 2017. Source: https://gordianadvisors.com/</ns0:figDesc><ns0:graphic coords='17,146.05,329.21,404.94,294.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure 10. Correlation between the weekly variations of the SP500 stock price and relevant events taking place in the U.S. and worldwide in 2018. Source: https://gordianadvisors.com/</ns0:figDesc><ns0:graphic coords='18,147.77,65.99,401.51,287.97' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Comparison of the Silhouette Coefficient, Dunn Index, number of clusters obtained by different clustering algorithms and overlapping between the clusters' relevant words. The horizontal orange line represents the median of the obtained scores, whereas the average is indicated in parentheses. For further details we refer the reader to the official documentation of the matplotlib library: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.boxplot.html.</ns0:figDesc><ns0:graphic coords='20,141.73,354.17,202.65,160.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Illustration of the news clusters extracted on the three case studies considered in our qualitative analysis. The cluster associated with the event is highlighted by a circle. The correspondence between cluster and event is easily understood by manually reading the relevant words and the headlines of the documents that were associated with that label. For information about the 2D visualization technique, please refer to the caption of Figure 3.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>days. For example, if the event days are 2018-01-01, 2018-01-02, 2018-01-03, 2018-01-12, 2018-01-13, 2018-01-15, then the resulting events are defined by the intervals (start: 2018-01-01, end: 2018-01-03) </ns0:figDesc></ns0:figure>
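The grouping of event days into intervals described above can be sketched as follows. Merging only strictly consecutive days (max_gap=1) is an illustrative assumption, since the original example is truncated and does not show how the remaining days are grouped.

```python
from datetime import date

def group_event_days(days, max_gap=1):
    """Merge a list of event days into (start, end) intervals; days at most
    max_gap days apart fall into the same interval."""
    if not days:
        return []
    days = sorted(days)
    intervals = [[days[0], days[0]]]
    for d in days[1:]:
        if (d - intervals[-1][1]).days <= max_gap:
            intervals[-1][1] = d          # extend the current interval
        else:
            intervals.append([d, d])      # start a new interval
    return [tuple(i) for i in intervals]

days = [date(2018, 1, 1), date(2018, 1, 2), date(2018, 1, 3),
        date(2018, 1, 12), date(2018, 1, 13), date(2018, 1, 15)]
print(group_event_days(days))
# [(2018-01-01, 2018-01-03), (2018-01-12, 2018-01-13), (2018-01-15, 2018-01-15)]
```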
<ns0:figure xml:id='fig_19'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Precision, recall and F-score achieved by the alert-generation algorithm for different values of alert threshold and for each of the four clustering approaches.</ns0:figDesc><ns0:graphic coords='24,141.73,417.69,186.10,129.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Percentage of alerts produced by the alert-generation algorithm in the time-span considered for the experiments (June 2016 -March 2020), for different values of alert threshold.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. Plot of the percentage of assigned tweets with respect to the overall number of published tweets, for each day in the interval around the dates of the events included in our qualitative evaluation, respectively. The red markers indicate the generated alerts, while the red horizontal line represents the alert threshold.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>20</ns0:head><ns0:label /><ns0:figDesc>https://spacy.io/</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,158.28,98.42,380.50,161.34' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,158.28,277.69,380.49,161.74' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,158.28,457.37,380.48,161.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Lists</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>-D.C. juggernaut in manufacturing is splitting in two</ns0:cell></ns0:row><ns0:row><ns0:cell>#1</ns0:cell><ns0:cell>-Global Finance: Abu Dhabi Banks Considering Merger -Deal would create biggest lender in Middle East; industry stocks rally in region</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Global Finance: Bankruptcy Filing By Phone Firm Hits Big Brazilian Bank</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-As Fears of Brexit Ease, Wall St. Thrives</ns0:cell></ns0:row><ns0:row><ns0:cell>#2</ns0:cell><ns0:cell>-Health and Tech Shares Lead a Down Day for the Market</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Market Ends a Losing Streak</ns0:cell></ns0:row><ns0:row><ns0:cell>#3</ns0:cell><ns0:cell>-This Time Around, the Volatility Index Matters -Stock Volatility Expected to Last</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Stocks Fall 5th Day in Row -Fed rate decision likely means high-dividend shares</ns0:cell></ns0:row><ns0:row><ns0:cell>#4</ns0:cell><ns0:cell>will benefit as banks are pressured -Growth Tepid, Fed Slows Plan to Raise Rates</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Brexit fears lead Fed to postpone increase in key interest rate</ns0:cell></ns0:row></ns0:table><ns0:note>investors shift their bets as opinion surveys tilt slightly to Britain staying in EU -Relief Rally Lifts Stocks and Oil -Dow industrials gain 129.71 as bets rise that U.K. would stay in EU; crude jumps 2.9% -Global markets rally as polls show that enthusiasm for Brexit is waning of 3 most relevant titles (i.e., pertaining to the three documents that are closest to the respective centroids) for each of the 5 clusters obtained from the news collected in the week before the Brexit referendum (cluster#3 contains only 2 news documents in total).12/30PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:1:1:NEW 5 Feb 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>List of the 10 most relevant words for the cluster obtained on the day of the Brexit referendum.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The polls are closer than the establishment cares to admit http://uk.reuters.com/article/uk-britain-eu-tns-poll Cameron and Osborne have credibility issues with British http://www.express.co.uk/news/uk/682561/david-cameron-eu-referendum-european-union -brexit-germany-boris-johnson-brussels</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>CLUSTER TOP-3 ASSIGNED TWEETS</ns0:cell></ns0:row><ns0:row><ns0:cell>#0</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>EU referendum outcomes explained</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>https://www.youtube.com/watch?v=VRIF4C c2qs</ns0:cell></ns0:row><ns0:row><ns0:cell>#1</ns0:cell><ns0:cell>No tweets assigned.</ns0:cell></ns0:row><ns0:row><ns0:cell>#2</ns0:cell><ns0:cell>No tweets assigned.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Gotta love this crazy volatility market</ns0:cell></ns0:row><ns0:row><ns0:cell>#3</ns0:cell><ns0:cell>S&P 500 squeeze back to 208 #volatility</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Fundamentals Still Look Solid Despite Brexit-Induced Volatility</ns0:cell></ns0:row><ns0:row><ns0:cell>#4</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Lists of top-10 relevant words for the three case studies considered in our qualitative evaluation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>List of the 3 most relevant headlines (i.e. closest to the respective centroid) for the three events considered in the qualitative evaluation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>EVENT</ns0:cell><ns0:cell>TOP-3 RELEVANT TWEETS</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Hillary Clinton Wins!</ns0:cell></ns0:row><ns0:row><ns0:cell>2016 U.S. elections</ns0:cell><ns0:cell>-The stock market's continual favoritism of Hillary Clinton proves that she has been bought. Corruption loves company.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Markets says 'Hillary Clinton Wins'. Congratulation New President</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Goldman Sachs think the increase in tariffs will be narrowly avoided.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Odds of new tariffs at 40% if the Chinese delegation still comes.</ns0:cell></ns0:row><ns0:row><ns0:cell>U.S-China trade war</ns0:cell><ns0:cell>-Tariff increase on Chinese imports will take effect on May 10 -Federal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Register</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-'Reuters: Trump's punitive tariffs will burden consumers'; yeah like it...</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Mainland Chinese, Hong Kong stocks tumble as Covid-19 death toll rises</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Second U.S. Covid-19 case is Chicago resident who traveled to Wuhan</ns0:cell></ns0:row><ns0:row><ns0:cell>Covid-19 outbreak</ns0:cell><ns0:cell>-3M Ceo says there factories are working 24/7 making masks &</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>protective equipment to fight the virus. Buy your calls while there</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>cheap. #stocks #covid-19</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>List of the 3 most relevant tweets (i.e. closest to the respective centroid) for the three events considered in the qualitative evaluation. Please keep in mind that the time interval used for the U.S. elections does not include the outcome of the polls (hence the wrong forecasts by users that initially proclaimed Hillary Clinton's victory).</ns0:figDesc><ns0:table /><ns0:note>22/30PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:1:1:NEW 5 Feb 2021)</ns0:note></ns0:figure>
<ns0:note place='foot' n='1'>http://www.twitter.com 2 http://www.stocktwits.com</ns0:note>
<ns0:note place='foot' n='7'>https://developer.dowjones.com/site/global/home/index.gsp 8 http://www.stocktwits.com 9 api.stocktwits.com/developers/docs</ns0:note>
<ns0:note place='foot' n='10'>http://scikit-learn.org 11 http://pyclustering.github.io 12 http://www.nltk.org 13 http://radimrehurek.com/gensim/index.html 14 http://code.google.com/archive/p/word2vec/ 15 The Jaccard Index between two lists is defined as the size of their intersection divided by the size of their union</ns0:note>
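Footnote 15 defines the Jaccard Index used to measure the overlap between the clusters' relevant words; as a quick reference, a minimal implementation is shown below (the example word lists are invented).

```python
def jaccard_index(words_a, words_b):
    """Size of the intersection divided by the size of the union of two word lists."""
    a, b = set(words_a), set(words_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

print(jaccard_index(['brexit', 'eu', 'referendum'], ['brexit', 'eu', 'market']))  # 0.5
```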
</ns0:body>
" | "Major Revision
January 29, 2021
Dear Editor,
first of all, we would like to thank you for managing our manuscript and the
reviewers for their constructive remarks. We completed the requested major
revision of our manuscript entitled Event Detection in Finance Using Hierarchical
Clustering Algorithms on News and Tweets, PaperID: 55204, submitted to PeerJ
Computer Science, on the basis of the observations made by the reviewers.
In the following, you will find the details of each of them. The reviewers’ observations and our responses have been organized as a series of numbered questions
(Q) and answers (A).
Moreover, within the paper, the blue text corresponds to all the edits we did to
address all the reviewers’ comments.
Thanks again for all your feedback and support that led us to largely improve
the paper.
Best Regards,
Salvatore Carta, Sergio Consoli, Luca Piras, Alessandro Sebastian Podda, Diego
Reforgiato Recupero
• • •
Reviewer 1
(Q1) In general, the manuscript reads very well. The English (wording, grammar)
is spot-on and the authors have clearly made an effort to make everything as
clear as possible. The article is well-structured according to the standards and
seems to be well-balanced in terms of lengthiness of the separate sections.
(A1) We thank the reviewer for his/her positive comments.
(Q2) To my taste, you could consider separating Section 4 into implementation
and experiments, and subsequenlty each experiment into set-up and results, just
to clarify the article even further and improve readability.
(A2) As suggested by the reviewer, we have split section 4 in two more sections:
Experimental Settings, where we describe the used datasets and the methodology, and Results, where we show the obtained results. We have updated the
Introduction section accordingly.
(Q3) Figures are of good quality and provide additional clarity to the story.
Minor remarks: the explanation of boxplots in Figure 12 is redundant and can
be removed. Positioning of the figures and tables is also a bit awkward at times,
but can be fixed in the typesetting / editing phase.
(A3) As suggested, we have removed the explanation of boxplots from the caption
of Figure 12.
(Q4) The literature review seems thorough, although I am missing some relevant
work that is targeted to the actual implementation of event detection in financial
algorithms, e.g.: - A News Event-Driven Approach for the Historical Value
at Risk Method: Hogenboom et al. in: Expert Systems With Applications,
42(10):4667-4675, 2015. - An Automated Framework for Incorporating News
into Stock Trading Strategies: Nuij et al. in: IEEE Transactions on Knowledge
and Data Engineering, 26(4):823-835, 2014.
In fact, the authors do not make a clear distinction between “historical” and
“real-time” detection of financial events. It would be nice if this is incorporated
into the literature review.
(A4) We have included the suggested references within the Related Work section
and mentioned what our method deals with in relation to historical and real-time
detection of financial events.
(Q5) The work is self contained. I would like the differences with the 2020
conference paper to be explicitly stated in this article.
(A5) We thank the reviewer for this heads up, since it might create misunderstanding if not well explained. The two works are totally different. Within the
conference version, we defined a strategy to generate industry-specific lexicons
from news documents with the goal of dynamically capturing the correlation
between words and stock price fluctuations. This has been then employed to
solve a binary classification task with the goal of predicting the magnitude of
future price changes for individual companies. Conversely, in this work, we leverage the same technique but to represent a new dataset and to solve a different
problem, event detection. Therefore, as suggested, within the introduction we
have slightly edited the first contribution of our work specifying what we use
from the 2020 conference paper work and, moreover, within the related work,
2
we cite our previous work including the differences with respect to the proposed
one.
(Q6) The title and abstract seem to indicate a focus on event detection. However,
the article mainly focuses on the clustering part of the problem. Alert generation
(which in my view are the “actual” relevant events) is only a small, final step.
The research question remains implicit, and I would like to see it defined
explicitly.
(A6) We thank the reviewer for this further heads up. Our definition of event
detection is consistent with that in literature, that is ”Event detection is the
task of discovering news documents that report on the same event and arranging
them under the same group. Event detection provides a conceptual structure of
the news stories and facilitates better navigation for users in news spaces.”[1].
The alert generation is a more specific application to detect hot events and is
performed on top of the clustering technique and tweets assignment, as mentioned
in Figure 1. We specified better that the Alert Generation evaluation subsection,
by specifying that the clustering is preparatory to the tweets assignment, which
is in turn preparatory for the alert generation.
(Q7) The contributions, however, are very clear. The NLP-driven clusteringbased approach, to my knowledge, is novel in the (financial) event detection
scene. The definition of events as clusters of news items / tweets makes sense,
and the fact that they are based on extracted words seems logical to me as
well. The fact that the authors additionally take into account time is essential.
Moreover, I like the fact that domain-specific lexicons are created ad-hoc, as most
existing approaches in fact make use of pre-defined lexicons, with or without an
update mechanism. Therefore, it would be nice to improve the lexicon generation
description in Section 3, as it is not explained very well. For example:
“For each news article in this set, we extract the text, consisting of the title,
the snippet and the full body of the article, and then we perform some standard
pre-processing techniques on it, such as stop-words removal, stemming and
tokenization. In addition, we remove from the corpus all the words that appear
too frequently and too infrequently, according to given tolerance thresholds.”
These sentences are a bit puzzling. Which algorithms are used for preprocessing?
(A7) First of all, we thank the reviewer for his/her very positive comments. We
really appreciate them. The sentences mentioned in the comments have been
restructured and further details have been added.
(Q8) Later on, in Section 4, I understand the authors are using standard
Python packages, but they have to make sure they know which algorithms are
3
implemented. As methodology and implementation are in separate sections
(which I think is very nice), I would suggest the authors already include the
applied algorithms in the methodology section, and simply state in Section 4
that the algorithms are implemented using the Python packages.
(A8) We thank the reviewer for his/her further positive comments. As suggested
in one of the previous comments, we have split the current section 4 in two. The
new section 4 includes the experimental settings whereas section 5 illustrates the
obtained results. In the new section 4.2 (Methodology and Settings) we added a
paragraph with the implementation details and which framework/tool we have
used.
(Q9)The alert generation is flawed, as these are only based on the number
of mentions in tweets / news messages. I understand that this is an indication
of the importance of the event, but large clusters can in fact be very general
clusters. The number of mentions is directly related to the number of clusters.
Secondly, why do the authors make use of today’s number of mentions, but
ignore next day’s stock rate differences? They are an important feature of the
clusters, so why not here? To me, it would make sense that hot events have a
larger impact on stock rates than regular events.
(A9) For the first point mentioned by the reviewer, the purpose of the outlier
removal step was exactly to avoid the generation of big clusters with general
content. The rationale behind that was that if there are several news present
in the same cluster all containing different events (i.e. there are no hot events),
it is implicit that some of them will be removed because of their Silhouette
value. When a high number of tweets are assigned to the same cluster this will
probably indicate the presence of an event. For the experiments we performed
we did not see the formation of big general clusters although the reviewer is
correct and that condition may happen despite the presence of the tweet distance
threshold we introduced. We included a paragraph with this explanation in the
conclusion section mentioning potential weaknesses of our approach we want
to further investigate. For the second point, we cannot leverage the next day’s
stock rate because we are doing real time event detection and for a day d we do
not have access to the information of the day d + 1. This was better clarified in
the introduction and related work sections where we explain better the type of
event detection that we tackle.
(Q10)Finally, it is worth mentioning that in general, the methodology is described
rigorously, which enhances the reproducibility of the research sufficiently.
(A10) We really appreciate the comment of the reviewer. Thanks!
4
(Q11)Experiments performed in the article are thorough and are based on
a large timespan of data. I am happy to see that the authors not only performed
a quantitative, but also a qualitative analysis. In my view, the latter is absolutely
necessary. One thing that remains unclear, is how the real-world (ground-truth)
events have been determined. Did they determine the events based on the output
clusters? Or did they create them beforehand? Is there an inter-annotator
agreement or the like?
(A11) We thank the reviewer one more time for his/her positive comment and
feedback. For the qualitative analysis, we simply chose three well known events
(US presidential elections, US China trade war, Covid-19 outbreak) and the
period of each of them and checked if our approach returned those events in
the generated clusters. We wrote within section 5 paragraph Event-detection
evaluation a couple of lines mentioning that. Moreover, for the identification
of hot events, within the Alert-generation evaluation section we wrote how we
generated the ground-truth together with details and formulas. We put the text
in blue.
(Q12)One major thing that is lacking, is an evaluation of the performance
against other state-of-the art approaches. I think this is crucial for the acceptance of this paper. Other than that, the evaluation of the methodology is very
thorough and statistically sound, though.
(A12) Given the particular formulation of our problem of alert-generation and its
pipeline architecture, it is not trivial to perform a fair comparison between ours
and other approaches. For example, under different conditions, state-of-the-art
methods [2] report accuracy level of about 60%, in line with our alert-generation
metrics (Figure 14). Therefore, to address reviewer’s comment in the fairest way
possible, and to reinforce our evaluation, we have added comparison results of
each of the three steps of our algorithm. Step 1 already included a comparison
of four different clustering algorithms out of which the agglomerative algorithm
had been chosen and used for steps 2 and 3. Now we have added also for steps
2 and 3 new figures related to the results of agglomerative, kmeans, kmedians,
and kmedoids clustering algorithms. In particular, the new Figure 13 illustrates
the results of the event-detection evaluation using the four clustering algorithms
(step 2). The new Figure 14 includes the precision and recall achieved by the
alert-generation algorithm for the four clustering algorithms (step 3). The new
Figure 15 includes the percentage of alerts produced by the alert-generation
algorithm for the four clustering algorithms (step 3).
(Q13)The code is available in a GitHub repo, which is very nice. At first
sight, the data seems to be available, but I have problems identifying the full content. Some folders mentioned in the readme, i.e., ”lexicons” and ”word2vec data”
are missing. I believe for the former folder, the generation script has to be
5
executed, and for the latter folder, large external datasets have to be downloaded.
It would be nice to create stub folders in the repo to avoid some confusion.
(A13) We thank the reviewer for this further heads up. We have fixed the readme
and the repository so that it is self-contained and anyone can download the code
and follow the readme to run it. Please note that, due to licensing constraints,
we cannot publish the whole Dow Jones’ Data, News and Analytics dataset,
which was employed in the experimental framework of the paper. However, we
hereby make available a small example set of news and tweets for illustrative
purpose and to allow the execution of the algorithm. Furthermore, a set of
pre-computed lexicons is already available in the folder lexicons, to allow an
immediate execution of the code. In case you want to use lexicons with different
parameters, you need to create new lexicons from scratch, based on the small
example data. Please read the documentation on the script create lexicons.py
for reference.
For size constraint, it was not possible to upload the word-embedding model
to this repository. We recommend that the user download the model from
https://code.google.com/archive/p/word2vec/ or, alternatively, generate
word-embeddings using the interface offered by the gensim library: https:
//radimrehurek.com/gensim/models/word2vec.html. It is crucial that the
final model may be read as a classic Python dictionary, in which keys are words
and values are vectors.
(Q14)As the research question remains a bit vague, the conclusions are also
quite general. I am also left with questions, i.e., does this easily port to other
languages? Or is the approach heavily dependent on available word2vec models?
How does the performance stack up against existing work? Does the proposed
algorithm scale well? What kind of applications are envisioned?
(A14) We have enriched the conclusions with more details and by responding to
all the above questions raised by the reviewer.
(Q15)Future work suggestions are valid. Especially temporal aspects are worth
looking into. If by Semantic Web technologies the authors mean ontologies, I
also believe they could help out. In essence, you are including predefined domain
knowledge, which should balance out some of the artifacts of word2vec-like
approaches. However, you have to keep in mind the performance hit you might
suffer.
(A15) We thank the reviewer for the positive feedback. Within the conclusions
section we have addressed his/her further comment.
(Q16)The proposed method seems to heavily rely on Python packages. Have the
6
authors thought about scaling issues? For instance, the document-term matrices
that are generated suffer from the curse of dimensionality. What is needed for
gigabytes or petabytes of data? Have they considered cluster-computing, or even
map-reduce? What kind of implications are there for performance?
(A16) As future applications, in presence of huge datasets, we might employ big
data frameworks such as Apache Spark to address potential scalability issues.
Within the conclusions we have addressed this comment as well.
(Q17) The applicability of the research in practice is a bit questionable, as
the analysis is a historic one. It is not easily changed either, because one of the
main clustering features is stock variation the day *after* the tweet or news
article has been published. This makes it hard to apply in a trading environment,
as the focus there is on the here and now, and of course the future. As far as I
am concerned, the merit of this work lies more in the (automatic) application in
existing algorithms, but the only application I see now is historical analysis. It
would be nice if the authors discuss this in more detail.
(A17) We think there was a misunderstanding and we need to better specify
the following. The stock variation of the next day is used only within the
computation of the score to assign to words of the lexicon. The only features
that represent the news to be clustered are given by the word-embeddings. We
never look at the future: when we look for events of a given day d we create the
lexicon using the news articles up to the day d − 1 and in the computation of
the scores we use the stock variation of previous days up to d − 1. We do not
use the stock variation of the day d. Therefore we are fair and our algorithm
can be used for the real-time event detection. We have better specified this in
the abstract, introduction and conclusions.
Reviewer 2
(Q1) The article proposes an event detection methodology which is time dependent and domain specific. The domain of application is financial markets where
the events are extracted from financial news. The methodology consist of 3 main
steps: 1. to create a time and domain-specific lexicon 2. clustering and filtering
to distinguish relevant events discussed in the news 3. to spot hot topics by
leveraging the news topics (events) with social media data (basically count the
volume of talk for an event in Twitter)
The article is very well written, I found no grammatical mistakes, highly
professional English is used. I must also praise the extensive and well-commented
background literature (sect 2 Related Work) provided by the authors for it gives
a good survey of the state-of-the-art of Event detection algorithms, and puts
their work in modern perspective. Against this background literature this article
7
is certainly a contribution to the field of Event detection.
(A1)We are very happy to read this comment and thank the reviewer for
such an appreciation.
(Q2) Now, I have some suggestions to make the description of the proposed
approach for event detection more rigorous and founded on statistical theory.
My main concern is on step 1 the lexicon generation for which it is not clear why
the heuristic proposed for selecting terms will work in general (in a formal sense,
not just in some practical scenarios as shown in the Experimental section). I
do note that the lexicon generation method comes from previous work by the
authors (see in their references: Carta et al (2020)) which I read through. Neither
there or in this paper the method is given a statistical support/foundation, only
empirical assessment.
First a suggestion to make the description of step 1 (Sec. 3.2 .Lexicon generation) more clear and unambiguous. line 341: ”each row corresponds to a news
article **and date**, and each column ...” (The words ”and date” should be
added: Each news must have its time-stamp explicitly for the next step described
in lines 343-346 where return is associated to the terms) L 344-346, I advice to
use other letter for the ”day after the article was published” (e.g. c), since d is
the date at which we want to discover hot events and for which we are looking
at news published on days c ∈ [d-l;d-1]. In this way it becomes evident that we
are looking at news and returns at different time-stamps, whereas the current
formula (1) gives a first impression that we are only looking at news on day d-1
(before the events of interest)
(A2)We thank the reviewer for the heads up and have edited where the he/she
0
suggested with only one change. Instead of the letter c we used the symbol d
for two main reasons: a) that might generate confusion when we introduce the
clustering mechanism, b) for consistency with the formula we used within the
Alert-generation evaluation paragraph of Section 5.
(Q3)Now I offer the authors an argument that may connect their construction to
the theory of ”sure screening” (Fan and Lv, 2008), hence sustaining it on formal
statistics. I see their lexicon generation equivalent to computing the frequency
f(j) with which a term j co-occurs with a positive value of return. Assume in the
period [d-l;d-1] the algorithm collects N articles, out of P
which n¡N contain the
term j (articles can be sentences) . Then f (j) = (1/n) ∗ 1≤k≤N Xk (j) ∗ ∆c (k)
where Xk (j) is a dummy variable for whether term j appears in article k and
Deltac (k) is the return on the day c for article k. Thus, in this form, f(j) is
the slope of a cross-article regression of ∆c = (∆c (1), ..., ∆c (N )) on the dummy
variable X(j) = (X1 (j), ..., XN (j)). i.e. f(j) are coefficients of a marginal
regression [Genovese et al, 2012]. By next sorting them by decreasing scores and
selecting those whose value is over (resp. under) some threshold t+ (respect.
8
t-) -which is similar to taking the first n and the last n-, the authors are doing
”marginal screening”, a form of variable selection which is proven to be more
efficient than the Lasso and with good statistical accuracy. What I mean with
this last sentence is that if S is the index set of true sentiment charged terms
(positive and negative), and Ŝ = j : f (j) ≥ t+, orf (j) ≤ t− then under certain
conditions P rob(Ŝ = S) − − > 1 as N and the number of terms go to infinity .
This is the sure screening property (Fan and Lv).
This guarantees consistency of the procedure for screening terms for the
lexicon (that almost surely are the true sentiment charged terms correlated with
price variation) and if you were to work out the proof of the asymptotic result
you’ll get along the way an expression for the threshold, the number of articles
and the terms needed. I don’t see how to prove this in this context easily (and I
guess is difficult), so I would not pose this as requirement for publishing this
article. However, it would be nice that the authors check this framework of
sure screening and include at least an explanation like the one I am offering to
mathematically sustain their lexicon construction heuristic.
References: Fan, Jianqing, and Jinchi Lv, 2008, Sure independence screening
for ultrahigh dimensional feature space, Journal of the Royal Statistical Society:
Series B (Statistical Methodology) 70, 849-911.
Genovese, Christopher R, Jiashun Jin, Larry Wasserman, and Zhigang Yao,
2012, A comparison of the lasso and marginal regression, Journal of Machine
Learning Research 13, 2107-2143
(A3) We highly thank the reviewer for the explanation and citations. We
have gone through the citations, read what the mentioned screening property is
and included in the paper his/her suggestion by citing the mentioned papers.
We believe this gives further solidity to our lexicon generation approach.
(Q4)In the legend of Figure 3 : ”The bi-dimensional visualization ... by means
of tSNE, ...” What is t-SNE? (I am not familiar with this nomenclature) . I
short line explaining it, or a reference where to look it up will be appreciated.
(A4) We have included in caption of Figure 3 a sentence explaining what
the t-SNE is and linked to the proper reference.
(Q5) In the Introduction (lines 85-86) you claim the approach (of event detection)
”can be applied to various domains with minimum modifications”. However I
see that a key component is to have some form of quantifying the behavior of
the domain one want to characterize the important events, in order to construct
a sentimental-lexicon conditioned to that target characterising the domain. In
the case of financial markets, represented by the market index (e.g SP500), this
quantification of its behavior is the variation of the price of the index (or return).
For another domain (e.g. politics) what could be this numeric feature that
characterizes the domain and that serves as driver for the lexicon generation?
9
An example would be a plus for this nice and interesting event detection proposal.
(A5) We thank the reviewer for this further heads up. Within the Introduction
we have included an example of application in another domain different than
financial.
References
[1] Linmei Hu et al. “Adaptive online event detection in news streams”. In:
Knowledge-Based Systems 138 (2017), pp. 105–112.
[2] P. Lin et al. “A News-Driven Recurrent Neural Network for Market Volatility
Prediction”. In: 2017 4th IAPR Asian Conference on Pattern Recognition
(ACPR). 2017, pp. 776–781. doi: 10.1109/ACPR.2017.35.
10
" | Here is a paper. Please give your review comments after reading it. |
23 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the current age of overwhelming information and massive production of textual data on the Web, Event Detection has become an increasingly important task in various application domains. Several research branches have been developed to tackle the problem from different perspectives, including Natural Language Processing and Big Data analysis, with the goal of providing valuable resources to support decision-making in a wide variety of fields. In this paper, we propose a real-time domain-specific clustering-based event-detection approach that integrates textual information coming, on one hand, from traditional newswires and, on the other hand, from microblogging platforms. The goal of the implemented pipeline is twofold: (i) providing insights to the user about the relevant events that are reported in the press on a daily basis; (ii) alerting the user about potentially important and impactful events, referred to as hot events, for some specific tasks or domains of interest. The algorithm identifies clusters of related news stories published by globally renowned press sources, which guarantee authoritative, noise-free information about current affairs; subsequently, the content extracted from microblogs is associated with the clusters in order to assess the relevance of the event for public opinion. To identify the events of a day d, we create the lexicon by looking at news articles and stock data of previous days up to d − 1, without looking at the future, which makes the approach suitable for real-time event detection.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div> <ns0:ref type='bibr' target='#b14'>(Dou et al., 2012)</ns0:ref><ns0:p>. In this sense, Event Detection aims to discover contents published on the Web that report on the same current topic, organize them in meaningful groups and provide insights, based on properties extracted automatically from the data <ns0:ref type='bibr' target='#b2'>(Allan et al., 1998b;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hu et al., 2017)</ns0:ref>. It represents a valuable resource to create awareness and support decision making in various domains of application, including epidemics <ns0:ref type='bibr' target='#b4'>(Aramaki et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b51'>Rosa et al., 2020)</ns0:ref>, earthquakes <ns0:ref type='bibr' target='#b54'>(Sakaki et al., 2010)</ns0:ref>, social events <ns0:ref type='bibr' target='#b46'>(Petkos et al., 2012)</ns0:ref> and economy (see Section 2.4), among others. In some cases, the scope of the event detection task is not limited to arranging the contents and providing analytics, but constitutes the basis for further algorithmic processing, like for example the development of automatic trading strategies in financial applications <ns0:ref type='bibr' target='#b19'>(Gilbert and Karahalios, 2010;</ns0:ref><ns0:ref type='bibr' target='#b52'>Ruiz et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b34'>Makrehchi et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Given the importance of Event Detection, an increasing number of researchers have focused their attention on this problem since the late 1990s, building on the theoretic foundations of Information Retrieval and, later on, taking advantage of the discoveries of Natural Language Processing, Text Mining and Big Data processing. Early works mainly based their approaches on traditional news stories as they started being digitalized <ns0:ref type='bibr' target='#b2'>(Allan et al., 1998b;</ns0:ref><ns0:ref type='bibr' target='#b31'>Lam et al., 2001;</ns0:ref><ns0:ref type='bibr' target='#b29'>Kumaran and Allan, 2004)</ns0:ref>, while social media platforms like Twitter 1 and Stocktwits 2 have become the dominant data source in the last decade <ns0:ref type='bibr' target='#b20'>(Hasan et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Atefeh and Khreich, 2015)</ns0:ref>. However, it has been demonstrated by <ns0:ref type='bibr' target='#b48'>Petrovic et al. (2013)</ns0:ref> that Twitter still cannot replace traditional newswire providers when considering the coverage and the timeliness of breaking news. In fact, this study shows that, while Twitter has a better coverage of minor events ignored by other media, traditional newswire sources often report events before users on the social network. Another disadvantage of microblogs is that they contain a considerable amount of noise, such as irregular syntax, misspellings and non-standard use of the language, let alone the increasing phenomenon of fake news, which makes it difficult to extract valuable information <ns0:ref type='bibr' target='#b28'>(Kaufmann and Kalita, 2010;</ns0:ref><ns0:ref type='bibr' target='#b0'>Ajao et al., 2018)</ns0:ref>. 
In light of this, a promising line of research has provided evidence that combining multiple sources of information allows to mitigate the flaws and exploit the advantages of each medium, thus improving the quality of the event detection task <ns0:ref type='bibr' target='#b41'>(Musaev et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b46'>Petkos et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b60'>Thapen et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Inspired by these findings, we developed a domain-specific clustering-based event-detection method that exploits the integration of traditional news articles and Stocktwits messages (which from now on will be referred to as tweets, for simplicity) to identify real-world events and to generate alerts for highly relevant events on a daily basis. The main intuition behind the integration of traditional press and social media is that, even though the former represents an authoritative and noise-free source which is convenient to mine to get qualitative information, it fails, taken alone, to provide insights about the entity or the resonance of the events. On the contrary, microblogs contain a considerable amount of noisy and unreliable content, but have the advantage of reflecting the impact that events have on public opinion.</ns0:p><ns0:p>Because of this, we decided to exploit traditional news articles to construct a qualitative basis for our event-detection approach and to integrate the social media data on top of that, in order to get a quantitative measure.</ns0:p><ns0:p>The proposed approach, which will be described in full detail in Section 3, is defined as domainspecific because it collects news from the same sphere of interest (e.g. economy, politics, sports) and represents these documents focusing on the words that are most relevant for that field. However, the approach can be applied to various domains with minimum modifications. For example, if we are interested in identifying events that brought happiness or sadness to people, one might use social media text elements instead of news and a sentiment index indicator created on the same interval time of the social text to associate each social post to its sentiment level. Thus the lexicon would consist of tokens used within social media posts and weighted depending on their sentiment indicators. Besides, please note that our approach performs real-time event detection as it is supposed to identify events of the day d without any knowledge of the future. In particular, it creates the lexicon by looking at news articles and stock data of previous days up to d − 1 without looking at the future. In this paper, we present the implementation of the pipeline that we designed specifically for the financial domain, which is a field where Event Detection has had one of its most promising applications (see Section 2.4). Our motivation derives from the intuition offered by several works in the financial literature that, drawing inspiration from the Adaptive Market Hypothesis <ns0:ref type='bibr' target='#b33'>(Lo, 2004)</ns0:ref>, show that public news have an impact on the stock markets, explaining a part of the return variance <ns0:ref type='bibr' target='#b8'>(Boudoukh et al., 2019)</ns0:ref>. 
This justifies the need for automatic tools that can support companies, traders and all the other actors involved in the market, providing an at-a-glance visualization of acquisitions, stock splits, dividend announcements and other relevant economic events <ns0:ref type='bibr' target='#b22'>(Hogenboom et al., 2013)</ns0:ref>.</ns0:p><ns0:p>We validated our approach through an experimental evaluation based, on one hand, on the Dow Jones' Data, News and Analytics dataset 3 , which contains news articles delivered by globally renowned sources, and, on the other hand, on a set of messages collected from Stocktwits, a microblogging platform inspired by Twitter, where users post short messages related to stock markets and trading. The events that constitute our ground truth for the alert-generation algorithm were selected based on the stock price time series of the Standard & Poor's 500 Index (S&P 500), following the intuition that relevant economic events lead to significant movements of the market. Our qualitative and quantitative analysis shows that the proposed method is able to extract meaningful, separable clusters, which correspond to real-world events. Furthermore, the alert-generation algorithm detects hot events with high accuracy, proving the effectiveness of the integration of news articles and tweets.</ns0:p><ns0:p>The contributions of our work can be summarized as follows:</ns0:p><ns0:p>• we propose a novel approach to represent news documents exploiting a domain-specific lexicon created ad-hoc using the technique we have introduced in <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>, where the lexicon has been generated on a different dataset;</ns0:p><ns0:p>• we design an original clustering-based event-detection approach that integrates news documents and tweets;</ns0:p><ns0:p>• we show the effectiveness of our method by means of an experimental evaluation performed on real-world datasets;</ns0:p><ns0:p>• we offer a visual inspection of the output obtained on a selected number of real-world events, including the Brexit Referendum, the U.S.-China trade war and the recent outbreak of the Covid-19 pandemic.</ns0:p><ns0:p>The remainder of this paper is organized as follows. Section 2 offers a thorough overview of the background research on Event Detection, analyzing works that deal with different kinds of media and application fields. The proposed approach is described in full detail in Section 3. The datasets and the methodology used for the evaluation are described in Section 4, while the obtained results are illustrated in Section 5. Finally, Section 6 contains general conclusions about this work and future lines of research.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>The origins of Event Detection can be traced back to 1998, when a joint effort between the Defense Advanced Research Projects Agency (DARPA), the University of Massachusetts, Carnegie Mellon University and Dragon Systems aimed to define the problem within the wider field of Topic Detection and Tracking (TDT) and proposed an approach based on broadcast news stories that paved the way for new research on the field <ns0:ref type='bibr'>(Allan et al., 1998b,a;</ns0:ref><ns0:ref type='bibr'>Yang et al., 1998)</ns0:ref>. Since then, a considerable variety of algorithms have been proposed to tackle the problem, gradually taking advantage of the remarkable advances in Text Mining and Natural Language Processing. Most interestingly, the birth of social media platforms like Facebook, Twitter and Stocktwits in the mid-2000s and their increasing popularity, together with the birth of the new era of Big Data <ns0:ref type='bibr' target='#b36'>(Marx, 2013)</ns0:ref>, led to a widening of the range of data that could be exploited to detect real-world events. Note that it is common to employ lexicons for news representation in the financial domain. Within our previous work <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>, we defined a strategy to generate industry-specific lexicons from news documents with the goal of dynamically capturing the correlation between words and stock price fluctuations. This was then employed to solve a binary classification task with the goal of predicting the magnitude of future price changes for individual companies. Conversely, in this work we leverage the same technique to represent a new dataset and to solve a different problem, event detection.</ns0:p><ns0:p>In the following, we illustrate the previous research carried out in Event Detection, grouping it according to the type of source employed in the analysis - basically newswires, social media and an integration of heterogeneous data. Because the approach presented in this paper can be applied to different domains, our overview of related works also covers a variety of fields, including health, security, sports and many others. However, we conclude the overview by focusing on the financial sphere, since this is the specific domain within which our approach was developed and validated.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Newswires-based</ns0:head><ns0:p>The first type of data that has been explored in this field consists of traditional newswires and press releases, which, however, still have a primary role even in present research. Early works typically rely on tf-idf features to represent the documents in a Vector Space <ns0:ref type='bibr' target='#b55'>(Salton et al., 1975;</ns0:ref><ns0:ref type='bibr' target='#b32'>Li et al., 2005)</ns0:ref> or <ns0:ref type='bibr'>Bag-of-Words (Zhang et al., 2010)</ns0:ref>. Modification of these classic methods were proposed in order to enhance the representation by means of contextual information <ns0:ref type='bibr' target='#b31'>(Lam et al., 2001)</ns0:ref>, lexical features <ns0:ref type='bibr' target='#b58'>(Stokes and Carthy, 2001)</ns0:ref>, named entities <ns0:ref type='bibr' target='#b29'>(Kumaran and Allan, 2004)</ns0:ref>, topic models <ns0:ref type='bibr'>(Yang et al., 2002)</ns0:ref> and, in more recent work, word-embeddings <ns0:ref type='bibr' target='#b24'>(Hu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Kusner et al., 2015)</ns0:ref>. The most common approaches for the detection task are based on clustering, text classification or a combination of these <ns0:ref type='bibr' target='#b5'>(Atefeh and Khreich, 2015)</ns0:ref>.</ns0:p><ns0:p>Going into more detail, authors in <ns0:ref type='bibr' target='#b24'>(Hu et al., 2017)</ns0:ref> exploit word-embeddings to overcome the downsides of tf-idf representation, namely sparsity and high dimensionality. On top of this, they build an adaptive online clustering algorithm that leads to an improvement in both efficiency and accuracy.</ns0:p><ns0:p>Similarly, authors in <ns0:ref type='bibr'>(Zhou et al., 2018)</ns0:ref> enhance the tf-idf model by integrating the Jaccard Similarity coefficient, word-embeddings and temporal aspects of published news, with the goal of spotting hot events. Others <ns0:ref type='bibr' target='#b38'>(Mele et al., 2019)</ns0:ref> propose an algorithm to detect, track and predict events from multiple news streams, taking into account the publishing patterns of different sources and their timeliness in reporting the breaking news. They use a Hidden Markov Model <ns0:ref type='bibr' target='#b6'>(Beal et al., 2002)</ns0:ref> to represent current events and, subsequently, to predict facts that will be popular in the next time slice.</ns0:p><ns0:p>The main contribution of the proposed approach with respect to this line of research is the fact that, in our algorithm, the representation of the events extracted from news articles is enriched by the information mined on social media sources. In this way, we obtain a multifaceted perspective of events. Furthermore, another innovation regards the method employed to represent the textual data. In particular, our pipeline includes the creation of an ad-hoc lexical resource, which detects the words that are most relevant for a specific domain. During the construction of the vector representation of documents, only the word-embeddings of the selected terms are included, as described in full detail in Sections 3.2 and 3.3.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Social Media-based</ns0:head><ns0:p>Since the development of social media platforms and microblogging websites, a big share of the researchers' interest has been aimed at mining these sources of information for a more dynamic and multifaceted inspection of events. Among these platforms, the case of Twitter definitely stands out, becoming a de facto standard domain for Event Detection <ns0:ref type='bibr' target='#b48'>(Petrovic et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b53'>Saeed et al., 2019)</ns0:ref>. A thorough survey by <ns0:ref type='bibr' target='#b20'>Hasan et al. (2018)</ns0:ref>, focused on Twitter-based approaches, suggests that this research branch can be split into three main categories: (i) methods that exploit properties in a tweet's keywords;</ns0:p><ns0:p>(ii) methods that rely on probabilistic topic models; (iii) clustering-based methods.</ns0:p><ns0:p>For the first group, it is worth mentioning TwitInfo <ns0:ref type='bibr'>(Marcus et al., 2011), TwitterMonitor (Mathioudakis and</ns0:ref><ns0:ref type='bibr' target='#b37'>Koudas, 2010)</ns0:ref> and EnBlogue <ns0:ref type='bibr' target='#b3'>(Alvanaki et al., 2011)</ns0:ref>, which identify real-time trends on Twitter and allow the final user to browse large collections of messages, providing contextual information about tweets, visualizations and meaningful insights that describe the identified topics. <ns0:ref type='bibr' target='#b57'>Stilo and Velardi (2016)</ns0:ref> include temporal factors in their analysis in order to cope with the limited context of Twitter messages. <ns0:ref type='bibr' target='#b63'>Weng and Lee (2011)</ns0:ref> propose an approach that builds signals for individual words by applying wavelet analysis <ns0:ref type='bibr' target='#b26'>(Kaiser, 2010)</ns0:ref> on the frequency-based raw signals of the words; this method is able to spot the most relevant words and finally cluster them to form events. Among the works that employ probabilistic topic models to represent tweets in a latent space, TwiCal <ns0:ref type='bibr' target='#b49'>(Ritter et al., 2012)</ns0:ref> is an open-domain event-extraction framework that identifies significant events based on a multitude of features including, but not limited to, contextual, dictionary and orthographic features.</ns0:p><ns0:p>TopicSketch <ns0:ref type='bibr' target='#b64'>(Xie et al., 2016)</ns0:ref> is a system that identifies bursty topics from live tweet streams in an efficient way, by tracking the occurrence of word pairs and triples in small 'sketches' of data. Zhou et al.</ns0:p><ns0:p>(2015) devise a lexicon-based approach to spot tweets that are event-related and, based on these tweets, extract a structured representation of events by means of an unsupervised Bayesian model. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>As for clustering-based approaches, <ns0:ref type='bibr' target='#b47'>Petrović et al. (2010)</ns0:ref> propose a time-efficient way to determine the novelty of a new tweet appearing in a live stream; novel tweets represent new stories and, therefore, will be assigned to newly created clusters, which are later ranked according to the number of unique user posts and the entropy information. The approach by <ns0:ref type='bibr' target='#b7'>Becker et al. (2011)</ns0:ref> groups tweets into semantically related clusters and then exploits a series of cluster properties (including temporal, social and topical features) in order to discriminate between real-world events and non-event messages. Analogously, <ns0:ref type='bibr' target='#b27'>Kaleel and Abhari (2015)</ns0:ref> employ a locality-sensitive-hashing scheme to extract clusters from the Twitter stream; the exploration of the clusters, which takes into account size, time and geolocation, leads to the identification of significant real-world events.</ns0:p><ns0:p>As already mentioned, the novelty of our approach with respect to these works is that social media data is not considered on its own, but in conjunction with news articles, in order to obtain a more insightful representation of events.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Integration of heterogeneous data</ns0:head><ns0:p>As stated in the Introduction section, several works in the literature suggest that, in many scenarios, an integration of different kinds of sources is necessary to improve the effectiveness of the event-detection algorithm, as far as both timeliness and coverage are concerned <ns0:ref type='bibr' target='#b48'>(Petrovic et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b41'>Musaev et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b46'>Petkos et al., 2012)</ns0:ref>. As a consequence, a promising research branch has grown based on this principle.</ns0:p><ns0:p>One interesting example is represented by the work by <ns0:ref type='bibr' target='#b45'>Osborne et al. (2012)</ns0:ref>, which aims to mitigate the spuriousness intrinsic to Twitter messages by means of information from Wikipedia. The latter is used as a filter to discard large numbers of noisy tweets, thus refining the representation of the extracted events. <ns0:ref type='bibr' target='#b60'>Thapen et al. (2016)</ns0:ref> propose a methodology to automatically spot outbreaks of illness from spikes of activity in real-time Twitter streams. A summary of these events is provided to the user with the goal of creating situational awareness; this is achieved by presenting the most significant tweets and by linking them with relevant news, which are searched on the Web based on term occurrences. <ns0:ref type='bibr' target='#b46'>Petkos et al. (2012)</ns0:ref> develop a novel multimodal clustering algorithm to explore multimedia items extracted from several social media platforms, with the purpose of detecting social events. The authors suggest that the proposed approach can be extended to any scenario which requires the usage of multimodal data. In <ns0:ref type='bibr' target='#b10'>(Consoli et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b11'>(Consoli et al., , 2020) )</ns0:ref> the authors present some novel optimization strategies for the quartet method of hierarchical clustering, a methodology popular in the context of biological phylogenesis construction by integration and clustering of different heterogeneous data.</ns0:p><ns0:p>Our approach differs from other works in this category in the way news and tweets are juxtaposed.</ns0:p><ns0:p>In fact, the information extracted from news articles constitutes the basis of our event-detection algorithm, while the processing of tweets is implemented on top of that, with the goal of corroborating that information.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Event detection in Finance</ns0:head><ns0:p>Event detection, Natural Language Processing and Sentiment Analysis have been widely applied in the financial sphere to provide more and more insightful tools for supporting decision making <ns0:ref type='bibr' target='#b65'>(Xing et al., 2018)</ns0:ref>. Some works have pushed the research as far as correlating the information about the events with the movement of the stock prices, with the goal of predicting future returns and developing trading strategies. <ns0:ref type='bibr' target='#b21'>Heston and Sinha (2017)</ns0:ref> study in which way the sentiment and the aggregation of the news affect the time horizon of the stock return predictability. In particular, through a neural network-based method, they show that daily news can forecast returns within one or two days, while aggregating news over one week provides predictability for up to 13 weeks. Moreover, the authors produce evidence that positive news stories increase stock returns quickly, while negative stories have a long delayed reaction. <ns0:ref type='bibr' target='#b56'>Schumaker and Chen (2009)</ns0:ref> combine news textual data and S&P 500 price time-series to estimate a discrete stock price twenty minutes after a news article was released, using Support Vector Machines <ns0:ref type='bibr' target='#b59'>(Suykens and Vandewalle, 1999)</ns0:ref>. <ns0:ref type='bibr' target='#b13'>Ding et al. (2015)</ns0:ref> extract a structured representation of events from financial news, relying on the Open Information Extraction tool developed by <ns0:ref type='bibr'>Yates et al. (2007)</ns0:ref>, and subsequently train a neural tensor network to learn event embeddings; this dense vector representation is then fed into a deep learning model to predict short-term and long-term stock price movements on S&P 500.</ns0:p><ns0:p>As far as social media-based approaches are concerned, <ns0:ref type='bibr' target='#b12'>Daniel et al. (2017)</ns0:ref> carry out an analysis of the content published on Twitter about the thirty companies that compose the Dow Jones Average. In particular, the authors start by detecting and discarding noisy tweets that might distort the information about relevant financial events; in the next steps, they perform a sentiment analysis on the valuable tweets Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and correlate them with the behavior of the stock market. Authors in <ns0:ref type='bibr' target='#b61'>(Tsapeli et al., 2017)</ns0:ref> apply a bursty topic detection method on a stream of tweets related to finance or politics and, then, employ a classifier to identify significant events that influence the volatility of Greek and Spanish stock markets. Events are represented as feature vectors that encompass a rich variety of information, including their semantics and meta data. Starting from the same motivations, <ns0:ref type='bibr' target='#b34'>Makrehchi et al. (2013)</ns0:ref> collect a set of tweets related to companies of the S&P 500 index and label them based on the price movement of the corresponding stock.</ns0:p><ns0:p>Then, they train a model on this set to make predictions on the labels of future tweets and, on top, create trading strategies that prove to give significant returns compared to baseline methods.</ns0:p><ns0:p>Another branch in financial event detection is focused on the extraction of potentially useful information, like events, from news and social media, that can represent a valuable resource for further algorithmic processing or for human-in-the-loop decision making. The Semantics-Based Pipeline for Economic Event Detection (SPEED) <ns0:ref type='bibr' target='#b22'>(Hogenboom et al., 2013)</ns0:ref> aims to extract financial events from news articles and annotate them with meta-data with an efficiency that allows real-time use. This is achieved through the integration of several resources, including ontologies, named entities and word disambiguators, and constitute a feedback loop which fosters future reuse of acquired knowledge in the event detection process. <ns0:ref type='bibr' target='#b25'>Jacobs et al. (2018)</ns0:ref> tackle the task of economic event detection by means of a supervised data-driven approach. They define the problem as a sentence level multilabel classification task, where the goal is to automatically assign the presence of a set of pre-determined economic event categories in a sentence of a news article. Following the same intuition, Ein-Dor et al. ( <ns0:ref type='formula'>2019</ns0:ref>) develop a supervised learning approach for identifying events related to a given company. For this purpose, the authors train a sentence-level classifier, which leverages labels automatically extracted from relevant Wikipedia sections. <ns0:ref type='bibr' target='#b23'>Hogenboom et al. (2015)</ns0:ref> measured the effects of various news events on stock prices. They retrieved 2010 and 2011 ticker data and news events for different equities and identified the irregular events. Finally, they cleaned the ticker data of rare event-generated noise and obtained a dataset with a more accurate representation of the expected returns distribution.</ns0:p><ns0:p>Moreover, <ns0:ref type='bibr' target='#b42'>Nuij et al. (2014)</ns0:ref> presented a framework for automatic exploitation of news in stock trading strategies where events were extracted from news messages presented in free text without annotations. It turned out that the news variable was often included in the optimal trading rules, indicating the added value of news for predictive purposes.</ns0:p><ns0:p>The innovation that we bring with respect to the literature consists, first of all, in the integration of different sources to obtain richer representations of events. 
Second, we propose a method to estimate the resonance of an event based on the activity observed on social media platforms, and we leverage this measure to provide warnings to the final user. Last but not least, our method has been deployed for real-time detection of financial events: accordingly, in the evaluation we carried out we applied it to historical data without considering any information beyond the day under analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PROPOSED APPROACH</ns0:head><ns0:p>The problem that we set out to tackle in this work is twofold. In the first place, we want to identify groups of news stories related to real-world events of a specific domain, on a daily basis. More precisely, given a day d and a look-back period of n days, our approach aims to extract k semantically related clusters made of text documents published by newswires providers during the n days before d. The parameter k is automatically estimated from the data so that it reflects the real events actually taking place in the best possible way. Each cluster is described by a set of properties, including relevant headlines and keywords, that are semantically correlated with the event represented by the cluster.</ns0:p><ns0:p>Secondly, we intend to tackle the problem of understanding whether a highly relevant event is taking place on a given day. Such an event is defined as hot and is associated with an increased amount of content published on a microblogging platform about that topic in the same time interval.</ns0:p><ns0:p>The main ideas underlying our proposed approach are the following:</ns0:p><ns0:p>• detecting the words that are most significant for the context under analysis can lead to more effective domain-aware representations of documents;</ns0:p><ns0:p>• clustering techniques make it possible to identify and distinguish events reported in news stories;</ns0:p><ns0:p>• the integration of social media data and news stories is key to spotting hot events that are potentially noteworthy for the context under analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In the following sections, we will describe the implementation of the algorithm that we designed for a specific scenario, namely the financial field. However, we would like to point out that our proposal can be generalized to any sphere of interest with minimum modifications, concerning mainly the filter applied to the news corpus and the numeric feedback used to assign a score to words in the lexicon generation phase.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Overall Architecture</ns0:head><ns0:p>The proposed algorithm is outlined in the pipeline in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>, which is repeated for each single day d on which the event-detection task is executed. The first step consists of the generation of a dynamic, context-specific lexicon, which includes the list of words that have proven to have the biggest impact on the market in a given period before d <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>. This resource is computed by combining two different data sources: on the one hand, words are extracted from financial news published in a time interval that typically ranges from 2 to 4 weeks prior to d. On the other hand, the stock price time-series of the chosen market is used to assign numeric scores to the words appearing in the press releases.</ns0:p><ns0:p>Once the specialized lexicon is obtained, it is applied as a filter on the news documents, so that only the terms that appear in the lexicon are retained. Subsequently, a document-embedding representation of each news story is constructed by computing the average of the word-embeddings of its filtered words.</ns0:p><ns0:p>After the news-modeling stage, the document-embeddings are fed to an agglomerative clustering algorithm, which returns a list of labels, which indicate the cluster to which each specific observation belongs, and a variable number of centroids. Intuitively, each cluster should correspond to the event discussed in the news contained in it, while the cluster centroid serves as a high-level discriminating representation of the event. The previous output is later refined through an operation of outlier removal, whose goal is to find and discard those documents whose assignment to their cluster is weak. Once the spurious data have been cleaned out from the clusters, a series of properties are extracted from each group of news, both for illustrative and for evaluation purposes. This information includes the titles of the articles, the percentage of positive and negative words (associated with high or low stock price variations, as described in the next paragraph), and the list of the most relevant words for the cluster, assessed through a tf-idf-based method.</ns0:p><ns0:p>At this point of the pipeline, the integration between news stories and social media data takes place.</ns0:p><ns0:p>The idea here is, first, to find tweets that are semantically correlated with some group of news and, second, to detect if an event reported in the news has a wide resonance on the social media platform. More specifically, the tweets relevant for the market under analysis published on the most recent day of the time interval are collected and then represented with the same embedding-based method previously employed for the news. The assignment task consists of attaching every tweet to the closest news-cluster, according to a distance measure calculated between the tweet-embedding and each news-centroid, as long as this distance is smaller than a defined tweet distance threshold; otherwise, the tweet is discarded.</ns0:p><ns0:p>The last step in the event-detection pipeline is the alert generation. This happens when the percentage of the assigned tweets w.r.t. the overall number of tweets published on the most recent day of the time interval is bigger than a given alert threshold. In fact, this suggests that a considerable number of people on the social media platform are discussing some events reported in the news.
</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Lexicon Generation</ns0:head><ns0:p>The lexicon generation stage leverages the method that we proposed in <ns0:ref type='bibr' target='#b9'>(Carta et al., 2020)</ns0:ref>, which we hereby set out to illustrate for the sake of completeness. From a general perspective, the goal of the lexicon generation is to select the set of words that are most relevant for a specific domain in a given time interval. In order to be able to capture the impact of events that occur day by day (and thus the effect of new words that show up in news articles reporting such events), we perform the lexicon creation in a dynamic way, repeating its generation every day. For these reasons, we define the lexicons generated by our approach as time-aware and domain-specific.</ns0:p><ns0:p>If we apply this concept to the financial sphere, the relevance of a word can be estimated by observing the effect that it has on the market after the delivery of the news stories containing this word. In this sense, the resulting lexicons will capture potential correlations between words that appear in news stories and stock price movements: terms that are consistently followed by significant positive (negative) variations will receive a high (low) score, while terms that are followed by negligible or arbitrary variations will tend to have a score close to 0. Going into more detail, for each day we collect all the news that are relevant for the S&P 500 Index published during the time frame [d − ℓ, d − 1] (with ℓ ≥ 1). More precisely, we select all news with at least one mention of Standard & Poor (or strictly related keywords like SP500 and SPX). For each news article in this set, we extracted the text, consisting of the title, the snippet and the full body of the article, and then we performed some standard pre-processing techniques on it, such as stop-words removal (using that of Stanford CoreNLP 4 ), stemming and tokenization (the last two using NLTK 5 ). In addition, we removed from the corpus all the words that appeared too frequently and too infrequently, according to given tolerance thresholds. In our case, we filtered out all the words that appear in more than 90% of the documents or in less than 10 documents (both thresholds were set experimentally). Subsequently, we construct a document-term matrix, in which each row corresponds to a news article and date and each column corresponds to a term, as obtained after the pre-processing. In the next step, we iterate over the rows of the matrix and, for each of them, we assign to each of its terms a value equal to the stock price variation registered on the day after the article was published, defined as:</ns0:p><ns0:formula xml:id='formula_0'>∆ d ′ = close d ′ − close (d ′ −1) close (d ′ −1) ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_1'>d ′ ∈ [d − ℓ, d − 1]</ns0:formula><ns0:p>is the day after the publication of the underlying article, and close d ′ is the price of the stock at the closing time of the market on day d ′ . Finally, each column is averaged (counting only non-zero entries), thus obtaining a list of terms, each associated to a score given by the average of the values assigned to them. 
We sort the terms by decreasing scores and select the first n and the last n.</ns0:p><ns0:p>These are the terms associated with the highest price variations, positive and negative respectively, and they represent the time-aware, domain-specific lexicon that will be exploited for the news modeling phase.</ns0:p><ns0:p>Hereafter, we give some formal notation to illustrate how this step corresponds to performing a marginal screening <ns0:ref type='bibr' target='#b18'>(Genovese et al., 2012)</ns0:ref>, a form of variable selection which is proven to be more efficient than the Lasso while retaining good statistical accuracy. Let us assume that in the period [d − ℓ, d − 1] the algorithm collects N articles, where a portion of them contains the term j. Then</ns0:p><ns0:formula xml:id='formula_2'>f ( j) = 1 N • ∑ 1≤k≤N X k ( j) • ∆ d (k) ,</ns0:formula><ns0:p>where X k ( j) is a dummy variable for whether term j appears in article k and ∆ d (k) is the return on the day d for article k. In this form, f ( j) is the slope of a cross-article regression of</ns0:p><ns0:formula xml:id='formula_3'>∆ d = (∆ d (1), . . . , ∆ d (N))</ns0:formula><ns0:p>on the dummy variable X( j) = (X 1 ( j), . . . , X N ( j)). More precisely, the f ( j) are the coefficients of a marginal regression. Sorting them by decreasing scores and selecting those whose values are over (under) some specified threshold t + (t − ) is similar to taking the first n and the last n. Moreover, in our lexicon construction, if S is the index set of positive and negative words (those corresponding to high or low stock variations), and Ŝ = { j : f ( j) ≥ t + or f ( j) ≤ t − }, under certain conditions Prob( Ŝ = S) = 1 as N and the number of terms go to infinity. This corresponds to the sure screening property <ns0:ref type='bibr' target='#b16'>(Fan and Lv, 2008)</ns0:ref>.</ns0:p></ns0:div>
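To make the procedure above concrete, the following minimal sketch (Python, scikit-learn based) builds a daily lexicon from a window of pre-processed news documents and the next-day returns associated with them. The function and variable names are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the time-aware lexicon generation. Assumptions: `docs` are already
# tokenised/stemmed strings and `doc_returns[i]` is the next-day return Delta_{d'} of the
# market for the day after docs[i] was published.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def build_lexicon(docs, doc_returns, n=500, min_df=10, max_df=0.9):
    """Return the n most 'positive' and n most 'negative' terms of the current window."""
    vectorizer = CountVectorizer(binary=True, min_df=min_df, max_df=max_df)
    X = vectorizer.fit_transform(docs)                          # document-term matrix (0/1 entries)
    r = np.asarray(doc_returns, dtype=float).reshape(-1, 1)
    term_sums = np.asarray(X.multiply(r).sum(axis=0)).ravel()   # sum of returns per term
    term_counts = np.asarray(X.sum(axis=0)).ravel()             # number of documents containing the term
    scores = term_sums / np.maximum(term_counts, 1)             # average over non-zero entries only
    terms = vectorizer.get_feature_names_out()
    order = np.argsort(scores)                                  # ascending order of the scores
    negative = list(terms[order[:n]])                           # terms followed by the lowest returns
    positive = list(terms[order[-n:]])                          # terms followed by the highest returns
    return positive, negative, dict(zip(terms, scores))

# Hypothetical usage for day d, with the news of the previous 4 weeks:
# pos_terms, neg_terms, term_scores = build_lexicon(window_docs, window_returns, n=300)
```

Equivalently, instead of a fixed n, a percentile cut can be applied to the score ranking, which is what the settings of Section 4.2 (20th and 80th percentiles) correspond to.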
<ns0:div><ns0:head n='3.3'>Feature Engineering</ns0:head><ns0:p>The aim of the news modeling phase is to obtain a representation of the news documents in a vector space, such that it captures its semantics and it is convenient for the subsequent cluster analysis. This must be achieved by characterizing each article through the words that are more relevant for the specific domain, ignoring the words that represent noise or that, in any case, do not provide valuable information. The two main resources that are exploited in this stage are the lexicons described in the previous Section 3.2 and a word-embedding model, which assigns a dense vector representation to words <ns0:ref type='bibr' target='#b39'>(Mikolov et al., 2013)</ns0:ref>. The latter can be obtained by training the model on the text corpus under analysis or by loading a pre-trained model previously fit on an independent corpus.</ns0:p><ns0:p>First of all, each news article undergoes a series of standard text pre-processing operations, namely tokenization, conversion to lower case and stopwords removal. Subsequently, the words of each document are filtered by means of the lexicon produced on the day of the publication of the news, so that only the words that appear in the lexicon are retained. Finally, the word-embeddings of the filtered words are extracted and their average is computed to obtain the news-embedding. The embedding representation of news documents obtained in the previous step is the input to the clustering algorithm (Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>), whose goal is to split the articles in semantically-correlated groups.</ns0:p></ns0:div>
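A possible realisation of this lexicon-filtered averaging with a pre-trained word2vec model loaded through gensim is sketched below; the model path, the helper name and the variable names are assumptions made for illustration.

```python
# Sketch of the document-embedding construction: tokenise, keep only the lexicon words,
# and average their word2vec vectors (a zero vector is returned when no lexicon word occurs).
import numpy as np
from gensim.models import KeyedVectors
from gensim.utils import simple_preprocess

def embed_document(text, lexicon, wv):
    tokens = simple_preprocess(text)          # lower-casing, tokenisation, punctuation removal
    vectors = [wv[t] for t in tokens if t in lexicon and t in wv.key_to_index]
    if not vectors:
        return np.zeros(wv.vector_size, dtype=np.float32)
    return np.mean(vectors, axis=0)

# Hypothetical usage with the pre-trained Google News model mentioned in Section 4.2:
# wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
# lexicon = set(pos_terms) | set(neg_terms)
# news_embeddings = np.vstack([embed_document(doc, lexicon, wv) for doc in window_docs])
```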
<ns0:div><ns0:head n='3.4'>News clustering</ns0:head><ns0:p>Ideally, each cluster corresponds to a real-world event.</ns0:p><ns0:p>For this purpose, we employ the agglomerative clustering algorithm. The decision mainly arises from a comparison with other standard techniques, which in this specific scenario do not prove as effective at separating the input data (see Section 4 for a detailed comparative analysis). The agglomerative clustering is a method pertaining to the family of hierarchical algorithms, which build nested clusters by merging or splitting them successively <ns0:ref type='bibr' target='#b50'>(Rokach and Maimon, 2005;</ns0:ref><ns0:ref type='bibr' target='#b40'>Murtagh, 1983;</ns0:ref><ns0:ref type='bibr'>Zhao et al., 2005)</ns0:ref>. More specifically, the agglomerative algorithm follows a bottom-up approach: at the beginning, each sample represents a cluster on its own, and clusters are successively merged together according to a linkage criterion. In this study, the choice fell on the average linkage criterion, which minimizes the average of the distances between all observations of pairs of clusters, while the affinity used to compute the linkage was the cosine distance, the most commonly employed metric when dealing with text documents.</ns0:p><ns0:p>An important aspect to take into account is the number of clusters k that the algorithm extracts. This can be set as a parameter to the agglomerative method, but finding the most suitable k a priori is not trivial.</ns0:p><ns0:p>Above all, using a fixed k for all days would in most cases lead to a misshaped approximation of reality, because the number of events taking place around the world naturally varies enormously from day to day and in different periods of the year. For this reason, a technique known as the silhouette maximization method is used to find the ideal value of k in a dynamic manner. The silhouette coefficient is a metric used to evaluate the performance of a clustering algorithm when a ground truth is not available. It ranges from -1 to 1, where higher scores relate to models with better defined clusters. For each sample it is defined as s = (b − a) / max(a, b), where a is the average distance between the sample and all the other points of its own cluster and b is the average distance between the sample and the points of the nearest neighbouring cluster. The silhouette score for the</ns0:p></ns0:div>
<ns0:div><ns0:p>whole model can be easily computed as the average of all the scores computed on the single samples. In fact, the average silhouette coefficient is the metric that guides us in the choice of the best number of clusters k on each day on which the event-detection pipeline is executed. The agglomerative clustering algorithm is run with k values ranging from 2 to 10 and the silhouette score is computed on the output for every k. The value of k which leads to the highest silhouette is selected.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the output of a small instance of the silhouette maximization method applied on a set of news collected in one week. The output of the agglomerative algorithm is simply a sequence of labels, which indicate the cluster to which each specific observation belongs. The method by itself does not return any centroid, as this notion is not employed in any step of its procedure. However, the next phases in the event-detection pipeline also require a centroid for each cluster (i.e., a vector obtained through a combination of the samples in the cluster, typically the mean or median). For this reason, we manually construct a series of centroids, computed as the median of the document-embeddings contained in the respective cluster. In this scenario, the median is a more suitable choice than the mean, because it is less sensitive to noise and outliers.</ns0:p><ns0:p>The resulting centroids, which are vectors of the same length as the document-embeddings, serve as high-level discriminating representations of the corresponding events.</ns0:p></ns0:div>
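The silhouette-maximisation loop and the manual computation of the median centroids can be sketched as follows (scikit-learn based; in releases earlier than 1.2 the metric argument of AgglomerativeClustering is named affinity). Function and variable names are illustrative assumptions.

```python
# Sketch of the news-clustering step: agglomerative clustering with cosine distance and
# average linkage, with k selected in [2, 10] by maximising the average silhouette coefficient.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def cluster_news(embeddings, k_min=2, k_max=10):
    embeddings = np.asarray(embeddings)
    best = (None, -1.0, None)                                   # (k, silhouette, labels)
    for k in range(k_min, k_max + 1):
        model = AgglomerativeClustering(n_clusters=k, metric="cosine", linkage="average")
        labels = model.fit_predict(embeddings)
        score = silhouette_score(embeddings, labels, metric="cosine")
        if score > best[1]:
            best = (k, score, labels)
    k, score, labels = best
    # The agglomerative method returns no centroids: use the per-cluster median instead.
    centroids = {c: np.median(embeddings[labels == c], axis=0) for c in np.unique(labels)}
    return labels, centroids, k, score
```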
<ns0:div><ns0:head n='3.5'>Outlier Removal</ns0:head><ns0:p>Not all the articles published by press sources necessarily report events currently taking place: for example, in some cases they might refer to anniversaries of past happenings or they might discuss current affairs from a general perspective, including more than one event. This can cause noise in the formation of the clusters and, to some extent, can negatively influence the features of the centroid. For this reason, it is advisable to detect and remove the outlier documents within each cluster (Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>). Intuitively, these are the observations on which the clustering algorithm was least effective.</ns0:p><ns0:p>Again, the silhouette coefficient (this time in its per-sample version) is used to spot the documents that were poorly clustered: those with lower silhouette scores are typically the ones that lie on the border between two or more groups, causing a higher uncertainty in the clustering task. This is not enough, though: in fact, there might be samples that, even if they are not located on a border, have a weak correlation with the other articles of the same cluster: these are typically the documents that lie further away from the centroid of the group to which they belong. Therefore, the noise-reduction task that we designed exploits two different metrics in order to detect the outliers: the per-sample silhouette coefficient and the cosine distance from the centroid. First of all, the samples are sorted in decreasing order according to these two metrics, respectively, thus obtaining two different rankings. Then, a cutoff threshold is defined on each ranking, by picking a percentile value computed on each of the two lists, respectively (typically somewhere between the 10th and the 30th). Finally, all the samples whose scores are below the cutoff threshold in one of the two rankings are marked as outliers and suppressed. It is straightforward to note that choosing higher percentiles to set the cutoff threshold will make the algorithm more selective, in the sense that it will consider more documents as outliers. In rare extreme cases, this might lead to the total suppression of one or more clusters, if these already contained few samples in the first place and were not compact.</ns0:p><ns0:p>At this point, the new centroids of the affected clusters need to be computed, to account for the elimination of some of the documents. Similarly to what was done before, each centroid is obtained as the median of the document-embeddings that remain in the cluster after the outlier removal.</ns0:p><ns0:p>An example of clustering and outlier removal can be observed in Figure <ns0:ref type='figure' target='#fig_10'>5</ns0:ref>, which presents a bidimensional visualization of the clusters obtained from the financial news published in the U.S. press in the week before the Brexit referendum, an event that took place on the 23rd of June 2016 and that we will use as a running example throughout the paper. Figure <ns0:ref type='figure' target='#fig_10'>5a</ns0:ref> shows the original clusters including all documents. In Figure <ns0:ref type='figure' target='#fig_10'>5b</ns0:ref> the outliers are marked in black. For this example, the 30th percentile was used as the cut-off threshold, in order to make the effects of the algorithm more visible. Centroids are indicated by '+' marks, in the same color as the respective cluster.</ns0:p></ns0:div>
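Under the interpretation that a document is discarded when it falls in the worst percentile of either ranking, the noise-reduction step can be sketched as follows; the function and variable names are illustrative assumptions.

```python
# Sketch of the outlier-removal step: within each cluster, drop the documents with the lowest
# per-sample silhouette or the largest cosine distance from their centroid, then recompute the
# centroids as the median of the surviving document-embeddings.
import numpy as np
from sklearn.metrics import silhouette_samples
from scipy.spatial.distance import cosine

def remove_outliers(embeddings, labels, centroids, percentile=15):
    embeddings, labels = np.asarray(embeddings), np.asarray(labels)
    sil = silhouette_samples(embeddings, labels, metric="cosine")
    keep = np.ones(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        dist = np.array([cosine(embeddings[i], centroids[c]) for i in idx])
        sil_cut = np.percentile(sil[idx], percentile)           # weakest silhouettes of the cluster
        dist_cut = np.percentile(dist, 100 - percentile)        # samples farthest from the centroid
        keep[idx] = (sil[idx] >= sil_cut) & (dist <= dist_cut)
    kept_embeddings, kept_labels = embeddings[keep], labels[keep]
    new_centroids = {c: np.median(kept_embeddings[kept_labels == c], axis=0)
                     for c in np.unique(kept_labels)}
    return kept_embeddings, kept_labels, new_centroids, keep
```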
<ns0:div><ns0:head n='3.6'>Relevant Words Extraction</ns0:head><ns0:p>There are several properties that can be extracted from each cluster to provide insightful information to the user. At the same time, they can serve as a useful means to perform a qualitative evaluation of the clusters, as they make it possible to judge at first glance whether the clusters are meaningful and coherent. These properties include the titles and snippets of the news articles, the time-span covered by the cluster, the percentage of positive and negative words from the specialized lexicon and the list of relevant words. Here we focus our attention on the latter (Figure <ns0:ref type='figure' target='#fig_11'>6</ns0:ref>), as the other ones are trivial to extract.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>At first, all the news articles included in the current time interval are fed to a tf-idf model, regardless of their cluster. The features used to fit the model are the words included in the specialized lexicon, so this is equivalent to filtering the documents' words with the lexicon. The output of the model is a sequence of vectors, one for each document, where the values represent the relevance of the corresponding words for the document. At this point, the tf-idf vectors are grouped up according to the cluster to which the respective documents have been assigned. Then the average of the vectors is computed for each group, thus obtaining a unique score for each feature for each cluster, indicating the relevance of that word for that cluster. Finally, it is sufficient to rank the features and select the top n to get the list of the most relevant words for each group of news.</ns0:p><ns0:p>Tables <ns0:ref type='table' target='#tab_1'>1 and 2</ns0:ref> show the instance of the 3 most relevant headlines and the lists of the 10 most relevant words, respectively, for the clusters obtained on the day of the Brexit referendum. It is clear from the news titles that cluster#0 effectively captures this event, while the others are more focused on different aspects of business and finance (cluster#1 deals with global finance, cluster#2 with stock markets, cluster#3 with volatility, cluster#4 with the Federal Reserve system). The fact that several mentions of Brexit also appear in the headlines of the other clusters is attributable to the huge impact of the British referendum on many spheres of economy around the world. Not surprisingly, also the titles of the Brexit-cluster are characterized by the financial jargon, since the whole set of news on which the event-detection task is performed was selected by this specific field of interest. For the same reason, the variety of semantic fields involved in the lists of relevant words is not so wide between clusters. Noticeably though, these lists reflect quite accurately the content of headlines of Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. </ns0:p></ns0:div>
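A compact sketch of the per-cluster tf-idf ranking described above, restricted to the lexicon vocabulary, is reported below; names are illustrative assumptions.

```python
# Sketch of the relevant-words extraction: fit a tf-idf model whose vocabulary is the daily
# lexicon, average the tf-idf vectors within each cluster and keep the top-n features.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def relevant_words(docs, labels, lexicon, top_n=10):
    vectorizer = TfidfVectorizer(vocabulary=sorted(set(lexicon)))
    X = vectorizer.fit_transform(docs)                          # documents x lexicon terms
    terms = np.array(vectorizer.get_feature_names_out())
    labels = np.asarray(labels)
    ranking = {}
    for c in np.unique(labels):
        mean_scores = np.asarray(X[labels == c].mean(axis=0)).ravel()
        ranking[c] = list(terms[np.argsort(mean_scores)[::-1][:top_n]])
    return ranking

# relevant = relevant_words(window_docs, cluster_labels, lexicon, top_n=10)
```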
<ns0:div><ns0:head n='3.7'>Tweet Assignment</ns0:head><ns0:p>The goal of this phase is to enrich each cluster of news with a group of tweets that are semantically correlated with the event associated with the cluster (Figure <ns0:ref type='figure' target='#fig_12'>7</ns0:ref>). First of all, we collect from Stocktwits all the tweets relevant to the market under analysis, published on the most recent day of the time interval used for the event-detection task 6 . The duplicate tweets are removed in order to avoid the negative influence of spam. A vector representation is constructed for every tweet with the same method used for the news articles: the punctuation is removed, the text is tokenized, the words are filtered with the specialized lexicons and the average of the embeddings of the remaining words is computed. Subsequently, the actual assignment takes place. Each tweet-embedding is compared to each news-cluster centroid using the cosine distance. The tweet is attached to the closest cluster only if this distance is smaller than a fixed tweet distance threshold; otherwise, the tweet is considered noise and is not associated with any cluster.</ns0:p><ns0:p>An example of tweet assignment can be observed in Table <ns0:ref type='table'>3</ns0:ref>, which presents the lists of the 3 most relevant tweets for the clusters obtained on the day of the Brexit referendum. Most importantly, the content of these tweets is totally coherent with the titles reported in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. This means that the association of tweets to news-clusters was successful. It is noteworthy that even URLs, in case they contain meaningful keywords, can provide precious information for the semantic representation of the tweets and for the assignment task. This can be observed in the URLs of the first two tweets of cluster#0, which contain direct references to the Brexit referendum.</ns0:p></ns0:div>
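Assuming that the tweets have already been deduplicated and embedded with the same lexicon-filtered averaging used for the news, the assignment rule can be sketched as follows; names are illustrative assumptions.

```python
# Sketch of the tweet-assignment step: attach each tweet to the closest news-cluster centroid
# if the cosine distance is below the tweet distance threshold (0.5 in the settings of Section 4.2).
import numpy as np
from scipy.spatial.distance import cosine

def assign_tweets(tweet_embeddings, centroids, max_distance=0.5):
    assignments = {c: [] for c in centroids}                    # cluster id -> list of tweet indices
    discarded = []
    for i, vec in enumerate(tweet_embeddings):
        if not np.any(vec):                                     # no lexicon word occurred in the tweet
            discarded.append(i)
            continue
        distances = {c: cosine(vec, centroid) for c, centroid in centroids.items()}
        closest = min(distances, key=distances.get)
        if distances[closest] < max_distance:
            assignments[closest].append(i)
        else:
            discarded.append(i)                                 # treated as noise
    return assignments, discarded
```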
<ns0:div><ns0:head n='3.8'>Alert Generation</ns0:head><ns0:p>The last step in the pipeline consists of the detection of the hot events: these are facts that not only have been reported in the news, but are also widely discussed on the social media platforms. The amount of content produced on the Internet about a certain episode is an insightful indicator of the entity of that episode and its consequences. For example, a remarkable popularity of a certain event among the users of Stocktwits is likely to translate into a potential impact on the market, since this website deals mainly with business and finance. Hence the importance of generating alerts that make the investor or trader aware of factors that they should take into account before operating on the market.</ns0:p><ns0:p>This task exploits the tweets-cluster assignment produced in the previous step and simply checks if the percentage of assigned tweets (among all clusters) with respect to the overall number of tweets published on the most recent day of the time interval (thus including also the discarded tweets) is above a fixed alert threshold. If this is true, an alert is generated.</ns0:p><ns0:p>The plot in Figure <ns0:ref type='figure' target='#fig_14'>8</ns0:ref> shows the percentage of assigned tweets between the 10th of June 2016 and the 10th of July 2016. As expected, a peak is observed on the 23rd of June, the day of the Brexit referendum, and an alert is generated.</ns0:p><ns0:p>6 This can be easily done by using the cashtag functionality, i.e. by searching for tweets that contain the symbol $ followed by the market code.</ns0:p><ns0:p>No tweets assigned.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. List of the 3 most relevant tweets (i.e. closest to the respective centroid) for each of the 5 clusters obtained from the news collected in the week before the Brexit referendum.</ns0:p></ns0:div>
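The alert rule itself reduces to a single check on top of the assignment output; a sketch is given below, with the 3% alert threshold of Section 4.2 as the default value.

```python
# Sketch of the alert-generation rule: raise an alert when the share of tweets attached to some
# news-cluster, over all tweets published on the most recent day, exceeds the alert threshold.
def generate_alert(assignments, n_tweets_today, alert_threshold=0.03):
    n_assigned = sum(len(indices) for indices in assignments.values())
    assigned_ratio = n_assigned / n_tweets_today if n_tweets_today else 0.0
    return assigned_ratio >= alert_threshold, assigned_ratio

# alert, ratio = generate_alert(assignments, n_tweets_today=len(todays_tweets))
```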
<ns0:div><ns0:head n='4'>EXPERIMENTAL SETTINGS</ns0:head><ns0:p>In this section we will illustrate the datasets we have employed within our study and the methodology we have followed for the experimental evaluation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Datasets</ns0:head></ns0:div>
<ns0:div><ns0:head>Dow Jones DNA</ns0:head><ns0:p>The Dow Jones 'Data, News and Analytics' dataset 7 provides documents from more than 33,000 globally renowned newspapers, including e.g. The Wall Street Journal, the Dow Jones Newswires and The Washington Post. The publications are both in print and online format and cover a wide variety of topics, such as finance, business, current affairs and lifestyle. The delivery frequency ranges from ultra-low latency newswires to daily, weekly, or monthly editions. For every article in the dataset, the headline, the snippet and the full body are available. Furthermore, every item is enriched with a set of metadata providing information about the source, the time and place of the publication, the relevant companies and the topics, among others.</ns0:p><ns0:p>Content usage rights vary based on the specific content, API, or feed combination. These rights include the display for human consumption or text mining for machine consumption and the content retention period.</ns0:p></ns0:div>
<ns0:div><ns0:head>Stocktwits data</ns0:head><ns0:p>Stocktwits 8 is a social media platform designed for sharing ideas between investors, traders, and entrepreneurs. It was founded in 2008 and currently counts over two million registered community members and millions of monthly visitors. Inspired by Twitter, it allows users to share and explore streams of short messages with a maximum of 140 characters, organized around tickers referring to specific stocks and financial securities in general. This is achieved through the use of cashtags, which consist of the symbol '$' followed by the code of a financial security (e.g., '$AAPL', '$FB').</ns0:p><ns0:p>The dataset that we employed in our study contains the entire stream of tweets about the S&P 500 published between June 2016 and March 2020. These messages were downloaded by means of the official API 9 , selecting only the ones that contained the cashtag '$SPX', which corresponds to the aforementioned stock. The whole obtained collection contains 283,473 tweets.</ns0:p><ns0:p>Besides the full text of the tweet, every item in the dataset comes with a set of metadata, including the exact time of the publication, the number of 'likes' (positive reactions by other users) received by the tweet, the sentiment score associated with the content and the number of the author's followers.</ns0:p></ns0:div>
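As an illustration of this selection step, the sketch below filters a local dump of messages by cashtag and drops exact duplicates; the 'body' field name is an assumption about the dump layout, not the official API schema.

```python
# Sketch of the tweet-collection filter: keep only the messages carrying the '$SPX' cashtag
# and remove exact duplicates (a simple guard against spam).
def filter_cashtag(tweets, cashtag="$SPX"):
    seen, kept = set(), []
    for tweet in tweets:
        text = tweet["body"]                 # hypothetical field holding the message text
        if cashtag.lower() in text.lower() and text not in seen:
            seen.add(text)
            kept.append(tweet)
    return kept
```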
<ns0:div><ns0:head>Standard & Poor's time-series</ns0:head><ns0:p>Another fundamental data source exploited in our analysis consists of the stock price time series of the Standard & Poor's 500 Index, which measures the market performance of 500 large companies listed on stock exchanges in the United States. Companies are weighted in the index in proportion to their market value. The 10 largest companies in the index account for 26% of the market capitalization of the index.</ns0:p><ns0:p>These are, in order of weighting, Apple Inc., Microsoft, Amazon.com, Alphabet Inc., Facebook, Johnson & Johnson, Berkshire Hathaway, Visa Inc., Procter & Gamble and JPMorgan Chase.</ns0:p><ns0:p>The dataset that we used for our evaluation was collected at a daily frequency and includes the following information:</ns0:p><ns0:p>• open: price of the stock at the opening time of the market;</ns0:p><ns0:p>• close: price of the stock at the closing time of the market;</ns0:p><ns0:p>• high: maximum price reached by the stock during the day;</ns0:p><ns0:p>• low: minimum price reached by the stock during the day;</ns0:p><ns0:p>• volume: number of operations performed on the stock during the day.</ns0:p><ns0:p>The aforementioned indicators are collected in an aggregated way, taking into account the values recorded for all companies included in the index.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Methodology and Settings</ns0:head><ns0:p>The goal of the experimental framework that we designed to evaluate the proposed approach is twofold: on the one hand, we wish to verify that the clustering algorithm, supported by the news-modeling method and the outlier removal, is effective at separating the news stories according to their content and, thus, at providing an insightful way to inspect events. On the other hand, we want to assess the accuracy of the alert-generation algorithm, in particular to confirm that there is a correlation between hot events spotted by our approach and remarkable real-world events. For our purposes, we performed the following set of experiments: i) comparison of different clustering techniques; ii) event-detection qualitative evaluation;</ns0:p><ns0:p>iii) alert-generation assessment.</ns0:p><ns0:p>Assessing the performance of an event-detection task is daunting, and obtaining a thorough quantitative evaluation is not trivial either. This is partly due to a certain degree of subjectivity implied in the definition of an event, even more when we consider it with respect to a very specific context. In fact, an important occurrence such as a terrorist attack taking place in Europe, which is relevant in an absolute sense, might not be perceived as a relevant event in the sphere of U.S. finance. Moreover, to the best of our knowledge, universally recognized benchmarks of financial events are not available. For these reasons, in order to limit the subjectivity of the choice, we decided to select a list of events in a deterministic way, based on the weekly variations of the S&P 500 Index (more details on the selection method will be given in Section 5). Intuitively, we follow the assumption that important financial events are commonly associated with significant reactions of the stock markets, as suggested by the behaviour of the S&P 500 price series around the selected events.</ns0:p><ns0:p>The implementation of the clustering algorithms and of the clustering performance metrics was based on the popular scikit-learn library 10 ; the K-Means, K-Medoids, K-Medians algorithms used for comparison were implemented through the Pyclustering library 11 ; Natural Language Toolkit 12 and gensim 13 libraries were exploited for text pre-processing. As far as word-embeddings are concerned, we relied on the pre-computed word2vec model based on <ns0:ref type='bibr' target='#b39'>(Mikolov et al., 2013)</ns0:ref>, trained on part of a Google News dataset composed of about 100 billion words. The model contains 300-dimensional vectors for 3 million words and phrases 14 . Throughout the experiments presented in the paper, the parameters were set as follows (if not specified otherwise): each daily lexicon was created on a set of news documents collected from a time window of 4 weeks, excluding all stopwords and terms that appeared in more than 90% or fewer than 10 documents, and the final lexicon consists of the words below the 20th and above the 80th percentiles of the ranking. The look-back window to collect the news documents to be clustered on each day is 7 days. The cutoff threshold for the outlier removal stage is set to the 15th percentile. The tweet distance threshold for the tweet-assignment task is set to 0.5; the alert threshold is set to 3%. All the values of these parameters were carefully selected experimentally.</ns0:p></ns0:div>
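For convenience, the parameter values listed above can be gathered into a single configuration object, as in the sketch below; this is only a reference summary, not part of the actual code base.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PipelineConfig:
    lexicon_window_days: int = 28          # news window used to build each daily lexicon (4 weeks)
    max_doc_freq: float = 0.90             # drop terms appearing in more than 90% of the documents
    min_doc_count: int = 10                # drop terms appearing in fewer than 10 documents
    lexicon_low_percentile: int = 20       # keep words below the 20th percentile of the ranking
    lexicon_high_percentile: int = 80      # ... and above the 80th percentile
    news_lookback_days: int = 7            # look-back window of the daily clustering
    outlier_cutoff_percentile: int = 15    # cutoff of the outlier-removal stage
    tweet_distance_threshold: float = 0.5  # maximum cosine distance for tweet assignment
    alert_threshold: float = 0.03          # 3% of the tweets of the day

CONFIG = PipelineConfig()
```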
<ns0:div><ns0:head n='5'>RESULTS</ns0:head><ns0:p>In this section we will show the results we have obtained. In particular we will show the results related to the clustering algorithm, those related to three specific events, and those related to the alert-generation algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Clustering performance evaluation</ns0:head><ns0:p>The first aspect we investigate is the choice of the clustering algorithm. As mentioned, the average Silhouette Coefficient is a standard metric to evaluate the goodness of a set of clusters. However, since it plays a role in the very construction of the clusters, we need some additional counter-checks to make the assessment more robust and less skewed. For this reason, we decided to include three more indicators in our evaluation:</ns0:p><ns0:p>• Dunn Index: similarly to the Silhouette Coefficient, it is a standard metric used to assess the performance of a clustering method when the ground truth is not available. It takes non-negative values, with higher values indicating better clustering, and is defined as:</ns0:p><ns0:formula xml:id='formula_4'>Dunn Index = min 1≤i≤c min 1≤ j≤c, i≠ j δ (X i , X j ) / max 1≤k≤c {∆(X k )} ,</ns0:formula><ns0:p>where c is the total number of clusters, δ (X i , X j ) is the intercluster distance between clusters X i and X j and ∆(X k ) is the intracluster distance within cluster X k .</ns0:p><ns0:p>• Number of extracted clusters: this is also a useful indicator to evaluate the quality of a set of clusters, as higher values typically suggest a better separability of the data.</ns0:p><ns0:p>• Overlapping between the clusters' relevant words: it is estimated by computing the Jaccard Index 15 between the lists of top-10 relevant words for each pair of clusters, and by averaging the results.</ns0:p><ns0:p>A small average overlapping signifies that news documents belonging to different groups discuss different topics and, therefore, that the articles were properly split according to their semantic content.</ns0:p></ns0:div>
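A minimal sketch of how the Dunn Index (here with the smallest pairwise inter-cluster distance and the largest intra-cluster diameter, both under the cosine metric) and the average Jaccard overlap between top-10 word lists can be computed is reported below; it assumes at least two clusters, and the names are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import cdist

def dunn_index(embeddings, labels):
    """Smallest inter-cluster distance divided by the largest intra-cluster diameter."""
    embeddings, labels = np.asarray(embeddings), np.asarray(labels)
    clusters = [embeddings[labels == c] for c in np.unique(labels)]
    min_between = min(cdist(a, b, metric="cosine").min() for a, b in combinations(clusters, 2))
    max_within = max(cdist(c, c, metric="cosine").max() for c in clusters)
    return float("inf") if max_within == 0 else min_between / max_within

def average_word_overlap(top_words):
    """`top_words` maps each cluster to its list of top-10 relevant words."""
    pairs = list(combinations(top_words.values(), 2))
    jaccard = [len(set(a) & set(b)) / len(set(a) | set(b)) for a, b in pairs]
    return float(np.mean(jaccard)) if jaccard else 0.0
```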
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We used these metrics and the Silhouette Coefficient to compare four different techniques, namely</ns0:p><ns0:p>Agglomerative clustering, K-Means, K-Medians and K-Medoids. These algorithms were executed on the same instances of the data selected for our evaluation, on each day of the time interval, using a look-back window of 1 week. Figure <ns0:ref type='figure' target='#fig_18'>12</ns0:ref> shows the outcome of this experiment, indicating that Agglomerative is the algorithm that leads to better performance in terms of Silhouette, Dunn Index and most remarkably in the number of extracted clusters, while the overlapping of relevant words does not differ much. Please consider that the metrics were computed only after the outlier removal phase, which is responsible for an improvement of approximately 50% of both Silhouette and Dunn Index.</ns0:p></ns0:div>
<ns0:div><ns0:head>Event-detection evaluation</ns0:head><ns0:p>The results presented in the previous section, although obtained through an unsupervised validation, prove by themselves the effectiveness of the clustering algorithm at detecting events from a set of news. For illustration purposes, in this section we will carry out a qualitative analysis of the output of the clustering algorithm, focusing our attention on three specific events:</ns0:p><ns0:p>• The 2016 United States Presidential Elections (8th November 2016);</ns0:p><ns0:p>• The escalation of the U.S.-China trade war (9th May 2019);</ns0:p><ns0:p>• The outbreak of the Covid-19 pandemic (28th January 2020).</ns0:p><ns0:p>These three events are well known worldwide, so their relevance hardly needs further justification (we invite the reader to refer to Section 3 for an analysis of the Brexit referendum in June 2016).</ns0:p><ns0:p>For each event, we picked one date among the most significant ones: the 8th of November is the actual date of the 2016 U.S. elections, which led to the victory of Donald Trump 16 ; the 9th of May 2019 is a few days after Trump threatened to raise tariffs on China and just one day before the U.S. actually increased tariffs from 10% to 25% 17 ; the 28th of January is the first day on which the total number of confirmed cases of Covid-19 worldwide surpassed the one-thousand threshold, passing from 793 to 1,786 with a dramatic 125% daily change 18 . We applied the event-detection algorithm on the news published in the previous week, not including that date.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_19'>13</ns0:ref> we illustrate the results of the event-detection for each tested clustering algorithm. We recall that the Agglomerative clustering outperforms the others (as can also be seen from the plots), and, therefore, we will focus our analysis on it (subfigures a, b and c).</ns0:p><ns0:p>From the 2D visualizations presented in Figure <ns0:ref type='figure' target='#fig_19'>13</ns0:ref>, it can be seen that the points representing the news documents properly group together to form clusters. Interestingly enough, Figure <ns0:ref type='figure' target='#fig_19'>13c</ns0:ref> shows a strong polarization of the news, with only two clusters obtained. This is probably ascribable to the epochal impact of the Covid-19 outbreak, which drew a considerable part of the attention of the media, with many other topics left uncovered in the press. The average Silhouette Coefficient is decidedly above 0 for all three case studies (0.28, 0.27 and 0.36, respectively), indicating a satisfactory performance of the Agglomerative algorithm. These results are confirmed by the lists of relevant words (Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>), relevant news headlines (Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>) and relevant tweets (Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>), which accurately reflect the semantics of the events (the latter have been generated from the agglomerative clustering output only).</ns0:p></ns0:div>
<ns0:div><ns0:head>Alert-generation evaluation</ns0:head><ns0:p>As mentioned in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>, the Alert-generation is the last step of the proposed pipeline and is performed on top of the clustering results and the tweets assignment to the generated clusters. The accuracy of the alert-generation algorithm can be gauged in terms of how many hot events it is able to spot in a given ground truth. As mentioned in Section 4.2, we selected the ground truth for our evaluation by looking at the weekly variations of the S&P 500 Index. More in detail, for every day d between June 2016 and March 2020 we compute the variation, in absolute value, between the close price of d and the close price of d + 7 (i.e., 7 days after d). This quantity is formally defined as:</ns0:p><ns0:formula xml:id='formula_5'>∆ d = |close (d+7) − close d | / close d .<ns0:label>(2)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>The days d for which ∆ d > 0.02 are marked as event days. The threshold of 0.02, which corresponds to a 2% weekly variation, is set experimentally and leads to approximately 15% of days being marked as an event day. Consecutive event days are aggregated to form events, which are thus defined as contiguous intervals, delimited by a start date and an end date. In order to avoid splitting intervals that actually refer to the same real event, we ignore interruptions of up to 3 days in the chain of consecutive event days; this yields intervals such as (start: 2018-01-12, end: 2018-01-15). We assess the recall of the alert-generation algorithm using the following method: for each (start date, end date) pair, we check if the algorithm produces at least one alert within that interval. In the positive case, the event is considered as spotted. The final recall is given by the number of spotted events out of the total number of events. On the other hand, to calculate the precision we iterate over the daily alerts generated by the algorithm. If an alert lies within an event interval defined by a (start date, end date) pair, then it is considered a hit; otherwise, it is considered a false alarm. The final precision is obtained as the number of hits out of the overall number of alerts. The F-score is computed with the standard formula 2 • (precision • recall) / (precision + recall). We have repeated the experiment for different values of the alert threshold, in a range between 1 and 5, with higher values producing fewer alerts and thus making the algorithm more selective. Not surprisingly, recall scores become lower as the threshold is increased, while precision follows the opposite tendency, as shown in Figure <ns0:ref type='figure' target='#fig_21'>14</ns0:ref>: this is a well-known phenomenon in Machine Learning evaluation, commonly referred to as the trade-off between precision and recall. Note that we have computed precision, recall and F-score for each of the four clustering algorithms, although the discussion below targets the results obtained with the Agglomerative clustering only. It is remarkable that, with the lowest threshold, our algorithm is able to identify almost all the events listed in the ground truth, while keeping the number of false alarms relatively small (the precision is above 0.5). It is worth noting that, in this specific application field, recall can be considered more important than precision: in fact, for a trader who relies on the alert-generation algorithm to make informed decisions, receiving some false alarms is arguably a lesser evil than missing relevant warnings about events that actually take place. In order to study this phenomenon further, we manually inspected several clusters that led to a false alarm, in order to understand which kinds of events cause this behavior. In many cases, we observed events, such as quarterly earnings reports, that generate a big 'hype' among Stocktwits users, but usually do not produce a proportional impact on the stock price. Furthermore, we calculated the percentage of days marked with an alert out of the whole period on which the algorithm was executed.</ns0:p>
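The evaluation protocol just described can be sketched as follows; the pandas-based layout of the price series and the variable names are our own assumptions.

```python
# Sketch of the alert-generation evaluation: build event intervals from the 2% weekly variation
# of the close price and score the daily alerts against them.
import pandas as pd

def event_intervals(close, threshold=0.02, gap=3):
    """`close` is a pandas Series of closing prices indexed by calendar day."""
    shifted = close.shift(-7, freq="D")                     # close price of day d + 7, re-labelled at d
    delta = ((shifted - close).abs() / close).dropna()      # |close_{d+7} - close_d| / close_d
    event_days = sorted(delta.index[(delta > threshold).to_numpy()])
    intervals = []
    for day in event_days:
        if intervals and (day - intervals[-1][1]).days <= gap + 1:
            intervals[-1][1] = day                          # extend the current interval
        else:
            intervals.append([day, day])                    # open a new interval
    return [(start, end) for start, end in intervals]

def alert_scores(alert_days, intervals):
    spotted = sum(any(start <= a <= end for a in alert_days) for start, end in intervals)
    hits = sum(any(start <= a <= end for start, end in intervals) for a in alert_days)
    recall = spotted / len(intervals) if intervals else 0.0
    precision = hits / len(alert_days) if alert_days else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_score

# Hypothetical usage:
# close = pd.read_csv("sp500.csv", index_col="date", parse_dates=True)["close"]
# precision, recall, f_score = alert_scores(alert_days, event_intervals(close))
```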
Figure <ns0:ref type='figure' target='#fig_22'>15</ns0:ref> demonstrates that, for each of the employed clustering algorithms, even with the lower thresholds, the probability of receiving an alert is still reasonably low, confirming that the algorithm is well-aimed.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>An interesting finding is that, in several cases, the alert is produced with a delay of several days after the actual event took place. This can be partly ascribed to the asynchronism between newswires providers and social media <ns0:ref type='bibr' target='#b47'>(Petrović et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b44'>Osborne and Dredze, 2014)</ns0:ref>. In addition to this, in our specific application it is important to take into account the latency between the event itself and its effects on the market. In fact, an event might appear in financial news and on financial-related media only after some time, when its economical consequences manifest themselves. This was the case, for example, for the Covid-19 emergency: American investors, consumers and market in general basically ignored news of the virus outbreak in January 2020, even though several hundred cases had already been reported and Wuhan City (first main site of the virus) had already been quarantined by the Chinese authorities. Nevertheless, starting from February the virus became the main concern of the financial-related press and media and the stock market suffered a violent plunge towards the end of the month due to Covid-19 fears 19 .</ns0:p><ns0:p>To conclude the visual inspection of the events discussed in the previous section, Figure <ns0:ref type='figure' target='#fig_23'>16</ns0:ref> illustrates the plot of the percentage of total assigned tweets, whose behaviour determines the generation of the alerts (as explained in Section 3.8). It is straightforward to notice that the curves reach their peaks in correspondence of the date of the event for all three case studies, further confirming the sensitivity of our approach. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>In this work, we proposed an event-detection approach tailored for financial applications that leverages the integration of traditional newswires and social media data in order to extract information about real-world events, on a daily basis. Starting from a specialized domain-aware representation of textual data obtained through ad-hoc lexicons and word-embeddings, our pipeline is able to identify groups of semantically related news by means of a hierarchical clustering algorithm. An outlier-removal module refines the clusters by discarding misclassified documents, so that a noise-free, meaningful representation of events can be computed. At this point, the news clusters are enriched by data coming from social media, with the goal of estimating the impact of events on public opinions. Despite the defined tweet distance threshold that should avoid this case, it might happen that tweets containing different information are associated with the same cluster triggering the presence of a hot event that would correspond to a false positive.</ns0:p><ns0:p>Although this condition has not occurred within our experiments, we will investigate it further in future works we are headed. Finally, by monitoring the activity on social media platforms, an alert-generation algorithm produces warnings to the final users, in order to make them aware of potentially relevant events that have a big resonance on the Internet. To identify the events of a day d, our proposed approach generates the lexicon out of news articles and stock data information of previous days up to d − 1 without looking at the future. This makes our approach suitable for real-time event detection.</ns0:p><ns0:p>One of the advantages of the proposed approach is that, although it is domain-specific, it can be easily extended to various application fields with minimum modifications. However, in this work we described the specific pipeline and experimental framework that we implemented for the financial sphere. More in detail, word2vec models can be trained ad-hoc on text corpora in other languages, as the algorithm itself is not language-dependent. As an example, libraries such as spaCy 20 provide pre-trained word-embedding models in 15 languages. Although the approach is scalable and does not have high computational times, each of its steps can be run on different machines by exploiting its pipeline architecture. Also, big data frameworks such as Apache Spark, Apache Hadoop, ElasticSearch can be leveraged and managed by cloud systems (e.g. Amazon AWS) to further make the approach faster especially if the input dataset grows exponentially.</ns0:p><ns0:p>We validated our approach by means of a qualitative and quantitative evaluation, based on Dow Jones' Data, News and Analytics dataset, a stream of data collected from the Stocktwits platform and the stock price time series of the S&P 500 Index. Our experiments show that the approach is effective at identifying clusters of news that correspond to relevant real-world events and at extracting meaningful properties about the associated topic. Besides, the alert-generation algorithm produces warning about hot events with a satisfactory accuracy, covering the majority of financial events taking place in the real world and keeping the number of false alarms relatively small. 
An added value of our evaluation is given by the visual inspection of a selected number of significant real-world events, starting from the Brexit Referendum and reaching until the recent outbreak of the Covid-19 pandemic in early 2020.</ns0:p><ns0:p>One of the applications we envision is the creation of a set of financial indicators that can help improving the accuracy of existing robo-advisory and robo-trading systems. The idea is that identified hot events should be associated to high stock variations and this information might be leveraged to further tune financial forecasting systems.</ns0:p><ns0:p>In the future, we intend to carry out a deeper inspection of the temporal aspects related to the eventdetection task. In particular, we want to gain a better understanding of the effect produced by parameters such as the size of the time windows used for the lexicon creation or for the clustering algorithm. Together with this, we will evaluate the timeliness of the alert-generation algorithm, with the goal of reducing the delay of the generated warnings with respect to the actual starting moment of the events. Another aspect that deserves further investigation is the method used to represent social media data in a vector space. Specifically, we intend to refine the representation of tweets by applying pre-processing techniques that are required by the specificity of the language forms commonly employed by users on Internet platforms. These methods include the assessment of the veracity and reliability of the published content and the detection of slang, grammatical mistakes, misspellings and abbreviations. Last but not least, we would like to take full advantage of the power and benefit that Semantic Web technologies bring: as Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>big data frameworks previously mentioned should address potential computational or scalability problems we might encounter.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>3</ns0:head><ns0:label /><ns0:figDesc>https://developer.dowjones.com/site/global/home/index.gsp 3/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Overall architecture of the proposed approach.</ns0:figDesc><ns0:graphic coords='8,141.73,553.23,416.34,141.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>4</ns0:head><ns0:label /><ns0:figDesc>https://tinyurl.com/yygyo6wk 5 https://www.nltk.org/ 8/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Flowchart of the clustering algorithm.</ns0:figDesc><ns0:graphic coords='10,158.28,265.85,380.47,99.21' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>silhouette = (b − a) / max(a, b), where a is the mean distance between a sample and all other points in the same class and b is the mean distance between a sample and all other points in the next nearest cluster.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>( a )Figure 3 .</ns0:head><ns0:label>a3</ns0:label><ns0:figDesc>Figure3. Illustration of the silhouette maximization method. For space reasons, only the output with 4, 5 and 6 clusters is showed(Figures 3a, 3b, 3c, respectively). In this case, the algorithm would choose the number of clusters k = 5, which is the value that leads to the highest silhouette score (0.27 against 0.24 in the other two settings). The bi-dimensional visualization of news clusters is obtained by means of t-SNE, a tool to visualize high-dimensional data (van der Maaten and Hinton, 2008), which reduces the dimension of embeddings from 300 to 2. Every point represents a news in the 2D space and each color represents a different cluster.</ns0:figDesc><ns0:graphic coords='11,141.73,160.13,132.34,97.22' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Flowchart of the outlier removal procedure.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.54,117.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure5. Illustration of the outlier removal method on the weeks published in the week before the Brexit referendum. Figure5ashows the original clusters including all documents. In Figure5bthe outliers are marked in black. For this example, the 30th percentile was used as the cut-off threshold, in order to make the effects of the algorithm more visible. Centroids are indicated by '+' marks, in the same color of the respective cluster.</ns0:figDesc><ns0:graphic coords='12,141.73,380.80,202.66,150.47' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Flowchart of the algorithm used to extract the most relevant words from each cluster.</ns0:figDesc><ns0:graphic coords='13,144.28,315.13,408.48,79.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Flowchart of the tweet assignment task.</ns0:figDesc><ns0:graphic coords='14,153.86,361.27,389.34,130.14' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>The polls are closer than the establishment cares to admit http://uk.reuters.com/article/uk-britain-eu-tns-poll Cameron and Osborne have credibility issues with British http://www.express.co.uk/news/uk/682561/david-cameron-eu-referendum-european-union -brexit-germany-boris-johnson-brussels EU referendum outcomes explained https://www.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Plot of the percentage of assigned tweets (among all clusters) with respect to the overall number of published tweets, for each day in the interval between the 10th of June 2016 and the 10th of July 2016. The red markers indicate the generated alerts, while the red horizontal line represents the alert threshold.</ns0:figDesc><ns0:graphic coords='15,150.97,479.86,395.10,168.90' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figures 9 ,</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figures 9,10 and 11, that show the correlation between the weekly variations of S&P 500 stock price and relevant events taking place in the U.S. and in the rest of the world.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Correlation between the weekly variations of the S&P 500 stock price and relevant events taking place in the U.S. and worldwide in 2017. Source: https://gordianadvisors.com/</ns0:figDesc><ns0:graphic coords='17,146.05,329.21,404.94,294.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure 10. Correlation between the weekly variations of the SP500 stock price and relevant events taking place in the U.S. and worldwide in 2018. Source: https://gordianadvisors.com/</ns0:figDesc><ns0:graphic coords='18,147.77,65.99,401.51,287.97' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Comparison of the Silhouette Coefficient, Dunn Index, number of clusters obtained by different clustering algorithms and overlapping between the clusters' relevant words. The horizontal orange line represents the median of the obtained scores whereas the average is indicated between parenthesis. For further details we refer the reader to the official documentation of matplotlib library: https: //matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.boxplot.html.</ns0:figDesc><ns0:graphic coords='20,141.73,354.17,202.65,160.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure13. Illustration of the news clusters extracted on the three case studies considered in our qualitative analysis. The cluster associated to the event is highlighted by a circle. The correspondence between cluster and event is easily understood by manually reading the relevant words and the headlines of the documents that were associated to that label. For information about the 2D visualization technique, please refer to caption in Figure3.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>days. For example, if the event days are2018-01-01, 2018-01-02, 2018-01-03, 2018-01-12, 2018-01-13, 2018-01-15, then the resulting events are defined by the intervals(start: 2018-01-01, end: 2018-01-03) </ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Precision, recall and F-score achieved by the alert-generation algorithm for different values of alert threshold and for each of the four clustering approaches.</ns0:figDesc><ns0:graphic coords='24,141.73,417.69,186.10,129.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Percentage of alerts produced by the alert-generation algorithm in the time-span considered for the experiments (June 2016 -March 2020), for different values of alert threshold.</ns0:figDesc><ns0:graphic coords='25,141.73,211.74,186.11,129.90' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. Plot of the percentage of assigned tweets with respect to the overall number of published tweets, for each day in the interval around the dates of the events included in our qualitative evaluation, respectively. The red markers indicate the generated alerts, while the red horizontal line represents the alert threshold.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>such we would like to employ ontologies and best practices of the Semantic Web for the extraction and identification of particular events in order to improve further the obtained clustering. The employment of 20 https://spacy.io/ 26/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,158.28,98.42,380.50,161.34' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,158.28,277.69,380.49,161.74' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,158.28,457.37,380.48,161.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Lists</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>-D.C. juggernaut in manufacturing is splitting in two</ns0:cell></ns0:row><ns0:row><ns0:cell>#1</ns0:cell><ns0:cell>-Global Finance: Abu Dhabi Banks Considering Merger -Deal would create biggest lender in Middle East; industry stocks rally in region</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Global Finance: Bankruptcy Filing By Phone Firm Hits Big Brazilian Bank</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-As Fears of Brexit Ease, Wall St. Thrives</ns0:cell></ns0:row><ns0:row><ns0:cell>#2</ns0:cell><ns0:cell>-Health and Tech Shares Lead a Down Day for the Market</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Market Ends a Losing Streak</ns0:cell></ns0:row><ns0:row><ns0:cell>#3</ns0:cell><ns0:cell>-This Time Around, the Volatility Index Matters -Stock Volatility Expected to Last</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Stocks Fall 5th Day in Row -Fed rate decision likely means high-dividend shares</ns0:cell></ns0:row><ns0:row><ns0:cell>#4</ns0:cell><ns0:cell>will benefit as banks are pressured -Growth Tepid, Fed Slows Plan to Raise Rates</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Brexit fears lead Fed to postpone increase in key interest rate</ns0:cell></ns0:row></ns0:table><ns0:note>investors shift their bets as opinion surveys tilt slightly to Britain staying in EU -Relief Rally Lifts Stocks and Oil -Dow industrials gain 129.71 as bets rise that U.K. would stay in EU; crude jumps 2.9% -Global markets rally as polls show that enthusiasm for Brexit is waning of 3 most relevant titles (i.e., pertaining to the three documents that are closest to the respective centroids) for each of the 5 clusters obtained from the news collected in the week before the Brexit referendum (cluster#3 contains only 2 news documents in total).12/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>List of the 10 most relevant words for the cluster obtained on the day of the Brexit referendum.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Lists of top-10 relevant words for the three case studies considered in our qualitative evaluation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>List of the 3 most relevant headlines (i.e. closest to the respective centroid) for the three events considered in the qualitative evaluation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>EVENT</ns0:cell><ns0:cell>TOP-3 RELEVANT TWEETS</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Hillary Clinton Wins!</ns0:cell></ns0:row><ns0:row><ns0:cell>2016 U.S. elections</ns0:cell><ns0:cell>-The stock market's continual favoritism of Hillary Clinton proves that she has been bought. Corruption loves company.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Markets says 'Hillary Clinton Wins'. Congratulation New President</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Goldman Sachs think the increase in tariffs will be narrowly avoided.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Odds of new tariffs at 40% if the Chinese delegation still comes.</ns0:cell></ns0:row><ns0:row><ns0:cell>U.S-China trade war</ns0:cell><ns0:cell>-Tariff increase on Chinese imports will take effect on May 10 -Federal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Register</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-'Reuters: Trump's punitive tariffs will burden consumers'; yeah like it...</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Mainland Chinese, Hong Kong stocks tumble as Covid-19 death toll rises</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>-Second U.S. Covid-19 case is Chicago resident who traveled to Wuhan</ns0:cell></ns0:row><ns0:row><ns0:cell>Covid-19 outbreak</ns0:cell><ns0:cell>-3M Ceo says there factories are working 24/7 making masks &</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>protective equipment to fight the virus. Buy your calls while there</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>cheap. #stocks #covid-19</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>List of the 3 most relevant tweets (i.e. closest to the respective centroid) for the three events considered in the qualitative evaluation. Please keep in mind that the time interval used for the U.S. elections does not include the outcome of the polls (hence the wrong forecasts by users that initially proclaimed Hillary Clinton's victory).</ns0:figDesc><ns0:table /><ns0:note>22/30PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)</ns0:note></ns0:figure>
<ns0:note place='foot' n='1'>http://www.twitter.com 2 http://www.stocktwits.com 2/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)Manuscript to be reviewed</ns0:note>
<ns0:note place='foot' n='7'>https://developer.dowjones.com/site/global/home/index.gsp 8 http://www.stocktwits.com 9 api.stocktwits.com/developers/docs 15/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)Manuscript to be reviewed Computer Science</ns0:note>
<ns0:note place='foot' n='10'>http://scikit-learn.org 11 http://pyclustering.github.io 12 http://www.nltk.org 13 http://radimrehurek.com/gensim/index.html 14 http://code.google.com/archive/p/word2vec/ 15 The Jaccard Index between two lists is defined as the size of their intersection divided by the size of their union 18/30 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55204:2:0:NEW 15 Feb 2021)</ns0:note>
</ns0:body>
" | "Minor Revision
February 15, 2021
Dear Editor,
first of all, we would like to thank you for managing our manuscript and the
reviewers for their constructive remarks. We completed the requested minor
revision of our manuscript entitled Event Detection in Finance Using Hierarchical
Clustering Algorithms on News and Tweets, PaperID: 55204, submitted to PeerJ
Computer Science, on the basis of the observations made by the reviewers.
In the following, you will find the details of each of them. The reviewers’ comments and our responses have been organized as a series of numbered questions
(Q) and answers (A).
Thanks again for all your feedback and support that led us to largely improve
the paper.
Best Regards,
Salvatore Carta, Sergio Consoli, Luca Piras, Alessandro Sebastian Podda, Diego
Reforgiato Recupero
• • •
Reviewer 1
(Q1) The revised version is a major improvement over the initial manuscript. The
authors have clarified all of my questions and addressed them in the manuscript
appropriately.
(A1) We thank the Reviewer for her/his positive comments.
(Q2) The new references in sections 2.4 and 3.1 plus figure 3 seem to have
been parsed incorrectly. There are question marks instead of author/year statements. This seems only to be the case in the tracked changes document, but a
final check of all references might be good anyway.
(A2) We thank the Reviewer for pointing out this issue. In fact, it is related to
the tracked changes version only, resulting from the use of the latexdiff tool.
However, we have carefully double-checked all references.
(Q3) I am not fond of the constructions ”the reader notes” and ”the reader
notices”. Perhaps the authors could consider different phrasing?
(A3) We agree with the Reviewer. We have therefore reworded such an expression
in all parts where it was present, accordingly to the comment.
Reviewer 2
(Q1) This is a re-review of the article that I have already revised in fully. My
previous praise about the articles proposal, experimental design and results
carries through to this revised version. In this updated version of their original
article, the authors have complied with all suggested corrections (my own remarks
and other reviewer) improving as desired their original manuscript. However
there are still some minor typos to be fixed. After fixing this I feel confident
about publishing their work.
(A1) We thank the Reviewer for her/his positive comments.
(Q2) The authors have included the statistic explanation of the lexicon generation process I suggested to them. Good. However, I feel it should not go in
section 3.1, where they put it, but rather at the end of section 3.2. In section 3.1
a general overview of the system is given, and expanding here on the mathematics
specificity of the lexicon generation breaks the generality of the description. It
looks out of context, and in fact the notation needed (Delta(k), ...) has not been
introduced, as it is part of sec. 3.2.
(A2) We thank the Reviewer for this comment. We agree that the paragraph on
the lexicon-generation process should better fit at the end of Section 3.2. Hence,
we have moved it there, accordingly.
(Q3) In the same paragraph (p 11) the equation for f(j) please replaced ∗ for ·.
To use ∗ for product is unusual, and also you use · later on for other products.
(A3) Thanks for the suggestion. We have fixed such an issue.
2
(Q4) insert t+ (t− ), after ”specified threshold”
(A4) We have correctly fixed this issue.
(Q5) Now:
• p 5 (abstract) l 28: remove ”we create”. This should be ”the algorithm
dynamically builds a lexicon by looking ...”
• l 289: although −→ accordingly,
• l 290: ”considering the future information of a day” −→ substitute by
”considering information beyond the day”
• l 736: ”The reader notices” substitute by ”The reader should notice”
(A5) Once again, we want to thank the Reviewer for this further heads up,
which enabled us to clearly improve the overall quality of our manuscript. We
have made all the required corrections.
3
" | Here is a paper. Please give your review comments after reading it. |
24 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Graph embedding techniques, which learn low-dimensional representations of a graph, are achieving state-of-the-art performance in many graph mining tasks. Most existing embedding algorithms assign a single vector to each node, implicitly assuming that a single representation is enough to capture all characteristics of the node. However, across many domains, it is common to observe pervasively overlapping community structure, where most nodes belong to multiple communities, playing different roles depending on the contexts. Here, we propose persona2vec, a graph embedding framework that efficiently learns multiple representations of nodes based on their structural contexts. Using link prediction-based evaluation, we show that our framework is significantly faster than the existing state-of-the-art model while achieving better performance.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Graph embedding maps the nodes in a graph to continuous and dense vectors that capture relations among the nodes <ns0:ref type='bibr' target='#b39'>(Perozzi et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>Grover and Leskovec, 2016;</ns0:ref><ns0:ref type='bibr' target='#b48'>Tang et al., 2015)</ns0:ref>. Resulting node representations allow direct applications of algebraic operations and common algorithms, facilitating graph mining tasks such as node classification <ns0:ref type='bibr' target='#b46'>(Sen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b39'>Perozzi et al., 2014)</ns0:ref>, community detection <ns0:ref type='bibr' target='#b16'>(Fortunato, 2010;</ns0:ref><ns0:ref type='bibr' target='#b52'>Yang et al., 2016)</ns0:ref>, link prediction <ns0:ref type='bibr' target='#b18'>(Grover and Leskovec, 2016)</ns0:ref>, visualization <ns0:ref type='bibr' target='#b48'>(Tang et al., 2015)</ns0:ref>, and computer vision <ns0:ref type='bibr' target='#b51'>(Xie et al., 2020)</ns0:ref>. Most methods map each node to a single vector, implicitly assuming that a single representation is sufficient to capture the full characteristics of a node. However, nodes often play multiple roles. For instance, people have multiple roles, or 'personas', across contexts (e.g. professor, employee, and so on) <ns0:ref type='bibr' target='#b1'>(Ahn et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b10'>Coscia et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b22'>Leskovec et al., 2009</ns0:ref><ns0:ref type='bibr' target='#b23'>Leskovec et al., , 2010))</ns0:ref>. Similarly, proteins and other biological elements play multiple functionalities <ns0:ref type='bibr' target='#b36'>(Palla et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b17'>Gavin et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b1'>Ahn et al., 2010)</ns0:ref>. Another example is the polysemy of words when their relations are modeled with graphs; many words possess multiple meanings differentiated by the contexts <ns0:ref type='bibr' target='#b8'>(Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Li and Jurafsky, 2015;</ns0:ref><ns0:ref type='bibr' target='#b19'>Iacobacci et al., 2015)</ns0:ref>. Explicit modeling of such multiplicity and overlapping clusters has been fruitful not only for community detection <ns0:ref type='bibr' target='#b44'>(Rosvall et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b10'>Coscia et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b12'>Epasto et al., 2017)</ns0:ref>, but also for improving the quality of embedding <ns0:ref type='bibr' target='#b24'>(Li and Jurafsky, 2015;</ns0:ref><ns0:ref type='bibr' /> PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56569:1:1:CHECK 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b14'>Epasto and Perozzi, 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Liu et al., 2019</ns0:ref>). Yet, with the scarcity of embedding methods embracing this idea, the full potential of this approach has not been properly explored.</ns0:p><ns0:p>In this paper, we propose persona2vec, a scalable framework that builds on the idea of ego-splitting <ns0:ref type='bibr' target='#b12'>(Epasto et al., 2017)</ns0:ref>, the process of identifying local structural contexts of a node via performing local community detection on the node's ego-network. 
For each detected local community (role), we transform each node into multiple personas if there are multiple local communities to which the node belongs. After the split, the original node is replaced by the new persona nodes that inherit the connection from each local community, producing a new persona graph. Instead of separating a node's persona nodes from each other completely <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>, we add directed, weighted edges between personas to capture their origin. In doing so, we allow the direct application of the existing graph embedding methods.</ns0:p><ns0:p>In addition, we take an approach of considering persona-based learning as fine-tuning of the base graph embedding, achieving both efficiency and balance between information from the original graph and the persona graph. Compared with the previous approach <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>, our framework is conceptually simpler to understand and practically easier to implement. Furthermore, it achieves better performance in the link prediction tasks while being much faster. We also would like to clarify that the primary purpose of persona splitting is not about obtaining multiple representations, each of which may be suited for a specific task; it is about teasing out multiple contexts that a single node may possess. In other words, even with a single task, we argue that learning multiple representations for some nodes are highly beneficial.</ns0:p><ns0:p>In sum, we would like to highlight that our approach (1) drastically lowers the barrier for combining existing algorithms with persona splitting, (2) significantly improves the efficiency of the ego-splitting approach, while (3) consistently excelling the previous state-of-the-art model in the link prediction task. Our implementation of persona2vec is publicly available at https://github.com/jisungyoon/ persona2vec.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In addition to graph embedding, our work is closely related to the research of identifying overlapping communities in graphs. Various non-embedding methods such as link clustering <ns0:ref type='bibr' target='#b1'>(Ahn et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b15'>Evans and Lambiotte, 2009)</ns0:ref>, clique percolation <ns0:ref type='bibr' target='#b36'>(Palla et al., 2005)</ns0:ref>, and mixed membership stochastic blockmodel <ns0:ref type='bibr' target='#b2'>(Airoldi et al., 2008)</ns0:ref> have been proposed. Another thread of works focuses on using local graph structure to extract community information <ns0:ref type='bibr' target='#b10'>(Coscia et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Epasto et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b12'>Epasto et al., , 2017))</ns0:ref>. Specifically, Epasto et al. introduce the persona graph method for detecting overlapping communities in graphs <ns0:ref type='bibr' target='#b12'>(Epasto et al., 2017)</ns0:ref>, leveraging ego-network partition. The combination of ego-network analysis and graph embedding methods is still rare. An example is SPLITTER <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>, which we use as the baseline in this paper. Instead of constraining the relations between personas with a regularization term, we propose a simpler and more efficient way of adding persona edges to the graph.</ns0:p><ns0:p>Our work is also related to the word disambiguation problem in a word embedding. Recently, word embedding techniques <ns0:ref type='bibr'>(Mikolov et al., 2013a,b;</ns0:ref><ns0:ref type='bibr' target='#b38'>Pennington et al., 2014)</ns0:ref> have been extensively applied to various NLP tasks as the vectorized word representations can effectively capture syntactic and semantic information. Although some words have multiple senses depending on the context, the original word embedding methods only assign one vector to each word. <ns0:ref type='bibr'>Li et al.</ns0:ref> shows that embedding that is aware of multiple word senses and provides vectors for each specific sense does improve the performance for some NLP tasks <ns0:ref type='bibr' target='#b24'>(Li and Jurafsky, 2015)</ns0:ref>. For this issue, some utilize the local context information and clustering for identifying word sense <ns0:ref type='bibr' target='#b41'>(Reisinger and Mooney, 2010;</ns0:ref><ns0:ref type='bibr' target='#b50'>Wu and Giles, 2015;</ns0:ref><ns0:ref type='bibr' target='#b34'>Neelakantan et al., 2015)</ns0:ref>, some resort to external lexical database for disambiguation <ns0:ref type='bibr' target='#b45'>(Rothe and Schütze, 2015;</ns0:ref><ns0:ref type='bibr' target='#b19'>Iacobacci et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b5'>Camacho-Collados et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b8'>Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b20'>Jauhar et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b37'>Pelevina et al., 2017)</ns0:ref>, while some combine topic modeling methods with embedding <ns0:ref type='bibr'>(Liu et al., 2015b,a;</ns0:ref><ns0:ref type='bibr' target='#b9'>Cheng et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b54'>Zhang and Zhong, 2016)</ns0:ref>. We adopt the idea of assigning multiple vectors to each node in the graph to represent different roles as well as exploiting local graph structure for the purpose.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROPOSED METHOD: PERSONA2VEC</ns0:head><ns0:p>persona2vec creates a persona graph, where some nodes are split into multiple personas. We then apply a graph embedding algorithm to the persona graph to learn the embeddings of the personas (see Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Let us explain the method formally. Let G = (V, E) be a graph with a set of nodes V and a set of edges E.</ns0:p></ns0:div>
<ns0:div><ns0:head>Refined Ego-splitting</ns0:head><ns0:p>We adopt and refine the ego-splitting method <ns0:ref type='bibr' target='#b12'>(Epasto et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b14'>Epasto and Perozzi, 2019)</ns0:ref>. For each node in the original graph, we first extract its ego-graph, remove the ego, and identify the local clusters.</ns0:p><ns0:p>Every cluster in the ego-graph leads to a new persona node in the persona graph (see Fig. <ns0:ref type='figure' target='#fig_0'>1a, c</ns0:ref>). For example, if we consider each connected component as a local community with a connected component algorithm, node C in the original graph belongs to two non-overlapping clusters {A, B} and {D, E, F} in its ego-graph. Given these two clusters, in the persona graph, C is split into C 1 and C 2 to represent the two roles in respective clusters. C 1 and C 2 inherit the connections of C from both clusters separately (see Fig. <ns0:ref type='figure' target='#fig_0'>1c</ns0:ref>). On the other hand, node A only belongs to one ego cluster {B,C}, so it does not split into multiple personas.</ns0:p><ns0:p>Any graph clustering algorithm can be employed for splitting a node into personas. The simplest algorithm is considering each connected component in the ego-network (sans the ego) as a cluster. This approach is fast and works well on sparse graphs. However, in dense graphs, ego-networks are more likely to form fewer connected component, thus other algorithms such as the Louvain method <ns0:ref type='bibr' target='#b4'>(Blondel et al., 2008)</ns0:ref>, Infomap <ns0:ref type='bibr' target='#b43'>(Rosvall and Bergstrom, 2008)</ns0:ref>, and label propagation <ns0:ref type='bibr' target='#b40'>(Raghavan et al., 2007)</ns0:ref> would be more appropriate.</ns0:p><ns0:p>In previous studies, the personas get disconnected without retaining the information about their origin, creating isolated components in the splitting process <ns0:ref type='bibr' target='#b12'>(Epasto et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b14'>Epasto and Perozzi, 2019)</ns0:ref>.</ns0:p><ns0:p>Because of this disconnectedness, common embedding methods could not be directly applied to the splitted graph. A previous study attempted to address this issue by imposing a regularization term in the cost function to penalize separation of persona nodes originating from the same node <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>.</ns0:p><ns0:p>Here, instead of adopting the regularization strategy, we add weighted persona edges between the personas, maintaining the connectedness between them after the splitting (see Fig. <ns0:ref type='figure' target='#fig_0'>1c</ns0:ref>). Because the persona graph stays connected, classical graph algorithms and graph embedding methods can now be readily applied without any modification. As we will show later, our strategy achieves both better scalability and better performance.</ns0:p><ns0:p>In the persona graph, we set the weights of the unweighted original edges as 1 and tune the strength of the connections among personas with λ . Persona edges are directed and weighted, with weight λ</ns0:p><ns0:formula xml:id='formula_0'>k o i ,</ns0:formula><ns0:p>where k o i is the out-degree of the persona node after splitting (see Fig. 
1c). Algorithm 1 (function REFEGOSPLIT) summarizes the construction:</ns0:p><ns0:formula xml:id='formula_1'>2:  for each v_o ∈ V do
3:    P_{v_o} ← C(v_o)   ⊲ find local clusters of v_o
4:    for each p ∈ P_{v_o} do
5:      Create v_p, and add to G_P, V2P(v_o)   ⊲ create persona nodes for local clusters
6:      P2C(v_p) ← p
7:  for each edge (v_i, v_j) in E do
8:    w ← weight of edge
9:    for each persona node v_p in V2P(v_i) do
10:     for each persona node v'_p in V2P(v_j) do
11:       if v_i ∈ P2C(v'_p) and v_j ∈ P2C(v_p) then
12:         Add original edges (v_p, v'_p, w), (v'_p, v_p, w) to E_P
13: k^o ← out-degree sequence after adding original edges
14: for each v_o ∈ V do
15:   for each pair (v_i, v_j) in V2P(v_o) do
16:     Add persona edges (v_i, v_j, k^o_i × λ), (v_j, v_i, k^o_j × λ) to E_P
17: return G_P(V_P, E_P), V2P, P2C</ns0:formula><ns0:p>Consider a persona node i that has out-degree k^o_i over original edges and n_p persona edges with weight w_i. Then the probability p_i that an unbiased random walker at i visits neighbors connected with the original edges at the next step is</ns0:p><ns0:formula xml:id='formula_7'>p_i = k^o_i / (k^o_i + n_p w_i).<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>If we set a constant weight w_i = λ, then</ns0:p><ns0:formula xml:id='formula_8'>p_i = k^o_i / (k^o_i + n_p λ) = 1 / (1 + (n_p / k^o_i) λ),<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>which depends on the out-degree k^o_i. If we instead set w_i = λ k^o_i, as in our method, then</ns0:p><ns0:formula xml:id='formula_9'>p_i = k^o_i / (k^o_i + n_p λ k^o_i) = 1 / (1 + n_p λ),<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>which is independent of k^o_i. Our experiments also show that using the out-degree yields better performance than assigning an identical weight to each persona edge. Our algorithm for refined ego-splitting is described in Algorithm 1. Note that it can be generalized to directed graphs.</ns0:p></ns0:div>
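To make the construction above concrete, here is a minimal sketch of the refined ego-splitting step using networkx, assuming an undirected input graph and using connected components of each ego-network (minus the ego) as the local clustering C; the function and variable names are ours and do not come from the reference implementation.

```python
import networkx as nx

def refined_ego_split(G, lam=0.5):
    """Build the directed, weighted persona graph of an undirected graph G.

    Local clusters are the connected components of each ego-network minus the
    ego; any other local clustering (Louvain, Infomap, label propagation)
    could be plugged in instead.
    """
    node2personas = {v: [] for v in G}     # V2P: original node -> persona nodes
    persona_cluster = {}                   # P2C: persona -> neighbors in its local cluster
    G_P = nx.DiGraph()

    for v in G:
        ego = G.subgraph(G.neighbors(v))   # ego-network without the ego itself
        for cid, comp in enumerate(nx.connected_components(ego)):
            p = (v, cid)                   # persona node id
            node2personas[v].append(p)
            persona_cluster[p] = set(comp)
            G_P.add_node(p)

    # Original edges: connect the personas whose local clusters contain the
    # other endpoint, in both directions, inheriting the original weight.
    for u, v, data in G.edges(data=True):
        w = data.get("weight", 1.0)
        pu = next(p for p in node2personas[u] if v in persona_cluster[p])
        pv = next(p for p in node2personas[v] if u in persona_cluster[p])
        G_P.add_edge(pu, pv, weight=w)
        G_P.add_edge(pv, pu, weight=w)

    # Persona edges: directed, with weight lambda * out-degree of the source
    # persona, where the out-degree is computed over the original edges above.
    for v in G:
        out_deg = {p: G_P.out_degree(p) for p in node2personas[v]}
        for i in node2personas[v]:
            for j in node2personas[v]:
                if i != j:
                    G_P.add_edge(i, j, weight=lam * out_deg[i])
    return G_P, node2personas
```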
<ns0:div><ns0:head>Persona graph embedding</ns0:head><ns0:p>As explained above, any graph embedding algorithm that recognizes edge direction and weight can be readily applied to the persona graph. Although we use node2vec as the embedding method here, other embedding methods can also be employed. We initialize the persona vectors with the vectors from the original graph before ego-splitting (see Fig. <ns0:ref type='figure' target='#fig_0'>1b</ns0:ref>) to leverage the information from the original graph structure. Persona nodes that belong to the same node in the original graph are thus initialized with the same vector. We then execute the embedding algorithm for a small number of epochs to fine-tune the embedding vectors with the information from the persona graph (see Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Experiments show that usually only one epoch of training is enough.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56569:1:1:CHECK 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 2 persona2vec. Our method for generating persona node embeddings.</ns0:p><ns0:p>Input: G(V, E), Original graph d, embedding dimension γ b , number of walks per node for base embedding t b , random walk length for base embedding w b , window size for base embedding γ p , number of walks per node for persona embedding t p , random walk length for persona embedding w p , window size for persona embedding α, learning rate REFEGOSPLIT, refined ego-splitting method V 2P, node to personas mapping EMBEDDINGFUNC, a graph embedding method e.g. DeepWalk, node2vec Output:</ns0:p><ns0:p>Φ G P , a N P × d matrix with d-dimensional vector representations for all N P persona nodes</ns0:p><ns0:formula xml:id='formula_10'>1: function P E R S O N A2V E C(G, d, γ b ,t b , w b , γ p ,t p , w p , REFEGOSPLIT, EMBEDDINGFUNC, α) 2: G P ,V 2P ← REFEGOSPLIT(G) 3: Φ G ← EMBEDDINGFUNC(G, d, γ b ,t b , w b , α) 4:</ns0:formula><ns0:p>for each v o ∈ V do 5:</ns0:p><ns0:p>for each persona node v p in V 2P(v o ) do 6:</ns0:p><ns0:formula xml:id='formula_11'>Φ G P (v p ) = Φ G (v o ) 7: Φ G P ← EMBEDDINGFUNC(G p , γ p ,t p , w p , α, Φ G P ) 8:</ns0:formula><ns0:p>return Φ G P Also, training the embedding on the persona graphs from scratch fails to yield comparable results.</ns0:p><ns0:p>We find that initializing the embedding with the original graphs, i.e., our present method, consistently improves the performance, suggesting that mixing the structural information from both the original graph and the persona graph is crucial. Our full algorithm is described in Algorithm 2.</ns0:p></ns0:div>
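A compact sketch of Algorithm 2, assuming gensim (version 4 or later) skip-gram Word2Vec as the embedding function and plain p = q = 1 random walks; `random_walks`, `persona2vec_embed`, and the parameter tuples are illustrative names of ours, with defaults taken from the experimental settings reported later, and the public persona2vec repository should be consulted for the faithful implementation.

```python
import random
from gensim.models import Word2Vec

def random_walks(G, num_walks, walk_length):
    """Plain (p = q = 1) weighted random walks; node ids become strings for gensim."""
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G[walk[-1]])
                if not nbrs:
                    break
                weights = [G[walk[-1]][n].get("weight", 1.0) for n in nbrs]
                walk.append(random.choices(nbrs, weights=weights)[0])
            walks.append([str(n) for n in walk])
    return walks

def persona2vec_embed(G, G_P, node2personas, dim=128,
                      base=(10, 40, 5), persona=(5, 80, 2), alpha=0.025):
    """base/persona = (walks per node, walk length, window size)."""
    # 1) base embedding Phi_G on the original graph
    base_model = Word2Vec(random_walks(G, *base[:2]), vector_size=dim,
                          window=base[2], min_count=1, sg=1, alpha=alpha)
    # 2) initialize every persona with its parent node's base vector
    persona_walks = random_walks(G_P, *persona[:2])
    model = Word2Vec(vector_size=dim, window=persona[2], min_count=1, sg=1, alpha=alpha)
    model.build_vocab(persona_walks)
    for v, personas in node2personas.items():
        for p in personas:
            key = str(p)
            if key in model.wv.key_to_index:
                model.wv.vectors[model.wv.key_to_index[key]] = base_model.wv[str(v)]
    # 3) fine-tune on the persona graph for a single epoch
    model.train(persona_walks, total_examples=len(persona_walks), epochs=1)
    return model.wv  # persona embeddings Phi_{G_P}, keyed by str(persona id)
```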
<ns0:div><ns0:head>Complexity</ns0:head><ns0:p>Space complexity The persona graph is usually larger than the original graph, but not too large. Node u with degree k_u may be split into at most k_u personas. In the worst case, the number of nodes in the persona graph can reach O(|E|). But, in practice, only a subset of nodes split into personas, and the number of personas rarely reaches the upper bound. If we look at the persona edges, for a node u with degree k_u, at most O(k_u^2) new persona edges may be added. Thus, the whole persona graph has at most O(|V| × k_max^2) or O(|V|^3) (∵ k_max ≤ |V|) extra persona edges. If the graph's degree distribution follows a power-law distribution P(k) ∼ k^−γ, then k_max ∼ |V|^{1/(γ−1)}. Hence, it could be O(|V|^{(γ+1)/(γ−1)}) and it is between O(|V|^2) and O(|V|^3) (∵ 2 ≤ γ ≤ 3 in general).</ns0:p><ns0:p>However, real graphs tend to be sparse and k_i ≪ |V|. If we further assume k_i < √|E| holds for every node, then ∑_{n=1}^{|V|} k_n^2 ≤ ∑_{n=1}^{|V|} k_n √|E| = 2|E|√|E|. Under this assumption, the upper bound becomes O(|E|^{3/2}). Similarly, with the scale-free condition, the upper bound could be O(|E| |V|^{1/(γ−1)}), which is between O(|E| |V|^{1/2}) and O(|E| |V|). Again, in practice, the number of persona edges is much smaller than this upper bound. To illustrate, we list the number of nodes and persona edges in the persona graph for the graphs we use in this paper in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. All considered, the extra nodes and edges do not bring too much space complexity burden in practice. To test the validity of our assumptions, we sample 1,000 graphs from a public network repository <ns0:ref type='bibr' target='#b42'>(Rossi and Ahmed, 2015)</ns0:ref>. We apply the refined ego-splitting with connected component algorithms on these samples and report the actual number of persona edges |E_P| with respect to the practical upper bound |E|^{3/2} in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, which shows that the actual number of persona edges |E_P| rarely exceeds the tighter upper bound that we proposed and is usually orders of magnitude smaller.</ns0:p></ns0:div>
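As a rough, hypothetical way to probe the practical bound on a machine at hand, one can count persona edges on a random sparse graph and compare them with |E|^{3/2}, reusing the `refined_ego_split` sketch given earlier (this is not the repository sample behind Fig. 2).

```python
import networkx as nx

# Empirical sanity check of |E_P| versus |E|^(3/2) on a random sparse test graph.
G = nx.gnm_random_graph(n=2000, m=10000, seed=42)
G_P, _ = refined_ego_split(G, lam=0.5)
# Each undirected original edge appears twice (directed) in the persona graph.
persona_edges = G_P.number_of_edges() - 2 * G.number_of_edges()
print(persona_edges, G.number_of_edges() ** 1.5)
```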
<ns0:div><ns0:head>Optimization</ns0:head><ns0:p>Any kind of graph embedding method can be considered; for simplicity, we choose a classical random-walk-based embedding method (e.g. Node2Vec, DeepWalk). In the model <ns0:ref type='bibr' target='#b39'>(Perozzi et al., 2014)</ns0:ref>, the probability of a node v_i co-occurring with a node v_j is estimated by</ns0:p><ns0:formula xml:id='formula_15'>p(v_i | v_j) = exp(Φ'_{v_i} · Φ_{v_j}) / ∑_{k=1}^{|V|} exp(Φ'_{v_k} · Φ_{v_j}),<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where Φ_{v_i} and Φ'_{v_i} are the 'input' and 'output' embeddings of node i. We use the input embedding Φ, which is known to be more useful and more widely used. The denominator of Eq. 4 is computationally expensive <ns0:ref type='bibr' target='#b52'>(Yang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Cao et al., 2016)</ns0:ref> and there are two common approximations: hierarchical softmax (<ns0:ref type='bibr' target='#b33'>Morin and Bengio, 2005)</ns0:ref> and negative sampling <ns0:ref type='bibr' target='#b32'>(Mikolov et al., 2013b)</ns0:ref>. We adopt negative sampling not only because it is simpler and popular but also because it shows better performance, as we see later.</ns0:p></ns0:div>
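For reference, negative sampling replaces the full softmax denominator of Eq. 4 with the standard skip-gram objective below (the textbook form from Mikolov et al., 2013b, not a formula stated in this paper); K is the number of negative samples drawn from a noise distribution P_n, and σ is the logistic sigmoid.

```latex
\log \sigma\!\left(\boldsymbol{\Phi}'_{v_i} \cdot \boldsymbol{\Phi}_{v_j}\right)
  \;+\; \sum_{k=1}^{K} \mathbb{E}_{v_k \sim P_n(v)}
        \left[ \log \sigma\!\left(-\boldsymbol{\Phi}'_{v_k} \cdot \boldsymbol{\Phi}_{v_j}\right) \right],
\qquad \sigma(x) = \frac{1}{1 + e^{-x}}
```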
<ns0:div><ns0:head>CASE STUDY</ns0:head><ns0:p>Before diving into systematic evaluations, we provide two illustrative examples: Zachary's Karate club network and a word association network.</ns0:p></ns0:div>
<ns0:div><ns0:head>Case Study: Zachary's Karate club network</ns0:head><ns0:p>We use Zachary's Karate club network <ns0:ref type='bibr' target='#b53'>(Zachary, 1977)</ns0:ref>, a well-known example for the community detection. Nodes represent members of the Karate club, and edges represent ties among the members (see Fig. <ns0:ref type='figure' target='#fig_3'>3a</ns0:ref>). Although it is often considered to have two large disjoint communities, smaller overlapping communities can also be seen, highlighted by nodes such as 1, 3, 28, and 32. In Fig. <ns0:ref type='figure' target='#fig_3'>3b</ns0:ref>, we present the persona graph of the network. persona2vec successfully recognizes these bridge nodes and place their personas in reasonable places. Take node 1 for example. It splits into four persona nodes, which then end up in two different communities. The orange and green communities are clearly separated as a result. We also show the ten predictions with the highest score from the link prediction experiment in Fig. <ns0:ref type='figure' target='#fig_3'>3d</ns0:ref> and checked that the model predicts missing edges well.</ns0:p></ns0:div>
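A hypothetical end-to-end usage example in the spirit of this case study, reusing the `refined_ego_split` and `persona2vec_embed` sketches from the previous sections; note that networkx labels the club members 0-33, and the exact persona split depends on the local clustering used, so it may differ from Fig. 3.

```python
import networkx as nx

G = nx.karate_club_graph()                              # 34 members, 0-indexed node labels
G_P, node2personas = refined_ego_split(G, lam=0.5)      # sketch from the ego-splitting section
wv = persona2vec_embed(G, G_P, node2personas, dim=16)   # sketch from the embedding section

# A bridge member may receive several personas, each with its own vector.
for p in node2personas[0]:
    print(p, wv[str(p)][:3])
```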
<ns0:div><ns0:head>Case Study: word association network</ns0:head><ns0:p>Word association network captures how people associate words together (free association task). The dataset was originally assembled from nearly 750,000 responses from over 6,000 peoples. Participants were shown 5,019 words and asked to write down the first word that sprang in mind and all the word pairs were collected with their frequency as the weights. This dataset forms a weighted, directed graph of words that captures their multiple senses. Although it is, in principle, possible to run our method Manuscript to be reviewed</ns0:p><ns0:p>Computer Science on the original graph, for simplicity, we convert it into an undirected, unweighted graph by neglecting weight and direction <ns0:ref type='bibr' target='#b1'>(Ahn et al., 2010)</ns0:ref>. In Fig. <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, we shows the persona2vec clusters around the word 'Newton'. We use the Louvain method <ns0:ref type='bibr' target='#b4'>(Blondel et al., 2008)</ns0:ref> to split the personas of each word.</ns0:p><ns0:p>persona2vec successfully captures multiple contexts of the word 'Newton'. For instance, the red persona is associated with 'scientists' and 'philosopher', the grey one is linked to the physics, and the yellow one is associated with 'apple' (note that there is a cookie called '(Fig) Newton' in the U.S.).</ns0:p><ns0:p>Furthermore, persona2vec also captures different nuances of the word 'law' that are related to the crime (brown cluster) and the legal concepts (orange cluster).</ns0:p></ns0:div>
<ns0:div><ns0:head>NUMERICAL EXPERIMENT Link Prediction Task</ns0:head><ns0:p>To systematically evaluate the performance and scalability of the persona2vec framework, we perform a link prediction task using real-world graphs <ns0:ref type='bibr' target='#b18'>(Grover and Leskovec, 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abu-El-Haija et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Link prediction aims to predict missing edges in a graph with partial information, which is useful for many tasks such as suggesting new friends on social networks or recommending products. It has been employed</ns0:p><ns0:p>as a primary task to evaluate the performance of unsupervised graph embedding methods <ns0:ref type='bibr' target='#b0'>(Abu-El-Haija et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b55'>Zhang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>We follow the task setup from the literature <ns0:ref type='bibr' target='#b18'>(Grover and Leskovec, 2016;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abu-El-Haija et al., 2017)</ns0:ref>.</ns0:p><ns0:p>First, the edge set of an input graph is divided equally and randomly into E train and E test . We then refine E test using a rejection sampling based on the criterion that, even when we remove all edges in E test , the graph should be connected as a single component. E train is used to train the models, and E test is used as positive examples for the prediction task. Second, a negative edge set E (−) of non-existent random edges with the same size of E test are generated as negative examples for testing. The performance of a model is measured by its ability to correctly distinguish E test and E (−) after being trained on E train . We then report ROC-AUC.</ns0:p></ns0:div>
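The following minimal sketch (ours, assuming an undirected graph and scikit-learn for the ROC-AUC) illustrates the split-with-rejection, negative sampling, and scoring steps described above; `score(u, v)` stands for any node-pair similarity function.

```python
import random
import networkx as nx
from sklearn.metrics import roc_auc_score

def train_test_split_edges(G, test_frac=0.5, seed=0):
    """Sample test edges with rejection so that the training graph stays connected."""
    rng = random.Random(seed)
    G_train = G.copy()
    candidates = list(G.edges())
    rng.shuffle(candidates)
    target = int(test_frac * G.number_of_edges())
    test_edges = []
    for u, v in candidates:
        if len(test_edges) >= target:
            break
        G_train.remove_edge(u, v)
        if nx.is_connected(G_train):        # for directed graphs, use weak connectivity instead
            test_edges.append((u, v))
        else:
            G_train.add_edge(u, v)          # rejection: this removal would disconnect the graph
    # Negative examples: the same number of non-existent random edges.
    nodes = list(G.nodes())
    negatives = set()
    while len(negatives) < len(test_edges):
        u, v = rng.sample(nodes, 2)
        if not G.has_edge(u, v):
            negatives.add((u, v))
    return G_train, test_edges, list(negatives)

def link_prediction_auc(score, test_edges, negatives):
    """score(u, v) is any similarity function, e.g. max dot-product over persona pairs."""
    y_true = [1] * len(test_edges) + [0] * len(negatives)
    y_score = [score(u, v) for u, v in list(test_edges) + list(negatives)]
    return roc_auc_score(y_true, y_score)
```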
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>To facilitate the comparison with the state-of-the-art baseline, we use five graph datasets that are publicly available and previously used <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>. We summarize them as follows.</ns0:p><ns0:p>PPI is a protein-protein interaction graph of Homo sapiens <ns0:ref type='bibr' target='#b47'>(Stark et al., 2006)</ns0:ref>. Nodes represent proteins and edges represent physical interactions between the proteins. ca-HepTh is a scientific collaboration graph. It represents the co-authorship among researchers from the Theoretical High Energy Physics field, derived from papers on arXiv. ca-AstropPh is also scientific collaboration graph, but from Astrophysics.</ns0:p></ns0:div>
<ns0:div><ns0:head>8/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56569:1:1:CHECK 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>wiki-vote is a voting network, each node is a Wikipedia user and a directed edge from node i to node j represents that user i voted for user j to become an administrator. soc-epinions is a voting graph from a general consumer review site Epinions.com, each node is a member, and a directed edge from node i to node j means that member i trusted member j.</ns0:p><ns0:p>We use the largest component of the undirected graphs and the largest weakly connected component of the directed ones. The statistics of all the graphs are reported in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>The state-of-the-art method in this link prediction task is SPLITTER <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>, which also models multiple roles. As reported in the paper, it outperforms various exiting algorithms ranging across non-embedding methods like Jaccard Coefficient, Common Neighbors, and</ns0:p><ns0:p>Adamic-Adar as well as embedding methods like Laplacian EigenMaps <ns0:ref type='bibr' target='#b3'>(Belkin and Niyogi, 2002)</ns0:ref>, node2vec <ns0:ref type='bibr' target='#b18'>(Grover and Leskovec, 2016)</ns0:ref>, DNGR <ns0:ref type='bibr' target='#b6'>(Cao et al., 2016)</ns0:ref>, Asymmetric (Abu-El-Haija et al.,</ns0:p><ns0:p>2017) and M-NMF <ns0:ref type='bibr' target='#b49'>(Wang et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Given the state-of-the-art performance of SPLITTER, for simplicity, we compare our framework with SPLITTER using the identical task setup and datasets. In addition, because our method can be considered as an augmentation of a single-role embedding method, and because we use Node2vec as the base embedding method, we also employ Node2vec. We run the link prediction task using the original authors' implementation of Node2vec and SPLITTER. The parameters are also kept consistent with the original paper.</ns0:p><ns0:p>persona2vec and SPLITTER have multiple representations on each node, which leads to nonunique similarity estimations between two nodes. Hence, we define the similarity score of a pair of nodes on persona2vec as the maximum dot-product of embedding vectors between any pair of their personas. We found that, among experiment with three aggregation functions min, max, mean, the highest performance is achieved with max, same with SPLITTER <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref>. For SPLITTER, we use maximum cosine similarity, following the author's note in their implementation.</ns0:p><ns0:p>Node2vec (baseline method) For Node2vec, we set random walk length t = 40, the number of walks per node γ = 10, random walk parameters p = q = 1, the window size w = 5, and the initial learning rate α = 0.025. In the original paper, they learn an additional logistic regression classifier over the Hadamard product of the embedding of two nodes for the link prediction. In general, the logistic regression classifier improves the performance. Here, we report results on Node2vec with both dot products and the logistic regression classifier.</ns0:p></ns0:div>
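Under the same assumptions as the earlier sketches, the persona-level similarity used here can be computed with a small helper such as the following (names are ours).

```python
import numpy as np

def persona_score(u, v, wv, node2personas):
    """Similarity of (u, v): maximum dot product over all pairs of their personas."""
    return max(np.dot(wv[str(p)], wv[str(q)])
               for p in node2personas[u]
               for q in node2personas[v])

# e.g., plugged into the evaluation sketch above:
# auc = link_prediction_auc(lambda u, v: persona_score(u, v, wv, node2personas),
#                           test_edges, negatives)
```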
<ns0:div><ns0:head>SPLITTER (baseline method)</ns0:head><ns0:p>For SPLITTER, we use the same parameters as in their paper <ns0:ref type='bibr' target='#b14'>(Epasto and Perozzi, 2019)</ns0:ref> and as in the Node2vec baseline. We use node2vec with random walk parameters p = q = 1.</ns0:p><ns0:p>persona2vec (our proposed method) We set the hyper-parameters of the original graph embedding to t b = 40, γ b = 10, w b = 5. For the persona embedding, we set t p = 80, γ p = 5, w p = 2 to better capture the micro-structure of the persona graph. The size of the total trajectories is determined by the random walk length t * times the number of walks per node γ * , so we keep t * γ * constant to roughly preserve the amount of information used in the embedding. For both embedding stages, we use α = 0.025 and node2vec with random walk parameters (p = q = 1) as the graph embedding function.</ns0:p></ns0:div>
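To make the walk-length versus walks-per-node trade-off concrete, the sketch below generates an unbiased (p = q = 1) random-walk corpus with walk length t and γ walks per node, so the token budget scales with t·γ·|V|. It is an illustration of the parameter roles only, not the authors' implementation, and it runs both settings on the same toy graph rather than on an actual persona graph.

```python
# Sketch of an unbiased random-walk corpus generator (p = q = 1); illustration only.
# Keeping t * gamma constant keeps the total token budget comparable, as done when
# switching from the base embedding (t=40, gamma=10) to the persona embedding (t=80, gamma=5).
import random
import networkx as nx

def walk_corpus(g, t, gamma, seed=0):
    rng = random.Random(seed)
    corpus = []
    for _ in range(gamma):
        for start in g.nodes():
            walk = [start]
            while len(walk) < t:
                nbrs = list(g.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            corpus.append(walk)
    return corpus

g = nx.karate_club_graph()
base = walk_corpus(g, t=40, gamma=10)
persona_like = walk_corpus(g, t=80, gamma=5)
print(len(base) * 40, len(persona_like) * 80)  # comparable token budgets
```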
<ns0:div><ns0:head>Experiment Results</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the link prediction performance of persona2vec in comparison with the baselines. Overall, persona2vec yields superior performance across graphs and across a range of hyperparameter choices. Augmenting Node2vec by considering personas significantly improves the link prediction performance, as evinced by the significant performance gain (see Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>).</ns0:p><ns0:p>As expected, larger dimensions lead to better performance, although persona2vec achieves reasonable results even with tiny embedding dimensions like 8 or 16. We also show how the performance of persona2vec varies with λ. For undirected graphs, larger λ is beneficial, but the trend saturates quickly. For directed graphs, however, optimal performance is achieved with smaller values of λ. In practice, we suggest starting with λ = 0.5 as a default parameter because the overall variation brought by λ is not substantial and, even when the performance increases with λ, near-optimal performance can be achieved at λ = 0.5. When compared with the SPLITTER baseline, persona2vec shows on-par or better performance given the same embedding dimensions across a wide range of λ. We also report the performance summary for persona2vec with λ = 0.5 (our suggested default) compared with the best baselines in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>, which shows that persona2vec outperforms the baselines consistently. We also report the 'performance gains' over Node2vec because we used Node2vec as the base embedding method and persona2vec can be considered an augmentation or fine-tuning of the base Node2vec vectors with local structural information. As shown, the persona-based fine-tuning significantly improves the performance.</ns0:p><ns0:p>We also show the performance of both methods under different approximations, hierarchical softmax and negative sampling, in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. We found that cosine similarity consistently yields better results with hierarchical softmax and the dot product yields better results with negative sampling, across all methods. We therefore use cosine similarity for the hierarchical softmax results and the dot product for the negative sampling results.</ns0:p><ns0:p>Both methods work well across the optimization methods. We found that persona2vec tends to perform better with negative sampling and SPLITTER with hierarchical softmax. Nevertheless, persona2vec yields the best performance consistently.</ns0:p><ns0:p>In addition to the performance on the link prediction task, we also report the execution time of persona2vec and SPLITTER to compare their scalability in practice (see Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). Note that the reported execution time is on the link-prediction task, with half of the edges removed from the original graph. SPLITTER runs the embedding procedures for 10 epochs by default in the original implementation, whereas persona2vec only runs for one epoch. In addition, SPLITTER's cost function includes an extra term that persona2vec does not need. In sum, persona2vec outperforms the previous state-of-the-art method both in terms of scalability and link prediction performance.</ns0:p></ns0:div>
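For orientation, here is a minimal sketch of the kind of evaluation reported in this section: remove half of the edges, score the held-out edges and an equal number of sampled non-edges, and compute ROC-AUC. The scorer below is a simple placeholder (dot product of adjacency rows, i.e. common-neighbor counts), not persona2vec itself, and the toy graph is only for illustration.

```python
# Sketch of a ROC-AUC link-prediction evaluation (placeholder scorer, not persona2vec).
import random
import networkx as nx
from sklearn.metrics import roc_auc_score

rng = random.Random(0)
g = nx.karate_club_graph()
edges = list(g.edges())
held_out = rng.sample(edges, len(edges) // 2)     # positives: removed edges
g_train = g.copy()
g_train.remove_edges_from(held_out)

negatives = rng.sample(list(nx.non_edges(g)), len(held_out))  # sampled non-edges

A = nx.to_numpy_array(g_train)                    # placeholder "embedding": adjacency rows
idx = {n: i for i, n in enumerate(g_train.nodes())}

def score(u, v):
    return float(A[idx[u]] @ A[idx[v]])           # common neighbors in the training graph

pairs = held_out + negatives
labels = [1] * len(held_out) + [0] * len(negatives)
scores = [score(u, v) for u, v in pairs]
print("AUC:", roc_auc_score(labels, scores))
```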
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>We present persona2vec, a framework for learning multiple node representations that considers each node's local structural contexts. persona2vec first performs ego-splitting, where nodes with multiple non-overlapping local communities in their ego-networks are replaced with corresponding persona nodes.</ns0:p><ns0:p>The persona nodes inherit the edges from the original graph and remain connected by newly added persona edges, forming the persona graph. Initialized by the embedding of the original graph, the embedding algorithm applied to the persona graph yields the final representations. Instead of assigning only one vector to every node with multiple roles, persona2vec learns a vector for each of the personas. With extensive link prediction evaluations, we demonstrate that persona2vec achieves state-of-the-art performance while scaling better. Moreover, our method is easy to comprehend and implement without losing any flexibility for incorporating other embedding algorithms, presenting great potential for applications. The possible combination with various algorithms provides vast space for further exploration. For instance, in a multi-layer network, inter-layer coupling connections can be interpreted as natural persona edges, and persona2vec may be applied to tackle the multi-layer link prediction problem.</ns0:p><ns0:p>The graph (relational) structure is ubiquitous across many complex systems, including physical, social, economic, biological, neural, and information systems, and thus fundamental graph algorithms have far-reaching impacts across many areas of science. Graph embedding, in particular, removes the barrier of translating methods to the special graph data structure, opening up a powerful way to transfer existing algorithms to graphs and relational data. Furthermore, given that it is natural to assume overlapping clusters and their heterogeneous functionality in most real networks, multi-role embedding methods may find numerous applications in the physical, biological, and social sciences.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Refined ego-splitting for generating the persona graph (undirected case). Input: original graph G(V, E); weight parameter λ; non-overlapping local clustering algorithm C. Output: persona graph G P (V P , E P ); node-to-personas mapping V2P; persona-to-local-cluster mapping P2C. Assigning persona-edge weight proportional to the out-degree k o i helps the random walker explore both the local neighbors and the parts of the graph connected to a node's other personas.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Comparison of the number of persona edges |E p | to the practical upper bound |E| 3/2 .</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Case Study: Zachary's Karate club network (a) The Zachary's Karate club network with the force-atlas layout<ns0:ref type='bibr' target='#b53'>(Zachary, 1977)</ns0:ref>. Nodes are colored by communities detected by the Louvain modularity method<ns0:ref type='bibr' target='#b4'>(Blondel et al., 2008)</ns0:ref>. (b) The persona graph. Nodes are colored by k-means clusters<ns0:ref type='bibr' target='#b29'>(MacQueen et al., 1967)</ns0:ref> from the embedding vectors. Coordinates of the persona nodes come from the 2-D projection of the embedding with t-SNE<ns0:ref type='bibr' target='#b28'>(Maaten and Hinton, 2008)</ns0:ref>. Light grey lines represent the persona edges. (c) The network with 20% of edges (16 edges) removed for the link prediction experiment. (d) The network with ten predictions with the highest score from the link prediction experiment. Blue links represent correctly predicted edges and red edges indicate incorrectly predicted ones.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The word association network, clusters around the word 'Newton'. Coordinates of the words come from the 2-D projection of the embedding vectors with UMAP (McInnes et al., 2018). Word colors correspond to the clusters obtained by k-means clustering (MacQueen et al., 1967) on the embedding vectors.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance of persona2vec in the link prediction task. The number of epochs n is set to 1 in all experiments for persona2vec. Darker colors represent higher embedding dimensions. The confidence intervals are all within the range of the markers. Given the same number of dimensions, persona2vec is always on par with or better than SPLITTER.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Comparison of link prediction performance between persona2vec and SPLITTER with different approximations. HS refers to the hierarchical softmax and NS refers to the negative sampling. The star marker indicates the best link prediction performance.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Comparison of elapsed time between persona2vec and SPLITTER. Speed gains by persona2vec are shown.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Descriptive statistics of the graphs used in the evaluation. We report the number of nodes |V |, the number of edges |E|, the number of nodes in the persona graph |V p |, the ratio of |V p | over |V |, the number of persona edges |E p | added in ego-splitting, and the ratio of |E p | over |E| 3/2 , which is the upper bound of the space complexity.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell>|V |</ns0:cell><ns0:cell>|E|</ns0:cell><ns0:cell>|V p |</ns0:cell><ns0:cell>|V p |/|V |</ns0:cell><ns0:cell>|E p |</ns0:cell><ns0:cell>|E p |/|E| 3/2</ns0:cell></ns0:row><ns0:row><ns0:cell>PPI</ns0:cell><ns0:cell>Undirected</ns0:cell><ns0:cell>3,863</ns0:cell><ns0:cell>38,705</ns0:cell><ns0:cell>16,734</ns0:cell><ns0:cell>4.34</ns0:cell><ns0:cell>132,932</ns0:cell><ns0:cell>0.0175</ns0:cell></ns0:row><ns0:row><ns0:cell>ca-HepTh</ns0:cell><ns0:cell>Undirected</ns0:cell><ns0:cell>9,877</ns0:cell><ns0:cell>25,998</ns0:cell><ns0:cell>16,071</ns0:cell><ns0:cell>1.86</ns0:cell><ns0:cell>33,524</ns0:cell><ns0:cell>0.0800</ns0:cell></ns0:row><ns0:row><ns0:cell>ca-AstroPh</ns0:cell><ns0:cell>Undirected</ns0:cell><ns0:cell>17,903</ns0:cell><ns0:cell>197,301</ns0:cell><ns0:cell>25,706</ns0:cell><ns0:cell>1.44</ns0:cell><ns0:cell>29,012</ns0:cell><ns0:cell>0.0003</ns0:cell></ns0:row><ns0:row><ns0:cell>wiki-vote</ns0:cell><ns0:cell>Directed</ns0:cell><ns0:cell>7,066</ns0:cell><ns0:cell>103,633</ns0:cell><ns0:cell>21,476</ns0:cell><ns0:cell>3.04</ns0:cell><ns0:cell>118,020</ns0:cell><ns0:cell>0.0035</ns0:cell></ns0:row><ns0:row><ns0:cell>soc-epinions</ns0:cell><ns0:cell>Directed</ns0:cell><ns0:cell>75,877</ns0:cell><ns0:cell>508,836</ns0:cell><ns0:cell>220,332</ns0:cell><ns0:cell>2.90</ns0:cell><ns0:cell>3,550,594</ns0:cell><ns0:cell>0.0098</ns0:cell></ns0:row></ns0:table><ns0:note>The final complexity is O(|E| 3/2 + |E|T (|E|)) + O(|V |γtwd(1 + log(|V |))). Removing the constant factors and assuming close-to-linear local community detection algorithm, the whole process has time complexity of O(|E| 3/2 ) with space complexity of O(|E| 3/2 ) if k i < |E| holds. Complexity can be increased depending on the clustering algorithms on the ego-network.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Performance of persona2vec with λ = 0.5. All methods use d = 128. Node2vec* refers to Node2vec with the logistic regression classifier, SPLITTER* refers to SPLITTER with one epoch, and persona2vec* refers to persona2vec with λ = 0.5, our suggested default. Performance gain is the performance difference between Node2vec and persona2vec*. We omit standard errors smaller than 10 −3 .</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>PPI</ns0:cell><ns0:cell>ca-HepTh</ns0:cell><ns0:cell>ca-AstroPh</ns0:cell><ns0:cell>wiki-vote</ns0:cell><ns0:cell>soc-epinions</ns0:cell></ns0:row><ns0:row><ns0:cell>Node2vec</ns0:cell><ns0:cell>0.585</ns0:cell><ns0:cell>0.825</ns0:cell><ns0:cell>0.901</ns0:cell><ns0:cell>0.694</ns0:cell><ns0:cell>0.547 ± 0.007</ns0:cell></ns0:row><ns0:row><ns0:cell>Node2vec*</ns0:cell><ns0:cell>0.662 ± 0.001</ns0:cell><ns0:cell>0.848</ns0:cell><ns0:cell>0.914</ns0:cell><ns0:cell>0.705 ± 0.001</ns0:cell><ns0:cell>0.767 ± 0.002</ns0:cell></ns0:row><ns0:row><ns0:cell>SPLITTER</ns0:cell><ns0:cell>0.856</ns0:cell><ns0:cell>0.903</ns0:cell><ns0:cell>0.982</ns0:cell><ns0:cell>0.931</ns0:cell><ns0:cell>0.961 ± 0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>SPLITTER*</ns0:cell><ns0:cell>0.853</ns0:cell><ns0:cell>0.898</ns0:cell><ns0:cell>0.984</ns0:cell><ns0:cell>0.931</ns0:cell><ns0:cell>0.954 ± 0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>persona2vec*</ns0:cell><ns0:cell>0.879</ns0:cell><ns0:cell>0.927</ns0:cell><ns0:cell>0.985</ns0:cell><ns0:cell>0.936</ns0:cell><ns0:cell>0.961</ns0:cell></ns0:row><ns0:row><ns0:cell>Performance gain</ns0:cell><ns0:cell>0.294</ns0:cell><ns0:cell>0.102</ns0:cell><ns0:cell>0.084</ns0:cell><ns0:cell>0.242</ns0:cell><ns0:cell>0.414 ± 0.007</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to the review of submission
“Persona2vec: A flexible multi-role representations learning
framework for graphs”
We would like to thank the editor and both reviewers for the instructive comments. In the
following, we provide detailed responses to the issues raised by the reviewers. The added and
deleted sections of the main text are highlighted by blue and red, respectively, in the revised
manuscript.
Response to Reviewer #1
Thank you for the valuable comments. In the following, we go through the review point-by-point to address the referee's concerns.
One thing I’d improve is in presenting the idea of multiple node embeddings more
clearly. Specifically, it should be clearer the difference between this method and simply
building different embeddings with different methods. In both cases we generate different vectors for the same node, but in persona2vec the different vectors aid the *same*
task, while in the alternative, you build different vectors specialized to solve *different*
tasks (structural equivalence vs clustering, for instance).
Thank you for your suggestion. We have clarified this point in the introduction.
Introduction (p. 2 line 58) Furthermore, it achieves better performance in the link
prediction tasks while being much faster. We also would like to clarify that the primary purpose of persona splitting is not about obtaining multiple representations,
each of which may be suited for a specific task; it is about teasing out multiple contexts that a single node may possess. In other words, even with a single task, we
argue that learning multiple representations for some nodes is highly beneficial.
The experiments show some standard evaluation techniques for link prediction. I would
love to see a couple of examples of predicted links – picking a network and show the
ten predictions with highest score and whether the nodes are actually connected. This is
rarely done in link prediction papers, though.
We did an additional experiment on the Zachary's Karate club network with λ = 0.5, d = 16.
In Fig. 1c, we visualize the network after removing 20% of the edges (16 edges) for the link prediction
experiment. We show the ten predictions with the highest scores in Fig. 1d. We colored the correctly
predicted edges with thick blue lines and the incorrectly predicted edges with thick red lines. We
have updated the manuscript as follows:
p. 7 line 197
The orange and green communities are clearly separated as a result. We also
show the ten predictions with the highest score from the link prediction experiment
in Fig. 3d and checked that the model predicts missing edges well.
Figure 1: Case Study: Zachary’s Karate club network (a) The Zachary’s Karate club network with the force-atlas layout. Nodes are colored by communities detected by the Louvain
modularity method. (b) The persona graph. Nodes are colored by k-means clusters from the
embedding vectors. Coordinates of the persona nodes come from the 2-D projection of the
embedding with t-SNE. Light grey lines represent the persona edges. (c) The network with
20% of edges (16 edges) removed for the link prediction experiment. (d) The network with
ten predictions with the highest score from the link prediction experiment. Blue links represent
correctly predicted edges and red edges indicate incorrectly predicted ones.
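A rough sketch of the procedure behind panels (c) and (d) is given below: remove roughly 20% of the edges, rank candidate pairs, and inspect the ten highest-scoring predictions. The scorer used here is networkx's Jaccard coefficient as a simple stand-in; the figure itself was produced with persona2vec embeddings, not this scorer.

```python
# Rough sketch of the top-ten prediction experiment (stand-in scorer, not persona2vec).
import random
import networkx as nx

rng = random.Random(1)
g = nx.karate_club_graph()
removed = rng.sample(list(g.edges()), 16)        # ~20% of the 78 edges
g_train = g.copy()
g_train.remove_edges_from(removed)

# score every currently unconnected pair with the Jaccard coefficient
candidates = nx.jaccard_coefficient(g_train, nx.non_edges(g_train))
top10 = sorted(candidates, key=lambda x: x[2], reverse=True)[:10]

removed_set = {frozenset(e) for e in removed}
for u, v, s in top10:
    hit = frozenset((u, v)) in removed_set
    print(f"({u}, {v}) score={s:.3f} {'correct' if hit else 'incorrect'}")
```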
The main contribution vs the state of the art is the addition of persona edges and their
weighting scheme. While this is shown to be effective in practice, I wonder whether this
is significant enough. The improvement is statistically significant as shown in Table 2,
but also relatively small in absolute terms.
Thank you for pointing these out. For each network, we generated ten different Etrain and
Etest splits and repeated the experiment five times for each, yielding fifty experiment results for
each method. We also report the standard error in Table 2, which is smaller than the last reported
significant figure (10−3) across all networks. In other words, across all experiments, the
improvements are statistically significant.
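As a small illustration of the reported uncertainty, the standard error over fifty runs is simply the sample standard deviation divided by the square root of 50; the numbers below are made-up placeholders, not the paper's results.

```python
# Standard error over fifty runs (illustrative placeholder values, not the paper's results).
import numpy as np

aucs = np.random.default_rng(0).normal(loc=0.879, scale=0.004, size=50)
sem = aucs.std(ddof=1) / np.sqrt(len(aucs))
print(f"mean={aucs.mean():.3f}, standard error={sem:.4f}")
```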
2
In general, I’m happy with the paper. I’m suggesting major revision instead of minor
only for one reason, which I think might significantly impact some methods and/or require some additional experiments. Specifically: could the author use persona2vec for
multilayer link prediction? In this case, interlayer coupling connections would provide
some natural persona edges, but it’d be interesting to use a persona-like edge addition and
weighting scheme to them (if we have n layers, do we connect all layers to all other layers in a clique fashion, or could we ignore some connections? Which weight should each
coupling link have? Does it depend on the degree of the persona in a give layer?). There
are some embedding based multilayer link predictors, along with some non-embedding
ones.
This is an excellent research question. As the reviewer mentioned, several multilayer link
predictors have been proposed recently (for example, Liu, Weiyi, Pin-Yu Chen, Sailung Yeung,
Toyotaro Suzumura, and Lingli Chen. “Principled multilayer network embedding.” In 2017 IEEE
International Conference on Data Mining Workshops (ICDMW), pp. 134-141. IEEE, 2017). After
going through the literature, we agree that persona2vec has the potential to address the multilayer
link prediction problem. However, we are afraid that such an investigation would require a whole
new set of experiments and analyses; we believe this question merits a full research paper and is
therefore beyond the scope of the present paper. We have added the following to the discussion:
p. 11 line 327 The possible combination with various algorithms provides vast
space for further exploration. For instance, in a multi-layer network, inter-layer
coupling connections can be interpreted as natural persona edges, and persona2vec
may be applied to tackle the multi-layer link prediction problem.
There are a couple of minor typos (line 101 “explores” should be “exploring”, line 151
“shown” should be “shows”). Nothing that a simple proofread won't fix. The literature
references are adequate. Figures and tables are of acceptable quality. Overall, the article
is of good quality.
Thanks for pointing out typos. We have fixed them and updated the manuscript accordingly.
Again, thank you for your careful reading of the paper and for your interesting comments and
ideas.
Response to Reviewer #2
2. The whole organizations of the paper are confusing, e.g. the related work is before
the conclusion. I would advise the authors to move it after the introduction section.
Thanks for the comment. We have moved the related work section accordingly.
3. Regarding the Persona graph embedding section, the authors should show more details, from the current version, I cannot see any detailed formulas w.r.t. this section.
As explained in the paper, any graph embedding algorithm that recognizes edge direction
and weight can be readily applied to the persona graph. For simplicity, we choose classical
random-walk-based embedding methods (e.g., Node2vec, DeepWalk). We specify the detailed
formulas we used in the Optimization section.
4. A missed reference for graph embedding learning in CV community is “region graph
embedding for zero-shot learning published in ECCV20”
Thank you for pointing this out. We carefully read the paper and added the corresponding
reference to the introduction section.
The overall novelty is neat, e.g., the idea of using multiple features for one node is new.
One question is how to extend this idea to GCN formulation like in ’region graph embedding for zero-shot learning published in ECCV20’?
Thanks for the thoughtful comment! We have been trying to extend persona2vec to more
recent algorithms such as GCNs in another project and have obtained some promising results. We
think it can be applied in the field of computer vision as well, because each region can have
multiple aspects related to the local structure between regions.
1. The overall writing is good, however, there exist some typos, e.g., “it show” → “it shows”
in L158, “model” → “models” in L28; please proofread the whole paper again to modify them
accordingly.
Thanks for pointing out some typos. We carefully proofread the whole paper and updated
the manuscript accordingly. In summary, we appreciate the referee’s thoughtful and constructive comments.
" | Here is a paper. Please give your review comments after reading it. |
25 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The information expressed in humanities datasets is inextricably tied to a wider discursive environment that is irreducible to complete formal representation. Humanities scholars must wrestle with this fact when they attempt to publish or consume structured data. The practice of 'nanopublication', which originated in the e-science domain, offers a way to maintain the connection between formal representations of humanities data and its discursive basis. In this paper we describe nanopublication, its potential applicability to the humanities, and our experience curating humanities nanopublications in the PeriodO period gazetteer.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Humanities scholars who wish to make their research materials usable with networked digital tools face a common dilemma: How can one publish research materials as 'data' without severing them from the ideas and texts that originally gave them meaning? The kinds of information produced in the humanities-biographical details, political and temporal boundaries, and relationships between people, places, and events-are inextricably tied to arguments made by humanities scholars. Converting all, or even much, of the information expressed in scholarly discourse into algorithmically processable chunks of formal, structured data has so far proven to be extraordinarily difficult.</ns0:p><ns0:p>But rather than attempt to exhaustively represent her research, a scholar can promote small pieces of information within her work using the practice of nanopublication <ns0:ref type='bibr' target='#b20'>(Mons and Velterop, 2009)</ns0:ref>. Nanopublications include useful and usable representations of the provenance of structured assertions. These representations of provenance are useful because they allow consumers of the published data to make connections to other sources of information about the context of the production of that data. In this way, they strike a balance between the needs of computers for uniformity in data modeling with the needs of humans to judge information based on the wider context of its production. An emphasis on connecting assertions with their authors is particularly well-suited for the needs of humanities scholars. By adopting nanopublication, creators of datasets in the humanities can focus on publishing small units of practically useful, curated assertions while keeping a persistent pointer to the basis of those claims-the discourse of scholarly publishing itself-rather than its isolated representation in formal logic. We offer as an example of this approach the PeriodO period gazetteer, which collects definitions of time periods made by archaeologists and other historical scholars. 1 A major goal of the gazetteer was to make period definitions parsable and comparable by computers, while also retaining links to the broader scholarly context in which they were conceived. We found that a nanopublication-centric approach allowed us to achieve this goal. In this paper, we describe the concept of nanopublication, its origin in the hard sciences, and its applicability to the humanities. We then describe the PeriodO period gazetteer in detail, discuss our experience mapping nonscientific data into nanopublications, and offer advice to other humanities-oriented projects attempting to do the same.</ns0:p></ns0:div>
<ns0:div><ns0:head>NANOPUBLICATIONS</ns0:head><ns0:p>Nanopublication is an approach to publishing research in which individual research findings are modeled as structured data in such a way that they retain information about their provenance. This is in contrast to both traditional narrative publishing, where research findings are not typically published in a structured, computer readable format, and 'data dumps' of research findings which are typically published without any embedded information about their origin or production. The nanopublication approach is motivated by a desire to publish structured data without losing the wider research context and the benefits of traditional scholarly communication <ns0:ref type='bibr' target='#b8'>(Groth et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Nanopublication emerged from work in data-intensive sciences like genomics and bioinformatics, where recent advances in computational measurement techniques have vastly lowered the barrier to collecting genetic sequencing data. As a result, millions of papers have been published with findings based on these new methods. However, the reported results are almost always published in the form of traditional narrative scholarly publications <ns0:ref type='bibr' target='#b19'>(Mons et al., 2011)</ns0:ref>. While narrative results can be read and understood by humans, they are not so easily digested by computers. In fields where computation has been the key to the ability to ask new and broader questions, it should surely be the case that research results are published in such a way that they are able to be easily parsed, collected, and compared by computer programs and the researchers who use them.</ns0:p><ns0:p>On the occasions when research data are released and shared, they are often distributed on their own, stripped of the context necessary to locate them within a broad research environment (the identity of the researchers, where and how this research was conducted, etc.). In this case, publishing practice has swung too far to the opposite extreme. In the service of creating and sharing discrete datasets, the published results have been stripped of their provenance and their position within the wider scholarly endeavor that culminated in their publication. This contextual information is crucial for researchers to determine the trustworthiness of the dataset and learn about the broader project of research from which they resulted.</ns0:p><ns0:p>Nanopublication offers a supplementary form of publishing alongside traditional narrative publications.</ns0:p><ns0:p>A nanopublication consists of three parts, all representable by RDF graphs:</ns0:p><ns0:p>1. An assertion (a small, unambiguous unit of information) 2. The provenance of that assertion (who made that assertion, where, when, etc.) 3. The provenance of the nanopublication itself (who formed or extracted the assertion, when, and by what method)</ns0:p><ns0:p>The formal definitions of these parts are specified by an OWL ontology. <ns0:ref type='bibr' target='#b9'>(Groth et al., 2013)</ns0:ref> By representing their research in nanopublications alongside their narrative reports, researchers can publish their data in such a way that the data remain within their human context while also being easily digested by computer programs.</ns0:p><ns0:p>Authors are encouraged to include the smallest possible unambiguous pieces of information as the assertions at the center of a nanopublication. 
In the bioscience context, these assertions could range from statements of causality, to measurements of gene expressions or gene-disease associations, to statistics about drug interactions. The scope and nature of appropriate units of nanopublication inevitably vary by discipline. Multiple statements of identical or closely related facts can be connected with different sources of provenance, thereby potentially augmenting the ability of consumers to judge the quality of assertions. <ns0:ref type='bibr' target='#b8'>Groth et al. (2010)</ns0:ref> call the collection of nanopublications all referring to the same assertion 'S-evidence', and cite the potential benefits of the ability to automatically connect findings across research publications.</ns0:p><ns0:p>Several European repositories of bioinformatic data have begun to publish their contents as nanopublications, including the Biosemantics Group, 2 neXtProt, 3 and DisGeNET. 4 These publications can be aggregated and connected in larger systems, such as the decentralized reputation system described by <ns0:ref type='bibr' target='#b15'>Kuhn (2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>NANOPUBLICATION IN THE HUMANITIES</ns0:head><ns0:p>While the bioinformatics research community has enthusiastically adopted nanopublication, other disciplines have been slow to follow. <ns0:ref type='bibr' target='#b7'>Gradmann (2014)</ns0:ref> suggested that specialized and stable terminologies, as well as sufficient funding to organize these terminologies in formal ontologies, may be prerequisites for the successful deployment of nanopublication. Thus while he expects other scientific, technical, and medical disciplines to eventually embrace nanopublication, he is less sure that nanopublication will work for the humanities. Historians, for example, use relatively little specialized terminology and pride themselves on their ability to use 'ordinary language' to represent the past. Even when humanities scholars use specialized theoretical language, their use of this language is often unstable, ambiguous, and highly contested. Perhaps, then, a publishing technique that seeks to eliminate such ambiguity is ill-suited for these fields.</ns0:p><ns0:p>A related obstacle to the adoption of nanopublication beyond the hard sciences has to do with differences in the role played by 'facts'. Researchers trained in the hard sciences understand their work to be cumulative: scientists 'stand on the shoulders of giants' and build upon the work of earlier researchers.</ns0:p><ns0:p>While scientists can in principle go back and recreate the experiments of their predecessors, in practice they do this only when the results of those experiments have not been sufficiently established as facts. Efficient cumulative research requires that, most of the time, they simply trust that the facts they inherit work as advertised. Something like this process seems to be assumed by many proponents of nanopublications.</ns0:p><ns0:p>For example, <ns0:ref type='bibr' target='#b20'>Mons and Velterop (2009)</ns0:ref> claim that a major goal of nanopublication is to 'elevate' factual observations made by scientists into standardized packages that can be accumulated in databases, at least until they are proved wrong. These standardized packages can then be automatically or semi-automatically analyzed to produce new factual observations (or hypotheses about potential observations), and the cycle continues.</ns0:p><ns0:p>Yet as <ns0:ref type='bibr' target='#b18'>Mink (1966)</ns0:ref> observed, not all forms of research and scholarship are aimed at producing 'detachable conclusions' that can serve as the basis for a cumulative process of knowledge production.</ns0:p></ns0:div>
<ns0:div><ns0:head>Anticipating Gradmann, Mink argued that</ns0:head><ns0:p>Detachable conclusions are possible in science because-and only because-of its theoretical structure. The division of labor in research requires that concepts have a uniformity of meaning, and the methodological problem of definition therefore becomes central. <ns0:ref type='bibr'>(Mink, 1966, 39)</ns0:ref> Mink contrasted science to the study of history, which, lacking both explicit methodology and uniform consensus on the meanings of its concepts, does not produce 'detachable conclusions'. But this does not mean that historical scholarship fails to produce knowledge, only that it is a separate and autonomous mode of understanding. The goal of most historical scholarship is not to establish conclusions by constructing an explanatory chain of inferences from evidence. Rather the goal is to render what Mink called a 'synoptic judgment', an interpretive act in which the scholar comes to 'see together' the disparate observable elements of some phenomena as a synthetic whole. The historian who judges the advent of printing to have constituted a 'communications revolution' <ns0:ref type='bibr'>(Eisenstein 1979)</ns0:ref> has not made an inference from the available evidence but has constructed a particular interpretation of that evidence. To communicate her synoptic judgment to others, she cannot simply state her conclusions unambiguously and rely on her audience's theoretical understanding to make them meaningful; instead she must arrange and exhibit the evidence to help them 'see together' what she saw.</ns0:p><ns0:p>So is nanopublication a poor fit for fields of knowledge production that do not follow the model of cumulative science? We believe the answer is no. First of all, even Mink did not argue that there were no facts in history, only that the significant conclusions drawn by historians do not typically take the form of factual statements. There are plenty of equivalents in history and the humanities to the databases of curated factual statements that exists in the sciences: prosopographical databases <ns0:ref type='bibr' target='#b2'>(Bradley and Short, 2005)</ns0:ref>, digital historical gazetteers <ns0:ref type='bibr' target='#b5'>(Elliott and Gillies, 2011)</ns0:ref>, not to mention the catalogs and indexes of bibliographical data that make humanities scholarship possible <ns0:ref type='bibr' target='#b3'>(Buckland, 2006)</ns0:ref>. Some of these facts may be vague or uncertain, but as <ns0:ref type='bibr' target='#b16'>Kuhn et al. (2013)</ns0:ref> observe, even knowledge that cannot be completely formally represented, including vague or uncertain scientific findings, can benefit from the nanopublication approach. We agree but would go further to say that nanopublication is useful even for information that is neither testable nor falsifiable, exemplified by Mink's synoptic judgments. We have demonstrated the utility of nanopublications for describing synoptic judgments of historical periodization in the PeriodO period gazetteer, which we describe below.</ns0:p></ns0:div>
<ns0:div><ns0:head>THE PERIODO PERIOD GAZETTEER</ns0:head><ns0:p>In their work, archaeologists and historians frequently refer to time periods, such as the 'Classical Iberian Period' or the 'Progressive Era.' These time periods are shorthand representations of commonly Manuscript to be reviewed Computer Science referenced segments of time and space. While time periods might have commonly understood definitions, they are typically scattered throughout myriad publications and are treated as shared, assumed knowledge. This leads to difficulty and repeated effort when scholars want to visualize their data in space and over time, which requires mapping these discursive period labels to discrete spatiotemporal ranges <ns0:ref type='bibr' target='#b24'>(Rabinowitz, 2014)</ns0:ref>.</ns0:p><ns0:p>To build the PeriodO gazetteer, we compiled thousands of definitions of time periods from published sources within the fields of archaeology, history, and art history. We mapped these time periods to a consistent data model and published them as linked open data <ns0:ref type='bibr' target='#b10'>(Heath and Bizer, 2011</ns0:ref>) so that future scholars would be able to link their uses of period terms to information about the provenance of those terms. A web-based faceted browsing interface allows scholars to find and compare period definitions (see Figure <ns0:ref type='figure'>3</ns0:ref>), or software developers can use the PeriodO data directly in their own systems. The gazetteer is editable via HTTP; contributors can submit proposed changes in the form of patches, and the PeriodO editors can accept or reject them. All proposed and accepted changes are stored, and each period definition has a history of changes in the form of patch submissions and approvals <ns0:ref type='bibr' target='#b25'>(Shaw et al., 2015)</ns0:ref>. To ease the process of creating patches that conform to the PeriodO data model, we developed an editing interface that runs in a standard web browser (see Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Model</ns0:head><ns0:p>PeriodO defines a 'period definition' as a scholarly assertion about the name and spatiotemporal extent of a period. The core of a period definition consists of text quoted from the original source indicating the name of the period, its temporal range, and the geographic region to which it applies. Multiple period definitions from the same source are grouped into a period collection. For example, the article 'Domestic Architecture and Social Differences in North-Eastern Iberia during the Iron Age (c.525-200 BC)' includes the following sentence:</ns0:p><ns0:p>For the Catalan area, the complete system with the four above-mentioned categories is not as clearly documented before the fourth century as it is during the Classical Iberian Period (400-200 BC), although differences in the size of the sites, as well as the specialization of the functions of some settlements, can be already detected during the Early Iberian Period (525-400 BC). <ns0:ref type='bibr' target='#b1'>(Belarte, 2008)</ns0:ref> This sentence contains two assertions defining period extents, so it is modeled in PeriodO as two period definitions. The first definition has the label 'Classical Iberian Period' and its start and end points are labeled as '400 BC' and '200 BC' respectively. The second definition has the label 'Early Iberian Period' and its start and end points are labeled as '525 BC' and '400 BC' respectively. The spatial extent of both definitions is labeled as 'Catalan area'. All of these labels are taken verbatim from the source text and should never change.</ns0:p><ns0:p>Because they come from the same source, these two period definitions are grouped into a period collection. The bibliographic metadata for the source article is associated with this period collection. (In the event that a source defines only a single period, then the period collection will be a singleton.)</ns0:p><ns0:p>Belonging to the same period collection does not imply that period definitions compose a periodization.</ns0:p><ns0:p>A periodization is a single coherent, continuous division of historical time, each part of which is labeled with a period term. A period collection, on the other hand, is simply a set of period definitions that share the same source. When the period definitions in a period collection do compose a periodization, this can be indicated through the addition of statements relating the period definitions to one another, e.g. as belonging to the same periodization and having a specific ordering.</ns0:p><ns0:p>Because source languages, dating systems, and naming of geographical regions can vary widely, labels taken verbatim from source documents are insufficient for indexing and visualizing period definitions in a uniform way. Thus the rest of the PeriodO data model consists of properties added by PeriodO curators to normalize the semantic content of these textual labels. First, all periods originally defined in a language other than English are given an alternate English-language label. 
When a period definition was originally defined in English, the alternate label may make make minor changes for consistency.</ns0:p><ns0:p>For example, Belarte's definition of the 'Classical Iberian Period' period was given an alternate label of 'Classical Iberian', removing the word 'Period' for brevity and consistency with other definitions.</ns0:p><ns0:p>Next, the specification of temporal start and end points is standardized by adding ISO 8601 lexical Manuscript to be reviewed Computer Science @prefix bibo: <http://purl.org/ontology/bibo/> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix foaf: <http://xmlns.com/foaf/0.1/> . @prefix periodo: <http://n2t.net/ark:/99152/p0v#> . @prefix skos: <http://www.w3.org/2004/02/skos/core#> . @prefix time: <http://www. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science representations of proleptic Gregorian calendar years 5 : -0399 for '400 BC' and -0199 for '200 BC'. Finally, descriptions of spatial extent are normalized by adding references to 'spatial things', typically modern nation-states. In this case both definitions are linked to the spatial thing identified by http://dbpedia.org/resource/Spain. The complete PeriodO representation in Turtle of Belarte's collection of period definitions is given in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. 6</ns0:p></ns0:div>
<ns0:div><ns0:head>PERIODO AS LINKED DATA</ns0:head><ns0:p>We have taken pains to make it easy to work with the PeriodO dataset. In particular, we have tried to make the PeriodO dataset easily usable by developers who do not use an RDF-based tool stack. The PeriodO dataset is published as JSON, which is easily parsed using standard libraries in most programming environments including, of course, web browsers. But while JSON provides an easy and convenient way to work with the PeriodO dataset by itself, we also knew that many users would want to combine the PeriodO dataset with the growing amount of scholarly Linked Data being published. Most of our initial contributors of period definitions work in archaeology, a discipline that has several large, well-curated, interlinked, widely used and well-maintained Linked Data datasets. <ns0:ref type='bibr' target='#b12'>(Isaksen et al., 2014)</ns0:ref> Thus we take advantage of the recent W3C Recommendation of JSON-LD <ns0:ref type='bibr' target='#b26'>(Sporny et al., 2014)</ns0:ref> to also make the PeriodO dataset available as Linked Data. By providing a JSON-LD context for the PeriodO dataset, we make it usable within an RDF-based stack.</ns0:p></ns0:div>
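As an illustration of consuming the dataset outside an RDF stack, the sketch below fetches the JSON document and walks its period definitions with only the standard library. The URL and the 'periodCollections'/'definitions' key names reflect the layout described in the text as we understand it and should be treated as assumptions that may differ in later versions of the gazetteer.

```python
# Sketch of consuming the PeriodO dataset as plain JSON (no RDF tooling required).
# The URL and the key names ('periodCollections', 'definitions') are assumptions
# about the dataset layout and may differ in later versions.
import json
from urllib.request import urlopen

DATASET_URL = "http://n2t.net/ark:/99152/p0d.json"  # assumed JSON endpoint for the dataset

with urlopen(DATASET_URL) as resp:
    data = json.load(resp)

for coll_id, coll in data.get("periodCollections", {}).items():
    for def_id, definition in coll.get("definitions", {}).items():
        label = definition.get("label")
        start = definition.get("start", {}).get("label")
        stop = definition.get("stop", {}).get("label")
        print(def_id, label, start, "-", stop)
```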
<ns0:div><ns0:head>RDF Vocabularies</ns0:head><ns0:p>The JSON-LD context maps relationships between PeriodO entities to terms from RDF vocabularies. Of these, the most important is SKOS <ns0:ref type='bibr' target='#b11'>(Hobbs and Pan, 2006)</ns0:ref>. The human-readable labels for a PeriodO definition are mapped to the SKOS prefLabel and altLabel properties, implying that a PeriodO period definition can be interpreted as a SKOS Concept. The relationship between a period definition and the period collection to which it belongs is mapped to the SKOS inScheme property, implying that a period collection is a SKOS ConceptScheme. The relationship between a period collection and its source is mapped to the DCMI source term, and the various properties in the bibliographic description of the source are also mapped to the appropriate DCMI terms. Finally, the relation between a period definition and its geographical extent is mapped to the DCMI spatial term.</ns0:p><ns0:p>The relationships between a period definition and the start and end of its temporal extent are respectively mapped to the OWL-Time intervalStartedBy and intervalFinishedBy properties.</ns0:p><ns0:p>This implies that a period definition, in addition to being a SKOS Concept, is also an OWL-Time</ns0:p><ns0:p>ProperInterval (an interval of time having non-zero duration). Importantly, this also implies that the start and end of a period definition's temporal extent are themselves ProperIntervals, not points or instants. This is important because the beginnings and endings of historical periods can never be precisely determined. In the example of the Classical Iberian Period given above, both the beginning and the end of the period are interpreted as intervals with a duration of one year. Interpreting period starts and ends as ProperIntervals also allows us to make a distinction between the intervals themselves and their descriptions. The intervals themselves are not precisely specifiable, but we can create pragmatic OWL-Time DateTimeDescriptions of them for the purposes of comparison and visualization.</ns0:p><ns0:p>The start and end of a period definition's temporal extent are themselves intervals with their own starts and ends, so temporal extent can be associated with a maximum of four values. This is interoperable with other proposed representations of fuzzy, imprecise, or uncertain temporal extents, such as the four start, stop, earliest, latest keys proposed for GeoJSON-LD <ns0:ref type='bibr' target='#b6'>(Gillies, 2015)</ns0:ref>. In the current PeriodO data set these four properties only have (ISO 8601) year values, because none of our sources specified endpoints at a more granular level than year. However, we expect to have finer-grained values as we add periodizations of more recent history. At that point we will need to decide upon a unit of representation that makes it simple to compare intervals defined at different levels of granularity. Adding complexity to time interval expressions will be possible without changing our underlying data model because of the flexibility of our current approach.</ns0:p><ns0:p>The start, latest start, earliest end, end approach enables us to represent the most common patterns for defining periods found in our sources. For example a period defined as starting '3000 B.C. (+/-150 5 Proleptic refers to dates represented in some calendar system that refer to a time prior to that calendar's creation. 
The Gregorian calendar was adopted in 1582, but most of our dates fall in years prior to that one. 6 Turtle is a human-readable syntax for serializing RDF graphs <ns0:ref type='bibr' target='#b4'>(Carothers and Prud'hommeaux, 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>years)' and ending 'about 2330 B.C.' can be represented with three values: -3149, -2849, and -2329.</ns0:p></ns0:div>
<ns0:div><ns0:head>Kauppinen et al. (2010) propose defining curves over intervals to represent fuzziness, imprecision, or</ns0:head><ns0:p>uncertainty in order to maximize precision and recall with respect to temporal relevance judgments made by experts. We have chosen not to support such more complex representations at this time because we are focused primarily on representing periods as defined in textual sources. Natural language is already a compact and easily indexable way to represent imprecision or uncertainty. Rather than imposing an arbitrary mapping from natural language to parameterized curves, we prefer to maintain the original natural language terms used. However if scholars begin defining periods with parameterized curves (which is certainly possible) then we will revisit this decision.</ns0:p></ns0:div>
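To make the year arithmetic in the '3000 B.C. (+/-150 years)' example explicit: in the ISO 8601 / proleptic Gregorian convention, 1 BC is year 0000, so a year n BC maps to -(n - 1). The sketch below reproduces the three values quoted above; the JSON key names for the four-part extent are illustrative assumptions, not the exact PeriodO property names.

```python
# Year arithmetic for the "3000 B.C. (+/- 150 years)" to "about 2330 B.C." example.
# ISO 8601 / proleptic Gregorian convention: 1 BC is year 0, so n BC -> -(n - 1).
def bc_to_iso_year(n_bc: int) -> str:
    return f"{-(n_bc - 1):05d}"   # signed, zero-padded to four digits

# Four-part temporal extent; the key names here are illustrative assumptions.
extent = {
    "earliestStart": bc_to_iso_year(3000 + 150),  # '-3149'
    "latestStart":   bc_to_iso_year(3000 - 150),  # '-2849'
    "end":           bc_to_iso_year(2330),        # '-2329'
}
print(extent)
```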
<ns0:div><ns0:head>Modeling Provenance</ns0:head><ns0:p>To model the provenance of period assertions, we used the Provenance Ontology <ns0:ref type='bibr' target='#b17'>(McGuinness et al., 2013)</ns0:ref>. We record each change to the dataset (a patch) as a prov:Activity. This Activity has prov:startedAtTime and prov:endedAtTime values representing timestamps when the patch was sent and accepted, respectively. The activity also has two prov:used statements: one which refers to the specific version of the entire dataset to which the patch was applied (for example, http://n2t. net/ark:/99152/p0d?version=1), and one referring to the patch itself as a prov:Entity. The patch Entity contains a URL to the JSON-Patch file which resulted in the change Activity. <ns0:ref type='bibr' target='#b22'>(Nottingham and Bryan, 2013)</ns0:ref> Finally, the Activity has prov:generated statements for each of the period collections and period assertions (implied to be of the type prov:Entity) that were affected by the given patch.</ns0:p><ns0:p>Each of these affected entities has a prov:specializationOf statement that refers to the permanent identifier for the period assertion or collection (with no particular version specified). If the affected entities are revisions of an existing entity, they also have prov:wasRevisionOf statements that refer to the version that they were descended from.</ns0:p><ns0:p>We publish a changelog at http://n2t.net/ark:/99152/p0h#changelog that represents the sequential list of prov:Activity entities that created the current version of the dataset as an ordered RDF list. In this way, one can reconstruct the origin of each change to the dataset as a whole, or to individual period assertions.</ns0:p></ns0:div>
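Since each change Activity points at a JSON-Patch file, one can in principle replay the changelog against an earlier version of the dataset. Below is a minimal sketch using the third-party jsonpatch package; the document structure shown is a toy example, not the real dataset, and the identifiers are taken from the worked example earlier in the text.

```python
# Sketch of replaying a JSON-Patch change against a dataset snapshot.
# Requires the third-party 'jsonpatch' package; the document below is a toy example.
import jsonpatch

dataset_v1 = {"periodCollections": {"p06xc6m": {"definitions": {}}}}

patch = jsonpatch.JsonPatch([
    {"op": "add",
     "path": "/periodCollections/p06xc6m/definitions/p06xc6mvjx2",
     "value": {"label": "Classical Iberian Period"}},
])

dataset_v2 = patch.apply(dataset_v1)
print(dataset_v2["periodCollections"]["p06xc6m"]["definitions"].keys())
```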
<ns0:div><ns0:head>Minting Long-term URLs</ns0:head><ns0:p>In addition to mapping relationships to well-known vocabularies, interpreting PeriodO as Linked Data requires a way to assign URLs to period collections and definitions. As shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, period definitions and period collections in the dataset are given short identifiers: p06xc6mvjx2 identifies the definition of the Classical Iberian Period, and p06xc6m identifies the collection to which it belongs. But these identifiers are only useful within the context of the PeriodO dataset; they are not guaranteed to be unique in a global context and, unless one already has the PeriodO data, one cannot resolve them to obtain representations of the entities they identify. URLs, on the other hand, are globally unique and can be resolved using HTTP to obtain representations; this is the core concept behind Linked Data. So, we need a way to turn the short PeriodO identifiers into URLs.</ns0:p><ns0:p>To turn PeriodO identifiers into URLs we rely on the ARK identifier scheme <ns0:ref type='bibr' target='#b27'>(Starr et al., 2012)</ns0:ref> provided by the California Digital Library (CDL). First, we include in the JSON-LD context a @base value specifying the base URI (http://n2t.net/ark:/99152/) to use when interpreting the PeriodO dataset as Linked Data. This allows the short PeriodO identifiers to be interpreted as URLs; for example p06xc6mvjx2 is interpreted as a relative reference to the URL http://n2t Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>PERIOD ASSERTIONS AS NANOPUBLICATIONS</ns0:head><ns0:p>We created the PeriodO dataset based on the same core concerns of nanopublication authors: to extract, curate, and publish small, computable concepts from their broader sources while still preserving their provenance. A nanopublication is made up of an assertion, the provenance of that assertion, and the provenance of the nanopublication itself. In PeriodO, these are:</ns0:p><ns0:p>• Assertion: The definition of a period</ns0:p><ns0:p>• Provenance: The source this period was derived from. This may be a citation of a printed work or a URL for a resource hosted on the web.</ns0:p><ns0:p>• Provenance of nanopublication: The history of the period definition within the PeriodO system, including the date it was added or changed, the identity of the person who submitted or changed it, and the identity of the person who approved additions or changes.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows two period definitions with the same provenance. Each of these definitions is represented by an individual nanopublication. The nanopublication for the 'Early Iberian Period' is shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>While PeriodO period definitions readily map to the nanopublication scheme, we faced several challenges during our creation of the dataset due to its interpretive nature.</ns0:p></ns0:div>
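To show how the three parts line up with the nanopublication vocabulary, here is a minimal rdflib sketch that places the assertion, its provenance, and the publication provenance into separate named graphs tied together by a head graph. The URIs, literals, and graph names are illustrative placeholders, not the serialization PeriodO actually publishes.

```python
# Minimal rdflib sketch of the three-part nanopublication structure
# (assertion / provenance / publication info). URIs are illustrative placeholders.
from rdflib import Dataset, Namespace, Literal
from rdflib.namespace import RDF, SKOS, DCTERMS, XSD

NP = Namespace("http://www.nanopub.org/nschema#")
EX = Namespace("http://example.org/")

ds = Dataset()
head = ds.graph(EX.head)
assertion = ds.graph(EX.assertion)
provenance = ds.graph(EX.provenance)
pubinfo = ds.graph(EX.pubinfo)

nanopub = EX.nanopub1
head.add((nanopub, RDF.type, NP.Nanopublication))
head.add((nanopub, NP.hasAssertion, EX.assertion))
head.add((nanopub, NP.hasProvenance, EX.provenance))
head.add((nanopub, NP.hasPublicationInfo, EX.pubinfo))

# assertion: a period definition
period = EX.earlyIberian
assertion.add((period, RDF.type, SKOS.Concept))
assertion.add((period, SKOS.prefLabel, Literal("Early Iberian Period", lang="en")))

# provenance of the assertion: the source it was derived from
provenance.add((EX.assertion, DCTERMS.source, EX.belarte2008))

# provenance of the nanopublication: who curated it and when
pubinfo.add((nanopub, DCTERMS.created, Literal("2015-11-04", datatype=XSD.date)))
pubinfo.add((nanopub, DCTERMS.creator, EX.curator))

print(ds.serialize(format="trig"))
```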
<ns0:div><ns0:head>The Unfalsifiable Nature of Time Period Definitions</ns0:head><ns0:p>The current version of the Nanopublication Guidelines includes a note suggesting that the guidelines be amended to state that an assertion published as a nanopublication should be 'a proposition that is falsifiable, that is to say we can test whether the proposition is true or false' <ns0:ref type='bibr' target='#b9'>(Groth et al., 2013)</ns0:ref>. Were this amendment to be made, PeriodO nanopublications would be in violation of the guidelines, as period definitions in PeriodO, like most of the information produced in the humanities, are neither testable nor falsifiable. Consider the assertion 'there is a period called the Late Bronze Age in Northern Europe, and it lasted from about 1100 B.C. to 500 B.C.' The 'Late Bronze Age' is a purely discursive construct. There was no discrete entity called the 'Late Bronze Age' before it was named by those studying that time and place. Consequently, one cannot disprove the idea that there was a time period called the 'Late Bronze Age' from around 1100 B.C. to 500 B.C.; one can only argue that another definition has more credence based on non-experimental, discursive arguments.</ns0:p><ns0:p>The proposed falsifiability requirement makes sense in certain contexts. Computational biologists, for example, wish to connect, consolidate, and assess trillions of measurements scattered throughout a rapidly growing body of research findings. Their goal is to create a global, connected knowledge graph that can be used as a tool for scientists to guide new discoveries and verify experimental results. In the PeriodO context, however, we are not concerned with making an exhaustive taxonomy of 'correct' periods or facilitating the 'discovery' of new periods (a non sequitur-there are no periods that exist in the world that are awaiting discovery by some inquiring historian or archaeologist). Instead we are interested in enabling the study and citation of how and by whom time has been segmented into different periods. It is not necessary that these segmentations be falsifiable to achieve this goal; they only need to be comparable. <ns0:ref type='bibr' target='#b16'>Kuhn et al. (2013)</ns0:ref> expressed concern that requiring formal representation for all scientific data published as nanopublications 'seems to be unrealistic in many cases and might restrict the range of practical application considerably.' Similarly, requiring assertions to be unambiguous and falsifiable would unnecessarily restrict the practical application of nanopublication. The nature of nanopublication assertions should ultimately be determined by the practical needs of the researchers who use them. What is important about nanopublications is not the nature of the assertions, but the expression of provenance.</ns0:p><ns0:p>Provenance is particularly important for non-scientific datasets, since the assertions made are so dependent on their wider discursive context. When assertions cannot be tested experimentally, understanding context is critical for judging quality, trustworthiness, and usefulness.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Critical Role of Curation</ns0:head><ns0:p>Another difference between the PeriodO dataset and traditional nanopublications is the unavoidable curatorial work necessary to extract practically useful assertions from textual period definitions. In all of the applications of nanopublications we found, the published assertions typically appeared in the form of measurements or well-defined relationships between discrete entities. These are types of data which humans or computers can easily and reliably extract from research findings. Our dataset, in contrast, required explicit curatorial decisions: a time period exists within a certain spatiotemporal context, and there is no sure way to discretely, accurately, and unambiguously model such boundaries. While a human might have a nuanced understanding of temporary and ever-shifting political boundaries or the uncertain and partially arbitrary precision suggested by 'around the beginning of the 12th century BC', we cannot assume the same of computers. Therefore, in order for our dataset to be readily algorithmically comparable, we had to map discursive concepts to discrete values. Our curatorial decisions in this regard reflect a compromise between uniformity, potential semantic expressiveness, and practical usefulness.</ns0:p><ns0:figure><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Finding and comparing period definitions in PeriodO. Searching for 'early bronze' (1) results in sixty period definitions with matching labels (2), from a variety of sources (3). The time range facet (4) updates to show the distribution of temporal extents defined by these various sources. Users can also query for period definitions with temporal extents within a specific range of years using the time range facet (5), period definitions with spatial extents within a named geographic area using the spatial coverage facet (6), or period definitions in specific languages using the language facet (7). Queries may also combine values from any of these facets.</ns0:figDesc></ns0:figure><ns0:p>As humanities scholars publish their own nanopublications (or linked data in general), they will also go through a curatorial process due to the interpretive, unstandardized nature of humanities datasets discussed above. There is a temptation in this process to imagine perfect structured descriptions that could express all possible nuances of all possible assertions. However, chasing that goal can lead to overcomplexity and, in the end, be practically useless. In describing period assertions as linked data, we adopted a schema that was only as semantically complicated as was a) expressed in our collected data and b) necessitated by the practical needs of our intended users. As we started to collect data, we considered the basic characteristics of a dataset that would be necessary to accomplish the retrieval and comparison tasks that our intended users told us were most important. These tasks included:</ns0:p><ns0:p>• Finding how the definitions of periods have differed across time or authors, or finding contested period definitions. ('How have different authors defined the Early Bronze Age?')</ns0:p><ns0:p>• Finding all periods within a certain span of time. ('What time periods have been used to describe the years between 100 AD and 500 AD?')</ns0:p><ns0:p>• Finding all periods within a certain geographic area. ('What time periods have scholars used in Northern Europe?')</ns0:p><ns0:p>• Finding periods defined for different languages.
('What time periods have been defined in Ukrainian?')</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows how these various tasks can be completed using the faceted browsing interface to the PeriodO dataset. Implementing this interface required imposing consistency upon how we represented the temporal and spatial coverage of period definitions, even though this consistency does not exist in the original sources. Our initial approach, described in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, was to express the termini of periods as Julian Days written in scientific notation.</ns0:p><ns0:p>However, in practice, we found this scheme to be overly complex. The imposition of a level of uncertainty, while theoretically useful in certain cases, was often not appropriate. In almost every case that we observed, authors did not explicitly state a precise level of uncertainty for their temporal expressions. By adding precise uncertainty ourselves, we would, in effect, have been putting words in authors' mouths. Further, Julian Days are not widely used outside of very specific disciplines, meaning that consumers of our data would have to convert to a more familiar time system before being able to understand or use our data. Instead of the Julian Day model, we settled on the four-part ISO date schema, described above. This model is less expressive for complicated forms of uncertainty, but it is less complex and more easily understood by both our target audience and typical software programs. It was also easy to convert to, since almost all of the period assertions we observed were drawn from sources based on Western calendars. If and when we encounter period definitions that require more complex time expressions or are based on varying calendrical systems, we will revisit the question of whether the four-part scheme is sufficient.</ns0:p><ns0:p>To encourage a consistent representation of temporal extent for all period definitions, we built a simple grammar and parser for date expressions that covered the vast majority of our sample data. The parser takes in a string like 'c. mid-12th century' and outputs a JSON string consistent with our data model. The parser also produces naïve interpretations of descriptions like 'mid-fifth century', assigning them to a third of the epoch described according to the conventional segmentation of 'early,' 'mid,' and 'late'; 'mid-fifth century' would, then, be parsed as the range of years 401 to 434. The parser is intended to be used interactively, as a generator of suggestions for standard ways to represent certain forms of time description. To keep the quality of the gazetteer high, we do not intend for the parser to be used to fully automatically 'extract' period definitions from texts. Similarly, we created an autocomplete interface to modern political entities to allow users to enter spatial coverage. These interface components help curators produce a practical approximation of spatiotemporal coverage rather than a complete, unambiguous representation. The interface we created to allow users to add and edit period definitions is shown in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>.</ns0:p></ns0:div>
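<ns0:p>As an illustration of this kind of interactive suggestion, the sketch below shows how a handful of English century expressions could be mapped to ISO 8601-style years. It is not the PeriodO parser: the grammar is deliberately tiny, the JSON field names are placeholders rather than the fields of the PeriodO data model, and the 'early'/'mid'/'late' thirds only approximate the convention described above.</ns0:p>

```python
# Illustrative sketch of parsing free-text date expressions such as
# "c. mid-12th century" into ISO 8601-style year boundaries.
# Not the PeriodO parser; field names and conventions are placeholders.
import json
import re

THIRDS = {"early": (0, 1), "mid": (1, 2), "late": (2, 3)}

def iso_year(year):
    # ISO 8601 uses astronomical numbering: 1 BC is year 0000, 2 BC is -0001.
    return "%05d" % year if year < 0 else "%04d" % year

def parse_century_expression(text):
    cleaned = text.lower().replace("c.", "").strip()
    match = re.match(
        r"(?:(early|mid|late)[- ]?)?(\d+)(?:st|nd|rd|th) century( bc)?", cleaned)
    if not match:
        raise ValueError("unrecognized expression: " + text)
    part, century, bc = match.group(1), int(match.group(2)), bool(match.group(3))
    start, end = (century - 1) * 100 + 1, century * 100   # e.g. 1101-1200
    if part:                                              # narrow to a third
        lo, hi = THIRDS[part]
        start, end = start + lo * 33, start + hi * 33
    if bc:                                                # e.g. 500 BC -> -0499
        start, end = 1 - end, 1 - start
    return json.dumps({"label": text,
                       "earliestYear": iso_year(start),
                       "latestYear": iso_year(end)})

print(parse_century_expression("c. mid-12th century"))
# {"label": "c. mid-12th century", "earliestYear": "1134", "latestYear": "1167"}
```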
<ns0:div><ns0:head>PROJECT STATUS AND FUTURE WORK</ns0:head><ns0:p>As of late 2015, we have gathered just over 3,500 period definitions from 78 sources, including monographs, journal articles, and online databases. Each period has been assigned a permanent URL, which can be resolved to view its definition and provenance. PeriodO period identifiers are already being integrated into other projects, such as the Portable Antiquities Scheme database of archaeological finds in the UK. 9</ns0:p><ns0:p>As more projects begin to integrate PeriodO identifiers for time periods, we hope to gather information on their citation and use. This would include both studying the historical use of attributed period definitions and tracking the citation of PeriodO period identifiers going forward. Such a study would allow us to observe how periods come into circulation and fall out of favor. Tracing the connections fostered by use of our gazetteer would also demonstrate the utility of a linked data approach in the humanities.</ns0:p><ns0:p>We are also in the process of reaching out to period-defining communities beyond classical archaeology and ancient history. We expect that this will require some extensions of and revisions to the current PeriodO data model. First, as we begin to collect definitions of periods closer to the present, we expect to extend our model of temporal extent to allow for more fine-grained interval boundaries than years. This will also require a unit of representation that allows comparisons between intervals defined at different levels of granularity. (The approach based on Julian Days, described in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, may be useful for this.)</ns0:p><ns0:p>Second, as we begin to include more non-Western period definitions, we will need to ensure that we can still map years to ISO 8601 representations. At the very least, this will require extending the temporal expression parser, and it may require changes to the data model as well, for example to state explicitly the calendar system used by the original authors. Finally, as more historians begin publishing their work as datasets or software, we may begin to encounter periods defined not in natural language but using some formalism, such as the curves proposed by <ns0:ref type='bibr' target='#b13'>Kauppinen et al. (2010)</ns0:ref>. We will then need to find a way of including such formalisms directly in our definitions.</ns0:p></ns0:div>
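<ns0:p>Because every definition has a permanent URL, the published data can also be consumed programmatically. The short sketch below shows one way to do this with the Python requests library, assuming the n2t.net ARK resolver continues to redirect to the server hosting the canonical dataset; the URLs are those given in the text, and nothing is assumed about the JSON-LD structure beyond it being JSON.</ns0:p>

```python
# Sketch: retrieving PeriodO data via its permanent URLs. Assumes the n2t.net
# ARK resolver redirects as described; requests follows redirects by default.
import requests

# The entire canonical PeriodO dataset as JSON-LD.
dataset = requests.get("http://n2t.net/ark:/99152/p0d.jsonld").json()

# A single period definition: Belarte's 'Classical Iberian Period'.
period = requests.get("http://n2t.net/ark:/99152/p06xc6mvjx2.jsonld").json()

# Inspect the top-level keys rather than assuming a particular structure.
print(sorted(dataset.keys()))
print(sorted(period.keys()))
```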
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>As scholars of all disciplines continue to integrate computational methods into their work, the need to preserve provenance will only become more important. This is as true in the humanities and social sciences as it is in the natural sciences. Nanopublication is a useful way to locate the production of 'data' within a wider scholarly context. In this way, it echoes older ideas about hypertext that were concerned with relations of provenance, authorship, and attribution <ns0:ref type='bibr' target='#b21'>(Nelson, 1999)</ns0:ref>. The PeriodO period gazetteer shows that this approach is relevant and feasible even for fields outside of the experimental, observable sciences.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Turtle representation of a PeriodO period collection containing two period definitions originally published by Belarte (2008).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Part of the interface for editing period definitions. Labels for temporal extent boundaries are taken verbatim from the source, entered as free text, and automatically parsed into ISO 8601 year representations. Labels for spatial coverage are entered as free text, and using an autocompletion interface the user can specify the modern-day administrative units (e.g. nation-states) that approximate this spatial coverage.</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,413.57,177.51' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,42.52,178.87,525.00,226.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,225.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Turtle source for Figure 1: a PeriodO period collection containing two period definitions originally published by Belarte (2008).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>
@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix periodo: <http://n2t.net/ark:/99152/p0v#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix time: <http://www.w3.org/2006/time#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Belarte's definition of the Early Iberian Period.
<http://n2t.net/ark:/99152/p06xc6mq829>
    a skos:Concept ;
    skos:prefLabel 'Early Iberian Period' ;
    skos:altLabel 'Early Iberian Period'@eng-latn, 'Early Iberian'@eng-latn ;
    skos:inScheme <http://n2t.net/ark:/99152/p06xc6m> ;
    dcterms:language 'eng-latn' ;
    dcterms:spatial <http://dbpedia.org/resource/Spain> ;
    periodo:spatialCoverageDescription 'Catalan area' ;
    time:intervalFinishedBy [
        skos:prefLabel '400 BC' ;
        time:hasDateTimeDescription [ time:year '-0399'^^xsd:gYear ]
    ] ;
    time:intervalStartedBy [
        skos:prefLabel '525 BC' ;
        time:hasDateTimeDescription [ time:year '-0524'^^xsd:gYear ]
    ] .

# Belarte's definition of the Classical Iberian Period.
<http://n2t.net/ark:/99152/p06xc6mvjx2>
    a skos:Concept ;
    skos:prefLabel 'Classical Iberian Period' ;
    skos:altLabel 'Classical Iberian Period'@eng-latn, 'Classical Iberian'@eng-latn ;
    skos:inScheme <http://n2t.net/ark:/99152/p06xc6m> ;
    skos:note 'Equivalent to Iberian III (450-350 B.C.) and IV (350-200 B.C.) - cf. M. Diaz-Andreu & S. Keay, 1997. The Archaeology of Iberia; Dominguez in C. Sanchez & G.R. Tsetskhladze, 2001. Greek Pottery from the Iberian Peninsula.' ;
    dcterms:language 'eng-latn' ;
    dcterms:spatial <http://dbpedia.org/resource/Spain> ;
    periodo:spatialCoverageDescription 'Catalan area' ;
    time:intervalFinishedBy [
        skos:prefLabel '200 BC' ;
        time:hasDateTimeDescription [ time:year '-0199'^^xsd:gYear ]
    ] ;
    time:intervalStartedBy [
        skos:prefLabel '400 BC' ;
        time:hasDateTimeDescription [ time:year '-0399'^^xsd:gYear ]
    ] .

# The collection of period definitions for linking them to their common source.
<http://n2t.net/ark:/99152/p06xc6m>
    a skos:ConceptScheme ;
    dcterms:source [
        dcterms:isPartOf <http://dx.doi.org/10.1111/j.1468-0092.2008.00303.x> ;
        bibo:locator 'page 177'
    ] .

# The journal article from which the period definitions were taken.
<http://dx.doi.org/10.1111/j.1468-0092.2008.00303.x>
    dcterms:creator <http://id.crossref.org/contributor/maria-carme-belarte-2mkpvn5eyc7oh> ;
    dcterms:issued '2008'^^xsd:gYear ;
    dcterms:title 'DOMESTIC ARCHITECTURE AND SOCIAL DIFFERENCES IN NORTH-EASTERN IBERIA DURING THE IRON AGE (c.525-200 BC)' .

# The author of the journal article.
<http://id.crossref.org/contributor/maria-carme-belarte-2mkpvn5eyc7oh>
    foaf:name 'MARIA CARME BELARTE' .
</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2. Nanopublication of Belarte (2008)'s definition of the 'Early Iberian Period.'</ns0:figDesc><ns0:table><ns0:row><ns0:cell>
@prefix : <http://n2t.net/ark:/99152/p06xc6mq829/nanopub1#> .
@prefix bibo: <http://purl.org/ontology/bibo/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix periodo: <http://n2t.net/ark:/99152/p0v#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix time: <http://www.w3.org/2006/time#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# A graph of statements identifying the nanopublication and its parts.
:head {
    <http://n2t.net/ark:/99152/p06xc6mq829/nanopub1>
        a np:Nanopublication ;
        np:hasAssertion :assertion ;
        np:hasProvenance :provenance ;
        np:hasPublicationInfo :pubinfo .
}

# A graph with the assertions being made (i.e. the period being defined).
# This is identical to the representation of the first period definition in Figure 1.
:assertion {
    <http://n2t.net/ark:/99152/p06xc6mq829>
        a skos:Concept ;
        skos:prefLabel 'Early Iberian Period' ;
        skos:altLabel 'Early Iberian Period'@eng-latn, 'Early Iberian'@eng-latn ;
        skos:inScheme <http://n2t.net/ark:/99152/p06xc6m> ;
        dcterms:language 'eng-latn' ;
        dcterms:spatial <http://dbpedia.org/resource/Spain> ;
        periodo:spatialCoverageDescription 'Catalan area' ;
        time:intervalFinishedBy [
            skos:prefLabel '400 BC' ;
            time:hasDateTimeDescription [ time:year '-0399'^^xsd:gYear ]
        ] ;
        time:intervalStartedBy [
            skos:prefLabel '525 BC' ;
            time:hasDateTimeDescription [ time:year '-0524'^^xsd:gYear ]
        ] .
}

# A graph of statements about the provenance of the assertions.
:provenance {
    :assertion dcterms:source [
        dcterms:isPartOf <http://dx.doi.org/10.1111/j.1468-0092.2008.00303.x> ;
        bibo:locator 'page 177'
    ] .

    <http://dx.doi.org/10.1111/j.1468-0092.2008.00303.x>
        dcterms:creator <http://id.crossref.org/contributor/maria-carme-belarte-2mkpvn5eyc7oh> ;
        dcterms:issued '2008'^^xsd:gYear ;
        dcterms:title 'DOMESTIC ARCHITECTURE AND SOCIAL DIFFERENCES IN NORTH-EASTERN IBERIA DURING THE IRON AGE (c.525-200 BC)' .

    <http://id.crossref.org/contributor/maria-carme-belarte-2mkpvn5eyc7oh>
        foaf:name 'MARIA CARME BELARTE' .
}

# A graph of statements about the provenance of the nanopublication itself.
:pubinfo {
    <http://n2t.net/ark:/99152/p06xc6mq829/nanopub1>
        prov:wasGeneratedBy <p0h#change-1> ;
        prov:generatedAtTime '2015-07-29T21:49:31'^^xsd:dateTime ;
        prov:wasAttributedTo <http://orcid.org/0000-0002-3617-9378> .
}
</ns0:cell></ns0:row></ns0:table><ns0:note>…n2t.net/ark:/99152/p06xc6mvjx2. The hostname of this URL (n2t.net) is the registered name of the CDL's Name-to-Thing resolver, which is similar to other name resolution services for persistent URLs such as PURL. We have registered with the EZID service a single ARK identifier (ark:/99152/p0), providing them with the URL of the HTTP server currently hosting the canonical PeriodO dataset. Thus any request to a URL starting with http://n2t.net/ark:/99152/p0 will be redirected to that server. An HTTP GET to http://n2t.net/ark:/99152/p0d.jsonld will return the entire dataset, while GETting (for example) http://n2t.net/ark:/99152/p06xc6mvjx2.jsonld will return a JSON-LD representation of Belarte's definition of the 'Classical Iberian Period.'</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Example scientific notation of Julian Days. Our initial approach to imposing consistency on temporal extents was to express the termini of periods as Julian Days represented in scientific notation. Julian Days are a standard form of time measurement commonly used by astronomers to represent dates in the far historical past. Julian Days work by counting the number of continuous days that have passed since January 1, 4713 BC in the proleptic Julian calendar. Conceptually, this is a similar measurement to the common Unix time standard, which counts the number of seconds that have passed since midnight GMT on January 1, 1970. The idea is that by counting forward using well-defined units since an accepted epoch, one can escape the inconsistencies and periodic lapses that characterize different calendrical systems. Representing Julian Days using scientific notation allows one to express variable levels of uncertainty, as the examples in this table show.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Scientific Notation</ns0:cell><ns0:cell>Julian Day (JDN)</ns0:cell><ns0:cell>Proleptic Gregorian</ns0:cell></ns0:row><ns0:row><ns0:cell>1.3E6</ns0:cell><ns0:cell>Between JDN 1,250,000 and JDN 1,350,000</ns0:cell><ns0:cell>1150 BC ±150 years</ns0:cell></ns0:row><ns0:row><ns0:cell>1.30E6</ns0:cell><ns0:cell>Between JDN 1,295,000 and JDN 1,305,000</ns0:cell><ns0:cell>1150 BC ±15 years</ns0:cell></ns0:row><ns0:row><ns0:cell>1.300E6</ns0:cell><ns0:cell>Between JDN 1,299,500 and JDN 1,300,500</ns0:cell><ns0:cell>1150 BC ±1.5 years</ns0:cell></ns0:row></ns0:table></ns0:figure>
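<ns0:p>As a rough illustration of how this notation encodes uncertainty, the sketch below interprets the three values from Table 1. The number of digits in the mantissa determines the half-width of the implied range; the year conversion is only an approximation (365.2425-day years against a nominal 1 CE epoch), not a calendrically exact algorithm.</ns0:p>

```python
# Sketch: interpreting scientific-notation Julian Days from Table 1.
# Digits after the decimal point set the implied precision; the JDN-to-year
# conversion is a rough proleptic-Gregorian approximation.
def jdn_range(notation):
    mantissa, exponent = notation.upper().split("E")
    decimals = len(mantissa.split(".")[1]) if "." in mantissa else 0
    half_width = 0.5 * 10 ** (int(exponent) - decimals)
    value = float(notation)
    return value - half_width, value + half_width

def approx_year(jdn):
    # JD ~1,721,425.5 corresponds roughly to 1 January of 1 CE; negative
    # results fall before the common era (so about -1149 is roughly 1150 BC).
    return (jdn - 1721425.5) / 365.2425

for notation in ("1.3E6", "1.30E6", "1.300E6"):
    lo, hi = jdn_range(notation)
    print(notation, (lo, hi), round(approx_year(lo)), round(approx_year(hi)))
```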
<ns0:note place='foot' n='1'>See the PeriodO website at http://perio.do.</ns0:note>
<ns0:note place='foot' n='2'>http://www.biosemantics.org</ns0:note>
<ns0:note place='foot' n='3'>http://nextprot.org/</ns0:note>
<ns0:note place='foot' n='9'>https://finds.org.uk</ns0:note>
</ns0:body>
" | "November 3, 2015
Dear reviewers,
Thank you for your thorough and helpful reviews.
Below we respond to the comments from Reviewer 2 and the Editor. Reviewer
1’s comments were appreciated but as s/he did not raise any specific issues, we
do not have any responses.
Reviewer 2’s comments
NBS is a challenging article for the uninitiated. As a scholarly
communications librarian I am not the typical PeerJ Computer
Science reader. However, I really enjoyed the challenge after finding
nanopub.org, perio.do, and reading Mons et al. (2011) “The value of
data.”
We added a link to the PeriodO website (footnote 1) for the benefit of other
readers who might also benefit from this.
The first paragraph of the conclusion would have provided me with
a better introduction to the article.
We followed this advice and moved the first paragraph of the conclusion into the
introduction. This gives a clear overview of the problem we are addressing in
the first section of the paper. We replaced this section in the conclusion with a
summation of the same points.
I would have gotten more out of the first reading of this article if
the explanatory information had preceded the technical details. I
would entitle this article “Nanopublication Beyond the Sciences and
the PeriodO Case Study” since the authors did not offer any other
examples of nanopubs in the humanities.
We have elected to keep the general order of the paper (technical details followed by our interpretation of those details) because we feel it follows a logical
progression. However, we agree with this reviewer that the initial draft could
have been easier to read for a newcomer to these concepts. We believe that the
previous change (moving the paragraph from the conclusion to the introduction),
as well as getting rid of repetition and tightening up the paper in our latest
revision, addresses those concerns. We have also changed the title of the paper,
as mentioned below in our responses to the Editor’s comments.
In response to reviewer 2’s annotations of the manuscript PDF, we added
additional examples to our overview of nanopublication, and clarified what is
meant by “smallest possible unambiguous pieces of information.” We also tried to
clarify the distinction between the PeriodO data model and the nanopublication
data model, particularly in Figures 1 and 2. Finally, we reworded the first
paragraph of the section “The Unfalsifiable Nature of Time Period Definitions”
to make it clear that PeriodO data is like other humanities data.
Editor’s comments
Please attend to those, including the suggestion for title change.
We have changed the title to better reflect the basis of our paper in our own
system.
Generally the writing is good, however, there is some repetition that
should be eliminated. This is especially annoying when repeated
comments just assert the importance of nanopublication; these sound
rather polemical.
We have eliminated repetition where we noticed it and made our case less
polemical.
One big question, requiring explanation, is about the formalism.
We have clarified the explanation and examples of the PeriodO and nanopublication data models to make the formalism clearer.
Having long figures without description seems just to consume space.
What would a scholar learn from these?
If they are included, there should be some justification and goal
achieved. Further, there is the assumption that RDF is a good idea,
with no discussion, and no evidence given of the pros and cons, or of
the limitations or benefits.
We added an explanation of the Turtle format (footnote 6), justified our use
of RDF (lines 195–197), and added descriptions to all linked data figures. The
Turtle figures illustrate our use of structured data in a manner that is far more
terse than could be accomplished by prose or graphics. We believe these diagrams
clearly show our approach to both semantic web researchers and humanities
scholars interested in our data modeling decisions.
We have not mounted a full, nuanced rationale for using RDF because we believe
that is out of the scope of this paper. RDF is a widely-used, well-understood
format for expressing structured data. The nature of its shortcomings, and the
shortcomings of exhaustive formal data modeling in general, is also well-trodden
territory. We feel that we justify our use of structured data based on the specific
context of the community that we serve.
In general this is rather short and lacking in detail. Some sections
are very short, and should be expanded:
Uses starting on p. 2, line 66
Future Work starting on p. 11, line 533
We merged all of the subsections of which “Uses” was a member into one section
about nanopublication in the sciences. We also added several examples of the
types of nanopublications that might emerge from the biosciences.
We added more detail about our future work and changed this section to “Project
status and future work.”
Some explanation is needed for the illustrations.
Figs. 1, 2 could have the various parts explained.
Fig. 3 should be explained in detail
We added explanation to all of these illustrations. Figures 1 and 2 (the Turtle
RDF representations) now have notes explaining all the objects described within
them. Figure 3 (Figure 4 in the latest draft), the screenshot of the period editing
form, has a detailed explanation about its constituent parts.
Additional examples would be particularly helpful
We have added further description to the examples from PeriodO and given
further examples of nanopublications in other disciplines (lines 63–65).
It also is unclear if historical documents can be automatically analyzed
through information extraction methods, or if the only way to submit
data is using the editor for each entry, as in Fig. 3.
We have added a sentence (lines 392–393) clarifying that automatic extraction
of period definitions is not currently a goal of the project.
The status of PeriodO is not clear but should be.
How big is it?
What data is included?
Who uses it?
What are the uses so far?
We have added a clearer description of the project status (lines 399–404).
How does one use it if interested?
We have added a link to the PeriodO website (footnote 1) which explains how
to access and use the gazetteer.
What is the software architecture, ways to access it?
We have added a short description of the software architecture (lines 137–147)
and a citation of another paper that describes the architecture in greater detail.
There is mention on p. 10, lines 494-499, of hypothetical queries, but
it is not clear if these are real examples, nor how the queries would
be entered, nor what results would come from them.
We have added a figure (Figure 3) to show how the faceted browsing interface
can be used to construct these queries.
Page 7, line 309: fill in the [cite]
line 319: unclear re “at no particular version”
line 322: he -> the
We have fixed these errors.
References
Buckland has 0(0)
This is correct; the BJIS began with volume 0, issue 0. We have fixed the
reference to show the URL so this can be verified.
Hobbs says “report” but gives no report number nor URL
Meeks gives no publisher nor URL
Sporny says “report” but gives no report number nor URL
We have fixed these references to include the URLs.
In addition to the changes noted above, we have edited the entire manuscript
for style and clarity (see the “tracked changes” PDF for details).
Thank you,
Patrick Golden & Ryan Shaw
School of Information and Library Science
University of North Carolina at Chapel Hill
" | Here is a paper. Please give your review comments after reading it. |
26 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The interdisciplinary field of data science, which applies techniques from computer science and statistics to address questions across domains, has enjoyed recent considerable growth and interest. This emergence also extends to undergraduate education, whereby a growing number of institutions now offer degree programs in data science. However, there is considerable variation in what the field actually entails and, by extension, differences in how undergraduate programs prepare students for data-intensive careers. We used two seminal frameworks for data science education to evaluate undergraduate data science programs at a small subset of four-year institutions in the United States; developing and applying a rubric, we assessed how well each program met the guidelines of each of the frameworks. Most programs scored high in statistics and computer science and low in domain-specific education, ethics, and areas of communication.</ns0:p><ns0:p>Moreover, the academic unit administering the degree program significantly influenced the course-load distribution of computer science and statistics/mathematics courses. We conclude that current data science undergraduate programs provide solid grounding in computational and statistical approaches, yet may not deliver sufficient context in terms of domain knowledge and ethical considerations necessary for appropriate data science applications. Additional refinement of the expectations for undergraduate data science education is warranted.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The interdisciplinary field of data science, which applies techniques from computer science and statistics to address questions across domains, has enjoyed recent considerable growth and interest. This emergence also extends to undergraduate education, whereby a growing number of institutions now offer degree programs in data science. However, there is considerable variation in what the field actually entails and, by extension, differences in how undergraduate programs prepare students for data-intensive careers. We used two seminal frameworks for data science education to evaluate undergraduate data science programs at a subset of four-year institutions in the United States; developing and applying a rubric, we assessed how well each program met the guidelines of each of the frameworks. Most programs scored high in statistics and computer science and low in domain-specific education, ethics, and areas of communication. Moreover, the academic unit administering the degree program significantly influenced the course-load distribution of computer science and statistics/mathematics courses. We conclude that current data science undergraduate programs provide solid grounding in computational and statistical approaches, yet may not deliver sufficient context in terms of domain knowledge and ethical considerations necessary for appropriate data science applications. Additional refinement of the expectations for undergraduate data science education is warranted.</ns0:p><ns0:p>engineering (development and management of data infrastructure for subsequent interrogation and analyses) and data analytics (application of statistical and predictive analyses to address unknowns in a particular domain). The training students receive when pursuing a data science education, both what is included and what is excluded, is worthy of investigation.</ns0:p><ns0:p>While data science undergraduate degree programs are relatively new, previous work has highlighted early trends. In a survey of five data science degree programs, <ns0:ref type='bibr' target='#b0'>Aasheim et al. (2015)</ns0:ref> found an emphasis on statistics and mathematics coursework. All programs required coursework in linear algebra and data mining as well as multiple programming and statistics courses. In contrast, none of the five programs required coursework in ethical considerations of data science, and only one program required coursework in communication skills. Additional descriptions of individual data science undergraduate programs are quite varied and include those that are business-focused <ns0:ref type='bibr' target='#b3'>(Anderson, McGuffee, & Uminsky, 2014)</ns0:ref> and those that require significant coursework in domains outside of mathematics, statistics, and computer science <ns0:ref type='bibr'>(Anderson et al., 2014)</ns0:ref>. The variation among programs illustrates a potentially confusing landscape for students to navigate and uncertainty for employers when assessing recent graduates' preparation <ns0:ref type='bibr' target='#b22'>(Parry, 2018)</ns0:ref>.</ns0:p><ns0:p>Quantitative evaluation of data science programs requires an explicit framework describing the components of data science education. 
Several frameworks exist, including general frameworks focused on mathematical and computational foundations <ns0:ref type='bibr'>(De Veaux et al., 2017)</ns0:ref>, frameworks based on individual programs <ns0:ref type='bibr'>(Anderson et al., 2014)</ns0:ref>, and emergent frameworks developed for comparative analyses <ns0:ref type='bibr' target='#b0'>(Aasheim et al., 2015)</ns0:ref>. For this work, we used two frameworks for evaluation: the broad, explicit framework presented in the NASEM (2018) report and the narrower, more conceptual framework of <ns0:ref type='bibr' target='#b14'>Donoho (2017)</ns0:ref>. The framework of the NASEM (2018), hereafter referred to as the NASEM framework, focuses on developing undergraduate data acumen through a curriculum including important data science concepts, applications to real-PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>world problems with an understanding of limitations, and ethical concerns involved in data science (Table <ns0:ref type='table'>1</ns0:ref>, Supplemental Table <ns0:ref type='table'>1</ns0:ref>). In addition to 'traditional' data science competencies in computer science and statistics, this framework gives guidelines for training in communication skills, domain-specific knowledge, and ethical considerations. The NASEM framework lists ten key concept areas ('areas' hereafter) that are further divided into specialized topics, skills, or concepts ('sub-areas' hereafter), providing comprehensive expectations for undergraduate training. This framework facilitates an extensive evaluation of how well undergraduate degree programs meet the expectations set forth by the NASEM.</ns0:p><ns0:p>The framework presented by <ns0:ref type='bibr' target='#b14'>Donoho (2017)</ns0:ref> as Greater Data Science, hereafter the GDS framework, describes programs that prepare professionals for gaining insights from data while applying best practices (Table <ns0:ref type='table'>2</ns0:ref>, Supplemental Table <ns0:ref type='table'>2</ns0:ref>). The six areas in the GDS framework are relatively high-level, and the framework has notably little discussion of the knowledge, skills, or abilities necessary to apply such foundational knowledge to domain-specific questions. While not explicitly designed for undergraduate education, the GDS framework furnishes a metric to assess undergraduate programs' potential for preparing future professionals in data science careers.</ns0:p><ns0:p>Here we take the opportunity to evaluate undergraduate data science degree programs in a comparative analysis using the two frameworks described above. Applying an evaluation rubric we developed for each of the two frameworks, we investigated data science programs from a sample of doctoral-granting universities. We reviewed major requirements and corresponding course descriptions to assess how well each program addressed elements of each framework. We also quantified the relative amount of coursework in three categories: computer science, statistics/ mathematics, and domain knowledge. Using evaluations and quantification of coursework, we provide an overview of how well each of the two frameworks is being implemented and an evaluation of training strengths and weaknesses in data science undergraduate degree programs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In an attempt to make appropriate comparisons among undergraduate data science programs, we chose institutions comparable to our home institution, the University of Arizona.</ns0:p><ns0:p>We used two means of inclusion: institutions recognized as peers by the University of Arizona and institutions in the Pac-12 Conference (https://uair.arizona.edu/content/ua-peers). The union of University of Arizona peers and Pac-12 institutions resulted in a total of 25 universities, all of which are Research I universities (Carnegie Classification of Institutions of Higher Education, 2018) and four-year, doctoral-granting institutions. While half of these 25 institutions are in the western United States, this sample also includes public universities in the midwestern, eastern, and southern United States. This sample includes members of the Association of American Universities (https://www.aau.edu/) and land-grant institutions. At the time of inception of this work, roughly 50% of these institutions (53% of University of Arizona peer institutions and 50% of Pac-12 institutions) offered an undergraduate major or minor in a data science-related field. 10 institutions had at least one undergraduate major in data science. Variation in the names of programs required careful consideration. For example, Ohio State University offered a bachelor of science in data analytics; however, the curriculum was similar to programs with the term 'data science' in the name of the degree, so this program was included in our evaluation. In contrast, the bachelor of science in business data analytics at Arizona State University had a course curriculum that was very different from other data science degrees and was, thus, excluded from evaluation. In cases when an institution offered more than one degree in data science (e.g., University of Washington), we scored each of the degree programs independently. This selection process resulted in a total of 18 scored programs (Table <ns0:ref type='table'>3</ns0:ref>).</ns0:p><ns0:p>We evaluated how well each undergraduate data science program aligned with recommendations in the NA and GDS frameworks. For each of the two frameworks, we developed a rubric and coded the undergraduate data science curricula on a four-point scale using direct survey methodology, specifically content analysis of course descriptions <ns0:ref type='bibr'>(Stefanidis &</ns0:ref> PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Fitzgerald, 2014; <ns0:ref type='bibr' target='#b0'>Aasheim et al., 2015)</ns0:ref>, indicating the familiarity with a topic that could be expected from a student graduating from the program in question. The creation of the rubric used for scoring was an iterative process. We created an initial rubric and used it to score each item in the two frameworks. For nine of the ten areas of the NASEM framework, an item corresponded to one sub-area within the larger area. For example, in the area of computational foundations, five sub-areas were listed: basic abstractions, algorithmic thinking, programming concepts, data structures, and simulations. The ability of a program to address each of these five sub-areas was assessed separately. One area in the NASEM framework, domain-specific considerations, did not list any additional specifics; so, in this case, the item scored was the area of domain-specific considerations itself. Similarly, in the GDS framework, most areas lacked additional descriptions of sub-areas, so items largely corresponded to the particular area. The two exceptions were data representation and transformation and data modeling; each of these areas had two scored subareas.</ns0:p><ns0:p>In general, a score of '1' indicated no expectation that a student graduating from the program would have familiarity with the area/sub-area; a score of '4' indicated a student is wellversed in the area/sub-area, with at least one required course covering the topic, often at length.</ns0:p><ns0:p>Scores were based on course titles and descriptions only (links to each programs' web page are available in Supplemental Table <ns0:ref type='table'>3</ns0:ref> and lists of course requirements are available in Supplemental File 1); a lack of standardization among institutions required close reading of all course descriptions listed in posted curricula. We did not include information from course syllabi as there was considerable variation in which courses had publicly available syllabi. There was considerable revision to the rubric throughout the coding process as nuances in scoring and inconsistencies were noticed. We scored each program for each framework independently and then discussed discrepancies to reach agreements on a common score. Full details of the final coding rubric are available in Appendix A.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In addition to scoring programs for the two frameworks, we quantified the total number of credits required for each program in three categories: computer science, statistics/ mathematics, and domain-specific courses. Domain-specific courses are those outside of computer science, statistics, and mathematics, such as in biology, economics, or psychology. In cases where it was difficult to categorize a course as computer science or statistics/mathematics, we used the identity of the home department to inform the categorization. For example, if an ambiguous course was offered by a computer science department, it was categorized as a computer science course. Such cases were rare and only affected the categorization of five or fewer courses. Given the flexibility in course choice in some programs, we recorded the minimum and maximum number of credits for each of the categories as well as the minimum and maximum total credits required for each program. The total credits for a program also included units that were not categorized into any of the three categories, such as internships and senior capstone projects.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical Analyses</ns0:head><ns0:p>In all subsequent analyses, we excluded programs that were not data science majors, primarily to afford appropriate comparisons in coursework coverage and requirements. This resulted in the exclusion of Stanford University's data science minor and the University of Illinois at Urbana-Champaign's data science certificate.</ns0:p><ns0:p>To compare coverage among areas within each of the two frameworks, we first estimated an ordinal mixed-effects model, treating area as a fixed effect and the program as a randomintercept effect. We estimated separate models for each of the two frameworks. Based on the ordinal mixed-effects models, we performed post-hoc pairwise comparisons to assess significant differences between areas. All analyses were performed with the R programming language (R Core Team, 2020) with the aid of the tidyverse <ns0:ref type='bibr'>(Wickham, 2017)</ns0:ref> and ordinal <ns0:ref type='bibr'>(Christensen, 2018)</ns0:ref> packages. All data and R code are available at https://github.com/jcoliver/ data-sci-curricula.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We evaluated the portion of each undergraduate major programs' total credits dedicated to computer science, statistics/mathematics, and domain-specific coursework. Given that there was variation in the required credits within programs, we used the midpoint between the minimum and maximum for each of the categories and total credits in subsequent analyses. For example, the University of Iowa's bachelor of science in data science required between 17 and 23 credits in computer science coursework, so we used 20 as the expected number of computer science credits for this program. We tested two hypotheses with these data using one-tailed Student's t-tests:</ns0:p><ns0:p>1. Programs housed in computer science units have more required coursework in computer science than programs housed in other academic units.</ns0:p><ns0:p>2. Programs housed in statistics or mathematics units have more required coursework in statistics and mathematics than programs housed in other academic units.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>We based program assessments solely on course titles and descriptions rather than on syllabi. Although they often provide detailed descriptions of course material, syllabi availability is highly variable, and course content at the level of syllabi may vary by term and instructor. By focusing on course descriptions, which were accessible online for all institutions and programs investigated (Supplemental Table <ns0:ref type='table'>3</ns0:ref>), we were able to consistently assess program performance.</ns0:p><ns0:p>Course titles and descriptions themselves were variable within and among institutions. For example, some course descriptions consisted of a short enumeration of topics (e.g., the Manuscript to be reviewed Computer Science descriptions because the topics were not mentioned in descriptions even though they may have been taught in the course. However, with only five exceptions, all areas/sub-areas scored a minimum of 3 for at least one program, indicating that nearly all areas/sub-areas could be described in a course description with enough detail to warrant the highest score possible.</ns0:p><ns0:p>Finally, the focus of the current work is undergraduate degree programs offered at doctoralgranting universities in the United States; thus, care is needed in extrapolating the implications to other types of institutions (primarily undergraduate institutions, liberal arts colleges, universities outside the United States, etc.).</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>For the guidelines set forth by the NASEM (NASEM framework), some areas/sub-areas were generally well covered in the programs we evaluated, while other areas/sub-areas received little to no attention in formal coursework (Figure <ns0:ref type='figure'>1</ns0:ref>). The highest scoring area was data description and visualization, which primarily involves quality assessment and exploratory data analysis (mean: 3.5, median: 3.7). Most programs also paid substantial attention to computational foundations, which includes abstraction, algorithmic thinking, and programming concepts (mean: 3.4, median: 3.4). In contrast, areas/sub-areas focusing on reproducibility and ethics in data science were generally not covered in undergraduate curricula. Reproducibility, including design of workflows and reproducible analyses, was rarely indicated in course descriptions (mean: 1.6, median: 1.0). Ethics of data science, covering privacy, confidentiality, and misrepresentations of data and results, received the lowest average score (mean: 1.6, median: 1.6). Post-hoc pairwise comparisons among areas illustrated computational foundations, statistical foundations, and data description and visualization all had significantly higher scores than reproducibility, ethics, and domain expertise (Supplemental Table <ns0:ref type='table'>4</ns0:ref>).</ns0:p><ns0:p>In regards to the GDS framework, programs scored, on average, high in all areas/subareas except science about data science (mean: 1.9, median: 2.0), which is the explicit PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science investigation of data science as a field (Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>). In post-hoc pairwise comparisons among area scores, science about data science scored significantly lower than all other areas except data gathering, preparation, and exploration (Supplemental Table <ns0:ref type='table'>5</ns0:ref>).</ns0:p><ns0:p>The majority of programs investigated were characterized by coursework focused on mathematical and statistical foundations (Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>). In 12 programs, courses in mathematics or statistics made up over 50% of the required coursework. In all but two programs, computer science courses accounted for less than 50% of the required coursework. Required coursework in domains outside of computer science, statistics, and mathematics was relatively low, and in only one program did domain coursework exceed 25% of required coursework. The academic unit administering the degree program significantly influenced the proportion of the coursework dedicated to computer science and to statistics and mathematics. Programs administered by computer science units had significantly more required computer science courses than did programs administered by other units (mean percentage of computer science coursework in programs housed in computer science units: 39%, other units: 18%; t = 2.899, p = 0.016).</ns0:p><ns0:p>Similarly, programs housed in statistics or mathematics units required more coursework in statistics and mathematics than did programs housed in other units (mean percentage of statistics/ mathematics coursework in programs housed in statistics or mathematics units: 70%, other units: 52%; t = 2.441, p = 0.015).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Our comparison of undergraduate curricula to two seminal data science frameworks reflects a focus on theoretical foundations and quantitative skills (Figure <ns0:ref type='figure'>1</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>). Under both frameworks, training in computational foundations was among the highest scoring areas. Indeed, computational applications addressing big, complex problems have been a hallmark of Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>underlying applications, indicates a focus on the statistical and computational underpinnings of data science and echoes similar findings in prior work on undergraduate <ns0:ref type='bibr' target='#b0'>(Aasheim et al., 2015)</ns0:ref> and graduate <ns0:ref type='bibr' target='#b31'>(Tang & Sae-Lim, 2016</ns0:ref>) data science programs. While curricula also scored well in data description and visualization (NASEM framework) and data visualization and presentation (GDS framework), the competencies described in these areas are primarily concerned with exploratory data analysis and quality assurance processes rather than using visual representations of data to communicate ideas. This statistical and computational focus is further evidenced by a heavy bias towards the number of course units in computer science, statistics, and mathematics in undergraduate data science programs (Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>).</ns0:p><ns0:p>Our evaluation of programs presents a sensu stricto (s.s.) definition of data science education for undergraduates, aligning with the description of the field in the GDS framework <ns0:ref type='bibr' target='#b14'>(Donoho, 2017)</ns0:ref>, whereby most programs emphasize the 'hard skills' associated with computer science, statistics, and mathematics. Accompanying the GDS framework, <ns0:ref type='bibr' target='#b14'>Donoho (2017)</ns0:ref> posited that academic data science degree programs focus on statistics and machine learning, with some attention given to the technology required to compute on big data. This view of data science s.s.</ns0:p><ns0:p>does little to include substantial training in domain knowledge outside of computer science, statistics, and mathematics. For example, the undergraduate data science curricula described by <ns0:ref type='bibr'>De Veaux et al. (2017)</ns0:ref> implies one or two domain-specific courses provide sufficient formal training in a domain. For the most part, the programs evaluated here likely equip graduates with a varied skill set for analyses and prediction, but graduates may lack the appropriate context for designing and evaluating domain-specific data science applications.</ns0:p><ns0:p>In contrast, data science sensu lato (s.l.) would include substantial training in communication, ethical considerations, and knowledge in the domain to which analyses and predictive modeling are applied. The NASEM framework called out the importance of domain knowledge for effective application of data science, yet few programs went beyond requiring two PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science additional courses outside of computer science, statistics, and mathematics. One notable exception is the human centered design and engineering data science option at the University of Washington, although the emphasis on domain-specific education may come at the cost of reduced training in computational skills and statistics (Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>). The curriculum described in <ns0:ref type='bibr'>Anderson et al. (2014)</ns0:ref> provides another example of undergraduate training in data science s.l.:</ns0:p><ns0:p>Students were required to take substantial coursework (15-22 units) within a 'cognate,' such as biomechanics, geoinformatics, or sociology. 
This view of data science training supports the notion that sufficient domain background is required to understand the context of models and analyses <ns0:ref type='bibr' target='#b23'>(Provost & Fawcett, 2013)</ns0:ref>, justifying substantial consideration of domain knowledge in data science training <ns0:ref type='bibr' target='#b4'>(Berthold, 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Irizarry, 2020)</ns0:ref>.</ns0:p><ns0:p>The degree to which data science education includes domain specialization is likely influenced by the academic affiliations of those who set the criteria. This reflects a common phenomenon within data science: The definition of data science (and by extension, data science education) depends on who is doing the defining <ns0:ref type='bibr' target='#b23'>(Provost & Fawcett, 2013)</ns0:ref>. Just as we found that the academic unit in which the undergraduate program was housed had a significant effect on the amount of computer science and statistics coursework, the academic fields of those people creating curricula may affect how much emphasis there is on domain knowledge. For example, the undergraduate data science curricula of De <ns0:ref type='bibr'>Veaux et al. (2017)</ns0:ref> Similar to the paucity of attention to domain knowledge, most programs did not explicitly provide training in workflows, reproducibility practices, and the ethics of data use and reuse. The dearth of training dedicated to ethical problem-solving is similar to earlier comparative findings <ns0:ref type='bibr' target='#b0'>(Aasheim et al., 2015)</ns0:ref>. Best practices in reproducibility and ethics are critical for maintaining quality of data science applications <ns0:ref type='bibr' target='#b26'>(Saltz et al., 2018)</ns0:ref>, and their omission from undergraduate data science programs potentially creates a Promethean workforce prepared to use a variety of computational and statistical tools in socially inappropriate ways. A growing body of examples illustrate bias in data science applications <ns0:ref type='bibr' target='#b20'>(O'Neil, 2017)</ns0:ref>; such biases have real-world impact in criminal justice <ns0:ref type='bibr' target='#b18'>(Isaac & Lum, 2016)</ns0:ref>, employment <ns0:ref type='bibr' target='#b8'>(Dastin, 2018)</ns0:ref>, and healthcare <ns0:ref type='bibr' target='#b21'>(Obermeyer et al., 2019)</ns0:ref>. These impacts further reinforce the necessity of appropriate training in ethical considerations in data science.</ns0:p><ns0:p>Areas marked by deficiencies in the two frameworks may also reflect the relative youth of the field of data science. For example, the GDS framework includes the area science about data science, which received relatively low scores; one contributing factor could be that there remains discussion of what data science actually entails, precluding a formal study and circumscription of the field <ns0:ref type='bibr' target='#b14'>(Donoho, 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Irizarry, 2020)</ns0:ref>. Similarly, the lack of attention to ethical considerations in undergraduate data science programs could be a consequence of the recent rise of data science.</ns0:p><ns0:p>For example, the qualitative and quantitative changes to analyses and predictions brought on by the big data revolution have created a new landscape for ethical considerations, and training in ethical issues in data science remains a growth area <ns0:ref type='bibr' target='#b26'>(Saltz et al., 2018)</ns0:ref>. 
In contrast to 'traditional' data science topics, such as linear algebra, which have long been recognized as important for statistical analyses, ethical precepts of data science may take more time to become integrated into undergraduate data science curricula. The low scores in the ethical problem solving area of the NASEM framework may also be due, in part, to our means of assessment. Course descriptions rarely went beyond mentioning the 'ethics of data science,' while the sub-areas described in the NASEM framework, and thus our evaluation rubric, included specifics such as 'the ability to identify 'junk' science' and 'the ability to detect algorithmic bias.' If these important ethical topics are included in data science curricula, course descriptions would do well to call them out explicitly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Our assessments of undergraduate data science curricula demonstrate a focus on theoretical foundations and quantitative skills with relatively little preparation in domains outside of computer science, statistics, and mathematics. This generally aligns with the 'greater data science' definition provided by <ns0:ref type='bibr' target='#b14'>Donoho (2017)</ns0:ref>. The work here suggests that data science undergraduate students receive training similar to those enrolled in a statistics program, although additional work formally comparing statistics degree programs to data science degree programs is needed. Additionally, an evaluation of the factors influencing data science curricula is beyond the scope of this work, but future work should consider the possibility that current data science programs are reflexive responses to market demands à la academic capitalism <ns0:ref type='bibr'>(Slaughter & Leslie, 1999;</ns0:ref><ns0:ref type='bibr' target='#b28'>Slaughter & Rhoades, 2004)</ns0:ref> or isomorphic processes <ns0:ref type='bibr' target='#b12'>(Dimaggio & Powell, 1983)</ns0:ref>.</ns0:p><ns0:p>Many programs fell short of guidelines put forth by the NASEM (2018). One possibility is that the areas that were not well covered (e.g., reproducibility, ethics, domain knowledge) are not recognized by the data science community as warranting substantial training. Alternatively, it remains an open question of how realistic it is to expect an undergraduate program to effectively cover all areas described in the NASEM framework. While our assessments generally treated all competencies with equal weight, the NASEM's recommendations may afford interpretations whereby some sub-areas require significant coursework, such as a year of linear algebra, while other sub-areas merit one or a few class periods, such as ethical consideration of data science. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>description for STAT 102, Data, Inferences, and Decisions at the University of California, Berkeley), while others included course content as well as format (e.g., DS 340W, Applied Data Sciences at Pennsylvania State University). There remains potential for bias among the different areas/sub-areas described in the two frameworks. That is, areas/sub-areas we characterized as generally poorly covered (see Results) may reflect systematic poor representation in course PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53346:1:0:NEW 21 Jan 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>data science. Most programs also scored well in statistics and mathematics training, as well as data modeling. The emphasis on quantitative skills training, coupled with theoretical concepts</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>includes, at most, three domain-specific courses (one introductory, one intermediate, and one capstone), and the authors of this curriculum are all from departments of computer science, statistics, or mathematics. In contrast, the curriculum presented by Anderson et al. (2014) includes substantially more coursework in a domain other than computer science, statistics, or mathematics and was created by faculty from biology as well as computer science and mathematics. Such differences among curricula further illustrate data science as an evolving field and demonstrate considerable heterogeneity in what can be expected from recent graduates of undergraduate data science programs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Our reliance on course titles and course descriptions may bias against topics that receive minimal, albeit potentially impactful, coverage in curricula. Future work comparing curricular content through in-depth program reviews or exhaustive syllabi sampling could demonstrate more nuance in the variation among undergraduate data science degree programs. Additional revisions to undergraduate data science education guidelines, including required levels of competencies, could prove helpful in defining what should be expected from a recent graduate of an undergraduate data science program.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Undergraduate data science programs' scores for the GDS framework.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Proportion of coursework in computer science, statistics/mathematics, and domain-specific courses for each data science degree program.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "We appreciate the thoughtful comments on the submitted manuscript. Based on the editor’s and reviewers’ comments, we have revised the manuscript and accompanying materials (i.e. code and documentation) to the benefit of increasing the clarity of our approach and conclusions. Below we address each point raised by the editor and reviewers, which are shown in italics, followed by our response in plain text. Note we did not track changes in revising the manuscript, but all substantial changes are described below.
Editor
The reviewers noted some strengths of this paper, including availability of data, clarity of organization, and the quality of the analysis. Some reviewer comments deserve particular attention:
1. Both reviewers point out the limitations of relying on course names and descriptions. An exploration of the similarly between courses with similar names - or perhaps the overlap between courses with different names - might clarify the utility of this approach. Although such an analysis might be impractical for the whole dataset, an exploration of a few courses might help.
Response: Content analysis based on course titles and descriptions is an established approach for curricular assessment (see references in response to review #1). Additionally, our approach is based on the level of a program’s curriculum, not individual course comparisons.
2. Reviewer 1 raises some valid concerns regarding the choice of comparator institutions. A bit more explanation here would be helpful.
Response: We have included information in the manuscript regarding the utility of the sample of institutions. More information on this is included in our response to review #1.
3. There are some questions about the availability of some of the data (Reviewer 2)
Response: We have included links to all programs’ requirements in Supplemental Table 3 and course requirements for each program in Supplemental File 1.
4. Reviewer 2 also some useful suggestions for enhancing the discussion of ethics education.
We have elaborated on the importance of ethics education in the Discussion section.
Response: We have expanded our discussion on the necessity of including ethics in data science education with examples of bias in data science applications. More detail is included in our response to review #2.
Reviewer 1
I would suggest a more in-depth discussion about the differing definitions of data science. While it is true a full accounting of the term is beyond the scope of this paper, a single definition is not sufficient to represent this issue. I am specifically referencing 'An exhaustive history of the term
and definitions is beyond the scope of this work, but we adhere to the following general
description: Data science draws on statistics and computer science to address questions in
various domains, such as biology, education, physics, business, linguistics, or medicine'. I think what is missing is data science vs analytics vs data engineering. Nothing too deep, but given the nature of this particular field there should be more discussion.
Response: We added the following sentence to the relevant portion of the manuscript: “This definition of data science is relatively broad and includes the narrower fields of data engineering (development and management of data infrastructure for subsequent interrogation and analyses) and data analytics (application of statistical and predictive analyses to address unknowns in a particular domain).”
I do not understand the reasoning behind 'chose institutions comparable to our home institution'. It seems completely reasonable to apply the frameworks to institutions that are outside this definition. It could even be more useful and important work to broader this. It seems like the build up to this point is somewhat misleading since the title, abstract, etc do not draw notice to this point. I think there needs to be clarity on what this paper is proposing to study from the beginning if we maintain that the only schools in consideration are under this selection bias.
Response: We wholeheartedly agree that including more institutions would offer a broader view of undergraduate data science education, but we maintain that the institutions investigated provide a useful sample of Research 1 universities in the United States. We have updated the manuscript to include details about the diversity of institutions (public/private, AAU membership, and land-grant status).
Looking at course titles and descriptions is weakness of the approach it is mentioned in the paper. While not possible for all of the programs, I would suggest that scoring select programs where additional information is available and comparing that against what is determined using only titles and descriptions would be valuable. For example, the authors should reach out to contacts at some subset of these programs and ask for help reviewing their program. This could be done in a manner such that it informs how consistent the scoring is done before and after this information is available.
Response: There is considerable precedent for content analysis based on course titles and descriptions. In addition to Stefanidis & Fitzgerald (2014) and Aasheim et al., (2015), see:
• Apatu et al. 2020 (https://doi.org/10.1097/PHH.0000000000001173) used web-available course descriptions for a content analysis of public health masters programs.
• Kung et al. 2006 (https://doi.org/10.3200/JOEB.81.6.291-300) used course catalogs (which include only titles and descriptions) to compare undergraduate Information Systems programs.
• Mishra et al. 2011 (https://doi.org/10.5406/bulcouresmusedu.190.0007) based comparisons among music education degrees based on course descriptions.
• Roule et al. 2015 (https://doi.org/10.1145/2787622.2787723) evaluated computer science curricula based on topic modeling of course descriptions retrieved from web sites.
Additionally, no institution we investigated has an exhaustive sample of course syllabi readily available. Such an exhaustive sample is the only way to perform the suggested assessment, so such an approach is simply not possible. We have acknowledged the possibility of future work by including, in the Conclusions section: “Our reliance on course titles and course descriptions may bias against topics that receive minimal, albeit potentially impactful, coverage in curricula. Future work comparing curricular content through in-depth program reviews or exhaustive syllabus sampling could demonstrate more nuance in the variation among undergraduate data science degree programs.”
Please see previous comments that I believe touch on the validity or at least in inferences we can draw from the analysis.
Response: In the Limitations section of the manuscript, we acknowledge potential limitations of our conclusions based on the sample of institutions included in the current study. We do not agree that the conclusions are invalid; our statistical analyses are robust and we do not dwell on non-significant effects, which can be troublesome with limited sample size.
The study was well conducted and the paper is well organized. My main comment (and it is what I've tried to communicate in the specifics above) is that I am not sure how much of the analysis is accurate given the limitations on titles and descriptions. I feel there needs to be more work done to establish some understanding of this measurement uncertainty.
Response: Our approach of content analysis is based on an accepted methodology of course title and description surveys. We welcome specific concerns of how our approach would bias our conclusions in addition to those we have included in the manuscript.
Reviewer 2
- I commend the authors for depositing data and code on Zenodo! Bravo! I appreciate that there is a readme and a license. Some additional comments to make this useful. Please add additional information the readme, including descriptions of code files, what order in which they should be run. If possible, include a Rmarkdown file that calls the appropriate scripts and runs the analyses. You can show code blocks that would be useful to most readers (like the model fitting) but hide the data cleaning code blocks (allowing an interested reader to examine those separately).
Response: We have expanded the README to include a list of the additional R packages that are required, as well as code blocks illustrating the order in which the scripts should be run. We did not duplicate code blocks that are found in each script.
- A general comment unrelated to the manuscript but to this code deposit. Do not include rm(list=ls()) at the top of each document. That's an outdated practice to clean the current working environment. Since you are using Rstudio projects, these should run in their own environments and not be contaminated by other projects. Don't use stop statements at the top of scripts.
Response: We removed stop statements and all calls to rm(list = ls()). We look forward to future lively discussions on strategies to ensure a clean R environment.
- For the framework na, use another name in the code as na will cause problems in your code.
Response: We replaced “na” with “nasem” and “NA” with “NASEM” in all materials (code, manuscript, and supplemental documents).
- Load the packages you'll use at the top of scripts.
Response: We updated scripts to move library calls to the top.
- The code did not run right off the bat for me. I stopped fixing bugs to focus on the rest of the paper, but share the code with another R person who doesn't have access to your machine and see if they can run it.
Response: We cloned the repository (v0.91) on a fresh machine and ran the scripts in the order described on the updated README without error (after installing dependencies described in the README). If there are specific errors that are encountered, we would be grateful to hear about them.
Very early in the background, after describing the growth of data science programs (13 to 50) and the lack of clear definitions, the authors are quick to point out that the programs lack coursework on ethical use of data/comms skills. This seemed very abrupt and additional background tying this back to the frameworks (NASEM) would be helpful.
Response: We have added more detail about the prior work of Aasheim et al. (2015), to aid in smoother transition. We refrain from comparison with the NASEM framework at this point in the manuscript, as the frameworks are not introduced until the subsequent paragraph.
If the goal of the paper is to also demonstrate how the programs are not uniform, comparable, and heavily influenced by the departments running them (stats versus cs), it would also be helpful to cite a few examples of why lack of ethics training in data science has been problematic, especially in the context of workforce development. There are many to choose from (see Weapons of Math Destruction for a general reference) but a few here would be helpful.
Response: We have included this additional argument on the impact of ethics training (or lack thereof), based on O’Neil’s book (a great reference, by the way). The following is now included in the relevant portion of the Discussion: “A growing body of examples illustrate bias in data science applications (O’Neil, 2017) and such biases have real-world impact in criminal justice (Isaac & Lum, 2016), employment (Dastin, 2018), and healthcare (Obermeyer et al. 2019). These impacts further reinforce the necessity of appropriate training in ethical considerations in data science.” We refrain from statements of causation in the absence of information about ethics training (i.e. we don’t know if the authors of the PredPol software did not receive training in ethics or if they just chose to ignore any ethics training they did receive).
I appreciate that there is a limitations section in the methods, given that you relied solely on course titles and descriptions. I would appreciate more acknowledgement of this in the discussion. For example, reproducibility is rarely taught as a separate course and the appetite for such courses is also currently limited even among graduate students/postdocs. I am aware of many data science courses at several of the institutions you have surveyed that incorporate elements of reproducibility in data science courses. Data science ethics may also be listed as discussion topics and not show up in general course descriptions. I believe it would be important to acknowledge this. A small recommendation is that you could suggest that a future study could topic model syllabi to assess against the two frameworks.
Response: We added the following to the second paragraph of the Conclusions section. “Our reliance on course titles and course descriptions may bias against topics that receive minimal, albeit potentially impactful, coverage in curricula. Future work comparing curricular content through in-depth program reviews or exhaustive syllabus sampling could demonstrate more nuance in the variation among undergraduate data science degree programs.”
There is no mention of the tradeoff in covering domain specific skills while trying to build up foundational computing/statistical skills.
Response: We are pretty sure that point is covered by the statement the reviewer is referring to: “One notable exception is the human centered design and engineering data science option at the University of Washington, although the emphasis on domain-specific education may come at the cost of reduced training in computational skills and statistics (Figure 3).” (emphasis added) Do we need to be explicit in stating the converse? E.g. emphasis on computational skills and statistics may come at the cost of reduced training in domain-specific education.
The discussion covers a notable exception being UW HCI program. The dataset doesn't include the list of courses and descriptions before they were scored. Are those also available?
Response: We have included links to all programs’ requirements in Supplemental Table 3 and specific course requirements for each program in Supplemental File 1.
While I agree that it is critical to teach reproducibility at the undergraduate level, and their omission has cascading effects, it would be fair to say that those topics are more likely to be taught at the graduate level, where students apply these skills to domain problems. Reproducibility is already a hard sell. So it is not that the data science community doesn’t consider them critical for training, it just remains unrealistic for UG programs to cover everything.
Response: We agree that it may be unrealistic for undergraduate programs to cover everything listed in the NASEM framework, but this framework was explicitly designed for undergraduate education.
" | Here is a paper. Please give your review comments after reading it. |
27 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The recent improvements of language models have drawn much attention to potential cases of use and abuse of automatically generated text. Great effort is put into the development of methods to detect machine generations among human-written text in order to avoid scenarios in which the large-scale generation of text with minimal cost and effort undermines the trust in human interaction and factual information online. While most of the current approaches rely on the availability of expensive language models, we propose a simple feature-based classifier for the detection problem, using carefully crafted features that attempt to model intrinsic differences between human and machine text. Our research contributes to the field in producing a detection method that achieves performance competitive with far more expensive methods, offering an accessible 'first line-of-defense' against the abuse of language models. Furthermore, our experiments show that different sampling methods lead to different types of flaws in generated text.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Recent developments in Natural Language Processing (NLP) research led to a massive leap in capability of language models. The combination of unsupervised pre-training on massive and diverse datasets <ns0:ref type='bibr' target='#b35'>(Radford et al., 2019)</ns0:ref> and the introduction of the attention-based transformer architecture <ns0:ref type='bibr' target='#b47'>(Vaswani et al., 2017)</ns0:ref> allowed increasingly complex models to learn representations of language over a context spanning more than just the next few words, thereby effectively replicating the distribution of human language.</ns0:p><ns0:p>These advances already led to a more comprehensive use of language in a great number of research areas and consumer-oriented applications, as for example in the analysis of biomedical literature <ns0:ref type='bibr' target='#b5'>(Beltagy et al., 2019)</ns0:ref>, the generation of EEG reports <ns0:ref type='bibr' target='#b8'>(Biswal et al., 2019)</ns0:ref>, the development of more advanced chatbots <ns0:ref type='bibr' target='#b10'>(Budzianowski and Vulić, 2019)</ns0:ref> and the improvement of grammar-and writing-assistance <ns0:ref type='bibr' target='#b22'>(Hagiwara et al., 2019)</ns0:ref>. However, this newly-gained quality of generated language also increased the fear of its potential abuse by malicious actors <ns0:ref type='bibr' target='#b44'>(Solaiman et al., 2019)</ns0:ref>. Abuse scenarios are mostly based on the effectively vanishing costs for the generation of large amounts of text, allowing malicious actors to leverage the effectiveness of high-volume/low-yield operations like spam, phishing or astroturfing <ns0:ref type='bibr' target='#b44'>(Solaiman et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Ferrara et al., 2016)</ns0:ref>. While <ns0:ref type='bibr' target='#b44'>Solaiman et al. (2019)</ns0:ref> could not find any evidence of their models being used for automated astroturfing attacks, in which review or comment systems are flooded with generated entries promoting a certain sentiment, an example of how easily text generating models might be abused to influence even policy-making can be found in the American Federal Communications Commission's decision on the repeal of net neutrality rules in 2017 <ns0:ref type='bibr' target='#b42'>(Selyukh, 2017)</ns0:ref>. Attempting to consider the public sentiment through an online comment system, it later turned out that millions of the submitted comments, most of them in favour of repealing net neutrality, were fakes <ns0:ref type='bibr' target='#b20'>(Fung, 2017)</ns0:ref>, automatically generated using a template-based generation model. The little sophistication of the generation approach led to many duplicates and highly similar comments in phrasing and syntax <ns0:ref type='bibr' target='#b28'>(Kao, 2017)</ns0:ref>, drawing attention to the issue in the first place. It is however easy to see how one of today's State-Of-The-Art (SOTA) language models PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:10:54411:1:3:NEW 18 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science might have drowned authentic, human opinions and skewed the final decision without being detected.</ns0:p><ns0:p>Similar attacks could potentially overwhelm the news with fake news contents <ns0:ref type='bibr' target='#b6'>(Belz, 2019)</ns0:ref>, manipulate the discourse on social media <ns0:ref type='bibr' target='#b19'>(Ferrara et al., 2016)</ns0:ref> or impersonate others online or in email <ns0:ref type='bibr' target='#b44'>(Solaiman et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The wider implications of an Internet in which every snippet of written word could with equal probability stem from a human being or a language model are the erosion of fundamental concepts like truth, authorship and responsibility <ns0:ref type='bibr' target='#b6'>(Belz, 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b43'>Shevlane and Dafoe (2020)</ns0:ref> highlight the potential disruption caused by language models through their ability to impersonate humans in an online world where increasing numbers of human interactions and proportions of social life are hosted, be it in social media, online banking or commerce.</ns0:p><ns0:p>In line with the recommendation given in <ns0:ref type='bibr' target='#b30'>Lewandowsky et al. (2012)</ns0:ref>, one approach of mitigating the damaging effects of language models is to educate the public about the increasing probability of encountering untrustworthy content online, thereby increasing scepticism and avoiding that factually unsustained information enters a person's belief, from where it would be difficult to retract. However, as argued by <ns0:ref type='bibr' target='#b43'>Shevlane and Dafoe (2020)</ns0:ref>, such a loss of trust in the habitual informational environment is burdensome. This highlights the need for reliable detection systems in order to tell human and machine generated content apart, preventing the rise of an Internet in which generic nonsense and propaganda-like spam campaigns dominate the public discourse. This paper contributes to the research on the automated detection of machine generated text by being the first to apply a feature-based detection approach to the most recent language models and simultaneously proposing a range of features to be used to that end.</ns0:p><ns0:p>Our experiments with samples from different language generating models show that the proposed feature-based detection approach is competitive with far more complex and computationally more restrictive methods. For its ability to generalize well across different sizes of the same language model, we consider the feature-based classifier a potential 'first line-of-defense' against future releases of ever bigger generators. Our research confirms the hypothesis that different sampling methods introduce different kinds of flaws into the generated text, and delivers first insights into which characteristics of text might show these differences the most.</ns0:p></ns0:div>
<ns0:div><ns0:head>THE DETECTION PROBLEM</ns0:head><ns0:p>We frame the task of detecting automated language models as a binary classification task where a model needs to determine if an input text is produced by a human or by automated means through a language model. The methods for the detection of machine-generated text presented in this paper take a textual input and assess its provenance based only on the properties of the text, without considering its metadata or veracity, as proposed in similar detection problems <ns0:ref type='bibr' target='#b3'>(Baly et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b46'>Thorne and Vlachos, 2018)</ns0:ref>.</ns0:p><ns0:p>To prevent the scenario described above, we expect a detection method to fulfil the following three requirements:</ns0:p><ns0:p>1. <ns0:ref type='bibr' target='#b44'>Solaiman et al. (2019)</ns0:ref> voice concern for a well-considered trade-off between the maximization of a detector's accuracy and the false positives it produces. False positives in the present detection context, the incorrect labelling of a human-written text as machine-generated, are especially critical by potentially suppressing human opinions. In a large-scale detection system that automatically filters out texts it considers machine-generated, this could effectively block any written contributions of human authors that happen to have a style similar to what the detector considers typical for language models. This might not only potentially be considered unethical or unlawful, but could also further erode public confidence and trust in the written word online. A practical detection method must therefore be highly accurate to be able to cope with large-scale adversarial attacks, but may not achieve that at the cost of a high false-positive rate.</ns0:p><ns0:p>2. Another major fear in the current research into detection methods is the perspective of a 'cat and mouse' game <ns0:ref type='bibr' target='#b44'>(Solaiman et al., 2019)</ns0:ref> between generator and detector, where detection methods are hardly transferable between different adversarial generators. Any improvement in language models would then create a temporary advantage for the generating side, persisting until the detector catches up by adapting to the new situation through changes in model architecture or fine-tuning. This would imply that the detection problem could never be resolved, but only temporarily patched.</ns0:p><ns0:p>Signs of such a situation arising have been reported by <ns0:ref type='bibr' target='#b35'>Radford et al. (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b50'>Zellers et al. (2019)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>who observe that detection models struggle with the increasing complexity of the generating model, <ns0:ref type='bibr' target='#b25'>Ippolito et al. (2020)</ns0:ref> who find that detection models fail to generalize across different decoding methods used in the generation of texts, and <ns0:ref type='bibr' target='#b2'>Bakhtin et al. (2019)</ns0:ref>, who note that their detection model does not transfer well across different training corpora. A detection method needs to be as universal as possible, working well for detecting generations from different language models, trained across different domains and decoded using different sampling methods.</ns0:p><ns0:p>3. <ns0:ref type='bibr' target='#b21'>Gehrmann et al. (2019)</ns0:ref> developed their detection method with the intention to be easy to explain to non-experts and cheap to set up. This follows the recent controversy around availability and reproducibility of SOTA language models, which to a large degree differ only in their increasing financial and computational development costs, effectively restricting the access to them. The access-restriction can become harmful when defensive detection methods also rely on the access to such language models. Shevlane and Dafoe (2020) mention the difficulty and cost of propagating defensive measures to potentially harmful AI technologies as an important dimension in the assessment of risks associated with them, implying that a solution is desired that can effectively and easily be used by a large number of users. Given the anticipated broad impact of language models on human interaction online and usability of the Internet, detection methods should be universally available and easy to set up and adapt.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>This research is aimed at broadening the range of existing detection methods beyond the predominant reliance on the availability of language models by proposing a feature-based approach. To design meaningful features, a good understanding of the properties and limitations of the language generation process is necessary. The following subsections therefore provide an overview of SOTA language generation methods and their limitations, before discussing existing detection methods, and subsequently introducing the feature-based approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Language Generation</ns0:head><ns0:p>The currently predominating models for language generation are based on the transformer architecture introduced by <ns0:ref type='bibr' target='#b47'>Vaswani et al. (2017)</ns0:ref>. Its big advantage over previous language models is the more structured memory for long-term dependencies. Even though the bidirectional representation of language, learned by models like BERT <ns0:ref type='bibr' target='#b14'>(Devlin et al., 2019)</ns0:ref>, performs better in many downstream benchmark tasks, unidirectional left-to-right models like GPT-2 <ns0:ref type='bibr' target='#b35'>(Radford et al., 2019)</ns0:ref> are often the first choice for generating more coherent text <ns0:ref type='bibr' target='#b41'>(See et al., 2019)</ns0:ref>. They allow to intuitively generate text by using the preceding context to estimate a probability distribution over the model's vocabulary, which then only needs to be decoded by sampling the next token from it.</ns0:p><ns0:p>Apart from the new architecture, recent language models profit mainly from the training on ever bigger datasets. <ns0:ref type='bibr' target='#b35'>Radford et al. (2019)</ns0:ref> trained their model on the WebText dataset, a representation of natural language constructed to be as diverse as possible by spanning many different domains and contexts. The approach to train on as much human-written text as possible is described by <ns0:ref type='bibr' target='#b7'>Bisk et al. (2020)</ns0:ref> as one of the big milestones in NLP, passing from the usage of domain-specific corpora for training to basically using the whole 'written world'.</ns0:p><ns0:p>Together with the size of the datasets used for training, the whole training paradigm shifted from task-specific architectures and inputs to unstructured pre-training of language models. First introduced at word-level by <ns0:ref type='bibr' target='#b31'>Mikolov et al. (2013)</ns0:ref>, <ns0:ref type='bibr' target='#b35'>Radford et al. (2019)</ns0:ref> took this approach to the sentence-level. By processing as many unstructured, unlabelled, multi-domain and even multilingual texts as possible, the idea is that the models not only get a good understanding of language, but also implicitly learn a variety of potential downstream tasks. The feasibility of this approach was recently confirmed by <ns0:ref type='bibr' target='#b9'>Brown et al. (2020)</ns0:ref>, whose GPT-3 exhibits strong performance on different NLP benchmarks, even without any form of task-specific fine-tuning but only through natural language interaction.</ns0:p><ns0:p>In order to effectively leverage the information contained in the ever increasing training datasets into improved language generation ability, the language models have to equally grow in size and complexity.</ns0:p><ns0:p>GPT-3 therefore has 175B parameters, more than 100 times as many as its predecessor. <ns0:ref type='bibr' target='#b41'>See et al. (2019)</ns0:ref> consider current language models to already have enough capacity to effectively replicate the distribution of human language.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Even if a language model perfectly learns the distribution of human language, an equally crucial component in language generation is the choice of the decoding method, i.e. how the next token is sampled from the probability distribution generated by the model. <ns0:ref type='bibr' target='#b41'>See et al. (2019)</ns0:ref> find that flaws in language generation can be traced back to the choice of decoding method, rather than model architecture or insufficient training. The choice of decoding method can be seen as a trade-off between diversity and quality <ns0:ref type='bibr' target='#b45'>(Sun et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b23'>Hashimoto et al., 2019)</ns0:ref>, where sampling from the full distribution leads to diverse, but poor-quality text as perceived by humans, while a likelihood-maximizing sampling method generating only from the most probable tokens leads to high-quality text that lacks diversity and is unnaturally repetitive. <ns0:ref type='bibr' target='#b24'>Holtzman et al. (2019)</ns0:ref> find the problem of sampling from the full distribution in the increased cumulative likelihood of picking an individually highly unlikely token, causing downwardspirals of text quality which are easy to notice for human readers. When trying to avoid this problem by choosing a likelihood-maximization approach for sampling (e.g. top-k, sampling at every step only from the k most likely tokens), they observe repetition feedback loops which the model cannot escape from and outputs that strongly differ from human language by over-relying on high-likelihood words, making it easy for automated detection approaches to pick up on statistical artifacts. <ns0:ref type='bibr' target='#b44'>Solaiman et al. (2019)</ns0:ref> introduce a simple categorization of different detection approaches based on their reliance on a language model. In the following, the existing approaches are categorized accordingly and briefly discussed along the dimensions introduced above.</ns0:p></ns0:div>
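To make the likelihood-maximizing decoding discussed above concrete, the following is a minimal, illustrative sketch of a single top-k sampling step over a next-token probability distribution. The cut-off k=40 and the use of NumPy are assumptions made for this sketch only, not details of the models discussed here.

# Minimal, illustrative top-k sampling step; k and the RNG seed are assumptions of this sketch.
import numpy as np

def top_k_sample(next_token_probs, k=40, rng=np.random.default_rng(0)):
    probs = np.asarray(next_token_probs, dtype=float)
    top = np.argsort(probs)[-k:]                  # indices of the k most likely tokens
    renormalised = probs[top] / probs[top].sum()  # restrict and renormalise the distribution
    return int(rng.choice(top, p=renormalised))   # sample a token id from the truncated distribution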
<ns0:div><ns0:head>Detection Approaches</ns0:head><ns0:p>The first category of detection approaches are simple classifiers, trained from scratch based on text samples labelled as either human-or machine-generated. They tend to have relatively few parameters and to be easily deployable. An example is the logistic regression classifier trained on term frequency -inverse document frequency (tf-idf) features, proposed as a detection baseline by <ns0:ref type='bibr' target='#b11'>Clark et al. (2019)</ns0:ref>. <ns0:ref type='bibr' target='#b1'>Badaskar et al. (2008)</ns0:ref> trained a feature-based Support Vector Machine (SVM) classifier, using high-level features to approximate a text's empirical, syntactic and semantic characteristics, trying to find textual properties that differed between human and machine text and could thus be used for discrimination between the two types. Their experiments were limited to the now outdated trigram language models. The main advantages of simple classifiers are their low access-and set-up costs. Because they do not rely on the access to an extensively pre-trained or fine-tuned language model, they can be handled even on individual commodity computers. However, they are hard to adapt, requiring entirely new training on changing corpora. Because of the sparse literature on them, their performance and transferability are not yet clear, but will be investigated in our experiments.</ns0:p><ns0:p>Zero-shot detection approaches from the second category rely on the availability of a language model to replicate the generation process. An example is the second baseline introduced by <ns0:ref type='bibr' target='#b11'>Clark et al. (2019)</ns0:ref>, which uses the total probability of a text as assessed by a language model for detection. <ns0:ref type='bibr' target='#b21'>Gehrmann et al. (2019)</ns0:ref> elaborate on this approach by calculating histograms over next-token probabilities as estimated by a language model and then training logistic regression classifiers on them. While not requiring fine-tuning, zero-shot detection approaches need a language model to work, the handling of which is computationally restrictive. Their performance lags far behind the simple tf-idf baseline <ns0:ref type='bibr' target='#b11'>(Clark et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Ippolito et al., 2020)</ns0:ref> and their transferability is questionable, given the need for the detection method in this approach to basically 'reverse-engineer' the model-dependent generation process to be successful.</ns0:p><ns0:p>The third category uses pre-trained language models explicitly fine-tuned for the detection task. reliance on a pre-trained and fine-tuned language model, this approach has so far achieved the highest accuracy on the detection task <ns0:ref type='bibr' target='#b44'>(Solaiman et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b50'>Zellers et al., 2019)</ns0:ref>. However, the discussed lack of transferability across model architectures, decoding methods and training corpora has also been observed with fine-tuned models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Feature-Based Text-Classification</ns0:head><ns0:p>The feature-based approach to discriminate between human and machine text is grounded on the assumption that there are certain dimensions in which both types differ. Stylometry -the extraction of stylistic features and their use for text-classification -was introduced by Argamon-Engelson et al. (1998), and has since been successfully employed for tasks as diverse as readability assessment <ns0:ref type='bibr' target='#b18'>(Feng et al., 2010)</ns0:ref>, authorship attribution <ns0:ref type='bibr' target='#b29'>(Koppel et al., 2002)</ns0:ref> and, more recently, the detection of fake news <ns0:ref type='bibr' target='#b34'>(Pérez-Rosas et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Rubin et al., 2016)</ns0:ref>. Even though <ns0:ref type='bibr' target='#b39'>Schuster et al. (2019)</ns0:ref> consider the detection models of <ns0:ref type='bibr' target='#b50'>Zellers et al. (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b2'>Bakhtin et al. (2019)</ns0:ref> examples of well-working, feature-based detectors, their input features are mere vector-space representations of text. <ns0:ref type='bibr' target='#b37'>Rubin et al. (2016)</ns0:ref> hypothesize that high-level features, specifically designed for the classification problem, expand the possibilities of stylometry classifiers and would thus improve their performance. By building on differences between human and machine text, high-level features make the detection transparent and explainable, offering insights into characteristic behaviour of language models <ns0:ref type='bibr' target='#b1'>(Badaskar et al., 2008)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>A feature-based detection approach relies on features that discriminate between human and machine text by modelling properties and dimensions in which both types of text differ. Logical starting points for the creation of such features are therefore the flaws and limitations of language generation methods. In the following subsection, we categorize the known shortcomings and propose features to capture them, before discussing the choice of a detection model architecture.</ns0:p></ns0:div>
<ns0:div><ns0:head>Features</ns0:head><ns0:p>Depending on the choice of the decoding method, the flaws in the generated language differ. However, we establish four different categories to organize them. A comprehensive description and further explanation of the features can be found in the corresponding supplemental information.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lack of Syntactic and Lexical Diversity</ns0:head><ns0:p>Gehrmann et al. ( <ns0:ref type='formula'>2019</ns0:ref>) describe that language models fail to use synonyms and references as humans do, but rather stick to the repetition of the same expressions, leading to a lack of syntactic and lexical diversity in machine text. <ns0:ref type='bibr' target='#b49'>Zellers et al. (2020)</ns0:ref> observe their models confusing the 'who-is-who' in story-telling, and failing to use different references for a text's entities to increase diversity. <ns0:ref type='bibr' target='#b41'>See et al. (2019)</ns0:ref> find that generated texts contain more verbs and pronouns, and fewer nouns, adjectives and proper nouns than human text, indicating a different use of word types. This behaviour can be approximated by the use of named entities (NE) and the properties of the co-reference chains, as introduced by <ns0:ref type='bibr' target='#b18'>Feng et al. (2010)</ns0:ref>. Compared to a human author who de-references and varies expressions, language models can be expected to use a larger share of unique NEs and to produce shorter and fewer coreference chains with a higher share of NEs. Additional features can be based on the shift in the part-of-speech (POS) distribution between human and machine texts <ns0:ref type='bibr' target='#b11'>(Clark et al., 2019)</ns0:ref>.</ns0:p><ns0:p>As NE-based features, we use the relative distribution over NE-tags, their per-sentence counts and a number of simple count-based features. The co-reference features are similar to those of <ns0:ref type='bibr' target='#b18'>Feng et al. (2010)</ns0:ref>, all based on co-reference chains that indicate the different references made to entities throughout a text. As POS-based features, we use the relative distribution of a text's POS-tags, their per-sentence counts as well as a number of features based on the nouns, verbs, adjectives, adverbs and prepositions proposed by <ns0:ref type='bibr' target='#b18'>Feng et al. (2010)</ns0:ref>. We use the NE-recognizer and POS-tagger provided in the Python spaCy 1 package to find the NE-and POS-tags, as well as the neuralcoref 2 extension to detect co-reference clusters.</ns0:p></ns0:div>
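As an illustration of how such diversity features can be derived in practice, the sketch below computes relative POS-tag and NE-tag distributions plus two simple NE counts with spaCy. It is a minimal sketch under our own assumptions (model name, normalisation, feature names) and omits the neuralcoref-based co-reference features; it is not the authors' exact implementation.

# Sketch: POS- and NE-distribution features as illustrative inputs for a feature-based classifier.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

def diversity_features(text):
    doc = nlp(text)
    n_tokens = max(len(doc), 1)
    n_sents = max(len(list(doc.sents)), 1)

    # Relative distribution over coarse POS tags.
    pos_counts = Counter(tok.pos_ for tok in doc)
    pos_dist = {f"pos_{tag}": c / n_tokens for tag, c in pos_counts.items()}

    # Relative distribution over named-entity labels.
    ent_counts = Counter(ent.label_ for ent in doc.ents)
    n_ents = max(sum(ent_counts.values()), 1)
    ne_dist = {f"ne_{lab}": c / n_ents for lab, c in ent_counts.items()}

    # Simple count-based features: NEs per sentence and share of unique NEs.
    unique_ents = {ent.text.lower() for ent in doc.ents}
    counts = {
        "ents_per_sent": len(doc.ents) / n_sents,
        "unique_ent_share": len(unique_ents) / n_ents,
    }
    return {**pos_dist, **ne_dist, **counts}

print(diversity_features("Alice met Bob in Paris. She showed him the Louvre."))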
<ns0:div><ns0:head>Repetitiveness</ns0:head><ns0:p>The problem of over-using frequent words as described by <ns0:ref type='bibr' target='#b24'>Holtzman et al. (2019)</ns0:ref> can lead to a large degree of repetitiveness and a lack of diversity in machine-generated texts. <ns0:ref type='bibr' target='#b25'>Ippolito et al. (2020)</ns0:ref> observe that machine-generated language has 80% of its probability mass in the 500 most common words and <ns0:ref type='bibr' target='#b24'>Holtzman et al. (2019)</ns0:ref> expose the low-variance of the next-token probabilities over a text as assessed by a language model, showing that machine-generated text almost never dips into low-probability zones as human text characteristically does. Another big problem of machine-generated text is its highly parallel sentence structure <ns0:ref type='bibr' target='#b21'>(Gehrmann et al., 2019)</ns0:ref> and the occasional repetition of whole phrases <ns0:ref type='bibr' target='#b26'>(Jiang et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>We try to expose those statistical differences, assumed to be the easiest for automated detection methods to pick up, through the share of stop-words, unique words and words from 'top-lists' in a text's total words. We expect a more diverse, human-written text to have a higher share of unique words and a lower share of stop-words and words from 'top-lists'. We propose to expose the repetitiveness by calculating the n-gram overlap of words (lexical repetition) and POS-tags (syntactic repetition) in consecutive sentences. Human text is expected to be less repetitive both in sentence structure and word choice. We introduce the 'conjunction overlap' as a measure of the n-gram overlap around and-conjunctions to make explicit the reported tendency of language models to plainly repeat words around those conjunctions.</ns0:p><ns0:p>We use the stop-words defined by the spaCy package and take a list with the top 10000 words 3 used in English determined by Google to calculate the share of a text's words that are in the top 100, top 1000 and top 10000 words of that list. The n-gram (n = [1,2,3]) overlap of consecutive sentences is represented on a document level by histograms (from 0 to 1 in 10 uniform bins) over the share of repeated word and POS-tag n-grams in consecutive sentences.</ns0:p></ns0:div>
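A minimal sketch of the sentence-to-sentence word n-gram overlap histogram described above; the tokenisation, punctuation filtering and bin handling are illustrative assumptions rather than the exact implementation.

# Sketch: share of repeated word n-grams in consecutive sentences, summarised as a 10-bin histogram.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_histogram(text, n=2, bins=10):
    sents = [[t.text.lower() for t in s if not t.is_punct] for s in nlp(text).sents]
    overlaps = []
    for prev, curr in zip(sents, sents[1:]):
        prev_ng, curr_ng = ngrams(prev, n), ngrams(curr, n)
        if curr_ng:
            overlaps.append(len(prev_ng & curr_ng) / len(curr_ng))
    hist, _ = np.histogram(overlaps, bins=bins, range=(0.0, 1.0))
    return hist / max(len(overlaps), 1)  # relative frequency per bin, one feature each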
<ns0:div><ns0:head>Lack of Coherence</ns0:head><ns0:p>Even with SOTA language models, the most severe problem of machine-generated text remains the lack of coherence, especially over longer sentences and paragraphs <ns0:ref type='bibr' target='#b24'>(Holtzman et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b9'>Brown et al., 2020)</ns0:ref>. Language model generations are therefore often described as surprisingly fluent on the first read, but lacking any coherent thought and logic on closer inspection <ns0:ref type='bibr' target='#b41'>(See et al., 2019)</ns0:ref>. Closely related is the 'topic-drift', where language models struggle to focus on a single topic but cover different, often unrelated topics in a single text <ns0:ref type='bibr' target='#b1'>(Badaskar et al., 2008)</ns0:ref>. The lack of coherence is especially blatant for generations sampled with likelihood-maximization, which nevertheless remain hardest to detect for automated detectors due to their lack of sampling-artifacts <ns0:ref type='bibr' target='#b25'>(Ippolito et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The coherence of a text might be approximated by the development of its entities, as introduced by <ns0:ref type='bibr' target='#b4'>Barzilay and Lapata (2008)</ns0:ref> and used for classification by <ns0:ref type='bibr' target='#b1'>Badaskar et al. (2008)</ns0:ref>. The entity-grid representation tracks the appearance and grammatical role of entities through the separate sentences of a text. The assumption is that (locally) coherent text exhibits certain regularities, for example the repetitive presence of a text's main entities in important grammatical roles and only sparse occurrences of less important entities in lesser grammatical roles. We use the neuralcoref extension to detect coreference clusters and track the appearance of their entities through the text. As a second layer, we implement an identity-based proxy, considering reappearing, identical noun phrases as the same entity. Using the spaCy dependency parser, we assign the roles Subject (S), Object (O), Other (X) or Not Present (-) to the found entities. Based on the resulting entity grid, we obtain the counts of the 16 possible transitions of entities between consecutive sentences and transform them to relative transition frequencies by normalizing with the total number of transitions. <ns0:ref type='bibr' target='#b1'>Badaskar et al. (2008)</ns0:ref> further propose the use of Yule's Q statistic as described in <ns0:ref type='bibr' target='#b15'>Eneva et al. (2001)</ns0:ref> to approximate a text's intra-sentence coherence. Based on the available corpora of human-and machinegenerated texts, the assumption is that co-appearances of content-words differ between both types. By requiring a minimal distance of five between the content-words forming a co-appearance pair, the focus is shifted to the model's ability to produce coherent output over a medium-range context length. To discriminate between human and machine text, the texts available in the training corpora are used to calculate a correlation measure for the co-occurrence of content-words in texts from the two different sources. We define content-words as the top 5000 words from the Google top 10000 list, excluding spaCy stop-words and sub-word snippets. Given these correlation scores, separate human-and machine-scores can be calculated for every text, indicating the agreement of that text's content-word co-appearances with the different corpora. 
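The following sketch illustrates the identity-based variant of the entity grid: reappearing noun phrases are treated as the same entity, assigned the roles S/O/X/- from dependency labels, and the relative frequencies of the 16 role transitions between consecutive sentences are returned. The dependency labels used for the role mapping are our own simplifying assumptions; the neuralcoref-based layer and the Yule's Q computation are omitted here.

# Sketch: identity-based entity grid and relative frequencies of the 16 role transitions.
from collections import Counter
from itertools import product
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_grid_transitions(text):
    doc = nlp(text)
    sents = list(doc.sents)
    grid = {}  # entity string -> one role per sentence
    for i, sent in enumerate(sents):
        for chunk in sent.noun_chunks:
            ent = chunk.root.lemma_.lower()
            role = ("S" if chunk.root.dep_ in ("nsubj", "nsubjpass")
                    else "O" if chunk.root.dep_ in ("dobj", "obj", "pobj")
                    else "X")
            roles = grid.setdefault(ent, ["-"] * len(sents))
            roles[i] = role
    transitions = Counter()
    for roles in grid.values():
        for prev, curr in zip(roles, roles[1:]):
            transitions[(prev, curr)] += 1
    total = max(sum(transitions.values()), 1)
    return {f"{a}->{b}": transitions[(a, b)] / total
            for a, b in product("SOX-", repeat=2)}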
The Q statistic is the only corpus-based feature, not exclusively reliant on the text itself. <ns0:ref type='bibr' target='#b1'>Badaskar et al. (2008)</ns0:ref> also use the topic redundancy, approximated by the information loss between a text and its truncated form, as a measure of coherence. The assumption is that human-generated text is more redundant, since it coherently treats a single or few topics without drifting from topic to topic. The text is transformed to a sentence-based vocabulary-matrix representation which can in turn be brought to its eigenspace using a Singular Value Decomposition. By replacing the lowest entries of the eigenvalue diagonal-matrix with 0, the reconstructed matrix is a truncated version of the original. By always setting the lowest 25% of entries to 0, we dynamically adapt to differing text-lengths. Given the original and truncated matrix representation, the information loss is calculated as the squared norm of the element-wise difference between the two matrices. We additionally calculate and include the mean, median, min and max of the truncated matrix and the element-wise difference between the full and truncated matrix.</ns0:p></ns0:div>
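<ns0:p>A hedged sketch of these redundancy features is given below, assuming a simple sentence-by-vocabulary count matrix and numpy's SVD; the exact matrix construction in our implementation may differ.</ns0:p>

```python
# Hedged sketch of the truncated-SVD redundancy features described above;
# the exact matrix construction used in the paper may differ.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def redundancy_features(sentences):
    # sentence-by-vocabulary count matrix
    m = CountVectorizer().fit_transform(sentences).toarray().astype(float)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    s_trunc = s.copy()
    k = int(np.ceil(len(s) * 0.25))  # zero out the lowest 25% of singular values
    if k:
        s_trunc[np.argsort(s)[:k]] = 0.0
    m_trunc = (u * s_trunc) @ vt     # reconstruction from the truncated eigenspace
    diff = m - m_trunc
    info_loss = float(np.linalg.norm(diff) ** 2)  # squared norm of element-wise difference
    return {
        "info_loss": info_loss,
        "trunc_mean": float(m_trunc.mean()), "trunc_median": float(np.median(m_trunc)),
        "trunc_min": float(m_trunc.min()), "trunc_max": float(m_trunc.max()),
        "diff_mean": float(diff.mean()), "diff_median": float(np.median(diff)),
        "diff_min": float(diff.min()), "diff_max": float(diff.max()),
    }
```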
<ns0:div><ns0:head>Lack of Purpose</ns0:head><ns0:p>A final, more qualitative limitation of machine-generated text is its lack of purpose and functionality.</ns0:p><ns0:p>While for human text function is generally considered as the 'source of meaning' <ns0:ref type='bibr' target='#b7'>(Bisk et al., 2020)</ns0:ref>, language models naturally do not have human-like needs or desires <ns0:ref type='bibr' target='#b21'>(Gehrmann et al., 2019)</ns0:ref> and their generations must therefore be considered as void of meaning and purpose.</ns0:p><ns0:p>We approximate the purpose of a text by calculating its lexicon-based topicality scores. We expect human text to contain more sentiment-related keywords and thus score higher in these categories, while being more focussed on fewer categories overall, expressing a single message rather than generating purposelessly drifting text. We also take the share of a text's non-generic content words as a measure of its originality, assuming that human text trying to convey a real message has a higher share.</ns0:p><ns0:p>Based on the 194 categories available by default from the Python empath 4 lexicon-package <ns0:ref type='bibr' target='#b16'>(Fast et al., 2016)</ns0:ref> and 5 tailored categories (representing spatial properties, sentiment, opinion, logic and ethic), we calculate the mean, median, min, max and variance of a text's scores over all categories as features.</ns0:p><ns0:p>The same statistics are extracted based only on the 'active' categories (empath scores > 0). Additionally, the scores of the text in the tailored categories are used as features.</ns0:p></ns0:div>
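<ns0:p>The sketch below illustrates how such lexicon-based scores can be turned into features with the empath package; the five tailored categories listed here are placeholders for illustration, not the exact categories we constructed.</ns0:p>

```python
# Illustrative sketch of the empath-based topicality features; the five
# tailored categories below are stand-ins, not the ones from the paper.
import numpy as np
from empath import Empath

lexicon = Empath()
TAILORED = ["space", "positive_emotion", "negative_emotion", "politeness", "law"]  # assumed stand-ins

def empath_features(text):
    scores = lexicon.analyze(text, normalize=True) or {}
    values = np.array(list(scores.values()), dtype=float)
    if values.size == 0:
        return {}
    active = values[values > 0]  # 'active' categories with empath score > 0
    feats = {
        "mean": values.mean(), "median": np.median(values), "min": values.min(),
        "max": values.max(), "var": values.var(),
    }
    if active.size:
        feats.update({"active_mean": active.mean(), "active_median": np.median(active),
                      "active_min": active.min(), "active_max": active.max(),
                      "active_var": active.var()})
    feats.update({f"cat_{c}": scores.get(c, 0.0) for c in TAILORED})
    return feats
```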
<ns0:div><ns0:head>Other Features</ns0:head><ns0:p>The last set consists of more general, potentially helpful features. The basic features are simple character-, syllable-, word- and sentence-counts, both in absolute and relative terms. The readability features reflect the syntactic complexity, cohesion and sophistication of a text's vocabulary <ns0:ref type='bibr' target='#b13'>(Crossley et al., 2011)</ns0:ref>. To test the models' ability to structure and format their generations, we calculate the distribution over punctuation marks and their per-sentence counts, as well as the number and average length of paragraphs, features shown to be successful in detecting fake news <ns0:ref type='bibr' target='#b37'>(Rubin et al., 2016)</ns0:ref>.</ns0:p></ns0:div>
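<ns0:p>A minimal sketch of such count-based features is shown below; the thresholds, helper names and the simple sentence splitter are illustrative, and readability scores (for example from the textstat package) could be added analogously.</ns0:p>

```python
# Simple sketch of the basic count and formatting features; names and the
# naive sentence splitter are illustrative only.
import re
from collections import Counter

PUNCT = list(".,;:!?\"'()-")

def basic_features(text):
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+\s", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_chars, n_words, n_sents = len(text), len(words), max(len(sentences), 1)
    punct_counts = Counter(c for c in text if c in PUNCT)
    feats = {
        "n_chars": n_chars, "n_words": n_words, "n_sents": n_sents,
        "n_paragraphs": len(paragraphs),
        "avg_par_len": sum(len(p) for p in paragraphs) / max(len(paragraphs), 1),
        "words_per_sent": n_words / n_sents,
        "chars_per_word": n_chars / max(n_words, 1),
    }
    feats.update({f"punct_{c}_per_sent": punct_counts[c] / n_sents for c in PUNCT})
    return feats
```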
<ns0:div><ns0:head>Classifier</ns0:head><ns0:p>The feature-based detection method proposed in this paper can be considered as a special, binary case of the general automated text categorization problem. We thus follow <ns0:ref type='bibr' target='#b48'>Yang and Liu (1999)</ns0:ref> in the definition of the task as the supervised learning of assigning predefined category labels to texts, based on the likelihood suggested by the training on a set of labelled texts. Given a text and no additional exogenous knowledge, the trained model returns a value between 0 and 1, indicating the evidence that the document belongs to one class or the other. A hard classifier takes this evidence, compares it to a pre-defined threshold and makes the classification decision <ns0:ref type='bibr' target='#b40'>(Sebastiani, 2002)</ns0:ref>. From the range of available classification models, we consider Logistic Regression (LR), Support Vector Machines (SVM), Neural Networks (NN) and Random Forests (RF), which have often been reported to show similar performances on the text categorization task <ns0:ref type='bibr' target='#b51'>(Zhang and Oles, 2001;</ns0:ref><ns0:ref type='bibr' target='#b27'>Joachims, 1998)</ns0:ref>. We use the implementations of the different models available from the scikit-learn 5 package for our validation trials. We focus our following experiments on the evaluation of Neural Networks for the proposed detection problem, based on their superior performance in our validation trials (Table <ns0:ref type='table'>1</ns0:ref>).</ns0:p></ns0:div>
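<ns0:p>The following sketch shows a scikit-learn set-up of this kind, with a Neural Network (MLP) classifier selected by grid search; the parameter grid shown here is illustrative and is not the grid documented in our supplemental material.</ns0:p>

```python
# Sketch of the model-selection set-up described above, using scikit-learn;
# the parameter grid is illustrative only.
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_detector(X, y):
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=0)
    pipeline = make_pipeline(StandardScaler(), MLPClassifier(max_iter=250, early_stopping=True))
    grid = {
        "mlpclassifier__hidden_layer_sizes": [(64,), (128, 64)],
        "mlpclassifier__alpha": [1e-4, 1e-3],
    }
    search = GridSearchCV(pipeline, grid, scoring="roc_auc", cv=3, n_jobs=-1)
    search.fit(X_train, y_train)
    # evidence returned by predict_proba can later be thresholded by a hard classifier
    print("validation accuracy:", search.best_estimator_.score(X_val, y_val))
    return search.best_estimator_
```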
<ns0:div><ns0:head>EXPERIMENTS</ns0:head><ns0:p>We evaluate our feature-based classifier in a variety of settings, testing it across different generation model architectures, training datasets and decoding methods, thereby covering all main potential influences of a detector's performance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>In our experiments, we use publicly available samples of language model generations and try to detect them among the model's training data, which was either scraped from the Internet or more randomly curated from existing corpora, but in any case of human origin. The biggest part of our data comes from the different GPT-2 model versions, published by <ns0:ref type='bibr' target='#b11'>Clark et al. (2019)</ns0:ref>. We use generations from the smallest (117M parameters; s) and largest GPT-2 model (1542M parameters; xl), sampled both from the full and truncated (top-k=40) distribution, to test the transferability of our detectors across model sizes and sampling methods. To evaluate the transferability across model architectures, we include generations from the biggest Grover model <ns0:ref type='bibr' target='#b50'>(Zellers et al., 2019)</ns0:ref> and from Open-AI's most recent GPT-3 model <ns0:ref type='bibr' target='#b9'>(Brown et al., 2020)</ns0:ref>.</ns0:p><ns0:p>We noticed that a significant share of the randomly scraped and unconditionally generated texts turned out to be website menus, error messages, source code or weirdly formatted gibberish. Since we consider the detection of such low-quality generations as neither interesting nor relevant for the limited impact of their potential abuse, we repeat our experiments on a version of the data that was filtered for 'detection relevance'. We take inspiration from <ns0:ref type='bibr' target='#b36'>Raffel et al. (2019)</ns0:ref> in the construction of our filters, filtering out samples that show excessive use of punctuation marks, numbers and line-breaks, contain the words cookie, javascript or curly brackets, or are not considered as being written in English with more than 99% probability as assessed by the Python langdetect 6 package. Like <ns0:ref type='bibr' target='#b25'>Ippolito et al. (2020)</ns0:ref>, we only consider texts that have at least 192 WordPiece <ns0:ref type='bibr' target='#b38'>(Schuster and Nakajima, 2012)</ns0:ref> tokens. The sizes of the resulting datasets are documented in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>. We compare the results of our detectors trained and evaluated on the unfiltered dataset to their counterparts trained and evaluated on the filtered dataset. We expect the filtering to decrease the share of texts without meaningful features, thus hypothesizing that our classifiers perform better on the filtered datasets.</ns0:p></ns0:div>
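<ns0:p>A hedged sketch of such a relevance filter is shown below; the concrete thresholds are illustrative, and the WordPiece length check assumes a tokenizer object (for example a BERT tokenizer) passed in by the caller.</ns0:p>

```python
# Hedged sketch of the 'detection relevance' filter described above; the
# exact thresholds used in the paper are not reproduced here.
import re
from langdetect import detect_langs

def is_detection_relevant(text, tokenizer=None):
    if any(w in text.lower() for w in ("cookie", "javascript", "{", "}")):
        return False
    if text.count("\n") > 0.05 * max(len(text.split()), 1):          # excessive line-breaks
        return False
    if sum(c.isdigit() for c in text) > 0.1 * max(len(text), 1):      # excessive numbers
        return False
    if len(re.findall(r"[^\w\s]", text)) > 0.1 * max(len(text), 1):   # excessive punctuation
        return False
    try:
        langs = detect_langs(text)
    except Exception:
        return False
    if not langs or langs[0].lang != "en" or langs[0].prob <= 0.99:   # English with >99% probability
        return False
    if tokenizer is not None and len(tokenizer.tokenize(text)) < 192:  # WordPiece length check
        return False
    return True
```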
<ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>To evaluate the performance of our detection model, we report its accuracy as the share of samples that are classified correctly, as well as the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, resulting from the construction of different classification thresholds. While the accuracy is often the sole metric reported in the literature, we argue that it should not be the only metric in assessing a detector's quality. Its inability to include a notion of utility of the different types of errors <ns0:ref type='bibr' target='#b40'>(Sebastiani, 2002)</ns0:ref> is a major drawback, given the potential severity of false positives as discussed above. This is in line with related detection problems, e.g. bot detection in social media, where a deliberate focus is on the detector's precision to avoid the misclassification of human users as machines <ns0:ref type='bibr' target='#b32'>(Morstatter et al., 2016)</ns0:ref>. Another problem is the sensitivity of accuracy to class skew in the data, influencing the evaluation of detectors <ns0:ref type='bibr' target='#b17'>(Fawcett, 2006)</ns0:ref> and in extreme cases leading to the trivial classifier <ns0:ref type='bibr' target='#b40'>(Sebastiani, 2002)</ns0:ref> that effectively denies the existence of the minority class and thus fails to tackle the problem. We therefore decided to report the accuracy, allowing for comparison with existing detection approaches, but also provide the AUC of the ROC as a more comprehensive evaluation metric, effectively separating the evaluation of the classifier from skewed data and different error costs <ns0:ref type='bibr' target='#b17'>(Fawcett, 2006)</ns0:ref> by combining the notions of specificity (share of correctly detected human texts) and sensitivity (share of correctly detected machine texts).</ns0:p><ns0:p>All reported results are calculated on a held-out test set, using the classifier found to be optimal by a grid search over a range of different parameter constellations and evaluated on validation data. Each of the individual classifiers has thus been optimized across the parameter constellations defined by a parameter grid, using the respective classifier's default optimization method as provided in the scikit-learn package used for training. The classifiers were trained for a maximum of 250 iterations or until convergence on validation data was observed. Please refer to the corresponding supplemental information for the underlying parameter grid and the resulting optimal parameter constellations.</ns0:p></ns0:div>
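<ns0:p>Both metrics can be computed directly from the classifier's probability estimates, for example as in the short sketch below.</ns0:p>

```python
# Evaluation sketch: accuracy plus AUC of the ROC, computed with scikit-learn.
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate(clf, X_test, y_test):
    proba = clf.predict_proba(X_test)[:, 1]   # evidence for the 'machine' class
    return {
        "accuracy": accuracy_score(y_test, proba >= 0.5),  # hard classification at threshold 0.5
        "auc": roc_auc_score(y_test, proba),
    }
```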
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The following results are organized along the different data constellations we trained and evaluated our classifiers on.</ns0:p></ns0:div>
<ns0:div><ns0:head>Single-Dataset Classifiers</ns0:head><ns0:p>In the main part of our experiments, we evaluate detectors trained on samples from a single generation model. We evaluate the resulting detectors not only on the language model they were specifically trained on, but also try their transferability in detecting generations from other models. The feature-based classifier performs better for generations from likelihood-maximizing decoding strategies (Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>; s-k and xl-k vs. s and xl), as do all the approaches tested in the literature so far. Similarly, the detection of machine-generated texts becomes more difficult with increasing model complexity (Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>; xl and xl-k vs. s and s-k), indicating that bigger models presumably better replicate human texts statistically. This follows from the baseline results of <ns0:ref type='bibr' target='#b11'>Clark et al. (2019)</ns0:ref> and is also implied by the decreasing performance of our feature-based approach. The performance of the detector learned and evaluated on the GPT-3 model is surprisingly good, being even higher than for the GPT-2 xl generations. Given that GPT-3 has more than 100 times as many parameters, we would have expected GPT-3 generations to be more difficult to detect. However, this might partly be due to the decoding choice, with the top-p=0.85 sampling used for the GPT-3 generations marking a trade-off between the easier to detect top-k sampling and the harder to detect sampling from the full distribution. Similar reasoning applies to the detection of Grover generations (top-p=0.94 sampling), which our classifier struggles with most. Another reason might be that the detection of fine-tuned generation models, as is the case with the pre-conditioned article-like Grover generations, is generally more difficult <ns0:ref type='bibr' target='#b11'>(Clark et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> shows acceptable transferability of our classifiers between models with the same architecture and sampling method, but different complexity. It is easier for a detector trained on samples from a bigger generator (xl and xl-k) to detect samples from a smaller generator (s and s-k) than vice versa. There is no transferability between the different sampling methods, confirming the observations by Holtzman et al. (2019) that different sampling methods produce different artifacts, making it impossible for a feature-based detector to generalize between them. To rule out the possibility that the lack of transferability is caused by the corpus-based Q features, we repeat the experiments for detectors trained on all but the Q features (Appendix 1 Table <ns0:ref type='table' target='#tab_7'>A1</ns0:ref>). The transferability across sampling methods remains abysmal, indicating that the feature-based approach is indeed unable to pick out common flaws produced by different sampling methods.</ns0:p><ns0:p>We finally test the performance of our classifiers when trained and evaluated on the texts from the filtered datasets which are potentially more characteristic and richer in features. As expected, our classifiers perform better, gaining between 1 and 3 percentage-points accuracy across the GPT-2 generations (Appendix 1 Table <ns0:ref type='table' target='#tab_8'>A2</ns0:ref>). However, this does not hold for GPT-3 and Grover, again hinting at better-curated data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Feature-Set Classifiers</ns0:head><ns0:p>To get an idea of which features are truly important for the performance of the feature-based classifiers, we train and evaluate detectors on the individual subsets of features.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>. Feature-Set Classifiers. Highlighted in bold are the feature-dataset combinations where a feature-set is far better for either the untruncated or top-k sampling for both GPT-2 dataset sizes. The underscored values correspond to the feature-set and dataset combinations the highlighted values are compared against. The features are sorted in decreasing order of their average accuracy across all datasets.</ns0:p><ns0:p>From the results in Table <ns0:ref type='table'>4</ns0:ref> it is apparent that the most important feature subsets in terms of their individual performance are the syntactic, lexical diversity and basic features. While the subsets generally have similar performance for the different sampling methods, we observe that the NE and coreference features are consistently stronger for the untruncated sampling method, and the lexical diversity and Q features for the top-k sampling. This is in line with the assumption that untruncated sampling is easier to detect based on more qualitative text characteristics such as coherence and consistency, while generations from top-k sampling methods can more easily be detected based on statistical properties.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multi-Dataset Classifiers</ns0:head><ns0:p>Simulating a more realistic detection landscape in which different types of language models are used for the generation of texts, we construct datasets that combine generations from different language models. The combined datasets are composed to optimally balance the contributions of the individual data sources. Their exact composition is documented in Appendix 1 Table <ns0:ref type='table' target='#tab_9'>A3</ns0:ref>. Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> shows that classifiers trained on combined datasets from the same sampling method (GPT2-un and GPT2-k) lead to good results on the respective individual datasets (s,xl and s-k,xl-k) without outperforming the optimized single-dataset classifiers (Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>). Their transferability is similar to that of the single-dataset classifier trained on the respective, more difficult dataset <ns0:ref type='bibr'>(xl,xl-k)</ns0:ref>. When training a classifier on all GPT-2 generations (GPT2), it shows relatively good performance across all individual GPT-2 datasets, but breaks down on the xl-k data. This might hint at the possibility that the detector learns sub-detectors for every single data source, rather than obtaining a universal understanding of the difference between human text and GPT-2 generations.</ns0:p><ns0:p>Finally, we train and evaluate a classifier on the combination of all the different data sources, including generations from GPT-3 and Grover (All). The resulting detector, especially when trained on the subset of features that excludes the corpus-based Q features (Appendix 1 Table <ns0:ref type='table'>A4</ns0:ref>), is surprisingly robust and shows decent performance across all generation models. Its strong performance for the GPT-3 and Grover generations, which are under-represented in the multi-dataset classifiers' training data, might be due to the overall increase in training data when compared to the single-dataset classifiers. In total, the multi-dataset classifier is trained on considerably more, and more diverse, training samples than the respective single-dataset classifiers for GPT-3 and Grover.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ensemble Classifiers</ns0:head><ns0:p>After observing that our feature-based classifier is more accurate than the tf-idf baseline in detecting texts from untruncated sampling (s and xl, Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>), while it is the other way around for texts generated with top-k=40 sampling (s-k and xl-k, Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>), we construct ensemble classifiers to take advantage of the differing performances. In the separate (sep.) ensemble model variant, we take the individually optimized feature-based and tf-idf-baseline models' probability estimates for a text to be machine-generated as input to a meta-learner, which in turn produces the final label estimate. In the super ensemble model, we use the probability estimates of all the different, optimized feature-set classifiers, as well as the estimate from the tf-idf-baseline model, as input to a meta-learner. For each of the different ensembles, we train a Logistic Regression and a Neural Network classifier, following the previously introduced grid-search approach in order to approximate the optimal parameter constellation.</ns0:p><ns0:p>The ensemble models, and especially the NN sep. variant built on top of the optimized tf-idf-baseline and feature-based model, outperform the individual classifiers and even improve on their best accuracy by at least 1 percentage-point on each dataset (Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>). This holds, even though the combination of using neural networks and the high-dimensional tf-idf baseline necessarily implies strong overfitting to the relatively small input dimensionality, a fact which we observe in the classifiers' near-perfect performances on the training data itself. However, since we explicitly optimized our models on independent validation datasets and not on the training data, we confidently ignore that issue.</ns0:p></ns0:div>
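<ns0:p>A minimal sketch of the sep. variant is shown below; names and settings are illustrative. In practice the base models' probability estimates should be produced on data that was not used to fit them, to limit leakage into the meta-learner.</ns0:p>

```python
# Minimal sketch of the 'sep.' ensemble: the two base models' probability
# estimates are fed to a meta-learner; names and settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def fit_sep_ensemble(feature_model, tfidf_model, X_feat, X_tfidf, y, meta="nn"):
    # probability estimates of the two individually optimized base classifiers
    p1 = feature_model.predict_proba(X_feat)[:, 1]
    p2 = tfidf_model.predict_proba(X_tfidf)[:, 1]
    meta_X = np.column_stack([p1, p2])
    meta_clf = MLPClassifier(max_iter=250) if meta == "nn" else LogisticRegression()
    meta_clf.fit(meta_X, y)   # meta-learner produces the final label estimate
    return meta_clf
```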
<ns0:div><ns0:head>Comparison to Results in the Literature</ns0:head><ns0:p>Comparing the performance of our feature-based detector to results reported in the literature, we see that the RoBERTa models fine-tuned for the detection task by <ns0:ref type='bibr' target='#b44'>Solaiman et al. (2019)</ns0:ref> show unmatched accuracies across all model sizes and sampling methods. The accuracies of 96.6% on the xl and 99.1% on the xl-k dataset are impressive, with our best ensemble model lagging 18 percentage points behind in accuracy on the generations from the full distribution (xl; Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>). However, <ns0:ref type='bibr' target='#b44'>Solaiman et al. (2019)</ns0:ref> evaluated their detector only on samples with a fixed length of 510 tokens, potentially giving its accuracy a boost compared to the many shorter, and thus harder to detect, samples in our test data. The results are therefore not directly comparable. <ns0:ref type='bibr' target='#b25'>Ippolito et al. (2020)</ns0:ref> report detection results for a fine-tuned BERT classifier on generations from the GPT-2 large model (774M parameters) with a sequence length of 192 tokens. They report an accuracy of 79.0% for generations from the full distribution and 88.0% for top-k=40 samples. The use of 1-token priming for generation makes their results not directly comparable to ours. However, as stated by the authors, the priming should only negatively affect the accuracy on the top-k generations. Our strongest ensemble model achieves an accuracy of 78.2% on samples from the untruncated GPT-2 xl model, a generation model twice the size of that used in <ns0:ref type='bibr' target='#b25'>Ippolito et al. (2020)</ns0:ref> and therefore presumably more difficult to detect. Given the unclear effect of restricting the text length to 192 tokens, compared to our data which includes both longer and shorter texts, we consider our feature-based ensemble classifier to be at least competitive with the reported BERT results. Our best ensemble classifier struggles most with the detection of Grover. While only the fine-tuned Grover model of <ns0:ref type='bibr' target='#b50'>Zellers et al. (2019)</ns0:ref> scores a strong accuracy of 92.0% on the Grover-Mega data, the fine-tuned BERT and GPT-2 detectors perform similarly to our classifier, with reported accuracies of 73.1% and 70.1%, respectively. This suggests that the inability of these detectors might be due less to the detection approach than to the highly-curated Grover training data, differing strongly from the more diverse Internet text used to train the non-Grover classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION AND FUTURE WORK</ns0:head><ns0:p>Our research into the possibility of using feature-based classifiers for the detection of SOTA language models offers not only an understanding of the method's general performance, but also delivers many insights into more general language model detection issues. We observed low transferability between the detectors of different sampling methods, as well as differing performance of the individual feature sets, indicating that the sampling method choice indeed influences the type of flaws a language model produces in its generations. Our experiments with multi-dataset classifiers indicate that it might be impossible to account for these differences in one single classifier, and that a solution might instead be the construction of sub-classifiers for every single dataset and the combination of their outputs using an ensemble approach.</ns0:p><ns0:p>We have also shown that our more quality-focussed features work better than the more statistical tf-idf baseline for the detection of texts generated from the full distribution, and that ensemble detectors which combine these simple approaches can be competitive with more computationally expensive, language-model-based detectors. Given the transferability observed between different generation model sizes with the same sampling method, we are hopeful that our feature-based approach might work as a 'first line-of-defense' against potential releases of ever bigger language models of the same architecture, as was the trend with the last GPT models, without the immediate need to extensively retrain the detector. Given that the dataset used for training has been explicitly crafted to be as diverse as possible and therein covers a wide range of places of discourse on the Internet, we feel confident that our trained classifiers might already in their current form help assessing the origin of text online.</ns0:p><ns0:p>However, an important strain of future research would be to systematically evaluate the classifiers' performances in more realistic settings like forum discussions, blog posts or wider social media discourse. Since the training data was constructed to mirror wide parts of the Internet, almost necessarily the potential issue of underrepresentation of minorities arises. The question whether our classifiers show consistent performance across different sources of human text is an important ethical question that requires careful investigation before deployment, with the aim to ensure that no minorities or non-native speaking groups are discriminated against by a classifier that struggles to detect their speech as human as reliably as it does for other groups. Future work into feature-based detection methods might also include the more detailed evaluation of the contribution of individual features to the overall performance of the classifier, with a possible focus on the search for features that increase transferability between the different sampling methods. Similarly, based on the hypotheses formulated during feature development regarding the role of the different features in distinguishing between human and machine text, a deeper investigation of these linguistic differences could inform the future development and improvement of machine generated language.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Solaiman et al. (2019) and Zellers et al. (2019) add a classifier-layer on top of the language model and Bakhtin et al. (2019) train a separate, energy-based language model for detection. While being by far the most expensive method in terms of training time and model complexity, and the least accessible for its</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>1 https://spacy.io/ 2 https://github.com/huggingface/neuralcoref5/17PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54411:1:3:NEW 18 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>3 https://github.com/first20hours/google-10000-english6/17PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54411:1:3:NEW 18 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>6 https://github.com/Mimino666/langdetect 8/17 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54411:1:3:NEW 18 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Dataset Sizes. The human text datasets used in our experiments were taken from the samples of human text published with the respective language models and resized to match the size of the machine text datasets.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Single-Dataset Classifiers. Accuracy scores of the classifiers evaluated on generations from the different language models. Along the diagonal (bold), training and test data belong to the same language model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Test Data</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Multi-Dataset Classifiers. </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Test Data</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>AUC Acc. AUC Acc. AUC Acc. AUC Acc. AUC Acc. AUC Baselines featurebaseline 0.897 0.964 0.759 0.836 0.927 0.975 0.858 0.932 0.779 0.859 0.692 0.767 Ensemble-Classifiers. The size of the tf-idf vectors in the tf-idf baseline is n = 100k.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>11/17PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54411:1:3:NEW 18 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table A1 .</ns0:head><ns0:label>A1</ns0:label><ns0:figDesc>Single-Dataset Classifiers, no Q. AUC Acc. AUC Acc. AUC Acc. AUC Acc. AUC Acc. AUC s 0.930 0.982 0.769 0.884 0.473 0.307 0.459 0.273 0.320 0.2139 0.431 0.43 xl 0.849 0.971 0.802 0.883 0.446 0.329 0.426 0.303 0.387 0.328 0.494 0.477 s-k 0.321 0.172 0.443 0.292 0.947 0.985 0.801 0.939 0.609 0.812 0.505 0.667 xl-k 0.216 0.099 0.360 0.242 0.910 0.974 0.861 0.933 0.637 0.660 0.514 0.721 GPT3 0.417 0.131 0.432 0.254 0.806 0.884 0.734 0.820 0.754 0.834 0.614 0.668 Grover 0.334 0.286 0.423 0.395 0.764 0.842 0.711 0.762 0.731 0.747 0.676 0.769</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Test Data</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Training</ns0:cell><ns0:cell>s</ns0:cell><ns0:cell>xl</ns0:cell><ns0:cell>s-k</ns0:cell><ns0:cell>xl-k</ns0:cell><ns0:cell>GPT3</ns0:cell><ns0:cell>Grover</ns0:cell></ns0:row><ns0:row><ns0:cell>Data</ns0:cell><ns0:cell>Acc.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table A2 .</ns0:head><ns0:label>A2</ns0:label><ns0:figDesc>Single-Dataset Classifiers, Filtered. </ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Machine</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Human</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>s</ns0:cell><ns0:cell>xl</ns0:cell><ns0:cell>s-k</ns0:cell><ns0:cell>xl-k</ns0:cell><ns0:cell cols='3'>GPT3 Grover webtext GPT3-</ns0:cell><ns0:cell>realNews</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>webtext</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Train Datasets</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>GPT2-un 125000 125000 -</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>250000 -</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>GPT2-k</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell cols='3'>125000 125000 -</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>250000 -</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>GPT2</ns0:cell><ns0:cell cols='5'>62500 62500 62500 62500 -</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>250000 -</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>All</ns0:cell><ns0:cell cols='5'>60099 60099 60099 60099 1604</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>236396 1604</ns0:cell><ns0:cell>8000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Valid and Test Datasets</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>GTP2-un 2500</ns0:cell><ns0:cell cols='2'>2500 -</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>GPT2-k</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2500</ns0:cell><ns0:cell cols='2'>2500 -</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>GPT2</ns0:cell><ns0:cell>1250</ns0:cell><ns0:cell>1250</ns0:cell><ns0:cell>1250</ns0:cell><ns0:cell>1250</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>All</ns0:cell><ns0:cell>950</ns0:cell><ns0:cell>950</ns0:cell><ns0:cell>950</ns0:cell><ns0:cell>949</ns0:cell><ns0:cell>201</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>201</ns0:cell><ns0:cell>1500</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table A3 .</ns0:head><ns0:label>A3</ns0:label><ns0:figDesc>Multi-Dataset Compositions. </ns0:figDesc><ns0:table /><ns0:note>16/17PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54411:1:3:NEW 18 Feb 2021)</ns0:note></ns0:figure>
<ns0:note place='foot' n='4'>https://github.com/Ejhfast/empath-client 5 https://scikit-learn.org/</ns0:note>
</ns0:body>
" | "Dear Editors
We thank the reviewers for their generous comments and instructive feedback on our
work and hope to have edited the manuscript as to address their concerns
appropriately. In what follows we respond to each of the comments and describe the
changes made to address them.
We hope that you will find the manuscript now suitable for publication in PeerJ
Computer Science.
Leon Fröhling and Arkaitz Zubiaga
__________________________________________________________
Reviewer 1 (DW Heck)
Basic reporting:
The manuscript is very well written and clear. Even without a background in NLP, it is possible to
understand the methods, simulations, and results. The classifier itself is based on plausible
assumptions, allows an intuitive interpretation of (most) features, and performs comparably well as more
computationally-intensive competitors. Overall, I think the present manuscript does not have any major
flaws. There are still several points that should be improved in a revision:
Detection problem (p.2):
A forth goal that is desirable is the applicability of the classifier to text generated by people who may not
be native speakers or members of minorities. This is question of the ethics of using AI methods within a
social context. It is sufficient to shortly mention this issue somewhere in the main text (maybe in the
discussion or after l.112)
Upon your very true suggestion, we added a comment on the potential problem of a classification model
that is not able to recognise minority or non-native speech as human with sufficient reliability, and
pointed towards the fact that the consistency of our classifiers across different types of human texts
remains to be investigated.
Abbreviations:
The authors should check whether all abbreviations are defined when first mentioned (e.g., SVM,
GPT, BERT, tf-idf, POS-tag, ...).
We made sure to properly introduce the more technical abbreviations (SVM, tf-idf, POS, …), but
refrained from doing so for the names of the most prominent language generation models such as GPT,
BERT and Grover. Even though they might be acronyms of longer, more descriptive titles, they are more
importantly the names these models are generally known by.
Appendix:
The headings of the feature sets could mention to which subsection in the main text they belong (e.g.,
coherence, repetitiveness, ...)
We added the category of flaws that the feature sets might represent to the feature sets’ headings in the
corresponding appendix tables.
Tables:
The Labels for rows and columns should clearly distinguish between 'Test Data' and 'Training Data' as
opposed to 'Classifier' - since the same type of classifier (neural network) is used for all cases (e.g.,
Table 3). Moreover, columns and rows should not be switched between tables (e.g., Table 9 & 10).
We made sure to maintain consistency between the different tables in indicating the data used for
training as row headers and the data used for testing as column headers. We furthermore changed the
‘Classifier’ header to a more precise ‘Test Data’ header.
Please note that the changes to the tables are not highlighted in the new manuscript version with the
tracked changes due to issues of latexdiff with properly processing these tables. We hope that this is no
major problem, especially since these changes were pure layout changes.
Discussion:
It could be highlighted that the present work has another benefit: it shows directions for future
improvements for machine-generated language.
We now mention this as a further benefit (and research prospect) in the concluding discussion.
### Minor Issues:
l. 34: Instead of 'bad actors', it would be more appropriate to refer to 'actors with
questionable/immoral/unethical intentions' (or similar)
In line with e.g. Solaiman et al (as referenced in the manuscript) we now refer to the former ‘bad
actors’ as ‘malicious actors’.
l. 56: methods of educating the public may also be difficult to implement effectively from a
psychological perspective, e.g.: Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., &
Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing.
*Psychological Science in the Public Interest*, *13*(3), 106–131.
https://doi.org/10.1177/1529100612451018
A very interesting and thematically relevant read. We read it as an argument in line with our point that a high awareness in the public is needed in order to prevent information that was, in the best case, randomly
generated by a language model (and, in the worst case, tailored to spread misinformation) from entering the
public's beliefs, from where, once established, it is difficult to retract. We hope this interpretation is in line
with your understanding.
l.143: 'not to overfit' -> is this a typo? I think this should be 'not to underfit'
Good point, the original formulation was not very clear. Now it should correctly state the relationship
between increasing training dataset sizes and increasing model complexity to effectively leverage it.
Table 2: Are the human data sets merged for all training and test sets or matched to the corresponding
data set of the language-generation method?
We added a better description of how the human datasets correspond to the different datasets that
were used for training the different language models.
l.383-386: it should be clarified that for each point of the grid, the neural network is trained with an
optimization algorithm (backwards propagation?). Currently, this reads as if the models were trained
with a grid search - but this appears to be only the case for some of the tuning parameters.
We rephrased and clarified how the optimization of the different classifier classes and the grid-search
to find the optimal classifier among them work together.
Table 6: It was difficult to detect the pattern highlighted in italics. Underscored numbers are better to
detect. Moreover, the column order could be adjusted to have the truncated (s,xl) and full-distribution
(s-k,xl-k) datasets next to each other (this holds for all tables). This would in general facilitate
recognizing the qualitative patterns discussed by the authors.
We changed italic numbers to underscored numbers and generally changed the order from s – xl – s-k
– xl-k in order to facilitate table readability.
l.466: how was overfitting detected? if the model performs well, this cannot be too bad?
Overfitting was detected based on the classifiers' performance on the data they were trained on, to which
they are strongly overfitted due to their high complexity and the relatively low dimensionality of the
data. However, this is more of a side note, since the optimisation was done on separate validation sets
and the classifiers were therefore still chosen to be the best-performing on independent datasets.
Experimental Design
Feature definition:
As the authors state correctly, a major benefit of the classifier is that the features have a direct intuitive
interpretation and can be communicated to lay audiences and practitioners using these methods. To
further highlight this fact, the authors could add a few concrete examples of sentences in which some
of the less common features of the classifier are illustrated (either in the main text or on the tables in
the appendix). For instance, the new 'conjunction overlap' (l.252) seems to refer to matches of the
form: ''[x y z] and [x y z]'. Similarly, specific examples would facilitate the discussion of named entities
and coreference chains. I think this would further strengthen the argument that the feature-based
classifier has an intuitive interpretation in contrast to competitors. However, this is merely an optional
suggestion and not a mandatory requirement for a revision.
Very good point, we included additional explanation and examples for the mentioned conjunction
overlap, coreference chains, empath features and entity-grid features to the corresponding appendix.
We feel like these are the least known of the features and hope to provide a better intuition by our
explanations and examples. We consider other features to be either well-established or impossible to
break-down into a simple example.
Accuracy measures:
I think it is a good idea to use AUC instead of accuracy. The authors could discuss this in l.79-88.
Moreover, it might help to discuss the concepts of sensitivity (probability of detecting machine-generated
language) and specificity (counter-probability of the false-positive rate) of binary classifications. The
AUC is a measure that takes both criteria into account within a single number.
We kept the introduction and discussion of the AUC in the evaluation section, separated from the more
qualitative discussion of the need to balance accuracy and false positives in the detection problem
section. We added reference to the specificity and sensitivity.
Ensemble methods:
Why are the classifiers combined with the 'tf-idf-baseline models'? According to the argument in l.409413, it seems more appropriate to combine two feature-based classifiers - one trained with a truncated
training set (s-k, xl-k) and another one with the full distribution (s, xl). This makes sense as both
classifiers outperform the tf-idf models (l.453). Maybe this is what the authors did, but it is currently not
clear.
We feel like our reasoning to combine the tf-idf baseline with the feature-based classifiers optimised on
the untruncated samples should stand. Comparing the results of the feature-based classifiers and the
results of the tf-idf baseline classifiers in Table 12 shows that our feature-based classifiers perform better
for the untruncated samples (s, xl), while the tf-idf baseline performs better for the top-k samples (s-k,
xl-k). We thereby hope to cover both top-k and untruncated samples with the combination of the
individually optimal classifiers.
Validity of the Findings
Expectations/hypotheses:
The authors state several hypotheses about the expected direction of differences between machineand human-generated text. For instance, 'We expect a more diverse, human-written text to have a
higher share of unique words' (l.248) or 'We expect human text to contain more sentiment-related
keywords' (l.310). The authors could pick up these hypotheses later in the results section and discuss
whether their hypotheses hold in the trained classifiers (this would not require additional tables).
A very good suggestion of a potentially insightful additional analysis. However, since these
hypotheses have mainly been used as qualitative motivations during development of the features and
lack any more profound validation, we would not be completely confident in assessing the features'
individual influence on the classification decision without a more solid linguistic basis. We included this
as a potential future research question.
Transferability:
The authors discuss transferability across training sets of different language-generation methods. At
some place (e.g., Discussion or l.89-101), it is important to also discuss the issue whether the training
sets are representative of 'real' text. Put differently, it is not clear whether the classifiers trained with
these data would work on data from twitter, facebook, etc. Maybe the authors can specify boundary
conditions that need to hold for their classifier to be applicable.
We added our intuition on this question (training data covers wide parts of the internet, therefore
decent transferability could be expected), however, transferability to specific social media platforms
etc. would also require further investigation.
Data:
The GPT-2 samples are currently not available (https://storage.googleapis.com/gpt-2/output-dataset/v1/). The authors state correctly that: 'These addresses would need to be updated in the code
should they ever change.'
The data is actually still available under that address. The quoted URL is only the first part of a URL
which is then combined with indications of the specific dataset to be downloaded in the code. The
webtext test data would for example be available from https://storage.googleapis.com/gpt-2/output-dataset/v1/webtext.test.jsonl.
__________________________________________________________
Reviewer 2 (Anonymous)
Basic reporting
I consider the text to be a very good entry into the issue of detecting automatically generated text. The
authors reflect on related topics (detection of fake news, authorship attribution, etc.), which gives
readers a broader context - the methods are common or similar.
Experimental design
As far as I can judge, I do not see any inconsistencies in the text, the analytical part is given clearly
and is based on verifiable methodology and data.
Validity of the findings
The results are absolutely credible.
Comments for the Author
In further research, from my perspective, the article asks questions to observe the repetition (word
forms, grammatical forms) in a small and, on the contrary, a large range of text, probably the cohesion
of the text in these different scopes is controlled by opposite tendencies. Even if this were not the
case, the text turns out to be inspiring questions based on clearly analyzed data.
Many thanks for your generous feedback. We agree with your comment regarding the potential
difference in text cohesion in different scopes of text and hope that this open question is now
sufficiently reflected in our indication of future research directions.
" | Here is a paper. Please give your review comments after reading it. |
28 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A recent addition to the repertoire of gene expression regulatory mechanisms are polyadenylate (polyA) tracks encoding poly-lysine runs in protein sequences. Such tracks stall the translation apparatus and induce frameshifting independently of the effects of the charged nascent poly-lysine sequence on the ribosome exit channel. As such, they substantially influence the stability of mRNA and the amount of protein produced from a given transcript. Single base changes in these regions are enough to exert a measurable response on both protein and mRNA abundance, and make each of these sequences a potentially interesting case study for the effects of synonymous mutation, gene dosage balance and natural frameshifting. Here we present PATACSDB, a resource that contains a comprehensive list of polyA tracks from over 250 eukaryotic genomes. Our data is based on the Ensembl genomic database of coding sequences and filtered with the 12A-1 algorithm, which selects polyA tracks with a minimal length of 12 A's, allowing for one mismatched base. The PATACSDB database is accessible at: http://sysbio.ibb.waw.pl/patacsdb. Source code is available for download from the GitHub repository at http://github.com/habich/PATACSDB, including the scripts to recreate the database from scratch on the user's own computer.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Background</ns0:head><ns0:p>The classical view of the genetic information flow inside living cells, that is transcription from DNA to RNA and finally translation of mRNA into protein, is a subject of continuous modification with respect to both the direction of the flow and the number of players involved. Over decades of research we keep accumulating evidence of several control points at different levels of these processes. Past studies focused on transcriptional regulation, but more recently regulation of gene expression at the level of translation has drawn researchers' attention. Translational regulation generally controls the amount of protein synthesised from a given mRNA through several mechanisms, targeting recruitment of ribosomes to the transcript, elongation speed, termination and, as a proxy to all these processes, mRNA stability. Ribosome stalling, that is, pausing of the ribosome during the translational cycle, is recognized by components of several mRNA surveillance pathways. As a result of the impeded rate of ribosome movement along the mRNA, the transcript is endonucleolytically cleaved and the nascent, albeit incomplete, protein product is degraded by the proteasome <ns0:ref type='bibr' target='#b11'>(Shoemaker & Green, 2012)</ns0:ref>. Over the years it has become clear that certain sequence features can trigger ribosome stalling. These are damaged bases <ns0:ref type='bibr' target='#b5'>(Cruz-Vera et al., 2004)</ns0:ref>, stable stem-loop structures <ns0:ref type='bibr' target='#b7'>(Doma & Parker, 2006)</ns0:ref>, rare codons <ns0:ref type='bibr' target='#b8'>(Letzring, Dean & Grayhack, 2010)</ns0:ref>, mRNAs lacking stop codons (so called non-stop mRNAs) <ns0:ref type='bibr' target='#b6'>(Dimitrova et al., 2009)</ns0:ref>, runs of codons that encode consecutive basic aminoacids <ns0:ref type='bibr'>(Kuroha et al., 2010;</ns0:ref><ns0:ref type='bibr'>Brandman et al., 2012)</ns0:ref>, or finally, runs of adenines encoding poly-lysine tracks <ns0:ref type='bibr' target='#b0'>(Koutmou et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b0'>Arthur et al., 2015)</ns0:ref>.</ns0:p><ns0:p>We have recently shown that polyA tracks trigger a response in a different manner than runs of basic aminoacids <ns0:ref type='bibr' target='#b0'>(Arthur et al., 2015)</ns0:ref>. In addition to stalling, occasionally they lead to ribosome sliding on the mRNA transcript, which results in production of an additional frameshifted product next to the known and well annotated gene protein product. As such, polyA track sequences may support programed translational frameshifts in such mRNA transcripts, giving rise to alternative protein products from those genes. This feature of polyA track genes resembles programmed frameshifting observed in viral genes with slippery sequences, however without a need for additional mRNA structures that induce ribosome stalling in known viral transcripts <ns0:ref type='bibr' target='#b3'>(Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b12'>Yan et al., 2015)</ns0:ref>. The ultimate control over the production and stability of alternative transcripts from polyA track genes in Eukaryotes would be based on mRNA surveillance mechanisms, mainly non-sense mediated mRNA decay (NMD) or, if the kinetic stall persists, no-go mRNA decay (NGD). PolyA tracks are highly conserved in genes among Eukaryotes and it is likely that they represent universal translational attenuators or programed translational frameshift signals. Intrinsically, this novel RNA motif plays an important role in balancing gene dosage and homeostasis of the cellular environment. The level of attenuation, frameshifting and the exact role of polyA tracks in organismal homeostasis is still to be elucidated.</ns0:p></ns0:div><ns0:div><ns0:head>PATACSDB server</ns0:head><ns0:p>While there are several resources devoted to polyadenylation signals in genomic sequences, these have a different sequence signature and refer to the processing of mRNA, not translation. No genomic database reports polyA tracks in coding sequences, therefore we have designed PATACSDB (PolyA Translational Attenuators in Coding Sequences DataBase), a resource devoted to the collection of such features among eukaryotic organisms. In concordance with our experimental data from the controlled expression of reporter sequences or natural gene expression profiles we have designed a 12A-1 pattern, that is, a pattern of twelve adenines in the coding region allowing for one mismatch. Based on our experiments, this is a minimal pattern that should result in reduction of expression by roughly 30%, a magnitude that can potentially have a measurable biological impact in human cells <ns0:ref type='bibr' target='#b0'>(Arthur et al., 2015)</ns0:ref>. We have extrapolated this pattern to other organisms, because without further experimental work we have no way to define the minimal polyA pattern in other organisms. We have analyzed eukaryotic Ensembl genomes <ns0:ref type='bibr'>(Flicek et al., 2014)</ns0:ref> for the presence of this pattern in coding sequences, using only those entries for which the coding sequence matched the reported translated sequence. This was done not only on standard Ensembl genomes but also on its additional eukaryotic databases such as Ensembl Protists and Ensembl Metazoa. As a result, we have identified 197964 genes in 254 genomes that carry 446206 polyA tracks.</ns0:p></ns0:div>
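<ns0:p>As an illustration, the 12A-1 pattern can be located with a simple sliding-window scan: any window of twelve nucleotides containing at most one non-adenine base is reported, and overlapping windows are merged into a single track. The sketch below is one possible reading of the pattern and is not necessarily identical to the implementation available in the GitHub repository.</ns0:p>

```python
# Sketch of a 12A-1 scanner: a window of 12 nucleotides that contains at
# least 11 adenines; merging overlapping windows yields the full track.
def find_polya_tracks(cds, window=12, max_mismatch=1):
    cds = cds.upper()
    hits = []
    for i in range(len(cds) - window + 1):
        chunk = cds[i:i + window]
        if window - chunk.count("A") <= max_mismatch:
            if hits and i <= hits[-1][1]:      # extend the previous overlapping hit
                hits[-1] = (hits[-1][0], i + window)
            else:
                hits.append((i, i + window))
    return [(start, end, cds[start:end]) for start, end in hits]

# example: one track of 13 nt with a single G mismatch
print(find_polya_tracks("ATGAAAAAAGAAAAAATTTGCC"))
```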
<ns0:div><ns0:head>PolyA tracks across eukaryotic organisms</ns0:head><ns0:p>In the previous studies <ns0:ref type='bibr' target='#b0'>(Koutmou et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b0'>Arthur et al., 2015)</ns0:ref> we focused mainly on polyA tracks from human and yeast genomes, using the NCBI <ns0:ref type='bibr'>(Pruitt et al., 2014)</ns0:ref> database and SGD <ns0:ref type='bibr' target='#b4'>(Cherry et al., 1998)</ns0:ref> as data sources, respectively. Overall there is good agreement between our previous analysis and this study for higher eukaryotes, while we see some discrepancies for lower eukaryotes, such as yeast. For example, in the previous study we underestimated the number of polyA-carrying genes in yeast by an order of magnitude (29 vs 369), a result of the different data source.</ns0:p><ns0:p>The percentage of polyA-carrying transcripts varies from organism to organism and exceeds 60% for Plasmodium species, well known for their AT-rich genome (see Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref> for a summary). However, the distribution of lengths of polyA tracks is quite similar across the whole observed spectrum of AT-content (Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). It might be that the single Plasmodium genus is skewing the distribution, as the species distribution of genomic databases is heavily biased. In human, only around 1% of transcripts, coming from ca. 2% of genes, carry a polyA track and as such are subjects of translational attenuation. This is close to the median across all analyzed genomes. Furthermore, we did not find any correlation between organismal complexity and the number of polyA-affected genes. This might indicate that such a feature is a constituent element of the translational machinery, unrelated to external factors and regulatory mechanisms.</ns0:p></ns0:div>
<ns0:div><ns0:head>Software architecture</ns0:head><ns0:p>The main table consists of the protein common name, gene and transcript Ensembl ids, the location of the polyA track expressed as a percentage (which allows for quick identification of cases where the polyA track is either at the end or at the beginning of the protein) and, finally, the identified polyA track with the context of the surrounding sequence. All columns are sortable. By default, the table is sorted by protein name, alphabetically. Sorting of gene and transcript ids is also alphabetical. Location is sorted numerically. The rows with polyA sequences are sortable by polyA track length, so the user can quickly identify sequences with the longest track in a particular organism. Obviously, due to the pattern used, the shortest polyA tracks have a length of 12 nucleotides. To facilitate quick interaction with tables, we have used the Bootstrap-table library, which allows for easy and intuitive sorting and searching through all fields in a particular genome.</ns0:p><ns0:p>The project was created using Python 2.7. To parse biological data we used Biopython 1.65. To compare protein and cDNA sequences we used a local version of the NCBI blast+ software v. 2.2.31. To run the web service we used Flask v.0.10.1. We used the SQLite3 database engine and SQLAlchemy for database access. To query the Ensembl database we used the mysql client. We also used two other Python libraries: xmltodict and requests. The most difficult task was to ensure short page load times given the large dataset we worked on. To solve this problem we have created additional tables in the database which contain metadata for the heaviest queries. This solution decreased loading time more than 20 times.</ns0:p><ns0:p>We have designed a two-step architecture. In the first step we analyse data from the Ensembl database and create our database with the 12A-1 pattern. In the second step we use the created database to provide information to the web service. This architecture allows us to separate obtaining the data from running the web service; thus, during analysis of a new version of Ensembl data we can still provide data for the old version, and the change between versions can be done in seconds without the user noticing. In the future we will work on parallelization of the Ensembl data analysis to speed up the first step. It is likely that polyA segments are not the only sequence determinants of translation efficiency in coding sequences, and further studies will discover more such motifs or different lengths of the minimal polyA pattern for a particular organism. The design of the PATACSDB engine allows for easy modification towards finding and cataloguing of novel sequence patterns.</ns0:p></ns0:div>
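<ns0:p>A toy sketch of the second step is shown below. The actual service uses Flask with SQLAlchemy; for brevity this version uses the standard sqlite3 module, and the table and column names (species_summary, polya_tracks) are assumptions for illustration only, not the real schema.</ns0:p>

```python
# Toy sketch of the second step (a web service reading the prebuilt database).
# The real PATACSDB uses Flask with SQLAlchemy; this minimal version uses the
# standard sqlite3 module and an assumed, precomputed 'species_summary' table.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "patacsdb.sqlite3"   # database produced in step one

def query(sql, args=()):
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(r) for r in conn.execute(sql, args)]

@app.route("/species")
def species():
    # served from a precomputed summary table instead of aggregating on the fly
    return jsonify(query("SELECT name, transcripts_with_polya, percent FROM species_summary"))

@app.route("/species/<name>/tracks")
def tracks(name):
    return jsonify(query(
        "SELECT gene_id, transcript_id, location_percent, track_seq "
        "FROM polya_tracks WHERE species = ? ORDER BY length(track_seq) DESC", (name,)))

if __name__ == "__main__":
    app.run()
```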
<ns0:figure xml:id='fig_1'><ns0:head>Fig. 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig.1. Distribution of polyA lengths vs AT-ratio of analyzed genomes. Data for lengths of polyA were</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>transcripts from polyA track genes in Eukaryotes would be based on mRNA surveillance mechanisms, mainly non-sense mediated mRNA decay (NMD) or if the kinetic stall persists by no-go mRNA decay (NGD). PolyA tracks are highly conserved in genes among Eukaryotes and it is likely that they represent a universal translational attenuators or programed translational frameshift signals. Intrinsically this novel RNA motif plays an important role in balancing gene dosage and homeostasis of cellular environment. The level of attenuation, frameshifting and exact role of polyA tracks in organisms homeostasis is still to be elucidated.</ns0:figDesc><ns0:table /><ns0:note>PATACSDB serverWhile there are several resources devoted to polyadenylation signals in genomic sequences, these have different sequence signature and refer to the processing of mRNA, not translation. No genomic database reports polyA tracks in coding sequences, therefore we have designed PATACSDB (PolyA Translational Attenuators in Coding Sequences DataBase), a resource devoted to collection of such features among eukaryotic organisms. In concordance with our experimental data from the controlled expression of reporter sequences or natural gene expression profiles we have designed a 12A-1 pattern, that is pattern of twelve adenines in coding region allowing for one mismatch. Based on our experiments, this is a minimal pattern that should result in reduction of expression by roughly 30%, a magnitude that can potentially have a measurable biological impact in human cells<ns0:ref type='bibr' target='#b0'>(Arthur et al., 2015)</ns0:ref>. We have extrapolated this pattern to other organisms, because without further experimental work we have no way to PeerJ Comput. Sci. reviewing PDF | (CS-2015:11:7956:1:0:CHECK 11 Jan 2016)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>This architecture allows us to separate obtaining the data from running the web service; thus, during analysis of a new version of the Ensembl data we can still provide data for the old version, and the change between versions can be done in seconds without the user noticing. In the future we will work on parallelizing the Ensembl data analysis to speed up the first step. It is likely that polyA segments are not the only sequence determinants of translation efficiency in coding sequences, and further studies will discover more such motifs or different lengths of the minimal polyA pattern for a particular organism. The design of the PATACSDB engine allows for easy modification towards finding and cataloguing novel sequence patterns.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the content of PATACSDB</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Total number of polyA-carrying transcripts</ns0:cell><ns0:cell>197964</ns0:cell></ns0:row><ns0:row><ns0:cell>Highest percentage of polyA-carrying transcripts</ns0:cell><ns0:cell>Plasmodium berghei 68.259%</ns0:cell></ns0:row><ns0:row><ns0:cell>(first 5)</ns0:cell><ns0:cell>Plasmodium yoelii 17x 64.957%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Plasmodium falciparum 63.539%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Plasmodium chabaudi 63.372%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Plasmodium reichenowi 62.933%</ns0:cell></ns0:row><ns0:row><ns0:cell>Lowest percentage of polyA-carrying transcripts</ns0:cell><ns0:cell>Pythium vexans 0.025%</ns0:cell></ns0:row><ns0:row><ns0:cell>(first 5)</ns0:cell><ns0:cell>Saprolegnia diclina vs 20 0.038%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Leishmania major 0.048%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Phytophthora sojae 0.058%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Salpingoeca rosetta 0.060%</ns0:cell></ns0:row><ns0:row><ns0:cell>Median and average percentage of polyA-</ns0:cell><ns0:cell>2.0% and 7.6% respectively</ns0:cell></ns0:row><ns0:row><ns0:cell>carrying transcripts</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>The longest polyA tracks (first 10)</ns0:cell><ns0:cell>132 nt -CDO62875 [Plasmodium reichenowi]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>131 nt -CDO63348 [Plasmodium reichenowi]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>111 nt -ETW31025 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>fch 4]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>109 nt -ETW57402 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>palo alto uganda]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>107 nt -ETW41820 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>nf135 5 c10]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>107 nt -ETW15539 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>vietnam oak knoll fvo]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>97 nt -CDO66404 [Plasmodium reichenowi]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>95 nt -EUT78604 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>santa lucia]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>89 nt -ETW44841 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>nf135 5 c10]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>88 nt -ETW48723 [Plasmodium falciparum</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>malips096 e11]</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2015:11:7956:1:0:CHECK 11 Jan 2016)</ns0:note></ns0:figure>
</ns0:body>
" | "Editor's comments
The review reports are rather critical. Among others, it is required that the authors must be able to justify the validity of the database content.
Dear Editor,
We are uncertain of the request for validation of the database content, as it is a derivative of Ensembl database. The underlying data are as good as Ensembl’s, but maybe we didn’t understand the reviewer’s doubts. The manuscript has been improved as outlined below.
Tak-Wah Lam
Academic Editor for PeerJ Computer Science
Reviewer Comments
Reviewer 1 (Ka-Chun Wong)
Basic reporting
The abstract looks concise to me.
I cannot find any figure or table for the manuscript. It would be best if the authors could include some figures and tables for user-friendly explanation. It is important as the manuscript is a database paper.
We have added a figure where length of polyA-track is plotted against AT-content of a genome. Additionally, a table with the summary of the database content was added.
Experimental design
The experimental setting 'pattern 12A allowing for one mismatch' has to be carefully supported on the manuscript. Have the authors tried other experimental setting since this setting can significantly affect the database content ?
The choice of the “12A-1” pattern is based on the experimental validation of the polyA-track sequence attenuation effect in experiments on human, Drosophila, and Chinese hamster ovary cells (Arthur et al., 2015), and on S. cerevisiae and E. coli cells (Koutmou et al., 2015). In these manuscripts, we have experimentally validated the effects of coding polyA-tracks on the translation of several different reporter constructs using biochemical and molecular biology approaches. We have also analyzed the effects of polyA-track attenuators in endogenous genes using RNA-seq and ribosome-profiling analyses of human cells (Arthur et al., 2015). This was clarified in the revised manuscript.
Validity of the findings
The sentence 'Given that no genomic database reports polyA tracks in coding sequences' may be arguable, please look at the following databases and state your difference to them:
http://exon.umdnj.edu/polya_db/
http://bioinfolab.miamioh.edu/polya/
http://mosas.sysu.edu.cn/utr/
https://www.google.com/webhp?q=polya%20database
We are sorry for the misunderstanding that was caused by the use of the term polyA. Indeed, there are several databases that look for polyadenylation signals (which are referred to as “polyA” signals/sequences) and some of them report such sequences if they occur within the CDS. Our interest is in coding polyA-tracks, which act as translation attenuators and not as polyadenylation signals. We have clarified this issue and made a clear distinction between polyA-track attenuators and polyadenylation signals.
Comments for the author
It is very nice that the authors have released the scripts, which can recreate the database from scratch on the user's own computer.
The sentence '...is of course a bit simplistic....' is not appropriate on a scientific manuscript.
The first studies
The past studies
gathered researchers' attention
drew researchers' attention
We are thankful to the reviewer for these comments. All suggestions have been implemented.
Reviewer 2 (Anonymous)
Basic reporting
The paper reports generation of a database of coding region polyA sequences from 250 genomes and web service to look this up.
While databases are useful for researchers to look things up in their studies, I am not sure this justifies a scientific report as there is not much experiments or science in this report.
There was also no mention of the validation of the search results at all: should we just trust that the results produced by the reported processes are reliable? Are all the reported sequences in coding regions? Without sequencing errors? I think it is troubling that there is no mention of a quality examination of the results.
As suggested by the first reviewer, we have clarified that our work reports the presence of polyA-track attenuators in the coding sequences of endogenous genes, as opposed to polyadenylation signals, which have a different sequence signature.
We have an issue with this comment, as we don't understand what “trust” means here. We use the Ensembl database as a data source, which provides high-quality sequence data for only ~250 eukaryotic genomes. Therefore we don't deal with sequencing errors, as we don't process raw data (this doesn't mean Ensembl data is without error, but we don't manually curate the analyzed sequences). We don't use an opaque technique or a manual selection process; instead, we provide the source code for our work, which can be independently verified. Also, all basic checks (such as length matching between the transcript and the encoded protein, validity of the coordinate mapping, hand checks of random entries, and correctness of the amino acids assigned to polyA regions [mostly lysines, as expected]) were done. We don't understand what quality measure can be implemented if our algorithm gives correct results 100% of the time.
Also, the database was tested by two experimental groups and none of them reported an error in annotation.
As for the experimental validation of the bioinformatics results we point out to the reviewer that the choice of the “12A-1” pattern is based on the experimental validation of the polyA-track sequence attenuation effect in human, drosophila, chinese hamster ovary (Arthur et al, 2015), S. cervisiae and E. coli (Koutmou et al, 2015) cells experiments. In these two manuscripts, we have experimentally validated effects of coding polyA-tracks on translation of several different reporter constructs using biochemical and molecular biology aproaches. We have also analyzed effects of polyA-track attenuators in the endogenous genes using RNA-seq and ribosome-profiling analyses of human cells (Arthur et al., 2015). This was clarified in the revised manuscript.
Experimental design
The paper reported the process of collecting the polyA sequences in coding regions. There was no mention of validity check of the results and quality control.
As mentioned above, we cannot implement quality control for a procedure that is 100% correct. As stated above, we don't do quality control of the data from the Ensembl database, as this is already done by EMBL-EBI and the Sanger Centre. If the reviewer refers instead to the experimental validation of our results, it is published in two previous manuscripts (Koutmou et al., 2015, and Arthur et al., 2015).
Validity of the findings
No way to judge.
We link the repository which contains the source code.
Comments for the author
It is essential to validate the results in the database. It is also important to survey the distribution of such sequences in a few model genomes.
A detailed survey for human was already published (Arthur et al., 2015), and since the majority of polyA-carrying transcripts are conserved, these results can be partially extended to other mammals, such as mouse or rat. Instead, we have added new data: a figure where the length of the polyA-track is plotted against the AT-content of a genome. Additionally, we have compared the numbers from PATACSDB with the data reported in our experimental papers (Arthur et al., 2015 and Koutmou et al., 2015), which used the NCBI resource instead of Ensembl.
" | Here is a paper. Please give your review comments after reading it. |
30 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>This paper proposes a novel network model, denoted as AresB-Net, to achieve more accurate residual binarized convolutional neural networks (CNNs). Even though residual connections enhance the classification accuracy of binarized neural networks (BNNs) by increasing feature resolution, the degraded classification accuracy compared with real-valued residual CNNs is still the primary concern. AresB-Net consists of novel basic blocks that amortize the severe error from binarization, forming a well-balanced pyramid structure without downsampling convolution. In each basic block, the shortcut is added to the convolution output and then concatenated, and the expanded channels are shuffled for the next grouped convolution. In the downsampling, when stride > 1, our model adopts only a max-pooling layer to generate a low-cost shortcut. This structure facilitates feature reuse from previous layers, thus alleviating the error from the binarized convolution and increasing the classification accuracy with reduced computational costs and small weight storage requirements. Despite the low hardware costs of the binarized computations, the proposed model achieves remarkable classification accuracies on the CIFAR and ImageNet datasets.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Neural networks have achieved significant performance enhancements in many fields, including computer vision, speech recognition, and natural language processing, etc. Notably, CNNs have shown many outstanding performances in the field of computer vision. Even though it is possible to implement simple CNNs such as <ns0:ref type='bibr'>LeNet-5 (LeCun et al., 1998)</ns0:ref> on embedded devices, more sophisticated CNNs require high computational costs and large weight storage requirements, which prevent them from being adopted in lightweight cost-efficient systems. Various studies try to reduce memory requirements and power consumption at the expense of the appropriate performance degradation. The network quantization sacrifices the precision of model parameters and approximates the operations in neural networks to achieve small memory requirements and power consumption <ns0:ref type='bibr' target='#b41'>(Wu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b46'>Zhou et al., 2017a)</ns0:ref>. Notably, early approaches for the BNN models in <ns0:ref type='bibr' target='#b6'>Courbariaux et al. (2015</ns0:ref><ns0:ref type='bibr' target='#b7'>Courbariaux et al. ( , 2016))</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Rastegari et al. (2016)</ns0:ref> quantize weights or activations into {+1, −1}, which replaces floating-point multiplications with binary bitwise operations, thus approximating the floating-point multiply-accumulate operation using bitwise XNOR and bit counting operations. Besides, the quantized binary weights can reduce weight storage requirements, which makes BNNs a highly appealing method for implementing CNNs on embedded systems and programmable devices <ns0:ref type='bibr' target='#b11'>(Guo, 2018;</ns0:ref><ns0:ref type='bibr' target='#b47'>Zhou et al., 2017b;</ns0:ref><ns0:ref type='bibr' target='#b43'>Yi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Liang et al., 2018)</ns0:ref>. Despite many benefits above, the low precision of the binarized operations in BNNs degrades the classification ability on modern CNNs, thus limiting their applications. Improved BNN structures have been developed for reducing the gap of the classification accuracy degraded over real-valued CNNs <ns0:ref type='bibr' target='#b25'>(Lin et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Liang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Liu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>He et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b49'>Zhuang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Shen et al., 2019;</ns0:ref><ns0:ref type='bibr' /> PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b5'>Chakraborty et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bethge et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Phan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bethge et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Liu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Besides, new training methods and optimizing tricks for BNNs have been researched for obtaining better classification accuracy <ns0:ref type='bibr' target='#b0'>(Alizadeh et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bulat and Tzimiropoulos, 2019;</ns0:ref><ns0:ref type='bibr' target='#b48'>Zhu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Hubara et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b9'>Ghasemzadeh et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b10'>Gu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Helwegen et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Ding et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b29'>Martinez et al., 2020)</ns0:ref>. However, there are still significant accuracy drops compared with the baseline floating-point models. The insufficient feature resolution from the binarized convolution layer can be compensated using real-valued shortcuts in <ns0:ref type='bibr' target='#b27'>(Liu et al., 2018)</ns0:ref>, thus making a noticeable advance in increasing the classification accuracy. However, it is concerned that the stacking structure of residual convolution layers accumulates errors from each binarized layer, which can limit the performance of residual CNNs. The feature reuse of BNNs in <ns0:ref type='bibr' target='#b2'>Bethge et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b1'>Bethge et al. ( , 2020) )</ns0:ref> concatenates shortcuts to expand output channel, making features from the shortcut pass to the next block. These existing methods in <ns0:ref type='bibr' target='#b27'>Liu et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Bethge et al. (2019)</ns0:ref> adopt point-wise convolutions in the channel expansion, which can increase computational costs in BNNs.</ns0:p><ns0:p>Our approach combines the advantages of the feature resolution enhancement and feature reuse schemes, eliminating the convolutions in the channel expansion. The proposed network model called AresB-Net is developed to consider these motivations. The basic block connects the real-valued shortcut per each binarized convolution layer by adding the shortcut and concatenating it to output channels. Two different kinds of shortcuts are mixed for expanding channels. In the downsampling, only the max-pooling layer is used for obtaining the shortcut. Then, the doubled expanded channels are shuffled and split for the grouped convolution, so that computational costs are reduced without downsampling 1 × 1 convolution compared with baseline BNNs. The ratio of unfiltered features is naturally maintained in the shuffled channels. Similar to the baseline residual CNNs of <ns0:ref type='bibr' target='#b13'>He et al. (2016)</ns0:ref>, the proposed basic block are easily stacked to create the pyramid structure of CNNs. 
In experiments, with well-known data augmentation and regularization techniques, this novel BNN structure provides 91.90% and 73.01% Top-1 accuracies with the 18-layered models on the CIFAR-10 and CIFAR-100 datasets <ns0:ref type='bibr' target='#b22'>(Krizhevsky et al., 2014)</ns0:ref> and 78.15% Top-5 accuracies on the ImageNet dataset <ns0:ref type='bibr' target='#b34'>(Russakovsky et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In the following, we introduce several related works and explain our motivation for the proposed structure. Then, the details of the proposed BNN structure are described. Finally, experimental results</ns0:p><ns0:p>show the classification accuracy and computational cost analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head></ns0:div>
<ns0:div><ns0:head>Residual convolutional neural networks</ns0:head><ns0:p>Between stacked layers in a network model, the skip connection called shortcut can jump over one or several non-linear activations, so it is summed to the other layer output. Thus, the shortcut contains unfiltered features from previous layers, which enable the residual networks to achieve fast training speed with the reduced impact of vanishing gradients and obtain the benefits from ensemble effects <ns0:ref type='bibr' target='#b38'>(Veit et al., 2016)</ns0:ref>. In general, there are two different shortcut summing schemes in residual CNNs; (1) adding the shortcut to each channel without changing the number of output channels <ns0:ref type='bibr'>(He et al., 2016): (2)</ns0:ref> concatenating the shortcut for expanding channels <ns0:ref type='bibr' target='#b17'>(Huang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b44'>Zhang et al., 2018)</ns0:ref>. Whereas adding the shortcut to the layer output can dilute the unfiltered features, the channel expansion requires computational costs in the point-wise convolution between channels. In our study, it is motivated that both features from the two summing schemes above can be mixed in each block, thus expanding channels without increasing computational costs.</ns0:p><ns0:p>Several network models using the grouped convolution adopt the residual structure for summing the shortcut to their basic block. Especially, the shortcut is summed to the shuffled channels for the grouped convolutions in <ns0:ref type='bibr' target='#b44'>Zhang et al. (2018)</ns0:ref>. Besides, MobileNetv2 <ns0:ref type='bibr' target='#b35'>(Sandler et al., 2018)</ns0:ref> introduces the inverse residual structure containing depth-wise convolutions. The works in <ns0:ref type='bibr' target='#b44'>Zhang et al. (2018);</ns0:ref><ns0:ref type='bibr' target='#b35'>Sandler et al. (2018)</ns0:ref> prove that summing the shortcut to the grouped convolution output obtains considerable classification accuracy with decreased computational costs. In our proposed block, whereas the grouped convolution reduces the computational costs for expanded input channels, the difference from <ns0:ref type='bibr' target='#b44'>Zhang et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b35'>Sandler et al. (2018)</ns0:ref> is that the features shuffled from two different residual shortcuts are used in each group.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Binarized convolutional neural networks</ns0:head><ns0:p>As the complexity of neural networks increases, large memory requirements and high computational costs are significant burdens when applying CNNs on edge computing devices. Notably, increasing multiplications require high power consumptions that embedded devices cannot accept. BNNs quantize weights <ns0:ref type='bibr' target='#b6'>(Courbariaux et al., 2015)</ns0:ref> or both weights and activations <ns0:ref type='bibr' target='#b18'>(Hubara et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Rastegari et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Hubara et al., 2017)</ns0:ref> of neural networks into {−1, +1}. The analysis of the inference step in <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref> shows ≈ 32× memory saving and ≈ 58× computation speedup, thus making BNNs an appealing neural network scheme in embedded systems. However, when applied directly to the baseline real-valued neural network model, errors from the binarization degrade the classification accuracy.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b6'>Courbariaux et al. (2015)</ns0:ref>, the training scheme for BNNs is introduced. In XNOR-Net <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref>, the binarized network structure and convolution with the deterministic scaling factor make significant classification improvements, which are verified empirically in the residual CNN of <ns0:ref type='bibr' target='#b13'>He et al. (2016)</ns0:ref>. Beyond these early works of BNNs, new basic blocks that utilize residual networks have been developed. Especially, in <ns0:ref type='bibr' target='#b27'>Liu et al. (2018)</ns0:ref>, each basic residual block has the real-valued shortcut for skipping only one non-linear activation, which has been adopted in other BNNs. The grouped convolutions are applied for adopting the binarized depth-wise separable convolution in <ns0:ref type='bibr' target='#b14'>He et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>Phan et al. (2020)</ns0:ref>. In <ns0:ref type='bibr' target='#b2'>Bethge et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b1'>Bethge et al. ( , 2020))</ns0:ref>, shortcut is concatenated to expand channels in dense neural networks.</ns0:p><ns0:p>From these previous works, we conclude that the residual binarized basic blocks have layered structures different from the real-valued baselines. Our BNN is based on the shuffled grouped convolution and combines different shortcuts in the residual binarized blocks, which are discriminated from other residual BNNs. In <ns0:ref type='bibr' target='#b4'>Bulat and Tzimiropoulos (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Liu et al. (2020)</ns0:ref>, trainable parameters are used in the activation and scaling. In <ns0:ref type='bibr' target='#b0'>Alizadeh et al. (2018)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>ARESB-NET: ACCURATE RESIDUAL BINARIZED NEURAL NETWORK MODEL</ns0:head><ns0:p>The proposed AresB-Net model contains novel basic blocks using residual shortcuts, expanding channels by adding and concatenating shortcuts. This basic blocks can be stacked using a pyramid structure. Most CNN structures reduce the width and height of feature maps and increase the number of channels when they encounter a layer with downsampling (stride > 1). Because the baseline residual networks <ns0:ref type='bibr' target='#b13'>(He et al., 2016)</ns0:ref> and XNOR ResNet <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref> simply adopt stride = 2 and double channels in the downsampling, the AresB-Net also follows this pyramidal method using a factor of 2. In this downsampling, whereas the width and height of features are reduced in half, the number of channels are doubled. Therefore, the number of computations on each layer is kept similar. This section explains the basic block for this pyramid structure and its binarization of features and weights. Then, the model structure stacking the basic blocks is described. Finally, we summarize the training method for the proposed AresB-Net.</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed basic block</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows the proposed basic block. Two kinds of shortcut summing for expanding channels are illustrated: (1) adding the shortcut from the first batch normalization (BN) to the output of the second BN layer; (2) concatenating the shortcut from the first BN layer to the output channels. The BinConv3x3 layer stands for the binarized convolution layer with 3 × 3 sized filter. This concatenated shortcut does not go through the BinConv3x3 layer, expanding the output channels without additional computational costs. When stride = 2, the number of output channels from the BinConv3x3 layer is doubled, and 3 × 3 max-pooling layer is used to make the shortcut. Therefore, the number of output channels from the first subblock is doubled.</ns0:p><ns0:p>Before forwarding input features into each BinConv3x3 layer, the input channels are shuffled and then split for the grouped convolution. This basic block uses the shuffle layer described in <ns0:ref type='bibr' target='#b44'>Zhang et al. (2018)</ns0:ref>.</ns0:p><ns0:p>The input channels contain the features generated from different types of shortcuts. This shuffling can mix the features from the channels and divide them into two groups (denoted as g = 2). In this shuffling, the information crossing input channels is mixed in the next grouped convolution, which is helpful for <ns0:ref type='table' target='#tab_5'>-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:ref> Manuscript to be reviewed removing real-valued 1 × 1 convolution like <ns0:ref type='bibr' target='#b44'>Zhang et al. (2018)</ns0:ref>. This scheme manages the ratio of the reused unfiltered features from the previous layers. As the input features produced from a subblock go though other subblocks, the ratio of unfiltered features from the subblock decreases. When half of the features produced from a subblock are concatenated to the output channels of the next subblock, the features are not unfiltered in the next subblock. We denote the number of features from a subblock by num(I). In the output channels of the next subblock, num(I) 2 features are unfiltered. As the features go through k subblocks, num(I)</ns0:p><ns0:formula xml:id='formula_0'>3/13 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_1'>2 k</ns0:formula><ns0:p>features are also unfiltered.</ns0:p><ns0:p>The structure with the BN layer before the binary activation follows the baseline work in <ns0:ref type='bibr' target='#b13'>He et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Rastegari et al. (2016)</ns0:ref>, where learnable shifting and biasing parameters γ and β for each channel transform values for determining which value is binarized into −1 or +1 in the binarized activation.</ns0:p><ns0:p>Compared to ResNet <ns0:ref type='bibr' target='#b13'>(He et al., 2016)</ns0:ref> and XNOR ResNet <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref> models, the proposed model contains doubled shifting and biasing parameters, which could give more optimizable points in training.</ns0:p><ns0:p>Each BinConv3x3 layer consists of the deterministic binarized activation and convolution. 
Let us assume that term I ∈ R c×w in ×h in denotes the input tensor of a basic block, where c, w in , h in mean the number of input channels, and the width and height of a feature map, respectively.</ns0:p><ns0:p>In the first subblock, the deterministic binarized activation sign(I) rounds</ns0:p><ns0:formula xml:id='formula_2'>I into {−1, +1} c×w in ×h in .</ns0:formula><ns0:p>When the weight filter for each input channel has the width of w and height of h, the real-valued weight filter for an output channel is denoted as W ∈ R n= 1 2 c×w×h , where w ≤ w in , h ≤ h in . In the BinConv3x3 layer, w = 3 and h = 3, respectively. Because the BinConv3x3 layer performs the grouped convolution (g = 2), 1 2 c input channels are adopted in each group.</ns0:p><ns0:p>Depending on the stride, the number of output channels from the first subblock is different. As shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, whereas the first BinConv3x3 layer of stride = 1 has 1 2 c output channels, that of stride = 2 has c output channels. When stride = 1, c input channels for the second subblock are produced by concatenating the shortcut from 1 2 c shuffled input channels. On the other hand, for stride = 2, 2c input channels are for the second subblock produced, where c channels from the max-pooling layer are concatenated to produce 2c channels. By applying stride = 2, the width and height of the feature from the subblock are w in 2 and h in 2 , respectively. In the second subblock, the numbers of input and output channels are the same. Therefore, the output tensor sizes of stride = 1 and stride = 2 are c × w in × h in and 2c × w in 2 × h in 2 , respectively.</ns0:p><ns0:p>In a group of the BinConv3x3 layer, when the numbers of input and output channels are the same as c 2 , the number of parameters need for the group convolution can be 1 4 c 2 × w × h. Total number of parameters </ns0:p><ns0:formula xml:id='formula_3'>= 1 c × w in × h in first 1 4 c 2 × w × h c × w in × h in second 1 4 c 2 × w × h c × w in × h in stride = 2 first 1 2 c 2 × w × h 2c × w in 2 × h in 2 second c 2 × w × h 2c × w in 2 × h in 2 a</ns0:formula><ns0:p>Weight size denotes the number of weight filter's parameters.</ns0:p><ns0:p>for two groups can be 1 2 c 2 × w × h. When the number of output channels in a group is doubled as c in the first subblock for stride = 2, the total number of parameters for two groups can be c 2 × w × h. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> summarizes the number of parameters in weight filters denoted as weight size and output tensor sizes in basic blocks.</ns0:p></ns0:div>
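To make the data flow of a subblock easier to follow, the sketch below gives a rough PyTorch-style rendering of one stride = 1 subblock: channel shuffle, a binarized grouped 3 × 3 convolution with g = 2 and the 1/n weight scaling, adding the shortcut to the convolution output, and then concatenating that shortcut. This is a simplified sketch based on the description above, not the authors' implementation; the exact placement of the BN and LeakyReLU layers and the choice of which half of the shuffled channels serves as the shortcut are assumptions.

```python
# Rough, assumption-laden sketch of one AresB-style subblock (stride = 1).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinActive(torch.autograd.Function):
    """sign() activation with a straight-through estimator."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (baseline straight-through estimator).
        return grad_out * (x.abs() <= 1).float()

def channel_shuffle(x, groups=2):
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class AresBSubblock(nn.Module):
    """Shuffle -> binarized grouped 3x3 conv -> LeakyReLU/BN -> add + concat shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.bn_in = nn.BatchNorm2d(channels)
        # Parameter container for the grouped conv; weights are binarized on the fly.
        self.conv = nn.Conv2d(channels, channels // 2, 3, padding=1,
                              groups=2, bias=False)
        self.act = nn.LeakyReLU(0.01)
        self.bn_out = nn.BatchNorm2d(channels // 2)

    def forward(self, x):
        x = channel_shuffle(self.bn_in(x), groups=2)
        xb = BinActive.apply(x)
        # Deterministic scaling alpha = 1/n with n = (1/2)c * w * h per filter.
        wb = BinActive.apply(self.conv.weight) / self.conv.weight[0].numel()
        y = F.conv2d(xb, wb, padding=1, groups=2)
        y = self.bn_out(self.act(y))
        shortcut = x[:, : x.shape[1] // 2]        # half of the shuffled input (assumption)
        y = y + shortcut                          # add the real-valued shortcut ...
        return torch.cat([y, shortcut], dim=1)    # ... and concatenate it as well

if __name__ == "__main__":
    block = AresBSubblock(64)
    print(block(torch.randn(2, 64, 32, 32)).shape)  # -> torch.Size([2, 64, 32, 32])
```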
<ns0:div><ns0:head>Binarization</ns0:head><ns0:p>When binarizing W with sign(W), only the binary weight B ∈ {−1, +1} n= 1 2 c×w×h for each input channel is used in the inference. In the binarized activation and weights, function sign(x) is defined as:</ns0:p><ns0:formula xml:id='formula_4'>x ∈ {I, W}, sign(x) = +1 if x ≥ 0, −1 else. (1)</ns0:formula><ns0:p>Thus, the binarized convolution output is approximated as:</ns0:p><ns0:formula xml:id='formula_5'>I × W ≈ (sign(I) ⊛ sign(W)) ⊙ α,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where α denotes the scaling factor for weights. As shown in <ns0:ref type='bibr' target='#b32'>Rastegari et al. (2016)</ns0:ref>, the scaling factor is 1 n , where n = 1 2 c × w × h. Symbols ⊛ and ⊙ mean the convolution using bitwise XNOR & bitcounting operations and element-wise scalar multiplication, respectively. After binarizing weights, the multiplication with the binarized activations is approximated using the bitwise XNOR operation. Because each operand consists of one bit, bitwise XNOR operation can perform the parallel multiplication in a bit-level. The accumulation operation can be replaced by the bit-counting operation. In Eq. ( <ns0:ref type='formula' target='#formula_5'>2</ns0:ref>), the binarized convolution only adopts the deterministically scaled weights by 1 n . Calculating the element-wise scaling factor matrix K for I ≈ sign(I) ⊙ K in the inference <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016</ns0:ref>) is a significant burden in lightweight BNNs, as described in <ns0:ref type='bibr' target='#b4'>Bulat and Tzimiropoulos (2019)</ns0:ref>. Instead, in our design, this convolution output is adjusted by the learnable affine parameters in the following BN layer.</ns0:p><ns0:p>The erroneous binarized convolution can increase unexpected dying ReLU problems. Several existing works adopted the learnable leaky activation functions <ns0:ref type='bibr' target='#b10'>(Gu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Phan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b29'>Martinez et al., 2020)</ns0:ref>. The leaky activation function allows small negative values when input is less than zero.</ns0:p><ns0:p>Therefore, we evaluated whether the leaky activation function can be suitable for the proposed model.</ns0:p><ns0:p>Evaluations were performed by changing the activation function to the LeakyReLU <ns0:ref type='bibr' target='#b28'>(Maas et al., 2013)</ns0:ref>,</ns0:p><ns0:p>ReLU <ns0:ref type='bibr' target='#b30'>(Nair and Hinton, 2010)</ns0:ref>, parametric ReLU (PReLU) <ns0:ref type='bibr' target='#b12'>(He et al., 2015)</ns0:ref> in the AresB-18 model on the CIFAR-100 dataset. In this evaluation, the negative slope of the LeakyReLU was fixed as 0.01. Top-1 final test accuracies with the LeakyReLU, ReLU, parametric ReLU (PReLU) were 73.01%, 71.94%, 71.23%, respectively. The evaluation result using the LeakyReLU outperformed other activation functions, so that we decided that the binarized convolution output passed through the LeakyReLU layer.</ns0:p><ns0:p>The first BN layer in the second subblock normalizes all features from the first subblock, where the unfiltered features from previous blocks can be adjusted in this BN layer. On the other hand, the first subblock does not have the BN as the first layer. 
We think that if all features pass through the BN layer in each subblock, errors from the repeated normalization could have negative effects, which produced 72.95% Top-1 final test accuracy on the CIFAR-100 dataset in our experiments. When the first subblock did not adopt the BN layer, 73.01% Top-1 final test accuracy was obtained, so the difference was not significant. However, additional BN layer increased computational costs, so that it was expected that there was no need to insert that layer. Therefore, we determine that a basic block has this BN layer every two subblocks in AresB-Net.</ns0:p></ns0:div>
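The equivalence between the {−1, +1} multiply-accumulate in Eq. (2) and the bitwise XNOR with bit-counting formulation, together with the deterministic scaling α = 1/n, can be checked with a few lines of plain Python. This is only an illustrative check written for this description, not part of the model code, and the filter size used is an arbitrary example.

```python
# Illustrative check: sign(I) . sign(W) equals the XNOR/popcount result, scaled by 1/n.
import random

random.seed(0)
n = (32 // 2) * 3 * 3                        # example n = (1/2)c * w * h for one filter
x = [random.gauss(0, 1) for _ in range(n)]   # one flattened receptive field
w = [random.gauss(0, 1) for _ in range(n)]   # one flattened real-valued filter

sx = [1 if v >= 0 else -1 for v in x]
sw = [1 if v >= 0 else -1 for v in w]
dot_pm1 = sum(a * b for a, b in zip(sx, sw))

# Same value via XNOR + popcount over bits: dot = 2 * popcount(xnor) - n.
popcount = sum(1 for a, b in zip(sx, sw) if a == b)
dot_xnor = 2 * popcount - n

alpha = 1.0 / n                              # deterministic weight scaling factor
print(dot_pm1 == dot_xnor, alpha * dot_pm1)  # True, approximated convolution term
```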
<ns0:div><ns0:head>5/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science other convolutions are binarized. After copying the output of the LeakyReLU layer <ns0:ref type='bibr' target='#b28'>(Maas et al., 2013)</ns0:ref>, the BN layer adjusts the features. In the first subblock of the first basic block, the channels are not shuffled because the repeated channels contain the same features. Except for the first real-valued convolution layer, the subblock having a convolution layer is connected with its shortcut, which is denoted as the rounded red arrow. The dotted round arrow indicates that the features from the previous basic block pass through the max-pooling layer with stride = 2, thus making the number of channels doubled per two basic blocks.</ns0:p><ns0:p>The second subblock of the final basic block does not concatenate the shortcut, so that the output channels are obtained just by adding the shortcut from the first BN layer to the output of the second BN layer, so that left channels are used in average pooling in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. Therefore, the number of output channels is the same with that of the baseline residual networks in <ns0:ref type='bibr' target='#b13'>He et al. (2016)</ns0:ref>. After performing the average pooling, the real-valued linear fully-connected layer makes the image classification result. The model structure for the ImageNet dataset has the same concept for stacking the basic blocks. The detail modification for the ImageNet dataset is described in the section of the experimental results and analysis.</ns0:p><ns0:p>Our model structure expands channels compared to the baseline residual CNNs <ns0:ref type='bibr' target='#b13'>(He et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Because the grouped convolution is applied to the channel expansion for concatenating features, there is no increase in computational costs. When stride = 2, the max-pooling layer obtains the downsampled real-valued features to be concatenated. Our method does not adopt 1 × 1 binarized convolutions in the downsampled shortcut to reduce storage size and computational costs. In <ns0:ref type='bibr' target='#b32'>Rastegari et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Liu et al. (2018)</ns0:ref>, the downsampled shortcut adopts 1 × 1 real-valued convolutions to preserve the information between blocks. However, we concern that the real-valued convolutions in the downsampled shortcut reduce the degree of the binarization in BNNs, which increases the memory requirements for storing weights. Besides, it is assured that the computational costs of the max-pooling layer are much smaller </ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS AND ANALYSIS</ns0:head><ns0:p>Our proposed model was trained and then tested in terms of image classification accuracy. In this experiment, the CIFAR <ns0:ref type='bibr' target='#b21'>(Krizhevsky and Hinton, 2009)</ns0:ref> and ImageNet <ns0:ref type='bibr' target='#b34'>(Russakovsky et al., 2015)</ns0:ref> datasets were adopted, where different experimental environments were setup. For apple-to-apple comparison, we adopted commonly used optimizers such as SGD (stochastic gradient descent) <ns0:ref type='bibr' target='#b3'>(Bottou, 2010)</ns0:ref> and ADAM <ns0:ref type='bibr' target='#b20'>(Kingma and Ba, 2014)</ns0:ref> optimizer in this training. Even though we did not apply the specific training scheme, it was concluded that our model could achieve significant accuracy enhancements in residual binarized neural networks.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiments on CIFAR dataset</ns0:head><ns0:p>In the training and testing, CIFAR-10 and CIFAR-100 datasets were used. The CIFAR dataset consists of 60,000 32 x 32 colour images, where 50,000 and 10,000 images are used in the training and test, respectively. Whereas the CIFAR-10 dataset is classified into 10 different classes, the CIFAR-100 dataset has 100 classes containing 600 images for each class. As the number of classes increased, it is noted that more sophisticate classification was required.</ns0:p><ns0:p>In our experiments, three different models denoted as the AresB-10, AresB-18 and AresB-34 models, were trained and then tested, where the AresB-10, AresB-18, and AresB-34 models stacked 4, 8, and 16</ns0:p><ns0:p>basic blocks, respectively. The structure of the AresB-18 model is described in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. We used SGD optimizer with momentum=0.9 and weight decay=1e − 5. Unlike Lin et al. ( <ns0:ref type='formula'>2017</ns0:ref> The solid and dashed lines represent test and training accuracies, respectively. In our experiments, the final test accuracy drops from full-precision models were ranged in 0.91%∼2.60%. As the number of stacked blocks increased, there was additional accuracy enhancement. Compared to the experiments on the CIFAR-10 dataset, classification results were more improved on the CIFAR-100 by increasing the number of stacked blocks.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, the final test accuracies are summarized, comparing with full-precision counterparts. On the CIFAR-10 dataset, the final test accuracies of the proposed model were slightly degraded over those of full-precision ResNet models. On the CIFAR-100 dataset, Top-1 accuracy of the AresB-18 model reached up to 73.01%, which degraded the classification accuracy by only 2.6% compared to the full-precision</ns0:p><ns0:p>ResNet-18 model.</ns0:p></ns0:div>
<ns0:div><ns0:head>7/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science a Full-precision (denoted as FP) counterparts of the AresB-18 and AresB-34 models are based on the evaluation results of the ResNet-18 and ResNet-34 models.</ns0:p><ns0:p>The efficiencies of the speedup and storage size were analyzed, assuming the combining factor between real-value and binary operations as 1 64 <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref>. We assumed that the scaling in the BN layer and non-linear activation (e.g., ReLU) layer for one element increase FLOPs (floating-point operations per second) by one, respectively. The FLOPs of each convolution layer were calculated based on flo (2020). The first convolution layer with RGB channels inputs and final fully-connected layer were operated on 32-bit full-precision data. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> summarizes the comparisons with other models that contain the baseline basic blocks in terms of the speedup and storage size, where ResNet-18 <ns0:ref type='bibr' target='#b13'>(He et al., 2016)</ns0:ref>, XNOR-Net <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref>, BiReal-Net <ns0:ref type='bibr' target='#b27'>(Liu et al., 2018)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Experiments on ImageNet dataset</ns0:head><ns0:p>The ImageNet dataset <ns0:ref type='bibr' target='#b34'>(Russakovsky et al., 2015)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b32'>(Rastegari et al., 2016)</ns0:ref> 1/1 Bconv 89.83% 12.9 1.41 BiReal-Net <ns0:ref type='bibr' target='#b27'>(Liu et al., 2018)</ns0:ref> 1 We trained our AresB-10, AresB-18 and AresB-34 models from scratch for 100 epochs with a batch size of 256. For ADAM optimizer <ns0:ref type='bibr' target='#b20'>(Kingma and Ba, 2014)</ns0:ref> with β = (0.9, 0.999), momentum= 0.9, and weight decay= 1e − 5. The initial learning rate lr = 0.1 was decayed by multiplying 0.1 at <ns0:ref type='bibr'>(40,</ns0:ref><ns0:ref type='bibr'>50,</ns0:ref><ns0:ref type='bibr'>65,</ns0:ref><ns0:ref type='bibr'>80)</ns0:ref> epochs. Like ResNet <ns0:ref type='bibr' target='#b13'>(He et al., 2016)</ns0:ref>, The AresB model started at the 7 × 7 convolutional layer with channel = 64 and stride = 2, followed by the 3 × 3 max-pooling layer with stride = 2. The test with ImageNet validation dataset adopted only one random crop. <ns0:ref type='bibr' target='#b25'>(Lin et al., 2017)</ns0:ref>, Bi-ReslNet-18 <ns0:ref type='bibr' target='#b27'>(Liu et al., 2018)</ns0:ref>, and MoBiNet-k4 <ns0:ref type='bibr' target='#b31'>(Phan et al., 2020)</ns0:ref> comparable to our AresB-18 pyramid structure. In Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>, FPConv and BConv denote floating-point and binarized 1 × 1 convolutions. Data in <ns0:ref type='bibr' target='#b27'>(Liu et al., 2018)</ns0:ref> assumed that XNOR-ResNet-18 adopted FPConv downsampling, which is referenced in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. Our work outperformed other works except for Bi-RealNet-18 that adopted the FPConv downsampling and needed more massive storage.</ns0:p><ns0:p>Compared to results on CIFAR datasets, because the kernel size of the real-valued first convolution layer increased, the improvements in terms of FLOPS decreased. In addition, because all models started with real-valued 7 × 7 convolution layer and ended with real-valued fully connected layer for 1000 labels, the ratio of the reduced storage by removing the real-valued 1× convolution also decreased. The MoBiNet model <ns0:ref type='bibr' target='#b31'>(Phan et al., 2020)</ns0:ref> can reduce FLOPS ≈ ×3 over other BNN-based works. However, the removal of FPConv downsampling reduced storage size significantly over those of the MoBiNet model. Therefore, we conclude that AresB-Net can have merits in reducing storage size with acceptable test accuracies.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ablation studies</ns0:head><ns0:p>We conducted ablation studies with experimental results on the CIFAR datasets.</ns0:p><ns0:p>• Effects of repeating channels in grouped convolution We performed experiments to know the effects of the increasing number of channels in the AresB-Net model. Compared to the baseline ResNet <ns0:ref type='bibr' target='#b13'>(He et al., 2016)</ns0:ref>, the basic block doubled the number of channels, but the grouped convolution maintained computational costs. By extending this idea, another experiment repeated channels and increased groups in the convolution by a factor of 2, multiplying the trainable shift and bias parameters in the BN layer with expanded channels. The experimental results enhanced overall test accuracies even though the computational costs in the grouped convolution maintained.</ns0:p><ns0:p>Compared to the original setup of the AresB-18 model, Top-5 test accuracies of the extended versions increased by 92.86% for ×2 channels and 93.07% for ×4 channels on the CIFAR-100 dataset, respectively.</ns0:p><ns0:p>• Effects of the first BN layer in the second subblock</ns0:p><ns0:p>When omitting this BN layer, we experienced the gradient exploding in training on the CIFAR-100 dataset. Without this layer, several features from previous blocks can have direct effects on the filtering results, so that our version contained this BN layer per two subblocks.</ns0:p><ns0:p>• Pooling layer in downsampling As shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> (B), the first subblock provided the downsampled shortcut with stride = 2, where 3 × 3 max-pooling layer with padding = 1 was adopted.</ns0:p><ns0:p>Different types of pooling layers were applied to the AresB-18 model on the CIFAR-10 dataset. In addition to 2 × 2 max-pooling, 2 × 2 and 3 × 3 average pooling layers were adopted in modified versions. In these evaluations, the final Top-1 classification accuracies with different pooling layers were ranged in 91.54% ∼ 91.58%, which were slightly degraded compared with the version using 3 × 3 max-pooling in the downsampling.</ns0:p></ns0:div>
<ns0:div><ns0:head>• Data augmentation</ns0:head><ns0:p>An experiment was conducted to determine how much this specific data augmentation affected the performance improvement. Without the random erasing <ns0:ref type='bibr' target='#b45'>(Zhong et al., 2017)</ns0:ref> in the data augmentation, the AresB-18 model on CIFAR-10 achieved a 91.68% Top-1 final test accuracy. Compared to the final classification result with the random erasing (91.90%), only a slight accuracy enhancement was obtained from this specific data augmentation technique. Therefore, this augmentation technique can improve the performance, but the increase was not significant, which means that the performance enhancement was mainly caused by the proposed AresB-Net model.</ns0:p></ns0:div>
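A sketch of the CIFAR-10 training configuration described in this section (SGD with momentum 0.9 and weight decay 1e-5, step decay by 0.2 at epochs 80/100/130/160, batch size 256, random crop, horizontal flip, and random erasing) is given below. The torchvision ResNet-18 is only a stand-in for the AresB-18 definition, which is not reproduced in this example, and the exact loop structure is an assumption.

```python
# Sketch of the CIFAR-10 training setup; ResNet-18 is a stand-in for AresB-18.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

train_tf = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.RandomErasing(),                      # random erasing (Zhong et al., 2017)
])
train_set = torchvision.datasets.CIFAR10("data", train=True,
                                          download=True, transform=train_tf)
loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=2)

model = torchvision.models.resnet18(num_classes=10)   # stand-in for AresB-18
opt = torch.optim.SGD(model.parameters(), lr=0.1,
                      momentum=0.9, weight_decay=1e-5)
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt, milestones=[80, 100, 130, 160], gamma=0.2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    for images, labels in loader:
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()
    sched.step()
```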
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The proposed network model achieves significant test accuracy improvements with reduced costs, by expanding output channels and applying shuffled grouped convolutions. The advantages of existing Manuscript to be reviewed</ns0:p><ns0:p>Computer Science network blocks are combined along with the convenience of making the pyramid structure. For apple-toapple comparisons, we focused on the basic block structure, so that we did not apply any specific training schemes and weight initialization. In addition, our model did not consider trainable parameters for scaling convolution outputs <ns0:ref type='bibr' target='#b4'>(Bulat and Tzimiropoulos, 2019)</ns0:ref>, tuning binary activation <ns0:ref type='bibr' target='#b39'>(Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Liu et al., 2020)</ns0:ref>, parameterized ReLU <ns0:ref type='bibr' target='#b12'>(He et al., 2015)</ns0:ref>. We definitely expect that there is no difficulty in </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>; Zhu et al. (2019); Wang et al. (2019); Hubara et al. (2017); Ghasemzadeh et al. (2018); Gu et al. (2019); Helwegen et al. (2019); Ding et al. (2019); Martinez et al. (2020), the training and optimization techniques for BNNs have been studied. Even though trainable parameters and optimizing techniques can be useful in increasing classification accuracy, this version does not consider any other specific trainable parameters and training techniques.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Proposed basic blocks: (A) basic block for stride = 1; (B) basic block for stride = 2.</ns0:figDesc><ns0:graphic coords='6,162.41,63.78,372.23,234.15' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Pyramid structured model stacking eight basic blocks denoted as the AresB-18 model for the CIFAR dataset. The number in the box and term g = 2 denote the numbers of the output channels and groups in the convolution.</ns0:figDesc><ns0:graphic coords='8,162.41,63.78,372.22,254.86' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>1 × 1 real-valued convolution. Training of proposed binarized neural networks When training our proposed AresB-Net model, weights are binarized in the forward pass and backward propagation, following the training method described in Rastegari et al. (2016). In the forwarding pass, the binarized activation and convolution are performed based on Eqs. (1) and (2). When updating parameters, real values are maintained to keep the tiny change in parameters. Especially, in the backpropagation, the derivative of the binary activation using sign() function should be approximated because the ideal derivative of sign() function is the delta function. Even though it is known that more sophisticate approximated derivatives such as Liu et al. (2018) can provide better results, we adopt the baseline straight-through-estimator in Courbariaux et al. (2016) for the training.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>);<ns0:ref type='bibr' target='#b27'>Liu et al. (2018)</ns0:ref>;<ns0:ref type='bibr' target='#b4'>Bulat and Tzimiropoulos (2019)</ns0:ref>;<ns0:ref type='bibr' target='#b29'>Martinez et al. (2020)</ns0:ref>, the real-valued pretrained weights for initializing the models were not adopted, thus starting the training from randomly initialized weights.For the data augmentation for input images, 32 × 32 input image was cropped and horizontally flipped randomly from 40 × 40 padded image (padding = 4). This random crop and random horizontal flip were applied to the evaluations of the AresB-Net and other counterparts. Then, the random erasing introduced in<ns0:ref type='bibr' target='#b45'>Zhong et al. (2017)</ns0:ref> was applied in training. However, the data augmentation above was not applied in the testing. The random erasing was only adopted in the training of the AresB-Net, so that other counterparts did not use this augmentation technique. For the CIFAR-10 dataset, we ran the training for 200 epochs with a batch size of 256. The learning rate started at 0.1 and was decayed by multiplying 0.2 at (80, 100, 130, 160) epochs. For the CIFAR-100 dataset, the training was performed for 360 epochs with a batch size of 256, where the learning rate started at 0.1 and was decayed by multiplying 0.2 at(160, 200, 260, 320) epochs. For the CIFAR-100 dataset, the dropout<ns0:ref type='bibr' target='#b37'>(Srivastava et al., 2014)</ns0:ref> layer was placed just before the fully-connected layer.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 illustrates Top-1 classification accuracies across training epochs on the CIFAR datasets.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Training and test classification accuracies across training epochs: (A) training and test Top-1 accuracies with the AresB-10, AresB-18, and AresB-34 models on the CIFAR-10 dataset; (B) training and test Top-1 accuracies with the AresB-10, AresB-18, and AresB-34 models on the CIFAR-100 dataset.</ns0:figDesc><ns0:graphic coords='10,162.41,63.78,372.23,147.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>are compared. Because there was no 1 × 1 convolution in the downsampling, the FLOPs and storage size of our model became the smallest. Besides, TOP-1 test accuracy increased by 2.07% on the CIFAR-10 dataset. Compared to the theoretical speedup on ImageNet database in<ns0:ref type='bibr' target='#b32'>Rastegari et al. (2016)</ns0:ref>, the computation ratio of the first real-valued convolution layer was lower on the CIFAR dataset. The speedup over real-valued ResNet-18 was calculated byFLOPS(ResNet−18) FLOPS(AresB−18) ≈ ×44.73 on the CIFAR dataset, which was smaller than ×62.27 speedup on ImageNet database.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Training and test classification accuracies across training epochs: (A) training and test Top-1 accuracy with the AresB-10, AresB-18, and AresB-34 models on the ImageNet dataset; (B) training and test Top-5 accuracy with the AresB-10, AresB-18, and AresB-34 models on the ImageNet dataset.</ns0:figDesc><ns0:graphic coords='11,162.41,396.77,372.23,147.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 illustrates Top-1 and Top-5 classification accuracies across training epochs on the Imagenet datasets. The solid and dashed lines represent test and training accuracies, respectively. The validation images were used to test the trained model. Like the baselined pyramid structure in He et al. (2016), as the number of stacked blocks increased, accuracies were enhanced. Compared to the test accuracies in Fig.3,</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>applying state-of-the-art training schemes and tuning methods to our model. When adopting the basic training optimization and training from scratch, our model can achieve acceptable performance for the CIFAR and ImageNet datasets and reduce hardware costs by removing 1 × 1 downsampling. Notably, this proposed model achieves significant benefits in terms of storage size and speedup on CIFAR datasets. By removing the intervention of the real-valued 1 × 1 convolution in the middle of operating a BNN model, BNN's operating steps become more simple. Most of all, it is concluded that the proposed model can provide good classification results with low computational costs.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of weight and output tensor sizes in basic blocks block type input tensor size subblock weight size a output tensor size stride</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of test accuracies (%) on CIFAR datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Model</ns0:cell><ns0:cell cols='6'>Top-1 Top-5 FP Top-1 a FP Top-5 a Top-1 gap Top-5 gap</ns0:cell></ns0:row><ns0:row><ns0:cell>a</ns0:cell><ns0:cell cols='2'>AresB-10 90.74</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell cols='2'>AresB-18 91.90</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>93.02</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.12</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>AresB-34 92.71</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>93.62</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>a AresB-10 69.45 91.70</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-100</ns0:cell><ns0:cell cols='3'>AresB-18 73.01 92.57</ns0:cell><ns0:cell>75.61</ns0:cell><ns0:cell>93.05</ns0:cell><ns0:cell>2.60</ns0:cell><ns0:cell>0.48</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>AresB-34 74.73 93.25</ns0:cell><ns0:cell>76.76</ns0:cell><ns0:cell>93.37</ns0:cell><ns0:cell>2.03</ns0:cell><ns0:cell>0.12</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>contains 1.2 million training and 50,000 validation color images classified into 1,000 categories. The image sizes were not fixed, so that images were resized into 256 × 256 images in the initial conversion. Then, each image was obtained by cropping the original image in the range of 0.466 ∼ 0.875 and resized by 224 × 224. After applying the random horizontal flip, images were used in training. The random erasing in<ns0:ref type='bibr' target='#b45'>Zhong et al. (2017)</ns0:ref> was not applied in training, unlike the cases on the CIFAR dataset. Therefore, the same augmentation scheme described in He et al.(2016) was adopted in the AresB-Net model. Additionally, the dropout layer was not adopted. When doing inference, 224 × 224 images were cropped from the center of original images without random flip.</ns0:figDesc><ns0:table /><ns0:note>8/13PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:note></ns0:figure>
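The ImageNet pipeline described in this note (crop a region covering roughly 0.466–0.875 of the image, resize to 224 × 224, and apply a random horizontal flip for training; a 224 × 224 center crop from a 256-pixel resize for inference, with no random erasing) maps approximately onto the torchvision transforms below; treating the 0.466–0.875 range as the area scale of RandomResizedCrop is our reading, not a statement of the authors' exact conversion code.

```python
from torchvision import transforms

# Training-time ImageNet preprocessing (no random erasing here).
imagenet_train = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.466, 0.875)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Inference: resize, then take a 224 x 224 crop from the center, without flipping.
imagenet_val = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```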
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison with other models containing baseline basic blocks on CIFAR-10 dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>W /A b down sampling c</ns0:cell><ns0:cell cols='3'>Top-1 d storage FLOPS (Mbits) (×10 7 )</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet-18 (He et al., 2016) a</ns0:cell><ns0:cell>32/32 FPconv</ns0:cell><ns0:cell>93.02%</ns0:cell><ns0:cell>358</ns0:cell><ns0:cell>58.6</ns0:cell></ns0:row><ns0:row><ns0:cell>XNOR-Net</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Terms W and A denote the precision of target weights and activation. c Prefix FP and B mean the full-precision and binarized 1 × 1 convolutions, respectively. d Top-1 accuracy indicates the final Top-1 test accuracy on the CIFAR-10 dataset. Comparison with other models on ImageNet dataset(%)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>/1</ns0:cell><ns0:cell>FPconv</ns0:cell><ns0:cell>89.30%</ns0:cell><ns0:cell>18.2</ns0:cell><ns0:cell>3.82</ns0:cell></ns0:row></ns0:table><ns0:note>b a When the model is trained from scratch, a pretrained model are not used in the weight initialization.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>summarizes test accuracies and other important characteristics with XNOR-ResNet-18 (Rastegari et al., 2016), ABC-Net-res18</ns0:note></ns0:figure>
<ns0:note place='foot' n='4'>/13 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55497:1:1:NEW 8 Feb 2021)</ns0:note>
</ns0:body>
" | "Response to the reviewers
Thank you for offering the opportunity to revise the paper #55497. The paper has been revised with
careful attention to paper format, language, terminology, conveyed meaning, and grammar.
Several typos and pieces of missing information have been corrected. In addition, the revised paper
addresses the academic editor’s and all reviewers’ comments.
Editor
Point 0.1 — As reviewer #1 indicated, authors are encouraged to provide detailed explanations
on terminology and experiments. Experimental results should be elaborated. Please provide more
detailed results.
Reply: According to the reviewers’ comments, we have added explanations of the terminology and of the
number of parameters used to describe the basic block and model structure. For details of the experimental
environment, additional explanations about the data augmentation and the ablation studies have been included in
the EXPERIMENTAL RESULTS AND ANALYSIS section. We assure you that this revised manuscript has
considered all of the reviewers’ concerns.
Reviewer 1
Point 1.1 — (Page 4 line 127-159) Even though I have sufficient background in CNN and BNN
research, it has been very difficult for me to understand what exactly was done in the novel basic
block. To make this work accessible, it would be necessary to carefully rework the presentation. For
example, the output sizes of the basic block are explained thoroughly, however, it is recommended
to add a table describing the output sizes of the basic block to make it more readily understandable.
Reply: Thanks for your good points. In agreement with your concerns, the explanation has been expanded
and refined. The number of parameters in the weight filter, denoted as the weight filter size, and the output
tensor size have been explained depending on the subblock and stride as follows:
(Lines 165 ∼ 183) In the first subblock, the deterministic binarized activation sign(I) rounds
I into {−1, +1}^(c × w_in × h_in). When the weight filter for each input channel has the width of w and
the height of h, the real-valued weight filter for an output channel is denoted as W ∈ R^n, where
n = (1/2)c × w × h, w ≤ w_in, and h ≤ h_in. In the BinConv3x3 layer, w = 3 and h = 3, respectively.
Because the BinConv3x3 layer performs the grouped convolution (g = 2), (1/2)c input channels are adopted
in each group.

Depending on the stride, the number of output channels from the first subblock is different. As
shown in Figure 1, whereas the first BinConv3x3 layer of stride = 1 has (1/2)c output channels, that
of stride = 2 has c output channels. When stride = 1, the c input channels for the second subblock
are produced by concatenating the shortcut from (1/2)c shuffled input channels. On the other hand,
for stride = 2, 2c input channels are produced for the second subblock, where c channels from the
max-pooling layer are concatenated to produce the 2c channels. By applying stride = 2, the width and
height of the feature from the subblock are w_in/2 and h_in/2, respectively. In the second subblock, the
numbers of input and output channels are the same. Therefore, the output tensor sizes for stride = 1
and stride = 2 are c × w_in × h_in and 2c × (w_in/2) × (h_in/2), respectively.

In a group of the BinConv3x3 layer, when the numbers of input and output channels are both equal
to c/2, the number of parameters for the group is (1/4)c² × w × h, and the total number of parameters
for the two groups is (1/2)c² × w × h. When the number of output channels in a group is doubled to
c in the first subblock for stride = 2, the total number of parameters for the two groups is c² × w × h.
Table 1 summarizes the number of parameters in the weight filters, denoted as weight size, and the output
tensor sizes in the basic blocks.

Table 1: Summary of basic blocks’ weight filter and output sizes (input tensor size: c × w_in × h_in)

    block type   subblock   weight size[a]      output tensor size
    stride = 1   first      (1/4)c² × w × h     c × w_in × h_in
    stride = 1   second     (1/4)c² × w × h     c × w_in × h_in
    stride = 2   first      (1/2)c² × w × h     2c × (w_in/2) × (h_in/2)
    stride = 2   second     c² × w × h          2c × (w_in/2) × (h_in/2)

    [a] Weight size denotes the number of weight filter’s parameters.
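As a quick sanity check on the expressions above, the standard weight count of a two-group convolution (groups × (C_in/groups) × (C_out/groups) × k × k, no bias) reproduces the (1/4)c², (1/2)c², and c² terms for (in, out) channel counts of, for example, (c, c/2), (c, c), and (2c, c). The short script below only evaluates this textbook formula; the choice c = 64 is an illustrative assumption and the code is not taken from the authors' implementation.

```python
def grouped_conv_params(c_in, c_out, k=3, groups=2):
    """Weight count of a grouped k x k convolution without bias:
    each group maps c_in/groups input channels to c_out/groups output channels."""
    return groups * (c_in // groups) * (c_out // groups) * k * k

c = 64  # illustrative channel width (assumption, not from the paper)
print(grouped_conv_params(c, c // 2))  # (1/4) * c^2 * 3 * 3 = 9216
print(grouped_conv_params(c, c))       # (1/2) * c^2 * 3 * 3 = 18432
print(grouped_conv_params(2 * c, c))   #         c^2 * 3 * 3 = 36864
```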
Point 1.2 — Please provide some more explanation or revise the terminologies. (line141 subblocki,
line142 1/n Symbols *)
Reply: According to your concern, we have provided additional explanation and revised terminology.
Notably, because term n was used to indicate the scaling factor, we have used num(I) instead to denote
the number of features produced from a subblock. The revised explanation has been added as follows:
(Lines 151 ∼ 155) When half of the features produced from a subblock are concatenated to the
output channels of the next subblock, those features are not filtered in the next subblock. We
denote the number of input features from a subblock by num(I). In the output channels of the next
subblock, num(I)/2 features are unfiltered. As the features go through k subblocks, num(I)/2^k features
are also unfiltered.
Point 1.3 — Abbreviation of BinConv3x3 layer should be properly represented (Page 4 line 131).
Reply: In agreement with your concerns, a brief explanation of BinConv3x3 layer has been added as
follows:
(Lines 138 ∼ 139) The BinConv3x3 layer stands for the binarized convolution layer with 3 × 3 sized
filter.
Point 1.4 — Please enrich the manuscript with references or detailed experimental results for
these; The evaluation using LeakyReLU having better performances than those using normal ReLU
(Page 4 line 151-154) and the better classification accuracy with BN layer every two subblocks (Page
4 line 158)
Reply: Thanks for good points. We performed the experiments to find the optimized structure of a
basic block. The additional explanation has been added as follows:
(Lines 195 ∼ 213) The erroneous binarized convolution can increase unexpected dying-ReLU problems.
Several existing works adopted learnable leaky activation functions (Gu et al. 2019; Phan et
al. 2020; Martinez et al. 2020). The leaky activation function allows small negative values when
the input is less than zero. Therefore, we evaluated whether the leaky activation function is suitable
for the proposed model. Evaluations were performed by changing the activation function to the
LeakyReLU (Maas et al. 2013), ReLU (Nair and Hinton 2010), and parametric ReLU (PReLU) (He
et al. 2015) in the AresB-18 model on the CIFAR-100 dataset. In this evaluation, the negative
slope of the LeakyReLU was fixed at 0.01. The Top-1 final test accuracies with the LeakyReLU, ReLU,
and PReLU were 73.01%, 71.94%, and 71.23%, respectively. The evaluation result using
the LeakyReLU outperformed the other activation functions, so we decided to pass the binarized
convolution output through the LeakyReLU layer.
The first BN layer in the second subblock normalizes all features from the first subblock, where
the unfiltered features from previous blocks can be adjusted in this BN layer. On the other hand, the
first subblock does not have a BN layer as its first layer. We think that if all features pass through a
BN layer in each subblock, errors from the repeated normalization could have negative effects; this
configuration produced a 72.95% Top-1 final test accuracy on the CIFAR-100 dataset in our experiments. When the
first subblock did not adopt the BN layer, a 73.01% Top-1 final test accuracy was obtained, so the
difference was not significant. However, the additional BN layer increased computational costs, so
it was expected that there was no need to insert that layer. Therefore, we determined that a basic
block has this BN layer every two subblocks in AresB-Net.
Point 1.5 — In page 7 line 249, it is indicated that the speedup over real-valued ResNet-18 on
CIFAR dataset can be over 44%. Please clarify.
Reply: According to your concerns, the speedup calculation and its explanation has been clarified as:
(Lines 303 ∼ 307) Compared to the theoretical speedup on the ImageNet database in Rastegari et al.
(2016), the computation ratio of the first real-valued convolution layer was lower on the CIFAR
dataset. The speedup over real-valued ResNet-18 was calculated as FLOPS(ResNet−18) / FLOPS(AresB−18) ≈ ×44.73
on the CIFAR dataset, which was smaller than the ×62.27 speedup on the ImageNet database.
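For a rough consistency check of the quoted numbers, dividing the full-precision ResNet-18 FLOPs listed in Table 3 (58.6 × 10^7 on CIFAR-10) by the ≈ ×44.73 speedup implies about 1.3 × 10^7 equivalent FLOPs for AresB-18; the snippet below only restates that arithmetic and does not reproduce the authors' FLOP-counting convention.

```python
flops_resnet18 = 58.6e7      # full-precision ResNet-18 FLOPs on CIFAR-10 (Table 3)
speedup_cifar = 44.73        # quoted speedup of AresB-18 over ResNet-18 on CIFAR
speedup_imagenet = 62.27     # quoted speedup on ImageNet, for comparison

implied_aresb18 = flops_resnet18 / speedup_cifar
print(f"Implied AresB-18 equivalent FLOPs on CIFAR: {implied_aresb18:.3g}")  # ~1.31e+07
```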
Point 1.6 — Please provide the correct reference for the calculation method of the FLOPS of
convolution layers (Page 6 line 241).
Reply: Thanks for checking the reference. I have revised the bibtex-formatted reference to make the
link correct as follows:
@misc{flopscnn,
  title        = {flops-cnn},
  howpublished = {https://github.com/sagartesla/flops-cnn},
  note         = {Accessed: 2020-06-03},
  year         = {2020}
}
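For readers who want to reproduce the FLOP counts, a common convention is to charge each output element of a convolution with (C_in/groups) × k_h × k_w multiply–accumulates and to count a multiply–accumulate as two operations; whether the referenced flops-cnn script follows exactly this convention is an assumption on our part, so the sketch below is only indicative.

```python
def conv2d_flops(c_in, c_out, k_h, k_w, h_out, w_out, groups=1, ops_per_mac=2):
    """Approximate FLOPs of a 2-D convolution layer (bias ignored)."""
    macs = h_out * w_out * c_out * (c_in // groups) * k_h * k_w
    return ops_per_mac * macs

# Example: a 3 x 3 grouped convolution with 64 input/output channels on a 32 x 32 map.
print(conv2d_flops(c_in=64, c_out=64, k_h=3, k_w=3, h_out=32, w_out=32, groups=2))
```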
Point 1.7 — In Section Ablation studies, four case studies are described. However, experimental
results are not really mentioned thoroughly. Please provide some more detailed results. For example,
it is mentioned that slight accuracy enhancements were shown with the random erasing for data
augmentation. Please provide more detailed experimental results and why you chose this method
for data augmentation.
Reply: Thanks for your good points. For details of the data augmentation, we have provided an additional
explanation in the Ablation studies subsection as follows:
(Lines 367∼373) An experiment was conducted to determine how much this specific data augmentation
affected the performance improvement. Without the random erasing (Zhong et al., 2017) in the
data augmentation, the AresB-18 model on CIFAR-10 achieved a 91.68% Top-1 final test accuracy.
Compared to the final classification result with the random erasing (91.90%), slight accuracy
enhancements were shown with this specific data augmentation technique. Therefore, it was expected
that this augmentation technique could improve the performance, but the increase was not significant,
which means that the performance enhancement was mainly caused by the proposed AresB-Net model.
For a detailed explanation of the pooling-layer setup, the description of the pooling layer in
the downsampling has been revised in the Ablation studies subsection as follows:
(Lines 359∼365) As shown in Figure 1 (B), the first subblock provided the downsampled shortcut
with stride = 2, where a 3 × 3 max-pooling layer with padding = 1 was adopted. Different types of
pooling layers were applied to the AresB-18 model on the CIFAR-10 dataset. In addition to 2 × 2
max-pooling, 2 × 2 and 3 × 3 average pooling layers were adopted in modified versions. In these
evaluations, the final Top-1 classification accuracies with the different pooling layers ranged from
91.54% to 91.58%, which was slightly degraded compared with the version using 3 × 3 max-pooling
in the downsampling.
We have revised the explanation about the effects of repeating channels in the grouped convolution in the
Ablation studies subsection as follows:
(Lines 345∼354) We performed experiments to study the effects of increasing the number of channels
in the AresB-Net model. Compared to the baseline ResNet (He et al., 2016), the basic block doubled
the number of channels, but the grouped convolution maintained the computational costs. By extending
this idea, another experiment repeated the channels and increased the groups in the convolution by a factor
of 2, multiplying the trainable shift and bias parameters in the BN layer with the expanded channels.
The experimental results enhanced the overall test accuracies even though the computational costs in
the grouped convolution were maintained. Compared to the original setup of the AresB-18 model, the Top-5
test accuracies of the extended versions increased to 92.86% for ×2 channels and 93.07% for ×4
channels on the CIFAR-100 dataset, respectively.
Thanks for your brilliant comments. We assure you that your ideas have been helpful for
increasing the quality of this paper.
Reviewer 2
Point 2.1 — For me, the section, ARESB-NET: ACCURATE RESIDUAL BINARIZED NEURAL
NETWORK MODEL, is quite not easy to understand. I think it would be better to rearrange the
contents.
Reply: Thanks for your comments. We have revised this section significantly and defined several
terms for better understanding. Notably, we have modified several points as follows:
1. We have rearranged the contents into four subsections. The Binarization subsection has described
the binarization in the basic block.
2. Additional details about the basic block have been added, where the weight size and output tensor
size of each subblock for stride = 1 and stride = 2 have been explained in the Proposed basic block
subsection. The weight and output tensor sizes have been formulated depending on the input tensor
size. Table 1 has been added to summarize the weight and output tensor sizes in basic blocks.
3. Explanations of several terms have been added. Firstly, the term BinConv3x3 has been explained as:
(Lines 138∼139) The BinConv3x3 layer stands for the binarized convolution layer with a 3 × 3
sized filter.
Secondly, we have explained the calculation of the scaling factor for c input channels as:
(Lines 185∼186) The scaling factor is 1/n, where n = (1/2)c × w × h.
4. For a better understanding of the basic block in Figure 1, the following explanation has been added in the
Proposed basic block subsection:
(Lines 165∼183) In the first subblock, the deterministic binarized activation sign(I) rounds I
into {−1, +1}^(c × w_in × h_in). When the weight filter for each input channel has the width of w and
the height of h, the real-valued weight filter for an output channel is denoted as W ∈ R^n, where
n = (1/2)c × w × h, w ≤ w_in, and h ≤ h_in. In the BinConv3x3 layer, w = 3 and h = 3, respectively.
Because the BinConv3x3 layer performs the grouped convolution (g = 2), (1/2)c input channels are adopted in
each group.
Depending on the stride, the number of output channels from the first subblock is different.
As shown in Figure 1, whereas the first BinConv3x3 layer of stride = 1 has (1/2)c output channels,
that of stride = 2 has c output channels. When stride = 1, the c input channels for the second
subblock are produced by concatenating the shortcut from (1/2)c shuffled input channels. On the
other hand, for stride = 2, 2c input channels are produced for the second subblock, where c
channels from the max-pooling layer are concatenated to produce the 2c channels. By applying
stride = 2, the width and height of the feature from the subblock are w_in/2 and h_in/2, respectively.
In the second subblock, the numbers of input and output channels are the same. Therefore,
the output tensor sizes for stride = 1 and stride = 2 are c × w_in × h_in and 2c × (w_in/2) × (h_in/2),
respectively.
In a group of the BinConv3x3 layer, when the numbers of input and output channels are both equal
to c/2, the number of parameters for the group is (1/4)c² × w × h, and the total number of
parameters for the two groups is (1/2)c² × w × h. When the number of output channels in a
group is doubled to c in the first subblock for stride = 2, the total number of parameters for
the two groups is c² × w × h. Table 1 summarizes the number of parameters in the weight filters,
denoted as weight size, and the output tensor sizes in basic blocks.
Point 2.2 — Also it would be better to mention the reason for using the block for stride=2 and
why the number of output channels is doubled.
Reply: In agreement with your concerns, we have added an explanation about the pyramid structure
using stride > 1 and doubled channels in the ARESB-NET: ACCURATE RESIDUAL BINARIZED NEURAL
NETWORK MODEL section. The baseline ResNet and XNOR ResNet models simply adopt
stride = 2 and double the channels in the downsampling. Even though several papers such as PyramidNet
use different factors to increase the channels, we did not consider such a specific pyramidal structure in this
paper. Thus, our design also follows this pyramidal method using a factor of 2. The added explanation
is as follows:
(Lines 125∼131) These basic blocks can be stacked using a pyramid structure. Most CNN
structures reduce the width and height of feature maps and increase the number of channels when
they encounter a layer with downsampling (stride > 1). Because the baseline residual networks
(He et al. 2016) and XNOR ResNet (Rastegari et al. 2016) simply adopt stride = 2 and double the
channels in the downsampling, the AresB-Net also follows this pyramidal method using a factor of 2.
In this downsampling, whereas the width and height of the features are reduced by half, the number of
channels is doubled. Therefore, the number of computations on each layer is kept similar.
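The statement that per-layer computation stays roughly constant after downsampling follows from the convolution cost being proportional to C_in × C_out × H × W for a fixed kernel size: doubling both channel counts multiplies the cost by four, while halving both spatial dimensions divides it by four. A tiny check with illustrative numbers (our own, not from the paper):

```python
def conv_cost(c_in, c_out, h, w, k=3):
    # cost proportional to the number of multiply-accumulates in a k x k convolution
    return c_in * c_out * h * w * k * k

before = conv_cost(64, 64, 32, 32)     # before downsampling
after = conv_cost(128, 128, 16, 16)    # channels doubled, width/height halved
print(before == after)                 # True: the per-layer cost is unchanged
```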
Point 2.3 — Please be clear whether the experiments were conducted under the same conditions (using the same data augmentation technique?)
Reply: Thanks for your good points. In the experiments on the counterparts, we referred to several existing
uploaded codes and data in the references, which applied the random crop and random flip like the
AresB-Net model. Differently, the random erasing introduced in Zhong et al. (2017) was only applied in
the AresB-Net model on the CIFAR dataset, which has been clearly described in the Experiments on CIFAR
dataset subsection as follows:
(Lines 276∼277) The random erasing was only adopted in the training of the AresB-Net, so that
the other counterparts did not use this augmentation technique.
For an apples-to-apples comparison, we explain the results without the data augmentation in the Ablation
studies subsection, where the AresB-18 model achieved a 91.68% Top-1 final test accuracy. This showed
that the random erasing can enhance the final classification performance slightly, compared to the 91.90%
Top-1 final result with the random erasing. This explanation has been described in more detail in the Ablation studies
subsection as follows:
(Lines 367∼373) An experiment was conducted to determine how much this specific data augmentation
affected the performance improvement. Without the random erasing (Zhong et al., 2017) in the
data augmentation, the AresB-18 model on CIFAR-10 achieved a 91.68% Top-1 final test accuracy.
Compared to the final classification result with the random erasing (91.90%), slight accuracy
enhancements were shown with this specific data augmentation technique. Therefore, it was expected
that this augmentation technique could improve the performance, but the increase was not significant,
which means that the performance enhancement was mainly caused by the proposed AresB-Net
model.
On the other hand, the experiments on the ImageNet dataset did not adopt the random erasing,
which has been clearly described in Experiments on ImageNet dataset subsection as follows:
(Lines 314 ∼ 315) Therefore, the same augmentation scheme described in He et al. (2016) was
adopted in the AresB-Net model.
Point 2.4 — Is the storage-saving effect coming from the absence of 1x1 convolution is enough to
account for the reduction in accuracy? (the storage is not significantly different from Bi-RealNet-18)
Reply: Thanks for the good question! It was thought that the removal of the real-valued 1 × 1 convolution
was very important in developing BNN models. Based on the first manuscript, the reason can be explained
as follows: in the experiments on the ImageNet database, because all models started with a real-valued
7 × 7 convolution layer and ended with a real-valued fully connected layer with 1000 labels, the storage
reduction from removing the real-valued 1 × 1 convolution seemed small, and the operating speedup looked
negligible. However, on a mid-sized dataset (e.g., the CIFAR dataset), the ratio of reduced storage was
significant. In our experiments on the CIFAR-10 dataset, the storage was reduced by 30%. Additionally,
the speedup reached up to ∼ ×2.8 by removing the real-valued 1 × 1 convolution layers. Moreover, the
real-valued 1 × 1 convolution must be interposed in the middle of operating a BNN model, which
could make BNN’s operating steps more complex.
To describe the reason why the ratio of the reduced storage decreased, the following explanation has
been added in the Experiments on ImageNet dataset subsection:
(Lines 337 ∼ 339) In addition, because all models started with a real-valued 7 × 7 convolution layer
and ended with a real-valued fully connected layer for 1000 labels, the ratio of the storage reduced by
removing the real-valued 1 × 1 convolution also decreased.
To show the significant benefits on the CIFAR dataset, the following expression has been added in the
Conclusion section:
(Lines 384 ∼ 387) Notably, this proposed model achieves significant benefits in terms of storage size
and speedup on CIFAR datasets. By removing the intervention of the real-valued 1 × 1 convolution
in the middle of operating a BNN model, BNN’s operating steps become more simple.
Point 2.5 — I recommend analyzing more the reason why the proposed network shows good
performance without 1x1 convolution
Reply: We think that the shuffle layer can group the information across input channels, mixing the
different types of features from the previous subblock. Because the ShuffleNet (Zhang et al. 2018) did not
adopt the 1 × 1 convolution, the motivating explanation has been added in the Proposed basic block
subsection as follows:
(Lines 147 ∼ 149) In this shuffling, the information crossing input channels is mixed in the next
grouped convolution, which is helpful for removing the real-valued 1 × 1 convolution, as in Zhang et al.
(2018).
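For reference, the channel shuffle mentioned here is usually implemented ShuffleNet-style as a reshape–transpose–reshape over the channel dimension; the generic sketch below shows that operation for g = 2 groups and is not claimed to be the authors' exact implementation.

```python
import torch

def channel_shuffle(x, groups=2):
    """Interleave channels across groups so that the next grouped convolution
    mixes information originating from different groups."""
    n, c, h, w = x.size()
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)  # split channels into groups
    x = x.transpose(1, 2).contiguous()        # swap the group and per-group channel axes
    return x.view(n, c, h, w)                 # flatten back to (n, c, h, w)

# Example: shuffling an 8-channel map with two groups interleaves the two halves.
x = torch.arange(8, dtype=torch.float32).view(1, 8, 1, 1)
print(channel_shuffle(x).flatten().tolist())  # [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]
```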
In the Ablation studies section, the channel expansion of the AresB-Net multiplied the trainable shift
and biasing parameters, which gave more optimizing points in the binarized convolution. Compared to
the baseline ResNet and XNOR ResNet models, because the AresB-Net model increased the number of
channels without increasing the computation costs, more trainable shift and biasing parameters were given.
The explanation above has been added in the Proposed basic block subsection as follows:
(Lines 159 ∼ 161) Compared to the ResNet (He et al., 2016) and XNOR ResNet (Rastegari et al., 2016)
models, the proposed model contains doubled shifting and biasing parameters, which could give more
optimizable points in training.
Thanks for your brilliant comments. We assure you that your ideas have been helpful for
increasing the quality of this paper.
" | Here is a paper. Please give your review comments after reading it. |
31 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Access control is a critical aspect for improving the privacy and security of IoT systems. A consortium is a public or private association or a group of two or more institutes, businesses, and companies that collaborate to achieve common goals or form a resource pool to enable the sharing economy aspect. However, most access control methods are based on centralized solutions, which may lead to problems like data leakage and singlepoint failure. Blockchain technology has its intrinsic feature of distribution, which can be used to tackle the centralized problem of traditional access control schemes. Nevertheless, blockchain itself comes with certain limitations like the lack of scalability and poor performance. To bridge the gap of these problems, here we present a decentralized capability-based access control architecture designed for IoT consortium networks named IoT-CCAC. A blockchain-based database is utilized in our solution for better performance since it exhibits favorable features of both blockchain and conventional databases. The performance of IoT-CCAC is evaluated to demonstrate the superiority of our proposed architecture. IoT-CCAC is a secure, salable, effective solution that meets the enterprise and business's needs and adaptable for different IoT interoperability scenarios.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>As we step into the Internet of Things (IoT) era where ubiquitous objects are connected, the number of IoT devices has witnessed an unprecedented increase. According to Juniper Research, there will be more than 46 billion IoT devices in 2021 <ns0:ref type='bibr'>(Juniper Research, 2016)</ns0:ref>. The proliferation of the IoT has brought many benefits to us, boosting various technologies such as smart home <ns0:ref type='bibr' target='#b6'>(Dhelim et al., 2018)</ns0:ref> and smart city <ns0:ref type='bibr' target='#b5'>(Camero and Alba, 2019)</ns0:ref>. However, both current and future IoT systems also cause concerns in terms of security and privacy <ns0:ref type='bibr' target='#b15'>(Xu et al., 2018b)</ns0:ref>. Specifically, malicious users may gain access to devices that do not belong to them, deliberately tamper data, and even steal valuable information. As a countermeasure, access control for IoT has been a popular research topic and a crucial aspect of IoT security and privacy <ns0:ref type='bibr' target='#b12'>(Singh et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Ouaddah et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bouras et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Conventional access control methods (e.g., role-based access control (RBAC), attribute-based access control (ABAC), capability-based access control (CBAC)) have been widely applied to IT systems <ns0:ref type='bibr' target='#b15'>(Xu et al., 2018b)</ns0:ref>. Compared to the two schemes, CBAC is relatively more lightweight as it uses a communicable and unforgeable token of authority, which associates an object with corresponding access rights. However, one drawback of the original CBAC is that a token can only be granted to one subject, which may cause low efficiency and calls for a proper solution. Also, these access control methods mostly rely on centralized solutions, which may lead to several problems. Firstly, central management may end up with single-point failures because many systems suffer from security issues related to the tools used to manage the platforms. Secondly, the reliance on a central server or a third party gives them access to perform checks on stored data, which could lead to privacy leakage. Third, such centralized system are not designed for a consortium applications as the transparency is omitted.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54422:1:1:NEW 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Blockchain keeps all transaction records through a peer-to-peer network as a distributed ledger.</ns0:p><ns0:p>It is essentially a growing list of records (i.e., blocks) linked to the previous block via cryptography.</ns0:p><ns0:p>Blockchain possesses various features (e.g., decentralization, tamper-proof, security) that make it a trustable alternative infrastructure for access control systems. Thus, when integrated with the blockchain technique, access control can bring the following favorable advantages: a) Help eliminate third parties, solve single-point failures and other centralized management problems; b) Have access to trustable and unmodifiable history logs; c) Consensus mechanisms are applied that only valid transactions are recorded on the blockchain; d) Smart contracts can help monitor and enforce access permissions under complex conditions.</ns0:p><ns0:p>However, the use of blockchain for IoT access control also comes with some limitations. First of all, blockchain is not designed to store a significant volume of data, which usually requires the proper integration of on-chain and off-chain databases to handle specific tasks. Second, the transactions in public blockchain can be viewed by anyone which does not accommodate the need of a consortium enterprise network because its transactions must be private and only accessible to consortium members. Although private blockchain (e.g., Ethereum private blockchain, Hyperledger Fabric) has been developed to solve this problem, it is not the only viable solution -a blockchain database (e.g., <ns0:ref type='bibr'>BigchainDB (McConaghy et al., 2016)</ns0:ref>) can do the same with the even better performance <ns0:ref type='bibr' target='#b13'>(Tseng et al., 2020)</ns0:ref>. Third, performance and scalability have always been two significant problems of blockchain technology. Regardless transaction execution and validation performance has been improved recently by introducing lighter consensus mechanisms <ns0:ref type='bibr' target='#b3'>(Biswas et al., 2019)</ns0:ref>, and more efficient transaction scheme such as Hyperledger Fabric <ns0:ref type='bibr' target='#b2'>(Androulaki et al., 2018)</ns0:ref>, the performance and scalability of the blockchain-based access control solutions still cannot compete with the current centralized solutions.</ns0:p><ns0:p>Therefore, based on the aforementioned limitations of existing blockchain-based access control methods, here we present an enhanced blockchain-based capability access control architecture for IoT named IoT-CCAC, IoT Consortium Capability-based Access Control Model. In our design, we focus on interoperability and data exchange by organizing the access control data in form of assets (physical devices), services (collaborative applications), and profiles (the representation of the asset inside a service) to make the solution granular and flexible taking in consideration fast growing and the scalability of IoT.</ns0:p><ns0:p>In addition, we introduce the concept of statement, which can be granted to a subject or a group of subjects as a single capability token or group capability token. Different from the other IoT capability based access control methods, our solution is designed for consortium networks instead of personal networks. 
Based on the aforementioned limitations of blockchain, we further investigate the blockchain based database that combines the security properties of blockchain and the performance advantage of a database and use it as a backbone of the proposed access control. The contributions of this paper mainly include:</ns0:p><ns0:p>• Faced with the centralized problem of most existing IoT access control methods and the limitation of current blockchain-based solutions, we present an enhanced decentralized capability-based access control architecture for consortium applications named IoT-CCAC.</ns0:p><ns0:p>• The notation of the group capability token is introduced as a measure to improve the conventional capability-based solutions and works.</ns0:p><ns0:p>• We discuss the IoT access control data registry requirements, and we present the blockchain-based database integration architecture.</ns0:p><ns0:p>• The proposed approach is implemented and evaluated in proof-of-concept prototype. The results shows IoT-CCAC is fast, secure and can scale and support IoT city and business applications.</ns0:p><ns0:p>The remainder of this paper is organized as follows. Section 2 presents related works of blockchainbased IoT capability access control solutions. Section 3 presents the IoT-CCAC architecture and define it's components, token generation protocol and authorization scheme. Section 4 discusses the requirements of IoT access control data registry and the blockchain-based database integration. In Section 5, we implement and evaluate the prototype of our proposed approach and discuss it's security and performance aspects. We complete our work with a conclusion and an outlook for the future and following works.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54422:1:1:NEW 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>RELATED WORKS</ns0:head><ns0:p>In this section, we mainly summarize some research on the integration of blockchain and CBAC for IoT. Specifically, CBAC is selected considering its relative advantages over RBAC and ABAC. For instance, by leveraging CBCA, a subject can complete its task using the minimum of access rights (i.e., the principle of least privilege) <ns0:ref type='bibr' target='#b9'>(Nakamura et al., 2019)</ns0:ref>. The detailed comparison of the three access control methods is summarized below (see Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>) in terms of their corresponding explanation, scalability, heterogeneity, dynamicity, lightweight, flexibility, and granularity.</ns0:p></ns0:div>
<ns0:div><ns0:head>AC approach</ns0:head><ns0:p>Role-based AC Attribute-based AC Capability-based AC Description Employs pre-defined roles that carry a specific set of privileges. To grant access you have to give the object a role.</ns0:p><ns0:p>Uses policies which are defined according to a set of selected attributes from the user, subject, resource, and environment attributes and so on.</ns0:p><ns0:p>Uses a communicable, unforgeable token of authority. The token references an object along with an associated set of access rights Scalability Not scalable as predefining roles for billions of devices is not possible and will drive to many errors when assigning roles to fast-changing devices.</ns0:p><ns0:p>The access policies are defined on attribute which gives it the scalability feature because in a complex system or nested policies the more granular your system is the more is efficient to handle billions of devices.</ns0:p><ns0:p>Scalability </ns0:p></ns0:div>
<ns0:div><ns0:head>Blockchain-based Capability access control for IoT</ns0:head><ns0:p>Abundant work has been carried out on the topic of integrating IoT access control with blockchain. There exists much research on applying CBAC to IoT <ns0:ref type='bibr' target='#b11'>(Ouaddah et al., 2017)</ns0:ref> considering its characteristics such as lightweight and scalability, and these features also make it a preferred choice to be integrated with blockchain to provide more secure access management for IoT. However, only a few existing studies have explored the potential of combining CBAC with blockchain-related technology to manage IoT identity management and access control and all works were designed for IoT personal networks.</ns0:p><ns0:p>Xu et al. <ns0:ref type='bibr' target='#b14'>(Xu et al., 2018a)</ns0:ref> propose a complete blockchain-enabled CBAC strategy for IoT called BlendCAC. Then, in another work <ns0:ref type='bibr' target='#b16'>(Xu et al., 2019)</ns0:ref>, the authors further modify BlendCAC in the case of space situation awareness to handle identity authentication via a virtual trust zone, token management, and access right validation. To evaluate the feasibility of BlendCAC, experiments are carried out on a private Ethereum blockchain and demonstrated its effectiveness. However, the capabilities of subjects and their delegation relationships are managed by using a delegation tree in BlendCAC, which can cause incomplete recorded delegation information. Also, two types of tokens in BlendCAC must be consistently updated, which cannot always be met. In addition, the BlendCAC is partially decentralized as it employs Manuscript to be reviewed</ns0:p><ns0:p>Computer Science a cloud server to coordinate between the domains and to be the service provider.</ns0:p><ns0:p>To address the delegation problem in BlendCAC, Nakamura et al. <ns0:ref type='bibr' target='#b9'>(Nakamura et al., 2019)</ns0:ref> introduced the delegation graph in place of the delegation tree. Moreover, Ethereum smart contracts were used for the storage and management of capability tokens. Later, they further enhance the method and propose to handle token management according to its actions or access rights instead of conventionally used subjects <ns0:ref type='bibr' target='#b10'>(Nakamura et al., 2020)</ns0:ref>. However, the work is still lack of systematic architecture design meeting the IoT requirements. For example, the work focuses on solving the problems of delegation ambiguity without taking in consideration that in a personal network issuing large number of tokens without a solid management will cause the ambiguity to system users. However, the above CBAC studies suffer also from the lack of organization and management of information inside the system. For instance, a network of massive connected sensors and devices will raise the problem of data management and classification which will lead to traceability and analysis issues and slow the process of continues security enhancement. In addition, the proposed works don't support interoperability and data exchange between the IoT domains and organization as the solution is proposed for a personal IoT network and it doesn't fit the city or business IoT network and applications.</ns0:p><ns0:p>Comparing to existing work, this study aims to provide a fine-grained, scalable and high performance CBAC solution for IoT city and business consortium networks. 
We designed a modular CABC system to enhance flexibility of the solution by defining and creating a framework for the transactions and data. The design decision adopted enables interoperability and data exchange between the network members and impose the principal of least privileges.</ns0:p></ns0:div>
<ns0:div><ns0:head>IOT CONSORTIUM CAPABILITY-BASED ACCESS CONTROL MODEL (IOT-CCAC)</ns0:head><ns0:p>In this section, we design and overview the essential aspects adopted in this work for an IoT consortium capability-based access control model. We also give a detailed description of the linkages between all the components presented in our proposal.</ns0:p></ns0:div>
<ns0:div><ns0:head>IoT-CCAC Description</ns0:head><ns0:p>IoT access control is a paradigm of defining policies and assigning them to users, groups of users, and network resources such as devices and sensors defining their permissions and protecting the network from malicious and unauthorized access. For instance, IoT is a complex network of connected domains</ns0:p><ns0:p>where each domain has its sub-network, and each sub-network manages its resources. Defining policies for a complex network depends on the degree of flexibility, granularity, and privacy maintained in the ecosystem, considering the interoperability and cross-organizational information exchange. Therefore, IoT-CCAC allows every domain to define, manage, and share its resources to enable interoperability in services with other organizations and hand the control of the network and sub-networks resources to its owners. To better illustrate the proposed model, we define relevant IoT network and IoT-CCAC components as presented in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>Term Description Domain a member of group of organization participating in the consortium network Subject a human user or a device that interacts with the consortium network and applications Resource an entity as a service in the network, such as a temperature sensor or a document data Asset the digital representation of a physical resource owned by a participating domain Service a service or an application initiated by several domains under a collaborative project Profile the representation of an asset inside a service Context environmental information gathered from resources, such as location and time Statement a document defines the access rights granted to a subject to access a resource</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2. IoT-CCAC terms and descriptions</ns0:head></ns0:div>
<ns0:div><ns0:head>Identity Management External Component</ns0:head><ns0:p>Identity management (IDM) is a crucial feature of any digital environment, especially IoT ecosystem access control. Each IoT entity must have a unique identifier representing its identity. The IDM typically</ns0:p></ns0:div>
<ns0:div><ns0:head>4/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_4'>2020:10:54422:1:1:NEW 15 Feb 2021)</ns0:ref> Manuscript to be reviewed Computer Science has three main functions, which are registration, authentication, and revocation. Registration to upload an entity identity to the system and assign a unique identifier, authentication to inspect an entity identity each time reacts with the ecosystem, and revocation to withdraw the digital identity of an entity <ns0:ref type='bibr' target='#b4'>(Bouras et al., 2020)</ns0:ref>. In our design. All the aspects related to the authentication are out of scope for this work.</ns0:p></ns0:div>
<ns0:div><ns0:head>IoT-CCAC System Architecture</ns0:head><ns0:p>The main components of the IoT-CCAC system are asset management, service management, profile management, context management, and statement management. The system also has a token verification module and a unique identifier (UID) generator module, as shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. The asset management allows each domain to register and store its physical resources in the form of assets, and only the asset owner can edit or withdraw its own asset. The system assets are the available physical resources that services can use and interact with. Assets are used mainly for network resource discovery, classification, and other modeling strategies and digital representation. Properties needed for creating an asset can be expressed with the following notations:</ns0:p><ns0:formula xml:id='formula_0'>Asset = {assetConext, assetCredential, assetMetadata} assetContext = {UID, Issuer ID , Issued time } assetCredential = {Resource id , Domain id , Resource type , Resource f unc } assetMetadata = {Resource URI , Resource location }</ns0:formula><ns0:p>Asset context information represents the system-related information such as the unique identifier (UID), the issuer ID, and the creation time. Asset credential contains the constant resources information, including the resource ID (granted from the IDM component), domain ID, resource type (e.g., sensor, actuator, tag), and resource function (e.g., temperature, pressure, light). Asset metadata covers the changeable resource information such as resource URI and resource location.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54422:1:1:NEW 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Service Management</ns0:head><ns0:p>Since a consortium may have multiple collaboration projects, each project is interpreted as a service inside the network. The service management module is responsible for creating, editing, and altering service-related operations. Introducing the notion of service to the network will enhance the flexibility and the granularity of the system and regulate the collaboration project's requesters and requests. Properties needed for creating service can be expressed with the following notations:</ns0:p><ns0:formula xml:id='formula_1'>Service = {serviceConext, serviceCredential, serviceMetadata} serviceContext = {UID, Issuer ID , Issued time } serviceCredential = {Service name , Service initiator , Service participants : {domain 1 , domain 2 , . . . domain n }} serviceMetadata = {Requester number , Request number }</ns0:formula><ns0:p>Service context represents a service's information in the system, including its unique identifier (UID), issuer ID, and issuance time. Service credential includes but not only a service name, service initiator, and service participants which are a list of participating domains. Service metadata contains regulation and security information, such as the maximum number of requesters and requests.</ns0:p></ns0:div>
<ns0:div><ns0:head>Profile Management</ns0:head><ns0:p>Conceptually speaking, a profile represents the context information that a physical resource holds in a particular service. One resource may have different profiles, but each profile is defined for only one resource in a particular service. A profile can be assigned to one or multiple statements, and it stands as the resource identifier. The profile management module is responsible for creating, editing, and altering profiles. The alias profiles are represented as follow, where profile context contains the system-related information, and profile credential is defined by corresponding asset ID and the service ID.</ns0:p><ns0:formula xml:id='formula_2'>Pro f ile = {pro f ileConext, pro f ileCredential} pro f ileContext = {UID, Issuer ID , Issued time } pro f ileCredential = {Asset UID , Service UID } Context Management</ns0:formula><ns0:p>Context management is a crucial point of managing access rights as it is the part of defining environment conditions to allow access under some circumstances and denied them under others. Conditions can be location, time, security level, authentication status, protocol, and more. The context information values are gathered from the network resources and the surrounding environment regularly to ensure the correctness of the condition values. The context conditions can be attached to profiles, assets and services metadata to deny or allow access according to the fulfillment of conditions. Context management is presented in the following notations:</ns0:p><ns0:formula xml:id='formula_3'>Condition = {conditionConext, conditionMetadata} conditionContext = {UID, Issuer ID , Issued time } ConditionMetadata = {Condition check(1) ,Condition check(2) , . . . ,Condition check(n) }</ns0:formula><ns0:p>Condition context represents the information of a condition in the system and condition metadata covers the different condition to check before granting access to a requester. In order to check a condition with the gathered data, we apply the context check function that takes a Boolean format as follow: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>Condition check =</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Statement Management</ns0:head><ns0:p>A statement is a document holding the permission and access rights of a particular resource in a particular service. Statements can be granted to a particular subject or a group of subjects in the form of tokens for access authorization. The statement management module is responsible for registering, updating, and altering statements and also checking the legitimacy of other system information such as profiles and services before each registration or updating operation.</ns0:p><ns0:p>The complete statement definition in IoT-CCAC can be expressed with the following notations:</ns0:p><ns0:formula xml:id='formula_5'>statement = {statementConext, statementCredential, statementMetadata} statementContext = {SID, Issuer ID , Issued time , Principal} statementCredential = {Pro f ile ID , Action, Resource URI } statementMetadata = {Condition(1) ID ,Condition(2) ID , . . . ,Condition(n) ID }</ns0:formula><ns0:p>A brief description of statement elements as follows:</ns0:p><ns0:p>• SID: unique identifier for each statement in the system.</ns0:p><ns0:p>• Issuer: the issuer of the statement (e.g., service admin).</ns0:p><ns0:p>• Issued-time: represent the time of creating or updating the statement.</ns0:p><ns0:p>• Principal: for each statement alteration, a new statement will be created and the principal field will have the previous SID value. In the case of first-time creation, the principal field will have the same SID field value. It is mainly used for traceability concerns.</ns0:p><ns0:p>• Profile: represents the resource profile in a particular service.</ns0:p><ns0:p>• Action: represent the set of access rights that are granted in the statement. Its value could is defined as follow: Action ∈ {Read,W rite, Read&W rite, NULL}</ns0:p><ns0:p>If Action=NULL, permission denied.</ns0:p><ns0:p>• Resource URI: a URI format used to identify the access path of a particular entity. Represented as follow:</ns0:p><ns0:p>Resource URL = Domain ID : Service ID : Region ID : Resource ID Domain ID represents the organization holding the ownership of the entity; service ID represents the application where the entity participates, region ID represents the location of the entity, and the resource ID represents the resource for which the action is granted.</ns0:p></ns0:div>
<ns0:div><ns0:head>IoT-CCAC Membership Service</ns0:head><ns0:p>IoT-CACM Membership Service (MS) implements accounts to interact with its management module.</ns0:p><ns0:p>Each account belongs to one domain, and there are two types of accounts consisting of a collection of permission. The first type is administrators that carry full permission to create and alter assets and services related information and assign members to services. The other is service members with the right to perform various network-related operations, such as creating and altering statements, granting access tokens to subjects, and auditing or analyzing reports. Subjects (requesters) simply use client-server abstractions to interact with the access control system after receiving a valid authentication token from the IDM. As a result, the device to device communication is enabled as a resource (asset) in the system can interact with another resource as it holds a valid identity issued by IDM and can request access permission as a standard subject.</ns0:p></ns0:div>
<ns0:div><ns0:head>IoT-CCAC Token Operations</ns0:head><ns0:p>In this subsection, we discuss the capability token operations, starting from converting a statement to a Capability token then the generation of group token then the revocation process.</ns0:p></ns0:div>
<ns0:div><ns0:head>7/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54422:1:1:NEW 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Issuing Capability Token</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> illustrates the system interactions between the subject, IDM, and access control for generating the capability tokens. As an initial step, after defining the elements of the access control and linking the resources and services following the previous steps, all the subjects requesting access must first register to the consortium network via the IDM for a valid identity. Once the subject is successfully registered, it can request a token containing access rights to access a network resource. Further, the service member checks the subject legitimacy and checks if the statement containing the same permissions exists. If it does not exist, SM creates a statement containing the granted access right and the access conditions, as well as filling other statement information as mentioned above. Once the statement is formed, the system creates a capability token using a token generation algorithm and communicates it to the requester following this notation: Introducing group capability token will help categorize and consolidate the access forms where a group is created and hold few subjects seeking the same access right and access purpose. In our design, a group capability token is supported as we do not store capabilities internally. By design, the statements can be shared among different subjects if they are from the same service and request the same access rights. A subject (group manager) needs to create a valid group identity (GID) from the IDM and send a request containing the GID and other access rights. At the same time, the system will generate a token following the notation:</ns0:p><ns0:formula xml:id='formula_6'>Cap Token = {Sub ject ID , Statement ID ,Valid Time } Valid Time = {Start V T , End V T }</ns0:formula><ns0:formula xml:id='formula_7'>Cap Token = {Group ID , Statement ID ,Valid Time } 8/16</ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54422:1:1:NEW 15 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Revocation of Capability Token</ns0:head><ns0:p>The basic way of revoking a capability token is to store every issued token in a database, delete the revoked one, and check all stored tokens for every access request. Alternatively, revocation can be done by adding the token to an exception list and checking that list each time a subject sends an access request. In our design, we opted for an exception list to revoke tokens. In addition, our granular design allows denying access to resources at various levels: if a profile is deleted, a service is archived, or a statement document is altered, the statements containing outdated data will no longer be valid when the authorization decision task is performed, and the request will be rejected. A minimal sketch of an exception list is shown below.</ns0:p></ns0:div>
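A minimal sketch of the exception-list approach, assuming tokens are referenced by a token or statement identifier (the class and method names are illustrative, not taken from the prototype):

```python
class RevocationList:
    """Exception list: revoked token ids are recorded and every incoming
    request is checked against the list before authorization proceeds."""

    def __init__(self):
        self._revoked = set()

    def revoke(self, token_id: str) -> None:
        self._revoked.add(token_id)

    def is_revoked(self, token_id: str) -> bool:
        return token_id in self._revoked
```

Statements that reference deleted profiles, archived services, or altered documents already fail the later validity checks, so the list only needs to hold explicitly revoked tokens.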
<ns0:div><ns0:head>IoT-CCAC Authorization Process</ns0:head><ns0:p>On receiving an access request, the authorization decision is reached through the following checks (a minimal sketch of this flow is given after the list):</ns0:p><ns0:p>• Check the legitimacy of the subject and token: the IDM verifies the requesting subject and the presented capability token. If the token is not valid or the subject is not authenticated, the request is rejected.</ns0:p><ns0:p>• Check the approval of the access right: the requested access method must match the access right granted in the statement credentials. If it does not, the request is rejected.</ns0:p><ns0:p>• Check the availability of the asset: using the profile ID, we check the existence of the profile and services and the availability of the asset. In the case of an unavailable asset, the request is rejected.</ns0:p><ns0:p>• Check the fulfillment of conditions: the last step is to check whether the conditions in the statement metadata are fulfilled and match the records in the database. If the conditions are met, the request is authorized.</ns0:p></ns0:div>
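A minimal sketch of this decision flow is given below; the helper objects (idm, registry) and their method names are hypothetical placeholders for the IDM and the access control data registry, not the actual prototype API.

```python
def authorize(request, idm, registry):
    """Return an authorization decision payload after the four checks."""
    token = request['capability_token']
    # 1. Legitimacy of the subject and the token.
    if not idm.is_authenticated(request['subject_id']) or not registry.token_is_valid(token):
        return {'authorized': False, 'reason': 'invalid subject or token'}
    statement = registry.get_statement(token['statement_id'])
    # 2. Approval of the access right: the requested method must be granted.
    if request['method'] not in statement['access_rights']:
        return {'authorized': False, 'reason': 'access right not granted'}
    # 3. Availability of the asset behind the profile and service.
    if not registry.asset_available(statement['profile_id']):
        return {'authorized': False, 'reason': 'asset unavailable'}
    # 4. Fulfillment of the statement conditions (location, time, protocol, ...).
    if not registry.conditions_met(statement['conditions'], request.get('context', {})):
        return {'authorized': False, 'reason': 'conditions not fulfilled'}
    return {'authorized': True}
```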
<ns0:div><ns0:head>IOT-CCAC AND BLOCKCHAIN INTEGRATION</ns0:head><ns0:p>This section discusses the reasons for choosing a decentralized data registry architecture over a centralized one for an IoT access control system, based on the requirements of city, business, and utilities IoT applications (Abou Jaoude and Saade, 2019).</ns0:p></ns0:div>
<ns0:div><ns0:head>IoT Access Control Data Registry Requirements</ns0:head><ns0:p>The data layer of access control is a critical component and the most vulnerable one, as it persistently stores the necessary data. The system acts upon the stored information to verify the correctness of its operations.</ns0:p><ns0:p>By nature, IoT is decentralized, as each domain owns a sub-network of objects, and the IoT network is predicted to grow to billions of sensors and connected devices, which will require high reliability and availability. Moreover, the crucial element to unlock the value of IoT is interoperability and data exchange between the sub-networks; hence, integrity, confidentiality, and transparency are crucial to achieving this purpose <ns0:ref type='bibr' target='#b17'>(Yaqoob et al., 2017)</ns0:ref>.</ns0:p><ns0:p>A blockchain is an immutable digital ledger formed by blocks that uses cryptographic practices to store data. It can provide properties such as decentralization, immutability, and enhanced security, while traditional databases store data in different structures such as tables or documents and offer competent transaction performance, scalability, usability, and low-cost maintenance. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the advantages and disadvantages of adopting blockchain and a traditional database to meet the security and performance requirements of the IoT access control system.</ns0:p><ns0:p>In the final analysis, blockchain meets the access control security requirements, while the database leverages performance. Blockchain technology was created to support decentralized monetary systems such as Bitcoin and Ethereum, whereas databases, in use since the early days of computing, are better suited to system performance. Our purpose is to deliver a secure, robust access control system that meets IoT domains' needs and leverages IoT value by enabling interoperability and data exchange. Using a traditional database to back the IoT-CCAC would certainly yield a robust solution; however, many research works and existing enterprise incidents have already shown that security issues will arise, such as data breaches, single points of access, and a lack of transparency. For this reason, we adopt blockchain-based database technology to enhance the security of IoT-CCAC.</ns0:p></ns0:div>
<ns0:div><ns0:head>Blockchain Integration</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows the blockchain-based IoT-CCAC architecture, which consists of a consortium network (IoT domains), the IoT consortium capability access control module, and a blockchain-based database registry.</ns0:p><ns0:p>The consortium is formed by the members participating in the network to achieve a business goal or to collaborate on a particular project. Every member needs to provide one or more nodes to participate in the network operations and hold a copy of the data.</ns0:p><ns0:p>The IoT-CCAC modules were explained in the previous section. Each module connects to and interacts with its registry. The off-chain data store is a standard database that stores the environment data coming from the devices and sensors in the network and participates in context-management checking. The blockchain-based database is a hybrid solution that assembles the security characteristics of blockchain and the performance of a database in one data registry. Adopting a blockchain-based database for IoT-CCAC brings all the database properties, such as a high transaction rate, data indexing and querying, and friendly usability, and enhances the security of the access control data registry by making it resistant to</ns0:p><ns0:p>unauthorized changes without the need for any trusted third party to guarantee the integrity or confidentiality of the registry, as all the consortium members hold a copy of the data, which is maintained by cryptographic practices.</ns0:p></ns0:div>
<ns0:div><ns0:head>Use Case Scenario</ns0:head><ns0:p>To better explain the proposed architecture as well as its integration with blockchain technology, and to evaluate its feasibility, we turn to the use case scenario of waste management in a smart city.</ns0:p><ns0:p>In the context of our IoT-CCAC, say that there are three organizations working on a collaborative project (a service) of waste management. In this service, the city council oversees the whole process and manages the garbage can sensors; the recycling plant is responsible for sorting the recycled garbage; and the manufacturing plant then processes the classified recycled materials to manufacture specific products.</ns0:p><ns0:p>The sensors and devices recording the relevant data belong to different organizations. They are considered their respective assets inside the system, and each asset can possess more than one profile, considering that it can participate in other services; the processes of registering services and creating digital assets for a particular organization require holding an account in the MS.</ns0:p><ns0:p>Given the waste management service, when a supervisor from the city council needs to read all project-related data to have a clear picture of the current status of the whole project, registration and authentication through the IDM are needed. He will then need to request a capability access token. A capability token (a statement inside the system) is granted to the supervisor only if he satisfies the system requirements.</ns0:p><ns0:p>Using the acquired token, the supervisor can authenticate and send an access request to the corresponding asset. On receiving an access request containing the capability token, the token authorization process will decide whether access is granted after performing all the checks.</ns0:p></ns0:div>
<ns0:div><ns0:head>IMPLEMENTATION AND EVALUATION</ns0:head><ns0:p>In this section we discuss the implementation stages and the evaluation results. We first discuss the system design, then present the testing environment and the employed technologies, and finally discuss the obtained results.</ns0:p></ns0:div>
<ns0:div><ns0:head>System Design Discussion</ns0:head><ns0:p>• Who is going to use it?</ns0:p><ns0:p>Our system is designed to fit the categories of business, utilities, and enterprise domain applications.</ns0:p><ns0:p>• What are the requirements of the system?</ns0:p><ns0:p>At this IoT application scale, reliability and availability are essential, as the services' intelligent decisions are based on the vast amount of data continually collected from the network resources.</ns0:p><ns0:p>Confidentiality and integrity come second in importance, as any compromised data might lead to a wrong decision that would impact the consortium's business plans and objectives. End-user privacy is less of a requirement in such applications, as end users are not directly involved in the interaction with the system (end-user privacy concerns the personal network).</ns0:p><ns0:p>• Who are the users of the system?</ns0:p><ns0:p>Our system is controlled by a set of administrators: each domain has an account, and each account has the role of an administrator. In the same manner, an account also has a service member role responsible for the management of services inside the system. Interacting with the system is done through a simple client-server abstraction without using any system-related notion or task.</ns0:p><ns0:p>• What are the inputs and outputs of the system?</ns0:p><ns0:p>The input data of our system are the physical resources registered in the form of assets and the data gathered from the environment to be consumed in the condition fulfillment process. The output is a payload object that contains an authorization decision.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment Environment</ns0:head><ns0:p>We evaluated our solution by simulating the use case scenario of waste management described in the previous section.</ns0:p><ns0:p>We simulated three organizations collaborating on several services; each organization can register physical devices as assets and generate JSON Web Tokens matching the access control statements.</ns0:p><ns0:p>Our experiment results are based on two types of data store: the first is implemented locally (offline) using Docker technology, and for the second we use the BigchainDB online test node (https://test.ipdb.io/).</ns0:p><ns0:p>The different components of the experiment use a RESTful API to exchange data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment Setting</ns0:head><ns0:p>In order to examine the performance of our proposed access control solution, we implemented our prototype using the Python programming language, the Flask micro web framework, and a JWT crypto library.</ns0:p><ns0:p>We employed BigchainDB, a blockchain-based database, as the data store node running in a Docker container.</ns0:p><ns0:p>The BigchainDB node contains the BigchainDB 2.0 server, a MongoDB database, and Tendermint as the consensus protocol. The execution environment is a virtual machine running Xubuntu with 4 GB of RAM and one Intel Core i7-4510U 2.00 GHz CPU. We also used Apache JMeter to simulate simultaneous registration and authentication requests.</ns0:p></ns0:div>
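For reference, the prepare/fulfill/commit flow against the BigchainDB node can be sketched with the official Python driver as below. The asset payload is illustrative, and the exact driver calls may differ slightly between driver versions.

```python
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('https://test.ipdb.io')   # online test node (or the local Docker node)
org = generate_keypair()                   # keypair of the issuing organization

# Preparation stage: build the CREATE transaction for a new asset.
prepared = bdb.transactions.prepare(
    operation='CREATE',
    signers=org.public_key,
    asset={'data': {'type': 'asset', 'name': 'garbage-can-sensor-17'}},
    metadata={'service': 'waste-management'})

# Fulfillment stage: sign the transaction; the hash of its body becomes the transaction ID.
fulfilled = bdb.transactions.fulfill(prepared, private_keys=org.private_key)

# Commit: send the transaction and wait until it is written to a block.
bdb.transactions.send_commit(fulfilled)
```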
<ns0:div><ns0:head>Figure 5. Execution time of creating transactions</ns0:head></ns0:div>
<ns0:div><ns0:head>Security Analysis</ns0:head><ns0:p>To evaluate the security of our solution, we present several attacks that are common in decentralized systems and discuss our approach to avoiding them.</ns0:p><ns0:p>• Forgery attack: a common attack in which identities and transaction data are tampered with to gain access to confidential information or to pollute the system with random data.</ns0:p><ns0:p>• Injection attack: an attacker can inject a script to manipulate the authorization process, alter a database record, or carry out an unwanted action.</ns0:p><ns0:p>• Man-in-the-middle attack: the attacker secretly stands between two communicating entities and reads the exchanged data.</ns0:p><ns0:p>We prevent such attacks by implementing the following preconditions (a sketch of the integrity check is given after this list):</ns0:p><ns0:p>• The asset identities are unknown to attackers and to the other participating organizations, as we only exchange externally the capability tokens corresponding to statements, which contain the profile ID and not the asset ID.</ns0:p><ns0:p>• We use the SHA256 algorithm to digitally sign the exchanged messages and tokens, which makes them hard to forge or alter.</ns0:p><ns0:p>• For each system input, we run different checks to ensure the legitimacy of the information before accessing the data store.</ns0:p><ns0:p>• Adopting blockchain technology is another strong point that enhances the security of the system and prevents forgery attacks.</ns0:p></ns0:div>
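As a simple illustration of the message-integrity precondition, the sketch below signs the canonical JSON body of an exchanged message with HMAC-SHA256. The actual prototype relies on JWT signatures and BigchainDB transaction hashes, so this is only an assumed, simplified stand-in.

```python
import hashlib
import hmac
import json

def sign(message: dict, key: bytes) -> str:
    """HMAC-SHA256 signature over the canonical JSON body of a message."""
    body = json.dumps(message, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(message: dict, signature: str, key: bytes) -> bool:
    """Constant-time comparison so a tampered body or signature is rejected."""
    return hmac.compare_digest(sign(message, key), signature)
```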
<ns0:div><ns0:head>Experiment results and discussion</ns0:head><ns0:p>To verify the effectiveness of IoT-CCAC, we conducted several test experiments. First, we calculated the communication and computation cost for creating assets, profiles, services, and statements using the local data store and the online testing node. The results are presented in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. A transaction in BigchainDB flows through two stages before being committed to permanent storage.</ns0:p><ns0:p>• Preparation stage: the stage of constructing the transaction and executing initial input checks to ensure its validity. At this stage the size of the testing transaction is 240 bytes.</ns0:p><ns0:p>• Fulfillment stage: the stage of signing the transaction with the creator's private key and hashing its body content to produce the transaction ID. At this stage the size of the testing transaction is 368 bytes.</ns0:p><ns0:p>The second experiment sends bulk transactions to the server to test the performance and scalability of the data store in terms of handling concurrent transactions. Using Apache JMeter, we piloted 4 groups of 10, 50, 100, and 200 concurrent transactions for the create and authenticate operations.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows the execution time of the creation operation, and Figure <ns0:ref type='figure'>6</ns0:ref> shows the execution time of the authentication operation. The x-axis presents the execution time (in milliseconds), the y-axis presents the 4 bulk transaction groups, and the series represent the average time of the transaction commit, the transaction latency, and the time to connect to the server. At first sight we can see that the creation operation takes more time, as a transaction has to accomplish two verification steps before being written inside a block.</ns0:p><ns0:p>For instance, when the BigchainDB server receives a creation transaction, it checks the legitimacy of the transaction by verifying the signature of the issuer and the correctness of the data by hashing the transaction content and comparing it to the transaction ID; if both checks are valid and the transaction is not a duplicate inside the system, the transaction is written into the blockchain database.</ns0:p><ns0:p>Our solution showed better performance results compared to related work, as we adopted blockchain-based database technology to benefit from the blockchain security properties and the high performance of the database. From the experiment results it can be observed that our solution can achieve the performance needed for IoT city-level access control. In addition, the flexibility and interoperability of IoT-CCAC make it adoptable for different use cases and IoT applications.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we presented a blockchain-based consortium access control approach for large-scale IoT applications. We first compared the capability access control model (CBAC) to role-based and attribute-based access control (RBAC, ABAC) and highlighted the advantages of adopting CBAC over the others for IoT applications. In the architecture design, we presented a novel concept for managing the access control data to enable flexibility, interoperability, and data exchange between the consortium members. We explained the system assets, services, profiles, statements, the membership service, and the token generation protocol, including the authorization process. Secondly, we discussed the IoT access control data store requirements and conducted a comparison between blockchain security features and database performance properties. We explained the benefits of adopting a blockchain-based database as the IoT-CCAC data store and discussed its integration architecture. A proof-of-concept prototype was implemented and evaluated in terms of security and performance to verify the feasibility of IoT-CCAC. Our IoT-CCAC approach showed promising results and a good fit for city and business network applications.</ns0:p><ns0:p>Despite our approach's encouraging results, part of our ongoing effort is to investigate and further explore blockchain-based database security and privacy for access control in IoT networks and application scenarios.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Condition constant OP Value, where Condition constant ∈ {Location, Time, Protocol, . . . } and OP ∈ {≥, ≤, =, ≠, . . . }</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure 2. Token Generation Sequence Diagram</ns0:figDesc><ns0:graphic coords='9,171.50,271.75,354.05,295.20' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 shows the flowchart of the authorization decision process. The components participating in the authorization decision are IDM and the access control module. IDM is responsible for checking the legitimacy of the subject requesting access.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Authorization Process</ns0:figDesc><ns0:graphic coords='10,239.28,230.15,218.48,411.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. IoT-CCAC & Blockchain Integration</ns0:figDesc><ns0:graphic coords='13,165.96,63.78,365.13,271.26' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Latency time of 50 simultaneous authentication transactions</ns0:figDesc><ns0:graphic coords='16,168.40,63.78,360.24,226.80' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,168.52,171.01,360.00,221.28' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of Three Access Control Methods.<ns0:ref type='bibr' target='#b11'>Ouaddah et al. (2017)</ns0:ref> </ns0:figDesc><ns0:table><ns0:row><ns0:cell>is made possible by providing tokens only (the management of tokens are easier and efficient), but it can be a problem for complex systems (many components) where a user may handle tens of tokens where each token represents an access right</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison between Traditional Database and Blockchain for IoT Access Control. Tseng et al. (2020)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Computing and communication cost for each system transaction (average time of 100 transactions is presented)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Transaction</ns0:cell><ns0:cell>Preparation</ns0:cell><ns0:cell>Fulfillment</ns0:cell><ns0:cell>Commit Time</ns0:cell><ns0:cell>Commit Time</ns0:cell></ns0:row><ns0:row><ns0:cell>type</ns0:cell><ns0:cell>Time (ms)</ns0:cell><ns0:cell>Time (ms)</ns0:cell><ns0:cell>Offline (ms)</ns0:cell><ns0:cell>Online (ms)</ns0:cell></ns0:row><ns0:row><ns0:cell>Asset</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>1210</ns0:cell></ns0:row><ns0:row><ns0:cell>Service</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2.4</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>1170</ns0:cell></ns0:row><ns0:row><ns0:cell>Profile</ns0:cell><ns0:cell>1.5</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>640</ns0:cell></ns0:row><ns0:row><ns0:cell>Statement</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2.6</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>900</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 6. Execution time of authenticating transactions</ns0:note></ns0:figure>
</ns0:body>
" | "February 14, 2021
Dear Editors,
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns.
In particular the experiment part is explained to reviewers in the upcoming comments answers.
We believe that the manuscript is suitable for publication in PeerJ Computer Science and we would love to receive more precise comments to enhance the explanation of the manuscript.
The Covid-19 pandemic is making everything harder, as I am in Algeria but I study in China, and I have already been doing research online for one year, which is very difficult.
Dear Editors, I couldn't do more work on the LAB server to provide faultless results or to try new blockchain solutions for comparison, but I uploaded the entire experiment database to a GitHub repository so that you can see and validate the findings yourselves, which I hope is not too much extra work.
Best regards
Mohammed Amine Bouras
On behalf of all authors.
Reviewer 1
Experimental design
Authors presented detailed description of IoT-CCAC module but blockchain details are missing. Not sure which platform of blockchain is used by authors and why they chose only that. They have mentioned two blockchains (Ethereum private blockchain, Hyperledger Fabric).
We have used blockchain database (i.e., BigchainDB) instead of traditional blockchain platforms. We illustrated this in the ABSTRACT (line 21-23), INTRODUCTION (line 60-63, 77-80, 86-87), IOT-CCAC AND BLOCKCHAIN INTEGRATION (line 268-269, 278-285), IMPLEMENTATION AND EVALUATION (line 399-340).
Validity of the findings
The python code of the experiment is not attached and therefore hard to verify results. I would suggest either to publicly upload the code and instructions on github or attach here as supplementary file.
https://github.com/mohamine18/CCapAC. To run the code, please follow the steps in the README.md file. All the experiment code is inside the folder https://github.com/mohamine18/CCapAC/tree/master/CCapAC; the other folders and files are just for the framework configuration.
Comments for the author
Overall the paper looks fine and well written. But it is important to verify the results presented by authors and it is only worth to accept the paper if experiments can be verified. Therefore I would suggest a major revision with above comments.
The code is provided with GitHub link. https://github.com/mohamine18/CCapAC. However, to run the code please follow the steps in README.md file.
The system is built using the Python language with the Flask micro framework; the local database used is a document database named TinyDB, while the blockchain database used is BigchainDB, accessed online at https://test.ipdb.io/.
Reviewer 2
Basic reporting
The paper discusses the utilization of the blockchain technology for IoT access control.
The general presentation of the paper is good which makes it very to read since the main contributions are clearly highlighted.
I suggest adding some references to Table 1 and 3.
Thank you so much for reviewing our work and your suggestions. We added few important references to make our work solid and we would love more if you can provide some references to take in consideration if there’s any we couldn’t cite.
• Ouaddah, A., Mousannif, H., Abou Elkalam, A., and Ouahman, A. A. (2017). Access control in the internet of things: Big challenges and new opportunities. Computer Networks, 112:237–262.
• Tseng, L., Yao, X., Otoum, S., Aloqaily, M., and Jararweh, Y. (2020). Blockchain-based database in an iot environment: challenges, opportunities, and analysis. Cluster Computing, 23(3):2151–2165.
Comments for the author
The authors presented in Table 3 a list of the classical DB's disadvantages. For instance the mutability, the point of failure,. ... etc. However what would be the general thoughts in the case of the use of a nosql DB (or any other big-data implementation) where the database management system ensures a high degree of redundancy. and recovery.
The purpose of Table 3 is to compare the advantages and disadvantages of traditional databases and blockchain-based database technology, so as to introduce our method based on a blockchain-based database (BigchainDB); we also used a local NoSQL document database (TinyDB) to compare the two technologies.
Reviewer 3: Pavol Helebrandt
Basic reporting
While the article includes tables and graphs representing experiment results, authors do not provide raw data.
Experiment methodology should be described in greater detail, as further explained in the second part of this review.
Authors use clear and technically correct English, provide relevant references and article is structured in an acceptable format.
We provided our code and raw data at the following link: https://github.com/mohamine18/CCapAC. Note that we used the online version of BigchainDB and saved the data locally in our database, as the online version restarts itself every 24 hours. We discussed our experiment methodology in the experiment results and discussion section. Our main goal was to achieve high availability and security for our proposed identity management work.
We believe we may not have fully understood the meaning of your comment, and we are happy to receive more comments if we did not address them correctly.
Experimental design
It is not clear if the measurement results presented were obtained from only one measurement or if they are an average of multiple runs. The label of Table 4 states numbers presented are an average of 100 transactions and lines 372-274 that four groups of 10, 50, 100 and 200 concurrent transactions were used. However, it is unclear if measurements for these four groups were repeated or run only once. The initial state of both local VM and online node used for experiment can have an impact on the accuracy of the results.
The results presented were repeated at least 5 times for each group. We also took into consideration the initialization of the online database by conducting our experiment on different days to get accurate results.
The results presented are verified by two co-authors and everything was coded from scratch.
Security analysis is superficial, and although preconditions presented improve IOT-CACC model security, attack prevention mechanisms are not exhaustive. The statement on lines 359-360 does not specify what checks are made or even references the IOT-CACC authorization process provided in system architecture.
The checks made are simply comparing the hashes of the transactions. We have three hashes for each transaction including the ID of the transaction as a hash, content hash, and full transaction hash (ID + Content). We discussed this in the section: experiment results and discussion.
Validity of the findings
Findings presented are based on preliminary results and make it difficult to call them robust. Due to deficiencies mentioned in the second part of this review, it is hard to compare results with related work conclusively. As such, the statement on lines 389-391 requires more experiments for direct comparison.
Comments for the author
IOT-CACC model presented is promising, but further investigation to support presented conclusions is required.
We agree with the statement provided, yet we couldn't run more experiments, as we are not in the lab to run our own blockchain nodes and do more work and experiments. Note that this paper is just a chapter of a project.
As I mentioned to editor in chief, it’s so hard to work remotely from home especially when it comes to complex system such as blockchain.
We hoped that China would welcome us before the New Year, and we extended the revision time for that purpose, but it seems it is not possible to conduct more experiments to strengthen our work.
" | Here is a paper. Please give your review comments after reading it. |
32 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cervical cancer is the fourth leading cause of cancer-related deaths in women, especially in low to middle-income countries. Despite the outburst of recent scientific advances, there is no totally effective treatment, especially when diagnosed in an advanced stage.</ns0:p><ns0:p>Screening tests, such as cytology or colposcopy, have been responsible for a strong decrease in cervical cancer deaths. Cervical cancer automatic screening via Pap smear is a highly valuable cell imaging-based detection tool, where cells must be classified as being within one of a multitude of ordinal classes, ranging from abnormal to normal. Current approaches at ordinal inference for neural networks are found either not to take advantage of the ordinal structure of the problem or to be too uncompromising. A non-parametric ordinal loss for neural networks is proposed that promotes the output probabilities to follow a unimodal distribution. This is done by imposing a set of different constraints over all pairs of consecutive labels, which allows for a more flexible decision boundary relative to approaches from the literature. Our proposed loss is contrasted against other methods from the literature using a plethora of deep architectures. A first conclusion is the benefit of using non-parametric ordinal losses over parametric losses in cervical cancer risk prediction. Additionally, the proposed loss is found to be the top performer in several cases. The best performing model scores an accuracy of 75.6% for 7 classes and 81.3% for 4 classes.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The survival rate for women with cervical cancer is disturbing -in the USA, the 5-year survival rate for all women with cervical cancer is just 66% and is responsible for around 10 deaths per week in women aged 20 to 39 years <ns0:ref type='bibr' target='#b24'>(Siegel et al., 2020)</ns0:ref>. The main factor for the high mortality rate is the asymptomatic characteristic of cervical cancer in its initial stages which justifies the need for early diagnosis. Screening tests have been responsible for a strong decrease in cervical cancer deaths. The screening programs are implemented in most developed countries and the process includes Human papillomavirus (HPV) test, cytology test (or Pap smear), colposcopy, and biopsy <ns0:ref type='bibr'>(WHO, 2019)</ns0:ref>. HPV is a group of viruses known to influence the risk of cervical cancer -some types of HPV viruses produce dysplastic changes in cells that can progressively lead to the development of cancer <ns0:ref type='bibr'>(WHO, 2019)</ns0:ref>.</ns0:p><ns0:p>A cervical cytology test is used to detect potentially abnormal cells from the uterine cervix. These premalignant dysplastic changes of cells are classified in progressive stages: 7 stages by the World Health Organization classification (WHO) system or 4 stages by The Bethesda classification system (TBS) <ns0:ref type='bibr' target='#b8'>(DeMay, 2007)</ns0:ref>.</ns0:p><ns0:p>The risk of developing cancer is especially pronounced for the later stages. Therefore, distinguishing between the stages can be crucial for diagnosis. Yet, most of the literature focuses on binary classification (normal or abnormal), ignoring the fine-grained classification of cervical cells into different stages.</ns0:p><ns0:p>The classification of observations into naturally ordered classes, as the stages of the premalignant dysplastic changes, are traditionally handled by conventional methods intended for the classification of nominal classes where the order relation is ignored. This paper introduces a new machine learning paradigm intended for multi-class classification problems where the classes are ordered. A non-parametric loss for ordinal data classification is proposed whose goal is to promote unimodality in the prediction distributions produced by the neural network; e.g., it would be inconsistent to predict that stage 1 and stage 3 are both more likely than stage 2. Yet, this loss is more flexible than other losses from the literature which force a binomial distribution in the output <ns0:ref type='bibr' target='#b7'>(Costa and Cardoso, 2005)</ns0:ref>. This loss is also contrasted with the standard cross-entropy loss and networks that predict classes in the form of an ordinal encoding <ns0:ref type='bibr' target='#b6'>(Cheng et al., 2008)</ns0:ref>. The Herlev dataset, which comprises 917 images of individual cervical cells in different stages of the disease, is used in the experiments <ns0:ref type='bibr' target='#b16'>(Jantzen and Dounias, 2006)</ns0:ref> together with a plethora of CNN architectures.</ns0:p><ns0:p>In the next section, the problem and dataset at hand are presented. Other work for Pap smear cell classification is then reviewed in the 'Related Work' section. The proposed loss is elaborated on the 'Proposal' section, and the experimental details are described in 'Experiments' with results and discussion presented in 'Results'. The study finished with a 'Conclusion' section.</ns0:p></ns0:div>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>According to the WHO classification system, there are seven different types of Pap smear cells in cervical cancer progression. This system rules the existence of three different types of normal cells and four different types of abnormal cells. From suspicious cells to carcinoma in situ (CIS), the premalignant dysplastic changes of cells can also include four stages, which are mild, moderate, severe dysplasia, and carcinoma in situ <ns0:ref type='bibr' target='#b28'>(Suhrland, 2000)</ns0:ref>. However, nowadays the most used classification system is the TBS classification system, which is widely accepted among the medical society. According to the TBS system, the Pap smear cells can be divided into four classes: normal, Low-grade Squamous Intraepithelial Lesion (LSIL), High-grade Squamous Intraepithelial Lesion (HSIL) and Carcinoma in situ <ns0:ref type='bibr' target='#b22'>(Nayar and Wilbur, 2015)</ns0:ref>.</ns0:p><ns0:p>The different stages of cervical cytology abnormalities are associated with different morphological changes in the cells including the cytoplasm and nucleus. However, the small visual differences between some stages of cervical cells make the construction of a multi-class autonomous classification system a true challenge.</ns0:p><ns0:p>The dataset used in this work was the Herlev Dataset which is a publicly available dataset 1 collected at the Herlev University Hospital (Denmark) using a digital camera and microscope with an image resolution of 0.201 µm per pixel <ns0:ref type='bibr' target='#b16'>(Jantzen and Dounias, 2006)</ns0:ref>. The preparation of the specimens followed the traditional Pap smear and Pap staining. To amplify the certainty of diagnosis, two cyto-technicians and a doctor characterized the cervical images in Herlev dataset into seven classes. The Herlev dataset is composed of a total of 917 images of individual cervical cells. Each image contains ground truth segmentation and classification label. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the nomenclature of the 7 different classes from the dataset, wherein classes 1-3 correspond to types of normal cells and classes 4-7 to different levels of abnormal cells. Illustrations of these classes are then displayed in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>.</ns0:p><ns0:p>In most cases, the abnormal cells present a nucleus size bigger than healthy cells. However, the difference between normal columnar nucleus and severe and/or carcinoma nucleus is not easy to differentiate, which makes the classification between these different types of cells a challenge.</ns0:p><ns0:p>There is some imbalance in the class distribution of the dataset: 8%, 7%, 11%, 19%, 16%, 22%, and 17%, whereas 14% would be expected if the distribution was uniform. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_0'>Normal Abnormal WHO k = 1 k = 2 k = 3 k = 4 k = 5 k = 6 k = 7 TBS k = 1 k = 2 k = 3 k = 4</ns0:formula></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In most literature, the classification of Pap smear images consists in a binary separation between normal and abnormal cell (two classes), using different methodologies such as: Support Vector Machines (SVM) <ns0:ref type='bibr' target='#b5'>(Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chankong et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Kashyap et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bora et al., 2017)</ns0:ref>, k-Nearest Neighbours (kNN) <ns0:ref type='bibr' target='#b4'>(Chankong et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bora et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b21'>Marinakis et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b9'>Fekri Ershad, 2019)</ns0:ref>, Fuzzy c-Means Algorithm (FCM) <ns0:ref type='bibr' target='#b4'>(Chankong et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>William et al., 2019)</ns0:ref>, k-Means clustering <ns0:ref type='bibr' target='#b23'>(Paul et al., 2015)</ns0:ref>, Artificial Neural Networks (ANN) <ns0:ref type='bibr' target='#b4'>(Chankong et al., 2014)</ns0:ref>, and, more recently, Convolutional</ns0:p><ns0:p>Neural Networks (CNN) <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Lin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Kurnianingsih et al., 2019)</ns0:ref>.</ns0:p><ns0:p>However, all this work consists of binary classification, which is useful for screening, but not enough for a confident diagnosis. Fewer works explore the multi-class classification of cervical cells on Herlev dataset:</ns0:p><ns0:p>Chankong et al. ( <ns0:ref type='formula'>2014</ns0:ref>) proposed a multi-class automatic cervical cancer cell classification system using different classifiers, such as FCM, ANN, and kNN. However, this system is based only on 9</ns0:p><ns0:p>cell-based features. The Chankong's approach applies feature extraction from the nucleus and cytoplasm in each image and requires manual selection of the best threshold to minimize the error when applying the FCM to construct the cell mask. More recently, <ns0:ref type='bibr' target='#b19'>Kurnianingsih et al. (2019)</ns0:ref> perform feature extraction in a more autonomous way using a CNN. The use of a CNN simplifies the pre-processing steps that were necessary for the previous Chankong's approach. <ns0:ref type='bibr' target='#b11'>Ghoneim et al. (2019)</ns0:ref> proposed a new approach for multi-class cervical cancer cell detection and classification, using in the first step CNNs to extract deep-learned features and in second step, extreme learning machine (ELM)-based classifiers to classify the input cell images. <ns0:ref type='bibr' target='#b20'>Lin et al. (2019)</ns0:ref> proposed a new CNN-based method that combines cell image appearance with cell morphology for multi-class classification of cervical cells in the Herlev dataset. In all these cases, cross-entropy is adopted for ordinal data classification.</ns0:p><ns0:p>Assume that examples in a classification problem come from one of K classes, labeled from C (1) to C (K) , corresponding to their natural order in ordinal classes, and arbitrarily for nominal classes. </ns0:p></ns0:div>
<ns0:div><ns0:head>Cross</ns0:head><ns0:formula xml:id='formula_1'>= [ ŷn1 • • • ŷnk • • • ŷnK ] ∈ R K</ns0:formula><ns0:p>is the output probability vector given by the neural network for observation n. Note that y nk ∈ {0, 1}, ŷnk ∈ [0, 1] and ∑ K k=1 y nk = ∑ K k=1 ŷnk = 1.</ns0:p><ns0:p>However, CE has limitations when applied to ordinal data. Defining k ⋆ n ∈ {1, • • • , K} as the index of the true class of observation x n (the position where y nk = 1), it is then clear that</ns0:p><ns0:formula xml:id='formula_2'>CE(y n , ŷn ) = − log( ŷnk ⋆ n ).</ns0:formula><ns0:p>Intuitively, CE is just trying to maximize the probability in the output corresponding to the true class, ignoring all the other probabilities. For this loss, an error between classes C (1) and C (2) is treated as the same as an error between C (1) and C (K) , which is undesirable for ordinal problems. Probabilities produced by two different models for observation n. CE is unable to distinguish both scenarios, setting the same loss for both. For ordinal problems, a unimodal distribution, peaking in the true class, is, arguably, preferable. In this example, k ⋆ n = 3 is the assumed true class.</ns0:p><ns0:p>Furthermore, the loss does not constrain the model to produce unimodal probabilities, so inconsistencies can be produced such as ŷn j > ŷnℓ < ŷni , even when 1 ≤ j < ℓ < i ≤ K. It would be preferable for output probabilities to follow a unimodal distribution, as depicted by Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Cross-entropy is a fair approach for nominal data, where no additional information is available.</ns0:p><ns0:p>However, for ordinal data, the order can be explored to further regularize learning.</ns0:p><ns0:p>Ordinal Encoding (OE): A model agnostic way to introduce ordinality is by training binary classifiers, in the form of an ensemble, where each classifier tries to distinguish between each pair of adjacent classes, (i) and C (i+1) <ns0:ref type='bibr' target='#b10'>(Frank and Hall, 2001</ns0:ref>). An adaptation for neural networks consists of training a single neural network to produce K − 1 outputs, where each output makes a binary decision between each pair of adjacent classes. The information on the ordinal distribution can, therefore, be encoded in the y labels themselves <ns0:ref type='bibr' target='#b6'>(Cheng et al., 2008)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>C</ns0:formula><ns0:p>In traditional one-hot encoding, classes are encoded using the indicator function</ns0:p><ns0:formula xml:id='formula_4'>✶(k = k ⋆ ), so that y nm is represented by 1 if k = k ⋆ n and 0 otherwise. In ordinal encoding, classes are encoded using a cumulative distribution -the indicator function used is ✶(k < k ⋆ ) so that y nm is represented by 1 if k < k ⋆</ns0:formula><ns0:p>n and 0 otherwise. Each output represents the incremental neighbor probability, and the inverse operation (during inference) is performed by summing up these outputs, p nk = ∑ K−1 m=1 y nm .</ns0:p><ns0:p>Binomial Unimodal (BU): A different approach is to constrain the output of the network directly, approaching the problem under a regression setting. Instead of several outputs, the output predicts a single output representing the probability along the classes, with y n = 0 representing k ⋆ n = 1 and <ns0:ref type='bibr' target='#b7'>and Cardoso, 2005)</ns0:ref>. Thus, this model has only one output unit as the final layer. 
The model's sigmoid output is converted into class probabilities using Binomial's probability mass function.</ns0:p><ns0:formula xml:id='formula_5'>y n = 1 representing k ⋆ n = K (Costa</ns0:formula><ns0:p>The goal of this approach is to maintain the ordinality of the classes by applying a parametric model for the output probabilities.</ns0:p></ns0:div>
<ns0:div><ns0:head>Poisson Unimodal (PU):</ns0:head><ns0:p>The Poisson probability mass function (PMF) is used to enforce a discrete unimodal probability distribution <ns0:ref type='bibr' target='#b0'>(Beckham and Pal, 2017)</ns0:ref>. As a final layer, the log Poisson PMF transform is applied together with a softmax to normalize the output as a probability distribution.</ns0:p><ns0:p>These parametric approaches sometimes sacrifice accuracy to ensure the ordinality assumption. This sacrifice might sometimes prove too much, especially given the fact that modern deep learning datasets are very big and have a significant number of mislabels. A loss is now proposed to stimulate a unimodal output without modifying the network architecture.</ns0:p></ns0:div>
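Before moving on, a possible PyTorch sketch of the PU output layer just described is given below, following the general recipe of Beckham and Pal (2017): a single positive scalar per sample is mapped to K class probabilities through the log Poisson PMF and a temperature-scaled softmax. The temperature value and exact parametrization are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def poisson_unimodal_probs(rate: torch.Tensor, K: int, tau: float = 1.0) -> torch.Tensor:
    """Map a positive scalar `rate` (shape N) to K unimodal class probabilities."""
    k = torch.arange(K, dtype=rate.dtype, device=rate.device)     # classes 0..K-1
    rate = rate.unsqueeze(1)                                      # (N, 1)
    log_pmf = k * torch.log(rate) - rate - torch.lgamma(k + 1)    # log Poisson PMF
    return F.softmax(log_pmf / tau, dim=1)                        # (N, K)
```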
<ns0:div><ns0:head>PROPOSAL</ns0:head><ns0:p>As already explored, CE presents drawbacks when applied to ordinal data. By focusing only on the mode of the distribution and ignoring all the other values in the output probability vector, one is not leveraging the ordinal information intrinsic to the data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fixing CE with an Ordinal Loss Term</ns0:head><ns0:p>A possible fix for CE is to add a regularization term that penalizes deviations from the unimodal setting. A first proposal for an order-aware loss could be</ns0:p><ns0:formula xml:id='formula_6'>CO(y_n, ŷ_n) = CE(y_n, ŷ_n) + λ ∑_{k=1}^{K−1} ✶(k ≥ k⋆_n) ReLU(ŷ_{n(k+1)} − ŷ_{n(k)}) + λ ∑_{k=1}^{K−1} ✶(k ≤ k⋆_n) ReLU(ŷ_{n(k)} − ŷ_{n(k+1)}), (1)</ns0:formula><ns0:p>where λ ≥ 0 controls the relative importance of the extra terms favoring unimodal distributions.</ns0:p><ns0:p>Predicted probability values are expected to decrease monotonously as we depart left and right from the true class. The added terms penalize any deviation from this expected unimodal distribution, with a penalty proportional to the difference of the consecutive probabilities. The additional terms, although promoting uni-modality, still allow flat distributions. A generalization of the previous idea is to add a margin of δ > 0 to the ReLU, imposing that the difference between consecutive probabilities is at least δ. This leads us to a second CE loss, CO2, suitable for ordinal classes:</ns0:p><ns0:formula xml:id='formula_7'>CO2(y_n, ŷ_n) = CE(y_n, ŷ_n) + λ ∑_{k=1}^{K−1} ✶(k ≥ k⋆_n) ReLU(δ + ŷ_{n(k+1)} − ŷ_{n(k)}) + λ ∑_{k=1}^{K−1} ✶(k ≤ k⋆_n) ReLU(δ + ŷ_{n(k)} − ŷ_{n(k+1)}). (2)</ns0:formula><ns0:p>A value of δ = 0.05 has been empirically found to provide a sensible margin. This loss is aligned with the proposal present in <ns0:ref type='bibr' target='#b2'>Belharbi et al. (2019)</ns0:ref>.</ns0:p></ns0:div>
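A possible PyTorch sketch of Eq. (2) is given below (HO2 in the next section is obtained by replacing the cross-entropy term with the entropy of the predicted distribution). The vectorized masking is an implementation choice for illustration, not necessarily the one used in the released scripts.

```python
import torch
import torch.nn.functional as F

def co2_loss(logits: torch.Tensor, target: torch.Tensor,
             lamda: float = 1.0, margin: float = 0.05) -> torch.Tensor:
    """CO2: cross-entropy plus margin-based unimodality penalties (Eq. 2).
    `logits` has shape (N, K); `target` holds 0-indexed class labels."""
    K = logits.shape[1]
    probs = F.softmax(logits, dim=1)
    ce = F.cross_entropy(logits, target)
    diff = probs[:, 1:] - probs[:, :-1]                 # p[k+1] - p[k], shape (N, K-1)
    pair = torch.arange(K - 1, device=logits.device).unsqueeze(0)
    t = target.unsqueeze(1)
    # To the right of the true class, probabilities must decrease by at least `margin`.
    right = (pair >= t).float() * F.relu(margin + diff)
    # To the left of the true class, probabilities must increase by at least `margin`.
    left = (pair <= t).float() * F.relu(margin - diff)
    return ce + lamda * (right + left).sum(dim=1).mean()
```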
<ns0:div><ns0:head>Beyond CO2: Ordinal Entropy Loss Function</ns0:head><ns0:p>In CO2, the CE term by itself is only trying to maximize the probability estimated in the true output class (while ignoring the remaining probabilities); the ordinal terms are promoting unimodality but not penalizing (almost) flat distributions. This also explains why the ordinal terms by themselves (especially the version without margin) are not enough to promote strong learning: the model could converge to solutions where the predicted probability in the true class is only slightly above the neighboring probabilities, which will not, most likely, provide a strong generalization for new observations.</ns0:p><ns0:p>However, the extreme nature of CE, ignoring almost everything in the predicted distribution ŷn is equivalent to assume that the perfect probability distribution is one on the true class and zero everywhere else. This assumes a strong belief and dependence on the chosen one-hot encoding, which is often a crude approximation to the true probability class distribution. Seldom, for a fixed observation x n , the class is deterministically known; rather, we expect a class distribution with a few non-zero values. This is particularly true for observations close to the boundaries between classes. A softer assumption is that the distribution should have a low entropy, only.</ns0:p><ns0:p>This leads us to propose the ordinal entropy loss, HO2, for ordinal data as</ns0:p><ns0:formula xml:id='formula_8'>HO2(y n , ŷn ) = H(ŷ n ) + λ K−1 ∑ k=1 ✶(k ≥ k ⋆ n ) ReLU(δ + ŷn(k+1) − ŷn(k) ) + λ K−1 ∑ k=1 ✶(k ≤ k ⋆ n ) ReLU(δ + ŷn(k) − ŷn(k+1) ),<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where H(p) denotes the entropy of the distribution p.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTS</ns0:head><ns0:p>Several neural network architectures are now trained using the aforementioned losses for the dataset at hand. In this work it was also evaluated the performance differences between parametric and nonparametric losses for ordinal classification (Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). All the experiments were implemented in PyTorch and are available online 2 .</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Pre-processing</ns0:head><ns0:p>Given that all images from the Herlev dataset were of different sizes, all images were resized to 224 × 224 pixels; however, before the resize of cytological images a zero-padding must be done to avoid the function. It became famous for winning ImageNet, the first CNN to do so <ns0:ref type='bibr' target='#b18'>(Krizhevsky et al., 2012)</ns0:ref>.</ns0:p><ns0:p>The following ImageNet competitions were also won by other CNNs -VGG and GoogLeNet -which were evolutions on top of AlexNet which consisted mostly in an explodingly higher number of parameters <ns0:ref type='bibr' target='#b27'>(Simonyan and Zisserman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b29'>Szegedy et al., 2015)</ns0:ref>. Then, MobileNet <ns0:ref type='bibr' target='#b13'>(Howard et al., 2017)</ns0:ref> introduced hyperparameters to help the user choose between latency and accuracy trade-offs. An attempt was then made at curbing the number of parameters with ShuffleNet <ns0:ref type='bibr' target='#b35'>(Zhang et al., 2018)</ns0:ref> by approximating convolution operators using fewer parameters.</ns0:p><ns0:p>Finally, an attempt was made at curbing the number of parameters, which had been exploding, while keeping the accuracy of these early CNNs with SqueezeNet <ns0:ref type='bibr' target='#b14'>(Iandola et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In another line of research, ResNet <ns0:ref type='bibr' target='#b12'>(He et al., 2016)</ns0:ref> introduced residual blocks whose goal was to make the optimization process easier for gradient descent. Each residual block learns a = f (x) + x instead of a = f (x). Given that weights are initialized randomly around zero and most activation functions are also centered in zero (an exception would be the logistic activation function), then, in expectation, all neurons output zero before any training. Therefore, when using residual blocks, at time=0, a = x, i.e. activations produce the identity function. This greatly helps gradient descent focus on finding improvements (residuals) on top of the identity function. While this model allowed for deeper neural networks, each percent of improved accuracy cost nearly doubling the number of layers, which motivated WideResNet <ns0:ref type='bibr' target='#b33'>(Zagoruyko and Komodakis, 2016)</ns0:ref> and ResNeXt <ns0:ref type='bibr' target='#b32'>(Xie et al., 2017)</ns0:ref> to improve the residual architecture to improve learning time.</ns0:p></ns0:div>
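A minimal sketch of the zero-padding-then-resize pre-processing step mentioned at the beginning of this subsection is shown below; the centering of the cell and the interpolation mode are assumptions for illustration.

```python
from PIL import Image

def pad_to_square_and_resize(img: Image.Image, size: int = 224) -> Image.Image:
    """Zero-pad the shorter side so the cell keeps its aspect ratio, then resize."""
    w, h = img.size
    side = max(w, h)
    canvas = Image.new('RGB', (side, side), (0, 0, 0))      # black (zero) padding
    canvas.paste(img, ((side - w) // 2, (side - h) // 2))   # center the original cell
    return canvas.resize((size, size), Image.BILINEAR)
```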
<ns0:div><ns0:head>Training</ns0:head><ns0:p>The weights of the architectures mentioned in the previous section are already initialized by pre-training on ImageNet. Adam was used as the optimizer and starts with a learning rate of 10 −4 . The learning rate is reduced by 10% whenever the validation loss is stagnant for 10 epochs. The training process is completed after 100 epochs, and the best model is saved as judged by the lowest validation loss. The dataset was divided into 10 different k-folds using stratified cross-validation, maintaining the ratio among the different classes.</ns0:p><ns0:p>The loss hyperparameter λ was chosen using nested k-fold cross-validating (K=5) for each architecture, in order to reduce the bias in combined hyperparameter tuning and model selection.</ns0:p></ns0:div>
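The optimization schedule described above can be reproduced with a sketch like the following; the linear layer is only a stand-in for the pre-trained CNN, and the validation pass is replaced by a placeholder value.

```python
import torch
from torch import nn

model = nn.Linear(10, 7)                                  # stand-in for the pre-trained CNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Reduce the learning rate by 10% when the validation loss stalls for 10 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.9, patience=10)

for epoch in range(100):
    val_loss = 1.0 / (epoch + 1)                          # placeholder validation loss
    scheduler.step(val_loss)
```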
<ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>The most popular classification metric is accuracy (Acc). For N observations, taking k i and ki to be the label and prediction of the n-th observation, respectively, then Acc</ns0:p><ns0:formula xml:id='formula_9'>= 1 N ∑ N n=1 ✶( k⋆ n = k ⋆ n )</ns0:formula><ns0:p>, where ✶ is the indicator function.</ns0:p><ns0:p>However, this metric treats all class errors as the same, whether the error is between adjacent classes or between classes in the extreme. If we have K classes represented by a set C = {C (1) , C (2) , . . . , C (K) }, then accuracy will treat an error between C (1) and C (2) with the same magnitude as an error between C (1) and C (K) which is clearly worse. As an illustration, in a medical setting, a misdiagnosis between</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Stage II and Stage III of a disease, while bad, is not as bad as a misdiagnosis between Healthy and Stage III. For that reason, a popular metric for ordinal classification is the Mean Absolute Error (MAE),</ns0:p><ns0:formula xml:id='formula_10'>MAE = 1 N ∑ i |k ⋆ i − k⋆ i |.</ns0:formula><ns0:p>This metric is not perfect since it treats an ordinal variable as a cardinal variable.</ns0:p><ns0:p>An error between classes C (1) and C (3) will be treated as two times worse than an error between classes C (1) and C (2) . Naturally, the assumption of ardinality is not always warranted.</ns0:p><ns0:p>To evaluate the models' performance we also used a specific metric for ordinal classification, Uniform</ns0:p><ns0:p>Ordinal Classification Index (UOC) which takes into account accuracy and ranking in the performance assessment and it is also robust against imbalanced classes <ns0:ref type='bibr' target='#b25'>(Silva et al., 2018)</ns0:ref>. The better the performance, the lower the UOC.</ns0:p><ns0:p>By combining a quality assessment (accuracy) with a quantity assessment (MAE) and also with a specific metric for ordinality (UOC) we hope to provide a balanced view of the performance of the methods.</ns0:p><ns0:p>The two other metrics used are the AUC of ROC or AUROC (Area Under the Receiver Operating Characteristic) and Kendall's τ rank correlation coefficient. AUROC measures how well-calibrated are the probabilities produced by the model. This first metric is used in the binary classification context (two classes) and are extended for multi-class by comparing each class against the rest (one vs rest strategy)</ns0:p><ns0:p>and performing an overall average, known as macro averaging. On the other hand, Kendall's Tau is a non-parametric evaluation of relationships between columns of ranked data, so it is a measure of ordinal association between data. The τ correlation coefficient returns a value that ranges from -1 to 1, with 0 being no correlation and 1 perfect correlation.</ns0:p></ns0:div>
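A small sketch of how the simplest of these metrics can be computed from label vectors is given below (UOC and AUROC require their dedicated implementations and are omitted here).

```python
import numpy as np
from scipy.stats import kendalltau

def basic_ordinal_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, MAE and Kendall's tau for 0-indexed ordinal labels."""
    acc = float(np.mean(y_true == y_pred))
    mae = float(np.mean(np.abs(y_true - y_pred)))
    tau, _ = kendalltau(y_true, y_pred)
    return {'Acc': acc, 'MAE': mae, 'Kendall tau': tau}
```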
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The average performance over the 10 folds for the 9 different architectures is presented in Tables <ns0:ref type='table' target='#tab_8'>3-8</ns0:ref>, for both the 7-class and 4-class classification problems, with the seven different learning losses - conventional Cross-Entropy (CE), Binomial Unimodal (BU) <ns0:ref type='bibr' target='#b7'>(Costa and Cardoso, 2005)</ns0:ref>, Poisson Unimodal (PU) <ns0:ref type='bibr' target='#b0'>(Beckham and Pal, 2017)</ns0:ref>, Ordinal Encoding (OE) <ns0:ref type='bibr' target='#b6'>(Cheng et al., 2008)</ns0:ref> and our proposed losses (CO, CO2 and HO2) - as measured by MAE, accuracy, UOC index and Kendall's coefficient (Appendix, Tables <ns0:ref type='table' target='#tab_0'>A1</ns0:ref>-A2), detailed in the previous section. In all result tables, the best result for each loss is shown in bold. Furthermore, the other results are compared against the best result with a statistical test under the hypothesis that they are equivalent. A p-value of 0.1 is used with a one-sided paired t-test due to the small sample size (10 folds).</ns0:p><ns0:p>For the 7-class classification problem, Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the results for MAE, which confirm the influence of ordinal losses in promoting ordinality when compared to the nominal loss (CE). The OE loss achieved the best performance across the different architectures, but the good performance of our losses is also notable: in 67% of the cases, models trained with our proposed losses provide better MAE results. The MAE results presented in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> for 4-class classification are consistent with the 7-class results of Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>, with ordinal losses winning over nominal CE.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> and Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> present the accuracy results for the 7-class and 4-class classification problems, respectively. For this metric, the results between nominal and ordinal losses are more balanced: CE loss performance is above the ordinal losses in 11% for the 7-class problem and is tied for the 4-class problem. This can be explained by the smaller role of ordinality in the CE loss, as also confirmed by the MAE results: when misclassification occurs, ordinal losses tend to classify Pap smear images as being closer to the real class. Results for the UOC index (Tables <ns0:ref type='table' target='#tab_7'>7</ns0:ref>-8) are also consistent with the MAE metric, with 78% of the models presenting a lower UOC index when using the ordinal losses. Table <ns0:ref type='table' target='#tab_0'>A1</ns0:ref> and Table <ns0:ref type='table' target='#tab_2'>A2</ns0:ref> in the appendix report the results for Kendall's τ coefficient for the 4-class and 7-class classification problems. These results are also aligned with the MAE and UOC metrics, with the ordinal losses at an advantage when compared with nominal CE.</ns0:p><ns0:p>Adding the margin (CO → CO2) positively influences most metrics for both 7 and 4 classes. 
Using entropy instead of cross-entropy (CO2 → HO2) promotes better results on the metrics intrinsically connected with ordinality (MAE, UOC and Kendall's τ coefficient).</ns0:p><ns0:p>The average results for all losses across the 9 different architectures for the MAE, accuracy, UOC, AUROC, Kendall's τ coefficient and Gini index metrics are presented in Appendix Table <ns0:ref type='table' target='#tab_3'>A3</ns0:ref> and Table <ns0:ref type='table' target='#tab_4'>A4</ns0:ref> for 4-class and 7-class classification, respectively. Both tables report the results obtained when aggregating the output probabilities with the classical mode (argmax of the softmax) and with the mean (expectation trick) <ns0:ref type='bibr' target='#b0'>Beckham and Pal (2017)</ns0:ref>; a short sketch of this inference rule is given after this section. Concerning the sparsity of the prediction probabilities, as measured by the Gini index, it is notable that as the loss is made more ordinal-aware, the predicted probabilities tend to be more spread across the classes. This could already be seen in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. Interestingly, the OE distribution is almost identical to the CE distribution and has been omitted from the figure for legibility.</ns0:p><ns0:p>On average, and for most metrics, non-parametric losses outperformed parametric losses. This difference can be justified by the greater flexibility in decision boundaries provided by non-parametric losses. OE, CO2 and HO2 provided better results across the different metrics when compared to BU and PU.</ns0:p><ns0:p>Most work from the literature concerns the binary case on the Herlev dataset (normal vs abnormal); only a couple concern themselves with the 7-class and 4-class ordinal classification problems. Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref> contrasts the best performing models from two recent works against the proposed method. In our case, the non-parametric loss (CO2) was able to beat the state of the art by 11.1% (7 classes) and by 10% (4 classes) in the accuracy metric. Furthermore, the confusion matrix from Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> contrasts the proposal against <ns0:ref type='bibr' target='#b20'>Lin et al. (2019)</ns0:ref>.</ns0:p><ns0:p>Some classes of cells are easier to classify than others, as shown by the confusion matrix in Figure 5(b). The main challenge occurs in the classification of abnormal cells (i.e., mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma), whose characteristics are very similar. The abnormal classes correspond to different levels of evolution of the same structures, with a progressive change in their characteristics, which leads cells to present characteristics common to two levels; classifying them correctly is a hard task even for cytopathologists. Thus, the correct multi-class classification of abnormal cells is highly desirable and of substantial clinical value.</ns0:p><ns0:p>Finally, the influence of the losses on the output probabilities is illustrated in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> when predicting two classes for the 7-class case. 
Contrasting this to Figure <ns0:ref type='figure'>1</ns0:ref>, it is clear that the proposed loss tends to promote a unimodal distribution of probabilities relative to the CE loss, which tends to maximize the probability in the output corresponding to the true class while ignoring the distribution of all the other probabilities.</ns0:p><ns0:p>Furthermore, the proposed loss is a convenient way of introducing ordinality into the optimization problem without the major changes in architecture or data format required by other techniques from the literature. On the other hand, the proposed loss introduces two new hyperparameters, albeit the suggested values have been found to be robust. While motivated by this dataset, the proposed loss could potentially be used in other applications of ordinal classification or even in time-series problems.</ns0:p><ns0:p>In any case, there is still much room for improvement in the multi-class classification of cervical cells, since accuracy does not exceed 75.6%. The Herlev data set is mainly composed of expert-selected 'typical' cells; real-life circumstances are more complex because a cytology image contains many cells, not only a single cropped cell, so further work is needed before moving the results of this work to practice. Another important detail is the effect of overlapping nuclei and cell clumps, which has not been taken into account in this work. The presence of artifacts on the images also interferes with classification accuracy.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>-Entropy (CE): Traditionally, a CNN would perform multi-class classification by minimizing cross-entropy, averaged over the training set, CE(y_n, ŷ_n) = −∑_{k=1}^{K} y_{nk} log(ŷ_{nk}), where y_n = [y_{n1} ⋯ y_{nk} ⋯ y_{nK}] ∈ R^K represents the one-hot encoding of the class of the n-th observation and ŷ_n</ns0:figDesc></ns0:figure>
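The Results section above refers to two post-hoc inference rules (mode vs mean/expectation trick) and to a Gini-based sparsity measure. The following is a hedged sketch of how these could be computed; the exact Gini formula used by the authors is not given in the text, so the impurity form below is an assumption.

```python
# Sketch of the "expectation trick" used as an alternative to the argmax/mode, plus a
# simple Gini-index sparsity measure (illustrative; not necessarily the paper's formula).
import numpy as np

def predict_mode(probs):
    """Classical rule: pick the class with the highest probability."""
    return probs.argmax(axis=1)

def predict_expectation(probs):
    """Expectation trick: dot the probabilities with the class indices 0..K-1 and
    round to the nearest class, using all of the probability mass."""
    K = probs.shape[1]
    expected = probs @ np.arange(K)
    return np.clip(np.rint(expected), 0, K - 1).astype(int)

def gini_index(probs):
    """1 - sum(p^2), averaged over observations: 0 for a one-hot (sparse) prediction,
    larger when the probability mass is spread across classes."""
    return float(np.mean(1.0 - np.sum(probs ** 2, axis=1)))
```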
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1. Probabilities produced by two different models for observation n. CE is unable to distinguish both scenarios, setting the same loss for both. For ordinal problems, a unimodal distribution, peaking in the true class, is, arguably, preferable. In this example, k⋆_n = 3 is the assumed true class.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Defining ✶(x) as the indicator function of x and ReLU(x) = x✶(x > 0) = max(0, x), a tentative solution</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Schematic representation of the used and proposed ordinal losses.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Examples of data augmentation on the Herlev database. The original zero-padding image (left) and random transformations.</ns0:figDesc><ns0:graphic coords='7,206.79,194.79,283.47,66.13' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of state-of-the-art confusion matrix (7 classes) against WideResNet50 trained using the HO2 loss.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 (b). Columnar cells are sometimes inappropriately classified as severe dysplasia cells since severe dysplasia cells have similar characteristics in appearance and morphology with columnar cells (e.g., small cytoplasm, dark nuclei).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Probability distribution for WideResNet50 contrasting losses CE (solid line) and HO2 (dashed line).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The 7 Different Pap Smear Classes in the Herlev dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>WHO TBS Type of cell</ns0:cell><ns0:cell>Quantity</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>1 2 3</ns0:cell><ns0:cell>1 1 1</ns0:cell><ns0:cell>Superficial squamous epithelial Intermediate squamous epithelial Columnar epithelial</ns0:cell><ns0:cell>74 cells 70 cells 98 cells</ns0:cell></ns0:row><ns0:row><ns0:cell>Abnormal</ns0:cell><ns0:cell>4 5 6 7</ns0:cell><ns0:cell>2 3 3 4</ns0:cell><ns0:cell cols='2'>Mild squamous non-keratinizing dysplasia Moderate squamous non-keratinizing dysplasia 146 cells 182 cells Severe squamous non-keratinizing dysplasia 197 cells Squamous cell carcinoma in situ intermediate 150 cells</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Image examples of the 7 Different Pap Smear Classes in the Herlev dataset.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Results in terms of Mean Absolute Error (MAE) for 7 class problem, averaged for 10 folds.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Lower is better</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>alexnet</ns0:cell><ns0:cell cols='7'>0.46 ± 0.08 0.52 ± 0.09 0.50 ± 0.09 0.44 ± 0.08 0.90 ± 0.19 0.41 ± 0.08 0.45 ± 0.10</ns0:cell></ns0:row><ns0:row><ns0:cell>googlenet</ns0:cell><ns0:cell cols='7'>0.39 ± 0.05 0.41 ± 0.07 0.42 ± 0.08 0.38 ± 0.09 0.53 ± 0.10 0.37 ± 0.07 0.36 ± 0.06</ns0:cell></ns0:row><ns0:row><ns0:cell>mobilenet v2</ns0:cell><ns0:cell cols='7'>0.34 ± 0.05 0.36 ± 0.04 0.31 ± 0.04 0.33 ± 0.05 0.52 ± 0.26 0.34 ± 0.06 0.34 ± 0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>resnet18</ns0:cell><ns0:cell cols='7'>0.34 ± 0.09 0.36 ± 0.06 0.35 ± 0.06 0.35 ± 0.10 0.49 ± 0.11 0.34 ± 0.07 0.35 ± 0.10</ns0:cell></ns0:row><ns0:row><ns0:cell>resnext50 32x4d</ns0:cell><ns0:cell cols='7'>0.34 ± 0.07 0.33 ± 0.05 0.33 ± 0.03 0.34 ± 0.06 0.41 ± 0.08 0.33 ± 0.06 0.31 ± 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>shufflenet v2 x1 0 0.41 ± 0.07 0.49 ± 0.07 0.41 ± 0.05 0.38 ± 0.07 0.47 ± 0.08 0.40 ± 0.05 0.38 ± 0.06</ns0:cell></ns0:row><ns0:row><ns0:cell>squeezenet1 0</ns0:cell><ns0:cell cols='7'>0.38 ± 0.07 0.45 ± 0.05 0.46 ± 0.07 0.40 ± 0.09 0.97 ± 0.31 0.41 ± 0.08 0.45 ± 0.09</ns0:cell></ns0:row><ns0:row><ns0:cell>vgg16</ns0:cell><ns0:cell cols='7'>0.37 ± 0.09 0.44 ± 0.05 0.44 ± 0.10 0.37 ± 0.06 0.67 ± 0.15 0.36 ± 0.06 0.36 ± 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>wide resnet50 2</ns0:cell><ns0:cell cols='7'>0.33 ± 0.06 0.37 ± 0.05 0.32 ± 0.06 0.30 ± 0.04 0.45 ± 0.13 0.33 ± 0.06 0.35 ± 0.09</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>9/15</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:1:1:REVIEW 12 Dec 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Results in terms of Mean Absolute Error (MAE) for 4 class problem, averaged for 10 folds. ± 0.06 0.32 ± 0.04 0.28 ± 0.04 0.29 ± 0.06 0.47 ± 0.19 0.29 ± 0.05 0.31 ± 0.06 googlenet 0.24 ± 0.04 0.25 ± 0.03 0.25 ± 0.05 0.24 ± 0.05 0.38 ± 0.17 0.22 ± 0.05 0.25 ± 0.06 mobilenet v2 0.22 ± 0.06 0.21 ± 0.03 0.24 ± 0.05 0.22 ± 0.06 0.23 ± 0.04 0.24 ± 0.05 0.22 ± 0.05 resnet18 0.24 ± 0.03 0.26 ± 0.05 0.24 ± 0.05 0.22 ± 0.04 0.29 ± 0.11 0.22 ± 0.04 0.26 ± 0.06 resnext50 32x4d 0.21 ± 0.03 0.22 ± 0.04 0.23 ± 0.03 0.20 ± 0.04 0.28 ± 0.07 0.21 ± 0.03 0.22 ± 0.05 shufflenet v2 x1 0 0.28 ± 0.05 0.33 ± 0.05 0.27 ± 0.05 0.31 ± 0.06 0.36 ± 0.09 0.28 ± 0.06 0.28 ± 0.04 squeezenet1 0 0.28 ± 0.06 0.30 ± 0.05 0.30 ± 0.06 0.27 ± 0.07 0.66 ± 0.170.29 ± 0.04 0.31 ± 0.05 vgg16 0.27 ± 0.06 0.28 ± 0.06 0.26 ± 0.05 0.24 ± 0.03 0.53 ± 0.18 0.26 ± 0.05 0.27 ± 0.05 wide resnet50 2 0.23 ± 0.05 0.22 ± 0.04 0.20 ± 0.06 0.22 ± 0.05 0.43 ± 0.22 0.21 ± 0.05 0.22 ± 0.03</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Lower is better</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>alexnet 0.31 Avg</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.40</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Results in terms of Accuracy for 7 class problem, averaged for 10 folds. Higher is better</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>alexnet</ns0:cell><ns0:cell cols='4'>71.1 ± 5.1 60.6 ± 3.7 64.8 ± 5.4 70.1 ± 5.1</ns0:cell><ns0:cell cols='3'>44.2 ± 7.6 70.8 ± 5.1 67.9 ± 5.4</ns0:cell></ns0:row><ns0:row><ns0:cell>googlenet</ns0:cell><ns0:cell cols='4'>72.5 ± 3.7 66.1 ± 4.3 68.5 ± 4.5 71.5 ± 5.3</ns0:cell><ns0:cell cols='3'>59.7 ± 8.2 72.4 ± 4.9 72.4 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>mobilenet v2</ns0:cell><ns0:cell cols='7'>75.0 ± 4.4 69.0 ± 3.5 74.2 ± 2.8 74.4 ± 3.8 64.4 ± 16.5 73.1 ± 3.7 74.1 ± 3.9</ns0:cell></ns0:row><ns0:row><ns0:cell>resnet18</ns0:cell><ns0:cell cols='4'>74.4 ± 6.1 69.5 ± 3.7 73.3 ± 4.3 73.6 ± 6.4</ns0:cell><ns0:cell cols='3'>64.6 ± 6.5 73.3 ± 4.5 73.3 ± 6.4</ns0:cell></ns0:row><ns0:row><ns0:cell>resnext50 32x4d</ns0:cell><ns0:cell cols='4'>74.4 ± 3.7 72.4 ± 4.3 72.8 ± 2.8 74.0 ± 4.2</ns0:cell><ns0:cell cols='3'>68.0 ± 5.9 75.5 ± 3.5 75.7 ± 5.3</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>shufflenet v2 x1 0 71.9 ± 5.5 61.0 ± 4.5 67.7 ± 4.6 70.7 ± 4.9</ns0:cell><ns0:cell cols='3'>65.5 ± 4.5 70.7 ± 3.1 71.3 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>squeezenet1 0</ns0:cell><ns0:cell cols='7'>73.0 ± 4.3 63.3 ± 2.4 67.3 ± 3.6 71.8 ± 5.3 40.5 ± 13.3 70.8 ± 4.5 67.1 ± 5.0</ns0:cell></ns0:row><ns0:row><ns0:cell>vgg16</ns0:cell><ns0:cell cols='4'>73.1 ± 4.7 63.9 ± 4.6 67.6 ± 6.2 72.6 ± 3.8</ns0:cell><ns0:cell cols='3'>54.4 ± 8.5 71.8 ± 3.3 72.0 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>wide resnet50 2</ns0:cell><ns0:cell cols='4'>75.7 ± 3.2 69.7 ± 3.1 74.5 ± 4.3 76.8 ± 1.9</ns0:cell><ns0:cell cols='3'>66.1 ± 7.8 75.6 ± 4.0 74.3 ± 5.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>73.4</ns0:cell><ns0:cell>66.2</ns0:cell><ns0:cell>70.1</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>58.6</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>72.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>6</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Results in terms of Accuracy for 4 class problem, averaged for 10 folds. Higher is better</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>alexnet</ns0:cell><ns0:cell cols='7'>76.1 ± 3.8 72.8 ± 2.7 75.7 ± 4.0 76.8 ± 3.6 63.9 ± 12.5 75.9 ± 3.5 74.9 ± 3.9</ns0:cell></ns0:row><ns0:row><ns0:cell>googlenet</ns0:cell><ns0:cell cols='7'>79.9 ± 1.8 78.3 ± 2.6 77.3 ± 3.1 79.2 ± 4.0 69.4 ± 12.0 80.0 ± 3.8 78.4 ± 4.0</ns0:cell></ns0:row><ns0:row><ns0:cell>mobilenet v2</ns0:cell><ns0:cell cols='4'>81.8 ± 4.3 80.7 ± 2.5 78.8 ± 3.4 81.2 ± 4.9</ns0:cell><ns0:cell cols='3'>79.8 ± 3.7 79.2 ± 3.2 80.8 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>resnet18</ns0:cell><ns0:cell cols='4'>79.8 ± 2.6 77.2 ± 2.3 78.5 ± 4.1 80.7 ± 4.1</ns0:cell><ns0:cell cols='3'>75.2 ± 8.4 80.4 ± 3.8 78.0 ± 4.3</ns0:cell></ns0:row><ns0:row><ns0:cell>resnext50 32x4d</ns0:cell><ns0:cell cols='4'>82.0 ± 3.1 80.0 ± 3.5 79.5 ± 3.2 82.3 ± 4.3</ns0:cell><ns0:cell cols='3'>76.2 ± 5.1 80.8 ± 2.8 79.9 ± 3.9</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>shufflenet v2 x1 0 77.1 ± 3.7 72.1 ± 3.5 76.1 ± 3.5 75.0 ± 4.4</ns0:cell><ns0:cell cols='3'>70.4 ± 6.6 76.9 ± 3.9 76.2 ± 2.3</ns0:cell></ns0:row><ns0:row><ns0:cell>squeezenet1 0</ns0:cell><ns0:cell cols='7'>77.2 ± 4.2 73.5 ± 3.1 74.9 ± 5.1 77.3 ± 5.3 49.9 ± 12.2 75.5 ± 3.3 74.3 ± 4.5</ns0:cell></ns0:row><ns0:row><ns0:cell>vgg16</ns0:cell><ns0:cell cols='7'>77.9 ± 4.8 74.4 ± 4.7 77.5 ± 3.8 79.4 ± 2.5 58.1 ± 11.8 77.0 ± 3.9 77.4 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>wide resnet50 2</ns0:cell><ns0:cell cols='7'>80.8 ± 3.2 79.3 ± 3.3 82.2 ± 4.2 81.0 ± 3.9 64.0 ± 15.3 81.3 ± 4.2 80.6 ± 2.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>76.5</ns0:cell><ns0:cell>77.8</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>67.4</ns0:cell><ns0:cell>78.5</ns0:cell><ns0:cell>77.8</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>10/15</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:1:1:REVIEW 12 Dec 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Results in terms of Uniform Ordinal Classification Index (UOC) for 7 class problem, averaged for 10 folds. Lower is better.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>alexnet</ns0:cell><ns0:cell cols='4'>45.1 ± 6.5 51.7 ± 5.7 49.8 ± 6.6 44.0 ± 6.9</ns0:cell><ns0:cell cols='3'>70.3 ± 7.8 42.8 ± 7.3 46.4 ± 7.8</ns0:cell></ns0:row><ns0:row><ns0:cell>googlenet</ns0:cell><ns0:cell cols='4'>38.9 ± 6.0 44.2 ± 5.7 44.6 ± 7.3 39.0 ± 7.2</ns0:cell><ns0:cell cols='3'>51.3 ± 9.1 38.8 ± 6.9 38.1 ± 4.7</ns0:cell></ns0:row><ns0:row><ns0:cell>mobilenet v2</ns0:cell><ns0:cell cols='7'>36.0 ± 5.7 39.7 ± 4.9 33.6 ± 4.5 35.4 ± 5.6 46.7 ± 15.0 36.2 ± 6.4 36.2 ± 6.1</ns0:cell></ns0:row><ns0:row><ns0:cell>resnet18</ns0:cell><ns0:cell cols='4'>36.2 ± 9.3 40.1 ± 5.7 37.2 ± 6.3 37.3 ± 9.1</ns0:cell><ns0:cell cols='3'>46.9 ± 6.8 37.1 ± 7.6 37.8 ± 8.7</ns0:cell></ns0:row><ns0:row><ns0:cell>resnext50 32x4d</ns0:cell><ns0:cell cols='4'>36.9 ± 6.8 37.0 ± 5.2 37.6 ± 4.6 36.8 ± 6.1</ns0:cell><ns0:cell cols='3'>42.2 ± 6.7 35.3 ± 6.7 34.0 ± 7.2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>shufflenet v2 x1 0 41.8 ± 7.1 49.6 ± 6.4 43.6 ± 4.9 40.3 ± 6.3</ns0:cell><ns0:cell cols='3'>46.3 ± 6.0 42.4 ± 4.1 40.3 ± 4.9</ns0:cell></ns0:row><ns0:row><ns0:cell>squeezenet1 0</ns0:cell><ns0:cell cols='7'>40.4 ± 6.0 47.9 ± 3.8 47.5 ± 4.8 42.4 ± 8.1 73.6 ± 13.6 42.7 ± 7.4 46.8 ± 7.0</ns0:cell></ns0:row><ns0:row><ns0:cell>vgg16</ns0:cell><ns0:cell cols='7'>38.5 ± 8.2 47.2 ± 4.9 45.5 ± 8.6 39.0 ± 6.4 60.3 ± 10.0 40.2 ± 6.1 39.6 ± 6.8</ns0:cell></ns0:row><ns0:row><ns0:cell>wide resnet50 2</ns0:cell><ns0:cell cols='4'>35.7 ± 5.2 40.8 ± 5.4 35.6 ± 6.3 33.5 ± 4.5</ns0:cell><ns0:cell cols='3'>44.2 ± 9.1 34.8 ± 6.5 36.6 ± 8.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>38.8</ns0:cell><ns0:cell>44.2</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>38.6</ns0:cell><ns0:cell>53.5</ns0:cell><ns0:cell>39.0</ns0:cell><ns0:cell>39.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Results in terms of Uniform Ordinal Classification Index (UOC) for 4 class problem, averaged for 10 folds. Lower is better. ± 6.4 36.2 ± 6.4 34.6 ± 4.7 32.3 ± 3.8 55.1 ± 10.5 34.7 ± 5.5 35.1 ± 6.0 wide resnet50 2 30.2 ± 5.7 29.9 ± 4.9 28.2 ± 5.0 30.5 ± 6.2 47.7 ± 14.4 29.1 ± 5.6 30.7 ± 4.3</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>alexnet</ns0:cell><ns0:cell cols='8'>38.2 ± 5.1 39.5 ± 3.4 37.1 ± 4.3 37.0 ± 4.9 52.7 ± 14.2 37.4 ± 5.8 38.9 ± 5.8</ns0:cell></ns0:row><ns0:row><ns0:cell>googlenet</ns0:cell><ns0:cell cols='8'>31.6 ± 3.1 31.7 ± 3.6 34.4 ± 5.6 32.5 ± 5.7 44.7 ± 14.6 30.8 ± 5.5 32.9 ± 6.3</ns0:cell></ns0:row><ns0:row><ns0:cell>mobilenet v2</ns0:cell><ns0:cell cols='5'>30.1 ± 6.9 29.2 ± 3.7 32.8 ± 5.2 30.6 ± 7.5</ns0:cell><ns0:cell cols='3'>31.0 ± 4.8 32.5 ± 5.5 30.5 ± 5.4</ns0:cell></ns0:row><ns0:row><ns0:cell>resnet18</ns0:cell><ns0:cell cols='8'>31.4 ± 4.6 33.1 ± 3.7 32.3 ± 5.5 29.4 ± 6.0 36.7 ± 11.0 30.3 ± 4.1 33.2 ± 6.7</ns0:cell></ns0:row><ns0:row><ns0:cell>resnext50 32x4d</ns0:cell><ns0:cell cols='5'>28.7 ± 4.7 29.8 ± 4.9 32.0 ± 3.9 27.5 ± 5.3</ns0:cell><ns0:cell cols='3'>35.9 ± 4.8 28.8 ± 4.6 31.0 ± 5.2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>shufflenet v2 x1 0 35.8 ± 5.3 38.6 ± 4.7 36.7 ± 4.4 39.0 ± 6.5</ns0:cell><ns0:cell cols='3'>43.5 ± 9.0 36.4 ± 6.9 35.9 ± 4.7</ns0:cell></ns0:row><ns0:row><ns0:cell>squeezenet1 0</ns0:cell><ns0:cell cols='5'>36.6 ± 5.8 37.3 ± 4.3 38.2 ± 6.8 35.3 ± 6.9</ns0:cell><ns0:cell cols='3'>65.1 ± 9.4 37.6 ± 4.1 39.6 ± 4.6</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>vgg16 35.3 Avg</ns0:cell><ns0:cell>33.1</ns0:cell><ns0:cell>33.9</ns0:cell><ns0:cell>34.0</ns0:cell><ns0:cell>32.7</ns0:cell><ns0:cell>45.8</ns0:cell><ns0:cell>33.1</ns0:cell><ns0:cell>34.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>4</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Accuracy comparison of different models with literature for 7 and 4 classes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>7 classes</ns0:cell><ns0:cell>4 classes</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Accuracy (%) Accuracy (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Jantzen et al.</ns0:cell><ns0:cell>61.1</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Lin et al.</ns0:cell><ns0:cell>64.5</ns0:cell><ns0:cell>71.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposal</ns0:cell><ns0:cell>75.6</ns0:cell><ns0:cell>81.3</ns0:cell></ns0:row></ns0:table><ns0:note>11/15PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:1:1:REVIEW 12 Dec 2020)</ns0:note></ns0:figure>
</ns0:body>
" | "December 9, 2020
INESC TEC
Campus da Faculdade de Engenharia da Universidade do Porto
Rua Dr. Roberto Frias
4200-465 Porto
Portugal
Article id: 53280
Article Title: Ordinal Losses for Classification of Cervical Cancer Risk
Dear Editors,
We thank the reviewers for their generous comments on the manuscript and we have
edited the manuscript to address their concerns.
All of the code we wrote is available and I have included the link throughout the paper
to the appropriate code repository.
We look forward to hearing your thoughts about the updated manuscript.
Yours sincerely,
Tomé Mendes Albuquerque
On behalf of all authors.
Editor:
Regarding Reviewer 3’s comments, given the focus on cervical cancer imaging, it may
not be possible or necessary to extend the collection of datasets used in the study.
However, significance tests need to be performed on the estimates of predictive performance, and measures of variance need to be provided. This should be feasible even
with deep learning given the small size of the dataset.
The exact experimental protocol used in the study remains unclear to me. It is important to state exactly how much data is used for training, validation (i.e., parameter
tuning and early stopping), and testing. If stratified k-fold cross-validation is performed to establish the final performance estimates, then parameter tuning needs to
be performed separately for each of the k runs, making sure that information from the
test set of run k does not influence hyperparameter choice for run k in any way.
We agree with your concern: we implemented stratified 10-fold cross-validation and
we also implemented nested k-fold cross-validation for parameter tuning (the λ of our proposed loss).
We updated the manuscript by including in the ”Training” subsection more information about
the cross-validation methodology used during training.
In my opinion, given the current results, where OE appears very competitive with
the proposed new ordinal loss functions, the paper should deemphasise the novel loss
functions and instead focus on the possibility that using ordinal methods improves
results on this cancer data (assuming superiority holds after significance testing).
An empirical comparison of different deep ordinal classification approaches (including
the new ones) on this data seems a valuable contribution. In this regard, the suggestions by Reviewer 2 need to be addressed, particularly the simple baseline using the
”expectation trick” and the published deep ordinal methods cited in the review.
We agree with your concern that we should deemphasise the novel loss functions and
focus on the idea that using ordinal methods improves results on this cancer data.
We updated the manuscript by changing the article title to ”Ordinal Losses for Classification of Cervical Cancer Risk” and by adding more details on the distinction between parametric and
non-parametric losses. We also wrote, in the ”Results” and ”Conclusion” sections, a critical
analysis of the results obtained by the nominal loss (CE) versus the ordinal losses, and we also
analyze the differences between parametric and non-parametric losses in relation to
their performance.
Reviewer 1: Pingjun Chen
Basic reporting: The writing is unambiguous and easy to follow. Background and
related work are clear and rather detailed. ”Fully automatic knee osteoarthritis severity
grading using deep neural networks with a novel ordinal loss” is suggested to add to
the related work.
Experimental design: Considering the ordinal nature of the pap smear cell classification, the authors propose a non-parametric ordinal loss to promote the output
probabilities to follow a unimodal distribution.
Validity of the findings: The authors experiment with the proposed loss on the
Herlev dataset on multiple CNN architectures. In addition, the authors compare several
other losses. Experiments show the effectiveness of the proposed methods.
Comments for the author: The authors propose a novel ordinal loss for the pap
smear cell classification. They focus on promoting the unimodal distribution of the
output probabilities, which is a good insight into the ordinal classification problem.
The experiments and evaluations well demonstrate the idea.
Reviewer 2: Christopher Beckham
To elaborate on my ’stronger baselines’ point, it seems like the main reason why this
loss was proposed is because we do not necessarily want a distribution that is purely
unimodal (like in the case of PU). Perhaps that is partly because (1) the conditional
probability distribution should not be modelled by a unimodal distribution; and/or (2)
using PU (i.e. a binomial distribution) would be too constraining since the variance
cannot be easily controlled.
To address point (2): a variance-controlled version of the binomial distribution does
exist – called the Conway-Maxwell Binomial (CMB) [2,3] – which has a variance-controlling term. That means that your network could be modified to have two outputs:
(p, v) = f(x), and then you can maximise the log-likelihood of a CMB distribution. (A
more heuristic version of this was proposed in [1], but it’s essentially CMB.)
Secondly, to address point (1): why not just infer a mixture distribution between
a regular softmax (CE) distribution and a unimodal (PU) one? For instance, suppose your network was modified to have two outputs: p_s(y|x) and p_u(y|x), where
p_s denotes a regular softmax distribution and p_u the unimodal one, you could
simply (for some alpha in [0,1]) construct a mixture distribution between the two:
p(y|x) = alpha ∗ p_u(y|x) + (1 − alpha) ∗ p_s(y|x). alpha could either be a hyperparameter to tune, or you might even be able to get away with making it a learnable
parameter as part of the network. This would make for a somewhat interesting method,
since a high value of alpha would put more weight on p u, essentially acting as a strong
regulariser on the distribution.
Thirdly, the more competitive version of the simplest baseline (CE) would be to do
a post-hoc label prediction based on the ’expectation trick’ found in [1]. Essentially,
for some p(y|x), if we assign to each class an integer label [1, 2, . . . , K], we take the
expected value of this distribution by computing the dot product between p(y|x) and
[1, 2, . . . , K], and round the result to the nearest whole integer. This basically uses all
of the probability mass in p(y|x) to make a prediction.
In summary, I would compare your proposed technique to: - A more competitive CE
using the expectation trick
We agree with your concern and we implemented the expectation trick for all losses
and architectures. We updated the manuscript by adding two new tables (Tables A3
and A4) with the aggregated results for the 4- and 7-class problems, averaged over 10 folds.
- Use a parametric unimodal (PU) method using the CMB distribution
We found your proposal very interesting and decided to implement the new parametric
loss using a Poisson distribution, based on your article ”Unimodal Probability Distributions for Deep Ordinal Classification”. We named this loss Poisson Unimodal
(PU). We updated the manuscript by adding information about this loss in the ”Related
Work” section and by adding the results for this loss to all tables.
- Experiment with using a mixture distribution between p u and p s
We found your comment very interesting for future work; however, we decided not to
implement it in this article because we thought it would deviate from the central focus
of the article.
Other less significant points: - While it was appreciated that the authors tried out
a vast range of architectures, perhaps it would make for a better presentation if the
number of learnable parameters was stated for each of these architectures. You could
then explore performance vs # parameters. It seems like the dataset you have used
is extremely tiny, and having excessively large networks could degrade generalisation
performance here. If it saves you computational resources, I don’t think some of these
architectures are strictly needed in the analysis: for instance AlexNet and VGG, which
were superseded by ResNets (and for good reason). - It would be interesting to explore
the case where you don’t start off with pre-trained ImageNet weights. While I would
expect such a network to very easily overfit, it can be controlled with sufficient regularisation (weight decay), and also allow you to explore the effect of having a severely
constrained distribution (i.e. PU) in a ’low data’ setting.
We agree with your concern; we also want to explore, in future work, the performance
vs. number of parameters among the different architectures. We ran our models across a large
number of architectures to demonstrate the robustness of our proposed loss regardless of the
architecture.
Reviewer 3:
Experimental design Only averages of 5 folds are given, in order to show the robustness I suggest to provide results of multiple experiments (e.g. 10), then averages
and variance / standard deviation or box plots. And in addition I suggest to perform
statistical significance tests on the proposed and tested algorithms.
Validity of the findings The proposed algorithm cost functions are straightforward.
It would be a surprise if ordinal classification can benefit from these cost functions in
general. Only a rigorous statistical evaluation of the proposed cost functions based
on 10 or more data sets utilising more complex statistical evaluation (e.g. Wilcoxons
test) in comparison with the other approaches could prove the strength of the proposed
algorithm.
We agree with your concern: we implemented stratified 10-fold cross-validation
and retrained all the models. We also updated the results tables in the manuscript.
Furthermore, the other results are now compared to the best result for each loss, using a statistical test
under the hypothesis that they are equivalent. A p-value of 0.1 is used with
a one-sided paired t-test.
" | Here is a paper. Please give your review comments after reading it. |
33 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cervical cancer is the fourth leading cause of cancer-related deaths in women, especially in low to middle-income countries. Despite the outburst of recent scientific advances, there is no totally effective treatment, especially when the disease is diagnosed at an advanced stage.</ns0:p><ns0:p>Screening tests, such as cytology or colposcopy, have been responsible for a strong decrease in cervical cancer deaths. Automatic cervical cancer screening via Pap smear is a highly valuable cell imaging-based detection tool, where cells must be classified into one of a multitude of ordinal classes, ranging from abnormal to normal. Current approaches to ordinal inference for neural networks are found either to not take advantage of the ordinal problem or to be too uncompromising. A non-parametric ordinal loss for neural networks is proposed that promotes output probabilities that follow a unimodal distribution. This is done by imposing a set of constraints over all pairs of consecutive labels, which allows for a more flexible decision boundary relative to approaches from the literature. Our proposed loss is contrasted against other methods from the literature using a plethora of deep architectures. A first conclusion is the benefit of using non-parametric ordinal losses over parametric losses in cervical cancer risk prediction. Additionally, the proposed loss is found to be the top performer in several cases. The best performing model scores an accuracy of 75.6% for 7 classes and 81.3% for 4 classes.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The survival rate for women with cervical cancer is disturbing - in the USA, the 5-year survival rate for all women with cervical cancer is just 66%, and the disease is responsible for around 10 deaths per week in women aged 20 to 39 years <ns0:ref type='bibr' target='#b24'>(Siegel et al., 2020)</ns0:ref>. The main factor behind the high mortality rate is the asymptomatic character of cervical cancer in its initial stages, which justifies the need for early diagnosis. Screening tests have been responsible for a strong decrease in cervical cancer deaths. Screening programs are implemented in most developed countries and the process includes the Human papillomavirus (HPV) test, cytology test (or Pap smear), colposcopy, and biopsy <ns0:ref type='bibr'>(WHO, 2019)</ns0:ref>. HPV is a group of viruses known to influence the risk of cervical cancer - some types of HPV viruses produce dysplastic changes in cells that can progressively lead to the development of cancer <ns0:ref type='bibr'>(WHO, 2019)</ns0:ref>.</ns0:p><ns0:p>A cervical cytology test is used to detect potentially abnormal cells from the uterine cervix. These premalignant dysplastic changes of cells are classified in progressive stages: 7 stages by the World Health Organization (WHO) classification system or 4 stages by The Bethesda classification system (TBS) <ns0:ref type='bibr' target='#b8'>(DeMay, 2007)</ns0:ref>.</ns0:p><ns0:p>The risk of developing cancer is especially pronounced for the later stages. Therefore, distinguishing between the stages can be crucial for diagnosis. Yet, most of the literature focuses on binary classification (normal or abnormal), ignoring the fine-grained classification of cervical cells into different stages.</ns0:p><ns0:p>The classification of observations into naturally ordered classes, such as the stages of the premalignant dysplastic changes, is traditionally handled by conventional methods intended for the classification of nominal classes, where the order relation is ignored. This paper introduces a machine learning approach intended for multi-class classification problems where the classes are ordered. A non-parametric loss for ordinal data classification is proposed whose goal is to promote unimodality in the prediction distributions produced by the neural network; e.g., it would be inconsistent to predict that stage 1 and stage 3 are both more likely than stage 2. Yet, this loss is more flexible than other losses from the literature which force a binomial distribution in the output <ns0:ref type='bibr' target='#b7'>(Costa and Cardoso, 2005)</ns0:ref>. This loss is also contrasted with the standard cross-entropy loss and with networks that predict classes in the form of an ordinal encoding <ns0:ref type='bibr' target='#b6'>(Cheng et al., 2008)</ns0:ref>. The Herlev dataset, which comprises 917 images of individual cervical cells in different stages of the disease, is used in the experiments <ns0:ref type='bibr' target='#b16'>(Jantzen and Dounias, 2006)</ns0:ref> together with a plethora of CNN architectures.</ns0:p><ns0:p>In the next section, the problem and dataset at hand are presented. Other work on Pap smear cell classification is then reviewed in the 'Related Work' section. The proposed loss is elaborated on in the 'Proposal' section, and the experimental details are described in 'Experiments', with results and discussion presented in 'Results'. The study finishes with a 'Conclusion' section.</ns0:p></ns0:div>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>According to the WHO classification system, there are seven different types of Pap smear cells in cervical cancer progression. This system establishes three different types of normal cells and four different types of abnormal cells. From suspicious cells to carcinoma in situ (CIS), the premalignant dysplastic changes of cells can also include four stages, which are mild, moderate, severe dysplasia, and carcinoma in situ <ns0:ref type='bibr' target='#b28'>(Suhrland, 2000)</ns0:ref>. However, nowadays the most used classification system is the TBS classification system, which is widely accepted among the medical community. According to the TBS system, the Pap smear cells can be divided into four classes: normal, Low-grade Squamous Intraepithelial Lesion (LSIL), High-grade Squamous Intraepithelial Lesion (HSIL) and Carcinoma in situ <ns0:ref type='bibr' target='#b22'>(Nayar and Wilbur, 2015)</ns0:ref>.</ns0:p><ns0:p>The different stages of cervical cytology abnormalities are associated with different morphological changes in the cells, including the cytoplasm and nucleus. However, the small visual differences between some stages of cervical cells make the construction of a multi-class autonomous classification system a true challenge.</ns0:p><ns0:p>The dataset used in this work was the Herlev Dataset, which is a publicly available dataset 1 collected at the Herlev University Hospital (Denmark) using a digital camera and microscope with an image resolution of 0.201 µm per pixel <ns0:ref type='bibr' target='#b16'>(Jantzen and Dounias, 2006)</ns0:ref>. The preparation of the specimens followed the traditional Pap smear and Pap staining. To increase the certainty of diagnosis, two cytotechnicians and a doctor characterized the cervical images in the Herlev dataset into seven classes. The Herlev dataset is composed of a total of 917 images of individual cervical cells. Each image contains a ground truth segmentation and classification label. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the nomenclature of the 7 different classes from the dataset, wherein classes 1-3 correspond to types of normal cells and classes 4-7 to different levels of abnormal cells. Illustrations of these classes are then displayed in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>.</ns0:p><ns0:p>In most cases, the abnormal cells present a bigger nucleus than healthy cells. However, the normal columnar nucleus is not easy to distinguish from the severe dysplasia and/or carcinoma nucleus, which makes the classification between these different types of cells a challenge.</ns0:p><ns0:p>There is some imbalance in the class distribution of the dataset: 8%, 7%, 11%, 19%, 16%, 22%, and 17%, whereas 14% would be expected if the distribution was uniform.</ns0:p></ns0:div>
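The grouping of the 7 WHO classes into the 4 TBS classes follows Table 1. The small mapping below is an illustrative sketch of how the 4-class labels could be derived from the 7-class Herlev labels; the class indices (1..7 and 1..4) mirror the table, and the LSIL/HSIL names follow the TBS description above.

```python
# Illustrative WHO (7-class) to TBS (4-class) label mapping following Table 1.
WHO_TO_TBS = {1: 1, 2: 1, 3: 1,   # the three normal cell types collapse into TBS class 1
              4: 2,               # mild dysplasia            -> LSIL
              5: 3, 6: 3,         # moderate/severe dysplasia -> HSIL
              7: 4}               # carcinoma in situ

def to_tbs(who_label: int) -> int:
    return WHO_TO_TBS[who_label]
```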
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In most literature, the classification of Pap smear images consists in a binary separation between normal and abnormal cell (two classes), using different methodologies such as Support Vector Machines (SVM) <ns0:ref type='bibr' target='#b5'>(Chen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chankong et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Kashyap et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bora et al., 2017)</ns0:ref>, k-Nearest Neighbours (kNN) <ns0:ref type='bibr' target='#b4'>(Chankong et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bora et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b21'>Marinakis et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b9'>Fekri Ershad, 2019)</ns0:ref>, Fuzzy c-Means Algorithm (FCM) <ns0:ref type='bibr' target='#b4'>(Chankong et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>William et al., 2019)</ns0:ref>, k-Means clustering <ns0:ref type='bibr' target='#b23'>(Paul et al., 2015)</ns0:ref>, Artificial Neural Networks (ANN) <ns0:ref type='bibr' target='#b4'>(Chankong et al., 2014)</ns0:ref>, and, more recently, Convolutional</ns0:p><ns0:p>Neural Networks (CNN) <ns0:ref type='bibr' target='#b34'>(Zhang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Lin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Kurnianingsih et al., 2019)</ns0:ref>.</ns0:p><ns0:p>However, all this work consists of binary classification, which is useful for screening, but not enough for a confident diagnosis. Fewer works explore the multi-class classification of cervical cells on Herlev dataset:</ns0:p><ns0:p>Chankong et al. ( <ns0:ref type='formula'>2014</ns0:ref>) proposed a multi-class automatic cervical cancer cell classification system using different classifiers, such as FCM, ANN, and kNN. However, this system is based only on 9</ns0:p><ns0:p>cell-based features. The Chankong's approach applies feature extraction from the nucleus and cytoplasm in each image and requires manual selection of the best threshold to minimize the error when applying the FCM to construct the cell mask. More recently, <ns0:ref type='bibr' target='#b19'>Kurnianingsih et al. (2019)</ns0:ref> perform feature extraction in a more autonomous way using a CNN. The use of a CNN simplifies the pre-processing steps that were necessary for the previous Chankong's approach. <ns0:ref type='bibr' target='#b11'>Ghoneim et al. (2019)</ns0:ref> proposed a new approach for multi-class cervical cancer cell detection and classification, using in the first step CNNs to extract deep-learned features and in the second step, extreme learning machine (ELM)-based classifiers to classify the input cell images. <ns0:ref type='bibr' target='#b20'>Lin et al. (2019)</ns0:ref> proposed a new CNN-based method that combines cell image appearance with cell morphology for multi-class classification of cervical cells in the Herlev dataset. In all these cases, cross-entropy is adopted for ordinal data classification.</ns0:p><ns0:p>Assume that examples in a classification problem come from one of K classes, labelled from C (1) to C (K) , corresponding to their natural order in ordinal classes, and arbitrarily for nominal classes. </ns0:p></ns0:div>
<ns0:div><ns0:head>Cross-Entropy (CE)</ns0:head><ns0:p>Traditionally, a CNN would perform multi-class classification by minimizing the cross-entropy, averaged over the training set,</ns0:p><ns0:formula xml:id='formula_1'>CE(y_n, ŷ_n) = −∑_{k=1}^{K} y_{nk} log(ŷ_{nk}),</ns0:formula><ns0:p>where y_n = [y_{n1} ⋯ y_{nk} ⋯ y_{nK}] ∈ R^K represents the one-hot encoding of the class of the n-th observation and ŷ_n = [ŷ_{n1} ⋯ ŷ_{nk} ⋯ ŷ_{nK}] ∈ R^K is the output probability vector given by the neural network for observation n. Note that y_{nk} ∈ {0, 1}, ŷ_{nk} ∈ [0, 1] and ∑_{k=1}^{K} y_{nk} = ∑_{k=1}^{K} ŷ_{nk} = 1.</ns0:p><ns0:p>However, CE has limitations when applied to ordinal data. Defining k⋆_n ∈ {1, ⋯, K} as the index of the true class of observation x_n (the position where y_{nk} = 1), it is then clear that</ns0:p><ns0:formula xml:id='formula_2'>CE(y_n, ŷ_n) = − log(ŷ_{n k⋆_n}).</ns0:formula><ns0:p>Intuitively, CE is just trying to maximize the probability in the output corresponding to the true class, ignoring all the other probabilities. For this loss, an error between classes C^(1) and C^(2) is treated the same as an error between C^(1) and C^(K), which is undesirable for ordinal problems. Furthermore, the loss does not constrain the model to produce unimodal probabilities, so inconsistencies can be produced such as ŷ_{nj} > ŷ_{nℓ} < ŷ_{ni}, even when 1 ≤ j < ℓ < i ≤ K. It would be preferable for output probabilities to follow a unimodal distribution, as depicted by Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Cross-entropy is a fair approach for nominal data, where no additional information is available. However, for ordinal data, the order can be explored to further regularize learning.</ns0:p><ns0:p>Ordinal Encoding (OE): A model-agnostic way to introduce ordinality is to train binary classifiers, in the form of an ensemble, where each classifier tries to distinguish between each pair of adjacent classes, C^(i) and C^(i+1) <ns0:ref type='bibr' target='#b10'>(Frank and Hall, 2001</ns0:ref>). An adaptation for neural networks consists of training a single neural network to produce K − 1 outputs, where each output makes a binary decision between a pair of adjacent classes. The information on the ordinal distribution can, therefore, be encoded in the y labels themselves <ns0:ref type='bibr' target='#b6'>(Cheng et al., 2008)</ns0:ref>. In traditional one-hot encoding, classes are encoded using the indicator function ✶(k = k⋆), so that y_{nm} is 1 if k = k⋆_n and 0 otherwise. In ordinal encoding, classes are encoded using a cumulative distribution - the indicator function used is ✶(k < k⋆), so that y_{nm} is 1 if k < k⋆_n and 0 otherwise. Each output represents the incremental neighbor probability, and the inverse operation (during inference) is performed by summing up these outputs, p_{nk} = ∑_{m=1}^{K−1} y_{nm}.</ns0:p><ns0:p>Unimodal (U): Another method to promote ordinality in classification problems consists of constraining the discrete ordinal probability distributions to be unimodal using binomial or Poisson probability distributions:</ns0:p><ns0:p>→ Binomial Unimodal (BU): A different approach is to constrain the output of the network directly, approaching the problem under a regression setting. 
Instead of several outputs, the network predicts a single output representing the probability along the classes, with y_n = 0 representing k⋆_n = 1 and y_n = 1 representing k⋆_n = K <ns0:ref type='bibr' target='#b7'>(Costa and Cardoso, 2005;</ns0:ref> <ns0:ref type='bibr' target='#b0'>Beckham and Pal, 2017)</ns0:ref>. Thus, this model has only one output unit as the final layer. The model's sigmoid output is converted into class probabilities using the Binomial probability mass function. The goal of this approach is to maintain the ordinality of the classes by applying a parametric model for the output probabilities.</ns0:p></ns0:div>
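A minimal sketch of the ordinal-encoding (OE) targets described above is given here, with zero-indexed classes for convenience; thresholding each of the K−1 outputs at 0.5 before summing is an assumption, since the text only states that the outputs are summed at inference.

```python
# Sketch of ordinal-encoding (OE) targets: K-1 binary outputs, where output m answers
# "is the class greater than m?".
import torch

def ordinal_encode(labels, num_classes):
    """labels: (N,) ints in {0..K-1}  ->  targets: (N, K-1) floats in {0, 1}."""
    thresholds = torch.arange(num_classes - 1)            # 0 .. K-2
    return (labels.unsqueeze(1) > thresholds).float()

def ordinal_decode(sigmoids):
    """Inverse operation at inference: sum the K-1 (thresholded) outputs to recover
    the predicted class index."""
    return (sigmoids > 0.5).sum(dim=1)

# Example with K=4: label 2 becomes [1, 1, 0].
print(ordinal_encode(torch.tensor([0, 2, 3]), 4))
```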
<ns0:div><ns0:head>→ Poisson Unimodal (PU):</ns0:head><ns0:p>The Poisson probability mass function (PMF) is used to enforce a discrete unimodal probability distribution <ns0:ref type='bibr' target='#b0'>(Beckham and Pal, 2017)</ns0:ref>. As a final layer, the log Poisson PMF transform is applied together with a softmax to normalize the output as a probability distribution.</ns0:p><ns0:p>The major difference between the work of <ns0:ref type='bibr' target='#b7'>Costa and Cardoso (2005)</ns0:ref> and that of <ns0:ref type='bibr' target='#b0'>Beckham and Pal (2017)</ns0:ref> is that <ns0:ref type='bibr' target='#b0'>Beckham and Pal (2017)</ns0:ref> explore Binomial/Poisson distributions in the context of deep learning (rather than classical machine learning approaches); <ns0:ref type='bibr' target='#b0'>Beckham and Pal (2017)</ns0:ref> also propose the use of a learnable softmax temperature term to control the variance of the distribution. In the experiments, the temperature term (τ) was kept at a constant value of 1.</ns0:p><ns0:p>These parametric approaches sometimes sacrifice accuracy to ensure the ordinality assumption. This sacrifice might prove too much, especially given that modern deep learning datasets are very big and contain a significant number of mislabels. A loss is now proposed to stimulate a unimodal output without modifying the network architecture.</ns0:p></ns0:div>
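The PU head can be sketched as follows, in the spirit of Beckham and Pal (2017): the network emits a single positive rate, the Poisson log-PMF is evaluated at k = 0..K−1, and a softmax (with τ fixed at 1, as in the experiments) normalizes the result. The softplus parameterization of the rate is an implementation assumption.

```python
# Sketch of a Poisson-unimodal (PU) output head (illustrative, not the reference code).
import torch
import torch.nn.functional as F

def poisson_unimodal_probs(raw_output, num_classes, tau=1.0):
    """raw_output: (N, 1) unconstrained network output."""
    lam = F.softplus(raw_output) + 1e-6                           # positive Poisson rate
    k = torch.arange(num_classes, dtype=lam.dtype, device=lam.device)
    log_pmf = k * torch.log(lam) - lam - torch.lgamma(k + 1.0)    # log Poisson PMF at k
    return F.softmax(log_pmf / tau, dim=1)                        # unimodal over K classes
```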
<ns0:div><ns0:head>PROPOSAL</ns0:head><ns0:p>As already explored, CE presents drawbacks when applied to ordinal data. By focusing only on the mode of the distribution and ignoring all the other values in the output probability vector, one is not leveraging the ordinal information intrinsic to the data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fixing CE with an Ordinal Loss Term</ns0:head><ns0:p>A possible fix for CE is to add a regularization term that penalizes deviations from the unimodal setting.</ns0:p><ns0:p>Defining ✶(x) as the indicator function of x and ReLU(x) = x✶(x > 0) = max(0, x), a tentative solution for an order-aware loss could be</ns0:p><ns0:formula xml:id='formula_6'>CO(y_n, ŷ_n) = CE(y_n, ŷ_n) + λ ∑_{k=1}^{K−1} ✶(k ≥ k⋆_n) ReLU(ŷ_{n(k+1)} − ŷ_{n(k)}) + λ ∑_{k=1}^{K−1} ✶(k ≤ k⋆_n) ReLU(ŷ_{n(k)} − ŷ_{n(k+1)}),<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where λ ≥ 0 controls the relative importance of the extra terms favoring unimodal distributions.</ns0:p><ns0:p>Predicted probability values are expected to decrease monotonically as we depart left and right from the true class. The added terms penalize any deviation from this expected unimodal distribution, with a penalty proportional to the difference of the consecutive probabilities. The additional terms, although promoting unimodality, still allow flat distributions. A generalization of the previous idea is to add a margin of δ > 0 to the ReLU, imposing that the difference between consecutive probabilities is at least δ. This leads us to a second CE loss, CO2, suitable for ordinal classes:</ns0:p><ns0:formula xml:id='formula_7'>CO2(y_n, ŷ_n) = CE(y_n, ŷ_n) + λ ∑_{k=1}^{K−1} ✶(k ≥ k⋆_n) ReLU(δ + ŷ_{n(k+1)} − ŷ_{n(k)}) + λ ∑_{k=1}^{K−1} ✶(k ≤ k⋆_n) ReLU(δ + ŷ_{n(k)} − ŷ_{n(k+1)}).<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>A value of δ = 0.05 has been empirically found to provide a sensible margin. This loss is aligned with the proposal present in <ns0:ref type='bibr' target='#b1'>Belharbi et al. (2019)</ns0:ref>.</ns0:p></ns0:div>
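A minimal PyTorch sketch of Eq. (2) is given below. It follows the equation as printed (both sums include k = k⋆), uses zero-indexed classes, and assumes the network outputs raw logits that are turned into ŷ with a softmax; it is an illustration, not the authors' reference implementation.

```python
# Sketch of the CO2 loss in Eq. (2); lambda and delta as described in the text.
import torch
import torch.nn.functional as F

def co2_loss(logits, target, lam=1.0, delta=0.05):
    """logits: (N, K) raw outputs; target: (N,) true class indices in 0..K-1."""
    probs = F.softmax(logits, dim=1)                           # ŷ_n
    ce = F.cross_entropy(logits, target)                       # CE(y_n, ŷ_n)
    K = probs.shape[1]
    k = torch.arange(K - 1, device=logits.device).unsqueeze(0) # pairs (k, k+1)
    kstar = target.unsqueeze(1)                                # true class per sample
    diff_up = probs[:, 1:] - probs[:, :-1]                     # ŷ_{k+1} - ŷ_k
    # right of the mode (k >= k*): probabilities should decrease by at least delta
    right = (k >= kstar).float() * F.relu(delta + diff_up)
    # left of the mode (k <= k*): probabilities should increase by at least delta
    left = (k <= kstar).float() * F.relu(delta - diff_up)
    return ce + lam * (right + left).sum(dim=1).mean()
```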
<ns0:div><ns0:head>Beyond CO2: Ordinal Entropy Loss Function</ns0:head><ns0:p>In CO2, the CE term by itself is only trying to maximize the probability estimated in the true output class (while ignoring the remaining probabilities); the ordinal terms are promoting unimodality but not penalizing (almost) flat distributions. This also explains why the ordinal terms by themselves (especially the version without margin) are not enough to promote strong learning: the model could converge to solutions where the predicted probability in the true class is only slightly above the neighbouring probabilities, which will not, most likely, provide a strong generalization for new observations.</ns0:p><ns0:p>However, the extreme nature of CE, ignoring almost everything in the predicted distribution ŷ_n, is equivalent to assuming that the perfect probability distribution is one on the true class and zero everywhere else. This assumes a strong belief and dependence on the chosen one-hot encoding, which is often a crude approximation to the true class probability distribution. Seldom is the class deterministically known for a fixed observation x_n; rather, we expect a class distribution with a few non-zero values. This is particularly true for observations close to the boundaries between classes. A softer assumption is only that the distribution should have low entropy.</ns0:p><ns0:p>This leads us to propose the ordinal entropy loss, HO2, for ordinal data as</ns0:p><ns0:formula xml:id='formula_8'>HO2(y_n, ŷ_n) = H(ŷ_n) + λ ∑_{k=1}^{K−1} ✶(k ≥ k⋆_n) ReLU(δ + ŷ_{n(k+1)} − ŷ_{n(k)}) + λ ∑_{k=1}^{K−1} ✶(k ≤ k⋆_n) ReLU(δ + ŷ_{n(k)} − ŷ_{n(k+1)}),<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where H(p) denotes the entropy of the distribution p.</ns0:p></ns0:div>
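Following the same assumptions as the CO2 sketch above, Eq. (3) only swaps the cross-entropy term for the entropy of the predicted distribution; again this is an illustrative sketch, not the reference code.

```python
# Sketch of the HO2 loss in Eq. (3): entropy of ŷ plus the same margin penalties as CO2.
import torch
import torch.nn.functional as F

def ho2_loss(logits, target, lam=1.0, delta=0.05):
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()   # H(ŷ_n)
    K = probs.shape[1]
    k = torch.arange(K - 1, device=logits.device).unsqueeze(0)
    kstar = target.unsqueeze(1)
    diff_up = probs[:, 1:] - probs[:, :-1]
    penalty = ((k >= kstar).float() * F.relu(delta + diff_up) +
               (k <= kstar).float() * F.relu(delta - diff_up)).sum(dim=1).mean()
    return entropy + lam * penalty
```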
<ns0:div><ns0:head>EXPERIMENTS</ns0:head><ns0:p>Several neural network architectures are now trained using the aforementioned losses for the dataset at hand. In this work, it was also evaluated the performance differences between parametric and nonparametric losses for ordinal classification (Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>). All the experiments were implemented in PyTorch and are available online 2 .</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Pre-processing</ns0:head><ns0:p>Given that all images from the Herlev dataset were of different sizes, all images were resized to 224 × 224 pixels; however, before the resize of cytological images a zero-padding must be done to avoid the loss of essential information regarding cells shape. The last preprocessing step was to apply the same normalization as used by ImageNet <ns0:ref type='bibr' target='#b26'>(Simonyan and Zisserman, 2014)</ns0:ref>.</ns0:p><ns0:p>Since the Herlev database has a relatively small number of observations (917), the dataset was augmented by a series of random transformations: 10% of width and height shift, 10% of zoom, image rotation, horizontal and vertical flips, and color saturation. These transformations are illustrated in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Convolutional Neural Networks</ns0:head><ns0:p>A convolutional neural network (CNN) is a neural network that successively applies convolutions of filters to the image. These filters are learned and consist of quadrilateral patches that are convolved across the whole input image -unlike previous fully-connected networks, only local inputs are connected at each layer. Typically, each convolution is intertwined with downsampling operations, such as max-pooling, that successively reduces the size of the original image. In the end, a series of outputs produce the desired classification.</ns0:p><ns0:p>The final layers are fully-connected and then the final output is processed by a soft-max for multiclass problems or a logistic function for binary classification. Dropout was used to reduce overfitting by constraining these fully-connected layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Network Architectures</ns0:head><ns0:p>Two different models were trained and tested in this work for multi-class (4-class and 7-class) classification of Pap smear cells images (Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>). Both models were trained and tested with eight different convolutional network architectures: AlexNet <ns0:ref type='bibr' target='#b18'>(Krizhevsky et al., 2012)</ns0:ref>, GoogLeNet <ns0:ref type='bibr' target='#b29'>(Szegedy et al., 2015)</ns0:ref>, MobileNet V2 <ns0:ref type='bibr' target='#b13'>(Howard et al., 2017)</ns0:ref> Zisserman, 2014), and Wide ResNet50 2 <ns0:ref type='bibr' target='#b33'>(Zagoruyko and Komodakis, 2016)</ns0:ref>. The goal of testing these different architectures is to evaluate how well the proposed loss behaves in a wide range of architectures.</ns0:p><ns0:p>These 9 different architectures were chosen as they are often used in the literature and came pre-trained with PyTorch on ImageNet 3 . The last block of each architecture was replaced by these layers: dropout with p=20%, 512-unit dense layer with ReLU, dropout with p=20%, a 256-wide dense layer with ReLU, followed by K neuron outputs.</ns0:p><ns0:p>A brief introduction of each architecture is now presented. AlexNet, based on LeNet, formalized the Convolutional Neural Network (CNN) as is known today: a series of convolutions intertwined by downsampling blocks. Max-pooling was used for downsampling and ReLU was used as the activation function. It became famous for winning ImageNet, the first CNN to do so <ns0:ref type='bibr' target='#b18'>(Krizhevsky et al., 2012)</ns0:ref>.</ns0:p><ns0:p>The following ImageNet competitions were also won by other CNNs -VGG and GoogLeNet -which were evolutions on top of AlexNet which consisted mostly in an explodingly higher number of parameters <ns0:ref type='bibr' target='#b26'>(Simonyan and Zisserman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b29'>Szegedy et al., 2015)</ns0:ref>. Then, MobileNet <ns0:ref type='bibr' target='#b13'>(Howard et al., 2017)</ns0:ref> introduced hyperparameters to help the user choose between latency and accuracy trade-offs. An attempt was then made at curbing the number of parameters with ShuffleNet <ns0:ref type='bibr' target='#b35'>(Zhang et al., 2018)</ns0:ref> by approximating convolution operators using fewer parameters.</ns0:p><ns0:p>Finally, an attempt was made at curbing the number of parameters, which had been exploding, while keeping the accuracy of these early CNNs with SqueezeNet <ns0:ref type='bibr' target='#b14'>(Iandola et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In another line of research, ResNet <ns0:ref type='bibr' target='#b12'>(He et al., 2016)</ns0:ref> introduced residual blocks whose goal was to make the optimization process easier for gradient descent. Each residual block learns a = f (x) + x instead of a = f (x). Given that weights are initialized randomly around zero and most activation functions are also centred in zero (an exception would be the logistic activation function), then, in expectation, all neurons output zero before any training. Therefore, when using residual blocks, at time=0, a = x, i.e. activations produce the identity function. This greatly helps gradient descent focus on finding improvements (residuals) on top of the identity function. 
While this model allowed for deeper neural networks, each per cent of improved accuracy cost nearly doubling the number of layers, which motivated WideResNet <ns0:ref type='bibr' target='#b33'>(Zagoruyko and Komodakis, 2016)</ns0:ref> and ResNeXt <ns0:ref type='bibr' target='#b32'>(Xie et al., 2017)</ns0:ref> to improve the residual architecture to improve learning time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Training</ns0:head><ns0:p>The weights of the architectures previously mentioned are already initialized by pre-training on ImageNet.</ns0:p><ns0:p>Adam was used as the optimizer and starts with a learning rate of 10 −4 . The learning rate is reduced by 10% whenever the loss is stagnant for 10 epochs. The training process is completed after 100 epochs.</ns0:p><ns0:p>The dataset was divided into 10 different folds using stratified cross-validation, in order to maintain the class ratios. Therefore, the results are the average and deviation of these 10 folds. In the case of the proposed loss, the hyperparameter λ is tuned by doing nested k-fold cross-validating using the training 3 https://pytorch.org/docs/stable/torchvision/models.html</ns0:p></ns0:div>
<ns0:div><ns0:head>7/15</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:2:0:NEW 2 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science set (with k=5) in order to create an unbiased validation set.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>The most popular classification metric is accuracy (Acc). For N observations, taking k i and ki to be the label and prediction of the n-th observation, respectively, then Acc =</ns0:p><ns0:formula xml:id='formula_9'>1 N ∑ N n=1 ✶( k⋆ n = k ⋆ n )</ns0:formula><ns0:p>, where ✶ is the indicator function.</ns0:p><ns0:p>However, this metric treats all class errors as the same, whether the error is between adjacent classes or between classes in the extreme. If we have K classes represented by a set C = {C (1) , C (2) , . . . , C (K) }, then accuracy will treat an error between C (1) and C (2) with the same magnitude as an error between C (1) and C (K) which is clearly worse. As an illustration, in a medical setting, a misdiagnosis between</ns0:p><ns0:p>Stage II and Stage III of a disease, while bad, is not as bad as a misdiagnosis between Healthy and Stage III. For that reason, a popular metric for ordinal classification is the Mean Absolute Error (MAE),</ns0:p><ns0:formula xml:id='formula_10'>MAE = 1 N ∑ i |k ⋆ i − k⋆ i |.</ns0:formula><ns0:p>This metric is not perfect since it treats an ordinal variable as a cardinal variable.</ns0:p><ns0:p>An error between classes C (1) and C (3) will be treated as two times worse than an error between classes <ns0:ref type='table'>and C</ns0:ref> (2) . Naturally, the assumption of cardinality is not always warranted.</ns0:p><ns0:formula xml:id='formula_11'>C (1)</ns0:formula><ns0:p>To evaluate the models' performance we also used a specific metric for ordinal classification, Uniform</ns0:p><ns0:p>Ordinal Classification Index (UOC) which takes into account accuracy and ranking in the performance assessment and it is also robust against imbalanced classes <ns0:ref type='bibr' target='#b25'>(Silva et al., 2018)</ns0:ref>. The better the performance, the lower the UOC.</ns0:p><ns0:p>By combining a quality assessment (accuracy) with a quantity assessment (MAE) and also with a specific metric for ordinality (UOC) we hope to provide a balanced view of the performance of the methods.</ns0:p><ns0:p>The two other metrics used are the AUC of ROC or AUROC (Area Under the Receiver Operating Characteristic) and Kendall's τ rank correlation coefficient. AUROC measures how well-calibrated are the probabilities produced by the model. This first metric is used in the binary classification context (two classes) and is extended for multi-class by comparing each class against the rest (one vs rest strategy)</ns0:p><ns0:p>and performing an overall average, known as macro averaging. On the other hand, Kendall's Tau is a non-parametric evaluation of relationships between columns of ranked data, so it is a measure of ordinal association between data. The τ correlation coefficient returns a value that ranges from -1 to 1, with 0 being no correlation and 1 perfect correlation.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The average performance for the 10-folds of 9 different architectures are presented in Tables <ns0:ref type='table' target='#tab_9'>3-8</ns0:ref>, for both the 7-class and 4-class classification problems, with the seven different learning losses -conventional Cross-Entropy (CE), Binomial Unimodal (BU) <ns0:ref type='bibr' target='#b7'>(Costa and Cardoso, 2005)</ns0:ref>, Poisson Unimodal (PU) <ns0:ref type='bibr' target='#b0'>(Beckham and Pal, 2017)</ns0:ref>, Ordinal Encoding (OE) <ns0:ref type='bibr' target='#b6'>(Cheng et al., 2008)</ns0:ref> and our proposed losses (CO, CO2 and HO2), as measured by MAE, accuracy, UOC index and Kendall's coefficient (Appendixtable A1-A2) detailed in the previous section. The best models are shown in bold, while italic is used to check for statistical similarity between the other models and the best one. A p-value of 0.1 is used with a two-sided paired t-test due to the small sample size (10 folds).</ns0:p><ns0:p>For the 7-class classification problem, Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> shows the results for MAE, which confirm the influence of ordinal losses in promoting ordinality when comparing to nominal loss (CE).OE loss achieved the best performance across the different architectures but it is also notable the good performance of our loss, which in 67% of cases, the models trained with our proposed loss provide better MAE results. The MAE results present in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> for 4-class classification are consistent with the 7-class Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>, with ordinal losses winning over nominal CE.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref> and Table <ns0:ref type='table' target='#tab_7'>6</ns0:ref> are presented the accuracy results for 7-class and 4-class classification problems respectively. Regarding this metric, the results between nominal and ordinal losses are more balanced.</ns0:p><ns0:p>CE loss performance is above ordinal losses in 11% for 7-class problem and is tied for 4-class problem.</ns0:p><ns0:p>This can be explained by the lower role of ordinality in the CE loss, as also confirmed by the MAE results.</ns0:p><ns0:p>This means that when misclassification occurs, ordinal losses tend to classify Pap smear images as being closer to the real class. Results for UOC index (Table <ns0:ref type='table' target='#tab_8'>7</ns0:ref>-8) are also consistent with MAE metric, with 78% of the models presenting a lowest UOC index when using the ordinal losses. Manuscript to be reviewed problems. These results are also aligned with the results of MAE and UOC metrics, being the ordinal losses in advantage when comparing with nominal CE.</ns0:p><ns0:note type='other'>Computer Science Predicted Actual</ns0:note><ns0:p>Adding the margin (CO → CO2) influences positively most of the metrics for 7 and 4 classes. Using entropy (CO2 or HO2), instead of cross-entropy, promotes better results on the metrics intrinsically connected with ordinality (MAE, UOC and Kendall's τ coefficient).</ns0:p><ns0:p>The average results for all losses across the 9 different architectures for MAE, accuracy, UOC, AUROC, Kendall's τ coefficient and Gini index metrics are present in appendix Table <ns0:ref type='table' target='#tab_4'>A3</ns0:ref> and Table <ns0:ref type='table' target='#tab_5'>A4</ns0:ref> for 4 and 7 class classification respectively. 
In both Tables are represented the results using the classical mode (softmax) to aggregate the probabilities and also using mean (expectation trick) <ns0:ref type='bibr' target='#b0'>Beckham and Pal (2017)</ns0:ref>. Concerning the sparsity of the prediction probabilities, as measured by the Gini index, it is notable that as the loss is made more ordinal-aware then the predicted probabilities tend to be more spread across the classes. This could already be seen in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. Interestingly, the OE distribution is almost identical to the CE distribution and has been omitted from the figure for eligibility.</ns0:p><ns0:p>In average, in most metrics, non-parametric losses outperformed parametric losses. This difference can be justified with the greater flexibility in boundary decisions provided by non-parametric losses. OE, CO2 and HO2 provided better results across the different metrics when comparing to BU and PU.</ns0:p><ns0:p>Most work from the literature concerns the binary case using Herlev dataset (normal vs abnormal); only a couple concern themselves with the 7-class and 4-class ordinal classification problem. Table <ns0:ref type='table' target='#tab_10'>9</ns0:ref> contrasts the best performing models from two recent works against the proposed method. In our case, the non-parametric loss (CO2) was able to beat the state-of-the-art by 11.1% (7 classes) and by 10% (4 classes) in the accuracy metric. Furthermore, the confusion matrix from Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> contrasts the proposal against <ns0:ref type='bibr' target='#b20'>Lin et al. (2019)</ns0:ref>.</ns0:p><ns0:p>There are classes of cells easier to be classified than others, as shown by the confusion matrix in Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science severe dysplasia cells have similar characteristics in appearance and morphology with columnar cells (e.g., small cytoplasm, dark nuclei).</ns0:p><ns0:p>The main challenge occurs in the classification of abnormal cells (i.e., mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma) where the characteristics of these kinds of cells are very similar. The fact is that the abnormal classes correspond to different levels of evolution of structures, with a progressive change in its characteristics which leads them to present characteristics common to two levels, being a hard task even for cytopathologists to classify them correctly. Thus, the right multiclass classification of abnormal cells is highly desirable, and with substantial clinical value.</ns0:p><ns0:p>Finally, the influence of the losses on the output probabilities is illustrated in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> when predicting two classes for the 7-class case. Contrasting this to Figure <ns0:ref type='figure'>1</ns0:ref>, it is clear that the proposed loss tends to promote a unimodal distribution of probabilities relative to the CE loss, which tends to maximize the probability in the output corresponding to the true class and ignoring all the other probabilities distribution, and even in contrast to OE. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>Non-parametric losses achieved better results when comparing with parametric losses. This type of losses does not limit the learned representation to a specific parametric model, which allows during the train to explore different and larger spaces of solutions avoiding the need for ad hoc choices.</ns0:p><ns0:p>A new non-parametric loss is proposed for multi-class Pap smear cell-classification based on convolutional neural networks. This new loss demonstrated to be competitive with state-of-the-art results and more flexible than the existing in deep ordinal classification techniques when imposing uni-modality in probabilities distribution. The use of the proposed loss in training popular architectures from the literature outperforms the state-of-the-art by over 10%. Furthermore, the proposed loss is a convenient way of introducing ordinality to the optimization problem without the major changes in architecture or data format required by other techniques from the literature. On the other hand, the proposed loss is pestered by two new hyperparameters, albeit the suggested values have been found to be robust. While motivated by this dataset, the proposed loss could potentially be used by other applications of ordinal classification or even time-series problems.</ns0:p><ns0:p>In any case, there is a lot to improve in the multi-class classification of cervical cells to achieve better accuracy performances since results are still short of 75.6% accuracy. The Herlev data set is mainly composed of expert-selected 'typical' cells, however, in real-life circumstances, it is more complex because a cytology image contains lots of cells and not only a single cropped cell, so further work are needed before moving the results of this work to practice. Another important detail is the effect of overlapping nuclei and cell clumps, which has not been taken into account in this work. The presence of artefacts on the images also interferes with classification accuracy.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>-Entropy (CE): Traditionally, a CNN would perform multi-class classification by minimizing cross-entropy, averaged over the training set, CE(y n , ŷn ) = − K ∑ k=1 y nk log( ŷnk ), where y n = [y n1 • • • y nk • • • y nK ] ∈ R K represents the one-hot encoding of the class of the n-th observation and ŷn</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Figure1. Probabilities produced by two different models for observation n. CE is unable to distinguish both scenarios, setting the same loss for both. For ordinal problems, a unimodal distribution, peaking in the true class, is, arguably, preferable. In this example, k ⋆ n = 3 is the assumed true class.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Schematic representation of the used and proposed ordinal losses.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Examples of data augmentation on the Herlev database. The original zero-padding image (left) and random transformations.</ns0:figDesc><ns0:graphic coords='7,206.79,194.38,283.47,66.13' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Schematic representation of the model used for multi-class classification of Pap smear cells.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Comparison of state-of-the-art confusion matrix (7 classes) against WideResNet50 trained using the HO2 loss.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 (b). Columnar cells are sometimes inappropriately classified as severe dysplasia cells since</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Probability distribution for WideResNet50 contrasting losses CE (solid line) and HO2 (dashed line).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The 7 Different Pap Smear Classes in the Herlev dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>WHO TBS Type of cell</ns0:cell><ns0:cell>Quantity</ns0:cell></ns0:row><ns0:row><ns0:cell>Normal</ns0:cell><ns0:cell>1 2 3</ns0:cell><ns0:cell>1 1 1</ns0:cell><ns0:cell>Superficial squamous epithelial Intermediate squamous epithelial Columnar epithelial</ns0:cell><ns0:cell>74 cells 70 cells 98 cells</ns0:cell></ns0:row><ns0:row><ns0:cell>Abnormal</ns0:cell><ns0:cell>4 5 6 7</ns0:cell><ns0:cell>2 3 3 4</ns0:cell><ns0:cell cols='2'>Mild squamous non-keratinizing dysplasia Moderate squamous non-keratinizing dysplasia 146 cells 182 cells Severe squamous non-keratinizing dysplasia 197 cells Squamous cell carcinoma in situ intermediate 150 cells</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Image examples of the 7 Different Pap Smear Classes in the Herlev dataset.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Results in terms of Mean Absolute Error (MAE) for 7 class problem, averaged for 10 folds (lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>AlexNet</ns0:cell><ns0:cell cols='7'>0.46 ± 0.08 0.52 ± 0.09 0.50 ± 0.09 0.44 ± 0.08 0.90 ± 0.19 0.41 ± 0.08 0.45 ± 0.10</ns0:cell></ns0:row><ns0:row><ns0:cell>GoogLeNet</ns0:cell><ns0:cell cols='7'>0.39 ± 0.05 0.41 ± 0.07 0.42 ± 0.08 0.38 ± 0.09 0.53 ± 0.10 0.37 ± 0.07 0.36 ± 0.06</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNet v2</ns0:cell><ns0:cell cols='7'>0.34 ± 0.05 0.36 ± 0.04 0.31 ± 0.04 0.33 ± 0.05 0.52 ± 0.26 0.34 ± 0.06 0.34 ± 0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet18</ns0:cell><ns0:cell cols='7'>0.34 ± 0.09 0.36 ± 0.06 0.35 ± 0.06 0.35 ± 0.10 0.49 ± 0.11 0.34 ± 0.07 0.35 ± 0.10</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNeXt50 32x4d</ns0:cell><ns0:cell cols='7'>0.34 ± 0.07 0.33 ± 0.05 0.33 ± 0.03 0.34 ± 0.06 0.41 ± 0.08 0.33 ± 0.06 0.31 ± 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>ShuffleNet v2 x1 0 0.41 ± 0.07 0.49 ± 0.07 0.41 ± 0.05 0.38 ± 0.07 0.47 ± 0.08 0.40 ± 0.05 0.38 ± 0.06</ns0:cell></ns0:row><ns0:row><ns0:cell>SqueezeNet1 0</ns0:cell><ns0:cell cols='7'>0.38 ± 0.07 0.45 ± 0.05 0.46 ± 0.07 0.40 ± 0.09 0.97 ± 0.31 0.41 ± 0.08 0.45 ± 0.09</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell cols='7'>0.37 ± 0.09 0.44 ± 0.05 0.44 ± 0.10 0.37 ± 0.06 0.67 ± 0.15 0.36 ± 0.06 0.36 ± 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>Wide ResNet50 2</ns0:cell><ns0:cell cols='7'>0.33 ± 0.06 0.37 ± 0.05 0.32 ± 0.06 0.30 ± 0.04 0.45 ± 0.13 0.33 ± 0.06 0.35 ± 0.09</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row></ns0:table><ns0:note>bold: best model, italic: statistically similar to best (paired t-test).9/15PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:2:0:NEW 2 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Results in terms of Mean Absolute Error (MAE) for 4 class problem, averaged for 10 folds (lower is better). ± 0.06 0.32 ± 0.04 0.28 ± 0.04 0.29 ± 0.06 0.47 ± 0.19 0.29 ± 0.05 0.31 ± 0.06 GoogLeNet 0.24 ± 0.04 0.25 ± 0.03 0.25 ± 0.05 0.24 ± 0.05 0.38 ± 0.17 0.22 ± 0.05 0.25 ± 0.06 MobileNet v2 0.22 ± 0.06 0.21 ± 0.03 0.24 ± 0.05 0.22 ± 0.06 0.23 ± 0.04 0.24 ± 0.05 0.22 ± 0.05 ResNet18 0.24 ± 0.03 0.26 ± 0.05 0.24 ± 0.05 0.22 ± 0.04 0.29 ± 0.11 0.22 ± 0.04 0.26 ± 0.06 ResNeXt50 32x4d 0.21 ± 0.03 0.22 ± 0.04 0.23 ± 0.03 0.20 ± 0.04 0.28 ± 0.07 0.21 ± 0.03 0.22 ± 0.05 ShuffleNet v2 x1 0 0.28 ± 0.05 0.33 ± 0.05 0.27 ± 0.05 0.31 ± 0.06 0.36 ± 0.09 0.28 ± 0.06 0.28 ± 0.04 SqueezeNet1 0 0.28 ± 0.06 0.30 ± 0.05 0.30 ± 0.06 0.27 ± 0.07 0.66 ± 0.17 0.29 ± 0.04 0.31 ± 0.05 VGG16 0.27 ± 0.06 0.28 ± 0.06 0.26 ± 0.05 0.24 ± 0.03 0.53 ± 0.18 0.26 ± 0.05 0.27 ± 0.05 Wide ResNet50 2 0.23 ± 0.05 0.22 ± 0.04 0.20 ± 0.06 0.22 ± 0.05 0.43 ± 0.22 0.21 ± 0.05 0.22 ± 0.03</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>AlexNet 0.31 Avg</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.40</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Results in terms of Accuracy for 7 class problem, averaged for 10 folds. (higher is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>AlexNet</ns0:cell><ns0:cell cols='4'>71.1 ± 5.1 60.6 ± 3.7 64.8 ± 5.4 70.1 ± 5.1</ns0:cell><ns0:cell cols='3'>44.2 ± 7.6 70.8 ± 5.1 67.9 ± 5.4</ns0:cell></ns0:row><ns0:row><ns0:cell>GoogLeNet</ns0:cell><ns0:cell cols='4'>72.5 ± 3.7 66.1 ± 4.3 68.5 ± 4.5 71.5 ± 5.3</ns0:cell><ns0:cell cols='3'>59.7 ± 8.2 72.4 ± 4.9 72.4 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNet v2</ns0:cell><ns0:cell cols='7'>75.0 ± 4.4 69.0 ± 3.5 74.2 ± 2.8 74.4 ± 3.8 64.4 ± 16.5 73.1 ± 3.7 74.1 ± 3.9</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet18</ns0:cell><ns0:cell cols='4'>74.4 ± 6.1 69.5 ± 3.7 73.3 ± 4.3 73.6 ± 6.4</ns0:cell><ns0:cell cols='3'>64.6 ± 6.5 73.3 ± 4.5 73.3 ± 6.4</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>ResNeXt50 32x4d 74.4 ± 3.7 72.4 ± 4.3 72.8 ± 2.8 74.0 ± 4.2</ns0:cell><ns0:cell cols='3'>68.0 ± 5.9 75.5 ± 3.5 75.7 ± 5.3</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>ShuffleNet v2 x1 0 71.9 ± 5.5 61.0 ± 4.5 67.7 ± 4.6 70.7 ± 4.9</ns0:cell><ns0:cell cols='3'>65.5 ± 4.5 70.7 ± 3.1 71.3 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>SqueezeNet1 0</ns0:cell><ns0:cell cols='7'>73.0 ± 4.3 63.3 ± 2.4 67.3 ± 3.6 71.8 ± 5.3 40.5 ± 13.3 70.8 ± 4.5 67.1 ± 5.0</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell cols='4'>73.1 ± 4.7 63.9 ± 4.6 67.6 ± 6.2 72.6 ± 3.8</ns0:cell><ns0:cell cols='3'>54.4 ± 8.5 71.8 ± 3.3 72.0 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Wide ResNet50 2</ns0:cell><ns0:cell cols='4'>75.7 ± 3.2 69.7 ± 3.1 74.5 ± 4.3 76.8 ± 1.9</ns0:cell><ns0:cell cols='3'>66.1 ± 7.8 75.6 ± 4.0 74.3 ± 5.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>73.4</ns0:cell><ns0:cell>66.2</ns0:cell><ns0:cell>70.1</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>58.6</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>72.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Results in terms of Accuracy for 4 class problem, averaged for 10 folds (higher is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>AlexNet</ns0:cell><ns0:cell cols='7'>76.1 ± 3.8 72.8 ± 2.7 75.7 ± 4.0 76.8 ± 3.6 63.9 ± 12.5 75.9 ± 3.5 74.9 ± 3.9</ns0:cell></ns0:row><ns0:row><ns0:cell>GoogLeNet</ns0:cell><ns0:cell cols='7'>79.9 ± 1.8 78.3 ± 2.6 77.3 ± 3.1 79.2 ± 4.0 69.4 ± 12.0 80.0 ± 3.8 78.4 ± 4.0</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNet v2</ns0:cell><ns0:cell cols='4'>81.8 ± 4.3 80.7 ± 2.5 78.8 ± 3.4 81.2 ± 4.9</ns0:cell><ns0:cell cols='3'>79.8 ± 3.7 79.2 ± 3.2 80.8 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet18</ns0:cell><ns0:cell cols='4'>79.8 ± 2.6 77.2 ± 2.3 78.5 ± 4.1 80.7 ± 4.1</ns0:cell><ns0:cell cols='3'>75.2 ± 8.4 80.4 ± 3.8 78.0 ± 4.3</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNeXt50 32x4d</ns0:cell><ns0:cell cols='4'>82.0 ± 3.1 80.0 ± 3.5 79.5 ± 3.2 82.3 ± 4.3</ns0:cell><ns0:cell cols='3'>76.2 ± 5.1 80.8 ± 2.8 79.9 ± 3.9</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>ShuffleNet v2 x1 0 77.1 ± 3.7 72.1 ± 3.5 76.1 ± 3.5 75.0 ± 4.4</ns0:cell><ns0:cell cols='3'>70.4 ± 6.6 76.9 ± 3.9 76.2 ± 2.3</ns0:cell></ns0:row><ns0:row><ns0:cell>SqueezeNet1 0</ns0:cell><ns0:cell cols='7'>77.2 ± 4.2 73.5 ± 3.1 74.9 ± 5.1 77.3 ± 5.3 49.9 ± 12.2 75.5 ± 3.3 74.3 ± 4.5</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell cols='7'>77.9 ± 4.8 74.4 ± 4.7 77.5 ± 3.8 79.4 ± 2.5 58.1 ± 11.8 77.0 ± 3.9 77.4 ± 3.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Wide ResNet50 2</ns0:cell><ns0:cell cols='7'>80.8 ± 3.2 79.3 ± 3.3 82.2 ± 4.2 81.0 ± 3.9 64.0 ± 15.3 81.3 ± 4.2 80.6 ± 2.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>76.5</ns0:cell><ns0:cell>77.8</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>67.4</ns0:cell><ns0:cell>78.5</ns0:cell><ns0:cell>77.8</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table><ns0:note>bold: best model, italic: statistically similar to best (paired t-test).10/15PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:2:0:NEW 2 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Results in terms of Uniform Ordinal Classification Index (UOC) for 7 class problem, averaged for 10 folds (lower is better). ± 6.5 51.7 ± 5.7 49.8 ± 6.6 44.0 ± 6.9 70.3 ± 7.8 42.8 ± 7.3 46.4 ± 7.8 GoogLeNet 38.9 ± 6.0 44.2 ± 5.7 44.6 ± 7.3 39.0 ± 7.2 51.3 ± 9.1 38.8 ± 6.9 38.1 ± 4.7 MobileNet v2 36.0 ± 5.7 39.7 ± 4.9 33.6 ± 4.5 35.4 ± 5.6 46.7 ± 15.0 36.2 ± 6.4 36.2 ± 6.1 ResNet18 36.2 ± 9.3 40.1 ± 5.7 37.2 ± 6.3 37.3 ± 9.1 46.9 ± 6.8 37.1 ± 7.6 37.8 ± 8.7 ResNeXt50 32x4d 36.9 ± 6.8 37.0 ± 5.2 37.6 ± 4.6 36.8 ± 6.1 42.2 ± 6.7 35.3 ± 6.7 34.0 ± 7.2 ShuffleNet v2 x1 0 41.8 ± 7.1 49.6 ± 6.4 43.6 ± 4.9 40.3 ± 6.3 46.3 ± 6.0 42.4 ± 4.1 40.3 ± 4.9 SqueezeNet1 0 40.4 ± 6.0 47.9 ± 3.8 47.5 ± 4.8 42.4 ± 8.1 73.6 ± 13.6 42.7 ± 7.4 46.8 ± 7.0 VGG16 38.5 ± 8.2 47.2 ± 4.9 45.5 ± 8.6 39.0 ± 6.4 60.3 ± 10.0 40.2 ± 6.1 39.6 ± 6.8 Wide ResNet50 2 35.7 ± 5.2 40.8 ± 5.4 35.6 ± 6.3 33.5 ± 4.5 44.2 ± 9.1 34.8 ± 6.5 36.6 ± 8.4</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>AlexNet 45.1 Avg</ns0:cell><ns0:cell>38.8</ns0:cell><ns0:cell>44.2</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>38.6</ns0:cell><ns0:cell>53.5</ns0:cell><ns0:cell>39.0</ns0:cell><ns0:cell>39.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Results in terms of Uniform Ordinal Classification Index (UOC) for 4 class problem, averaged for 10 folds (lower is better).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>CE</ns0:cell><ns0:cell>BU</ns0:cell><ns0:cell>PU</ns0:cell><ns0:cell>OE</ns0:cell><ns0:cell>CO</ns0:cell><ns0:cell>CO2</ns0:cell><ns0:cell>HO2</ns0:cell></ns0:row><ns0:row><ns0:cell>AlexNet</ns0:cell><ns0:cell cols='7'>38.2 ± 5.1 39.5 ± 3.4 37.1 ± 4.3 37.0 ± 4.9 52.7 ± 14.2 37.4 ± 5.8 38.9 ± 5.8</ns0:cell></ns0:row><ns0:row><ns0:cell>GoogLeNet</ns0:cell><ns0:cell cols='7'>31.6 ± 3.1 31.7 ± 3.6 34.4 ± 5.6 32.5 ± 5.7 44.7 ± 14.6 30.8 ± 5.5 32.9 ± 6.3</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNet v2</ns0:cell><ns0:cell cols='4'>30.1 ± 6.9 29.2 ± 3.7 32.8 ± 5.2 30.6 ± 7.5</ns0:cell><ns0:cell cols='3'>31.0 ± 4.8 32.5 ± 5.5 30.5 ± 5.4</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet18</ns0:cell><ns0:cell cols='7'>31.4 ± 4.6 33.1 ± 3.7 32.3 ± 5.5 29.4 ± 6.0 36.7 ± 11.0 30.3 ± 4.1 33.2 ± 6.7</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>ResNeXt50 32x4d 28.7 ± 4.7 29.8 ± 4.9 32.0 ± 3.9 27.5 ± 5.3</ns0:cell><ns0:cell cols='3'>35.9 ± 4.8 28.8 ± 4.6 31.0 ± 5.2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>ShuffleNet v2 x1 0 35.8 ± 5.3 38.6 ± 4.7 36.7 ± 4.4 39.0 ± 6.5</ns0:cell><ns0:cell cols='3'>43.5 ± 9.0 36.4 ± 6.9 35.9 ± 4.7</ns0:cell></ns0:row><ns0:row><ns0:cell>SqueezeNet1 0</ns0:cell><ns0:cell cols='4'>36.6 ± 5.8 37.3 ± 4.3 38.2 ± 6.8 35.3 ± 6.9</ns0:cell><ns0:cell cols='3'>65.1 ± 9.4 37.6 ± 4.1 39.6 ± 4.6</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell cols='7'>35.3 ± 6.4 36.2 ± 6.4 34.6 ± 4.7 32.3 ± 3.8 55.1 ± 10.5 34.7 ± 5.5 35.1 ± 6.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wide ResNet50 2</ns0:cell><ns0:cell cols='7'>30.2 ± 5.7 29.9 ± 4.9 28.2 ± 5.0 30.5 ± 6.2 47.7 ± 14.4 29.1 ± 5.6 30.7 ± 4.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Avg</ns0:cell><ns0:cell>33.1</ns0:cell><ns0:cell>33.9</ns0:cell><ns0:cell>34.0</ns0:cell><ns0:cell>32.7</ns0:cell><ns0:cell>45.8</ns0:cell><ns0:cell>33.1</ns0:cell><ns0:cell>34.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Winners</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table><ns0:note>bold: best model, italic: statistically similar to best (paired t-test).</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Accuracy comparison of different models with literature for 7 and 4 classes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>7 classes</ns0:cell><ns0:cell>4 classes</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Accuracy (%) Accuracy (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Jantzen et al.</ns0:cell><ns0:cell>61.1</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Lin et al.</ns0:cell><ns0:cell>64.5</ns0:cell><ns0:cell>71.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposal</ns0:cell><ns0:cell>75.6</ns0:cell><ns0:cell>81.3</ns0:cell></ns0:row></ns0:table><ns0:note>11/15PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:53280:2:0:NEW 2 Feb 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "February 2, 2021
INESC TEC
Campus da Faculdade de Engenharia da Universidade do Porto
Rua Dr. Roberto Frias
4200-465 Porto
Portugal
Article id: 53280
Article Title: Ordinal Losses for Classification of Cervical Cancer Risk
Dear Editors,
We thank the reviewers for their generous comments on the manuscript and we have
edited the manuscript to address their concerns.
All of the code we wrote is available and I have included the link throughout the paper
to the appropriate code repository.
We look forward to hearing your thoughts about the updated manuscript.
Yours sincerely,
Tomé Mendes Albuquerque
On behalf of all authors.
Editor: Eibe Frank
Thank you very much for your efforts to improve the paper. Reviewer 2 has some
comments regarding the changes that you have made. Please address these (which
should hopefully not require a substantial amount of work) and resubmit.
Thank you for the feedback. We tried to address all the comments that Reviewer 2
made regarding the changes that we did on the article.
Reviewer 1: Pingjun Chen
Comments for the Author: Revision well resolved raised issues.
Thank you for the positive feedback.
Reviewer 2: Christopher Beckham
Regarding the unimodal papers cited: currently [2] is called ”Poisson unimodal” (PU).
This is a mischaracterisation because [2] also explores the binomial distribution. Furthermore, although [1] only considers the binomial distribution (BU), [3] (same authors
as [1]) wrote a more recent paper which considers both binomial and Poisson. Therefore, I would suggest that you simply refer to both styles (binomial/poisson) as simply
”Unimodal” and cite [2] and [3] together. E.g. from lines 133-147 you simply say
something like ”Unimodal (da Costa et al (2008), Beckham & Pal (2017)) constrains
the output of ...”
2
Also, the difference between [2] and [3] is simply that [2] is exploring binomial/poisson
in the context of deep learning (rather than classical ML), [2] also proposes using a
learnable softmax temperature term to control the variance of the distribution (which
I mentioned in my first review as the Conway-Maxwell Binomial,Poisson distribution).
Therefore, in lines 141-147 you should explicitly state whether you are learning or
tuning such a temperature term or not. If you’re not doing this, then your implementation of PU/BU is going to be more faithful to [3] than it is to [2], and that should
be mentioned.
We agree with your suggestion that we must simply refer to both styles (binomial/poisson) as simply ”Unimodal”. We changed that section (lines 133-150) by
creating a new Unimodal losses section and then we divide them in Binomial Loss and
Poisson Loss. We also citted Beckham & Pal (2017) article inside Binomial Unimodal
(BU). We add information about the use of the learnable softmax temperature term
in our work.
In line 271 you say ”In all the Tables of results, the best results for each loss are in
bold”, followed by ”Furthermore, the other results are compared to the best result and a
statistical test is used with the hypothesis of them being the same or not”. This makes
it sound like the bold numbers are simply referring to the ”best” value (i.e. lowest
error, or higher accuracy), with statistical testing being something else added to the
table. When I look at the table, for each row (with each row denoting an architecture),
I see multiple bold results. The issue here is that, in most ML papers the bold number
traditionally means the ”best” value, whereas here I am seeing that it is referring to a
whether that result is statistically significant or not. But to make matters even more
confusing, the statistical test seems to be done with respect to the *best* loss. For
instance, for each row, rather than doing t test(baseline, loss1), t test(baseline, loss2),
etc... it looks like you are doing t test(loss1, best loss), t test(loss2, best loss), etc. It
would be much easier to interpret the result if each t-test was performed *with respect*
to the CE baseline.
3
I would suggest the following changes to the table:
- Instead of using boldface to determine statistical significance, format the table similar
to how it is done in [4]’s Table 2, i.e. a black dot next to the number (x +/- y) indicates
whether that result is statistically significant *with respect* to the baseline (CE) loss.
- To also be consistent with most deep learning papers, use boldface to highlight the
*best* result (i.e. lowest MAE, or highest accuracy, etc.). You could constrain the
”best” result to only be amongst the ones that passed the t-test. This means that for
each row in the table, only one of the losses is going to be in bold.
We understand your very pertinent concern; however, the reason why we perform
statistical tests is not only to compare against the baseline, but also because we want
to explore other questions such as how the families of models compare against one
another (such as parametric vs. non-parametric). In order to reflect your concerns,
we changed the results by only using bold for the best loss, but adding italic for the
statistical test.
You also need to make it clear in the text if these table numbers are averages from the
validation folds or test folds I am assuming that in your k-fold cross-validation setup,
for each iteration, 1/k folds is the valid set (for tuning HPs) and another 1/k folds is
the ’test set’. That means that when the k-fold cross-validation is completed you will
have results for a total of k validation folds and k test folds. Can you confirm that this
is the case? (Make this clear in lines 228-236.)
Yes, stratified k-fold is used (with k=10), and the results table are the average and
deviation of the results trained and evaluated using these 10 folds. In the case of the
CO, CO2 and HO2, there is an hyperparamter λ that was chosen by doing nested
k-fold – that is, for each one of these ten fold, the training set is again divided into
training-test sets in order to find the best value of λ without looking at the final test
fold. This was now clarified in the text.
4
Typos, etc:
- Figure 5a: Lin et al is mentioned twice
- Line 204: ”512 dense layer” → ”512-unit dense layer”, same for 256 dense layer
We apologize for the typos, we have fixed them.
5
" | Here is a paper. Please give your review comments after reading it. |
35 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The design of an observer-based robust tracking controller is investigated and successfully applied to control an Activated Sludge Process (ASP) in this study. To this end, Takagi-Sugeno (T-S) fuzzy modeling is used to describe the dynamics of a nonlinear system with disturbance. Since the states of the system are not fully available, a fuzzy observer is designed. Based on the observed states and a reference state model, a reduced-order fuzzy controller for trajectory tracking purposes is then proposed. While the controller and the observer are developed, the design goal is to achieve convergence together with a guaranteed \(H_{\infty}\) performance. By using Lyapunov and \(H_{\infty}\) theories, sufficient conditions for the synthesis of a fuzzy observer and a fuzzy controller for T-S fuzzy systems are derived. Using some special manipulations, these conditions are reformulated as a linear matrix inequality (LMI) problem. Finally, the robust and effective tracking performance of the proposed controller is tested through simulations to control the dissolved oxygen and the substrate concentrations in an activated sludge process.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In the last few years, Takagi-Sugeno (TS) fuzzy modelling, a multi-modelling approach, has emerged as a powerful tool that provides a consistent and efficient way to handle problems related to the modelling and control of nonlinear systems, especially wastewater treatment plants (WWTP). These processes are classified as complex systems due to their nonlinear dynamics, large uncertainties and the lack of measurements. Hence, they have become a topic of substantial interest and the subject of several studies using the TS fuzzy approach. For example, in <ns0:ref type='bibr' target='#b4'>(Carlos-Hernandez et al., 2006)</ns0:ref> an application to an anaerobic digestion process has been proposed, where a linearization study involving various representative operating points is first carried out to obtain the TS model, and a TS fuzzy observer is then designed and experimentally validated. The authors in <ns0:ref type='bibr' target='#b13'>(Nagy Kiss et al., 2011)</ns0:ref> propose a proportional integral observer for uncertain TS fuzzy systems affected by unknown inputs, using the \(L_2\)-gain to minimize the effect of the unknown input; the method has been applied to a reduced model of the Activated Sludge Model No.1 (ASM1). The authors in <ns0:ref type='bibr' target='#b2'>(Belchior et al., 2012)</ns0:ref> propose the regulation of the dissolved oxygen concentration in a WWTP through the implementation of an adaptive fuzzy controller. The article <ns0:ref type='bibr' target='#b1'>(Aouaouda et al., 2012)</ns0:ref> deals with a fault tolerant control problem for an activated sludge process, where an uncertain TS model is considered for state and fault estimation and used in a robust tracking control scheme based on the \(L_2\)-gain. Recently, in <ns0:ref type='bibr' target='#b9'>(Li et al., 2020)</ns0:ref>, the control of the dissolved oxygen is presented using a fuzzy predictive model in which the membership functions are obtained with the fuzzy C-means clustering algorithm.</ns0:p><ns0:p>In parallel, besides the stabilization problem, tracking control designs are also important issues for practical applications, and there are very successful studies dealing with output/state tracking control design based on the TS fuzzy approach. In <ns0:ref type='bibr' target='#b10'>(Lin et al., 2007)</ns0:ref> an observer-based output tracking control is investigated for TS fuzzy systems with time-delay. The authors in <ns0:ref type='bibr' target='#b11'>(Nachidi et al., 2011)</ns0:ref> studied the problem of robust output tracking control of uncertain discrete-time TS fuzzy systems and its application to DC-DC converters. An adaptive fuzzy control is proposed in <ns0:ref type='bibr' target='#b3'>(Bououden et al., 2015)</ns0:ref> for an uncertain wastewater pre-treatment system modelled using the TS approach. A TS fuzzy tracking control problem with input saturation is addressed in <ns0:ref type='bibr'>(Yu et al., 2018)</ns0:ref> using an output feedback controller. In <ns0:ref type='bibr' target='#b0'>(Abyad et al., 2020)</ns0:ref> an output tracking control problem applied to a fermentation process has been scrutinized by considering the question of asymmetrical constraints on the control inputs.</ns0:p>
<ns0:p>There are also relevant studies on state tracking control, even though its design is more general and more difficult than that of output tracking control. For example, in <ns0:ref type='bibr' target='#b15'>(Senthilkumar and Mahanta, 2009)</ns0:ref> a TS fuzzy guaranteed cost controller for trajectory tracking in nonlinear systems is investigated; a fuzzy state feedback law is used to build the controller, whose performance is evaluated using a quadratic cost function.</ns0:p><ns0:p>By using observers to deal with the absence of full-state information, a robust TS fuzzy observer-based tracking controller is addressed in <ns0:ref type='bibr' target='#b6'>(Chang and Wu, 2012)</ns0:ref>, where the \(H_{\infty}\) performance is considered to mitigate the tracking error. In our previous study <ns0:ref type='bibr' target='#b8'>(Khallouq et al., 2020)</ns0:ref>, a robust observer-based tracking controller using a reference model is developed, where the controller and observer gains are obtained simultaneously, in a single step, by solving a set of linear matrix inequalities, and where the tracking problem concerns all the state variables.</ns0:p><ns0:p>It is evident that a high number of state variables leads to high-order controllers: a large problem has to be solved for high-order systems, which increases the numerical computations. Moreover, for many realistic applications it is not necessary to control all the state variables. Other difficulties arise when the system is disturbed: the presence of a disturbance may deteriorate the performance of the system and can be a source of instability. Therefore, it is more appropriate to develop a methodology that involves a low-dimensional design for systems with disturbance. Motivated by the discussion above, the presented work is an extension of <ns0:ref type='bibr' target='#b8'>(Khallouq et al., 2020)</ns0:ref>. We propose to design a reduced-order observer-based state tracking controller in which only the key state variables are considered, and where the TS fuzzy model is extended to deal with nonlinear systems with disturbance. Since the system states are not fully accessible to measurement, a TS fuzzy observer is used to reconstruct all of them. The main contribution of this article can be outlined as follows: we propose to split the TS system into two subsystems, one of which involves the part of the state variables to be controlled. We then reformulate the problem of robust observer-based state tracking control design. The controller is expressed by a feedback law, based on the classical structure of the Parallel Distributed Compensation (PDC) concept, involving the error between the estimate of the controlled state and the state of a reference model. Finally, we establish sufficient conditions to guarantee tracking performance for the part to be controlled and stabilization for the remaining part. Unlike <ns0:ref type='bibr' target='#b8'>(Khallouq et al., 2020)</ns0:ref>, here the observer synthesis is achieved separately from the controller synthesis. Whether for the controller or the observer, the used schemes produce a disturbance term, and an \(H_{\infty}\) performance criterion is used in each problem. Based on \(H_{\infty}\) and Lyapunov theory, conditions are developed to ensure convergence and to attenuate the tracking error and the estimation error, respectively, as much as possible. The results are formulated in terms of linear matrix inequalities (LMIs).</ns0:p>
<ns0:p>Finally, the efficiency and the robustness of both the tracking control and estimation schemes are demonstrated via simulations on an activated sludge treatment process, under a variety of operating conditions and simulated perturbations.</ns0:p><ns0:p>The paper is organised as follows: first, the modelling of the activated sludge process is described; the design of the observer-based fuzzy robust tracking controller is then presented, followed by the design of the robust TS fuzzy observer; the paper ends with an application of the proposed method to an activated sludge process.</ns0:p></ns0:div>
<ns0:div><ns0:head>THE ACTIVATED SLUDGE PROCESS MODEL</ns0:head><ns0:p>The activated sludge process, schematized in figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, is used as a biological purification step in wastewater treatment. It relies essentially on flocculating microorganisms mixed with dissolved oxygen and wastewater: the microorganisms come into contact with the organic pollutants present in the wastewater, as well as with the dissolved oxygen, while being kept in suspension. Through their natural metabolism, these microorganisms convert the organic matter into new cells, carbon dioxide and water. The purification takes place in two tanks, called the aerator and the settler. The energy required by the process is provided by the dissolved oxygen, and carbon dioxide is released in return. The mathematical model representing the process is obtained from the mass balances around the aerator and the settler as follows <ns0:ref type='bibr' target='#b14'>(Nejjari et al., 1999)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>\begin{aligned} \frac{dX}{dt} &= \mu(\cdot)X - D(1+q_r)X + q_r D X_r \\ \frac{dS}{dt} &= -\frac{1}{Y}\mu(\cdot)X - (1+q_r)DS + DS_{in} \\ \frac{dC_o}{dt} &= -\frac{K_0}{Y}\mu(\cdot)X - D(1+q_r)C_o + DC_{oin} + K_{La}(C_s - C_o) \\ \frac{dX_r}{dt} &= D(1+q_r)X - D(\beta+q_r)X_r \end{aligned} \quad (1)</ns0:formula><ns0:p>where</ns0:p><ns0:p>-\(X(t)\), \(S(t)\), \(C_o(t)\) and \(X_r(t)\) are respectively the biomass, substrate, dissolved oxygen and recycled biomass concentrations;</ns0:p><ns0:p>-\(\mu(\cdot)\) is the biomass specific growth rate, assumed to follow the model</ns0:p><ns0:formula xml:id='formula_1'>\mu(S, C_o) = \mu_{max}\,\frac{S}{K_s+S}\,\frac{C_o}{K_c+C_o}</ns0:formula><ns0:p>where \(\mu_{max}\) is the maximum specific growth rate, \(K_c\) is a saturation constant and \(K_s\) is the affinity constant;</ns0:p><ns0:p>-\(D\) and \(K_{La}\) represent respectively the dilution rate and the aeration flow rate;</ns0:p><ns0:p>-\(S_{in}\) and \(C_{oin}\) are the influent substrate and dissolved oxygen concentrations;</ns0:p><ns0:p>-\(Y\) is a constant yield coefficient, \(K_0\) is a constant and \(C_s\) is the saturation (maximum) dissolved oxygen concentration;</ns0:p><ns0:p>-\(q_r\) and \(\beta\) represent respectively the ratio of recycled flow and the ratio of waste flow to influent flow.</ns0:p></ns0:div>
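<ns0:p>To make the model concrete, the following Python sketch integrates the mass-balance equations (1) together with the specific growth rate defined above. It is only a minimal open-loop simulation: the numerical values of the kinetic and operating parameters, the initial state and the constant inputs D and K_La are illustrative assumptions and are not taken from the paper.</ns0:p>
```python
# Open-loop simulation of the activated sludge model (1).
# All parameter values below are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

# Kinetic and operating parameters (assumed for illustration)
mu_max, Ks, Kc = 0.15, 100.0, 2.0      # growth-rate constants
Y, K0, Cs      = 0.65, 0.5, 10.0       # yield, oxygen constant, O2 saturation
qr, beta       = 0.6, 0.2              # recycle and waste flow ratios
S_in, Co_in    = 200.0, 0.5            # influent concentrations

def mu(S, Co):
    """Specific growth rate: Monod term in S multiplied by an oxygen term."""
    return mu_max * S / (Ks + S) * Co / (Kc + Co)

def asp_rhs(t, x, D, KLa):
    # State ordering follows (2): x = [X, X_r, S, C_o]
    X, Xr, S, Co = x
    g = mu(S, Co) * X
    dX  = g - D * (1 + qr) * X + qr * D * Xr
    dXr = D * (1 + qr) * X - D * (beta + qr) * Xr
    dS  = -g / Y - (1 + qr) * D * S + D * S_in
    dCo = -K0 * g / Y - D * (1 + qr) * Co + D * Co_in + KLa * (Cs - Co)
    return [dX, dXr, dS, dCo]

x0 = [200.0, 300.0, 50.0, 5.0]            # assumed initial concentrations
sol = solve_ivp(asp_rhs, (0.0, 50.0), x0,
                args=(0.05, 5.0))         # constant D and K_La for this run
print(sol.y[:, -1])                        # final state of the open-loop run
```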
<ns0:div><ns0:p>For this model, the state, input and output vectors are given respectively by:</ns0:p><ns0:formula xml:id='formula_2'>x(t) = \begin{bmatrix} X(t) & X_r(t) & S(t) & C_o(t) \end{bmatrix}^T, \qquad u(t) = \begin{bmatrix} D(t) & K_{La}(t) \end{bmatrix}^T, \qquad y(t) = C_o(t) \quad (2)</ns0:formula></ns0:div>
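<ns0:p>As a small illustration of how the measured output in (2) relates to the output equation \(y(t) = Cx(t)\) used below, the dissolved oxygen measurement corresponds to a selection matrix picking the fourth state. The snippet is a minimal sketch of that bookkeeping; the example state values are assumed.</ns0:p>
```python
import numpy as np

# State ordering from (2): x = [X, X_r, S, C_o]^T
C = np.array([[0.0, 0.0, 0.0, 1.0]])      # y = C x selects the dissolved oxygen

x = np.array([200.0, 300.0, 50.0, 5.0])   # example state (assumed values)
y = C @ x
print(y)                                   # -> [5.0], i.e. C_o
```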
<ns0:div><ns0:head>OBSERVER BASED FUZZY ROBUST TRACKING CONTROLLER DESIGN Problem formulation and Preliminaries</ns0:head><ns0:p>The TS fuzzy approach consists in transcribing the dynamics of a nonlinear process into a finite weighted sum of linear models. There exist three approaches in the literature to obtain the TS fuzzy model <ns0:ref type='bibr'>(Tanaka and Wang, 2003)</ns0:ref>: black-box identification, the linearization technique and the nonlinearity sector method. We are interested in the third method, which gives an accurate TS fuzzy description of the nonlinear model without loss of information.</ns0:p><ns0:p>Let us consider the following nonlinear disturbed system:</ns0:p><ns0:formula xml:id='formula_3'>\dot{x}(t) = f(x(t), u(t), d(t)), \qquad y(t) = Cx(t) \quad (3)</ns0:formula><ns0:p>where \(x(t) \in \mathbb{R}^n\) is the state vector, \(u(t) \in \mathbb{R}^m\) the input vector, \(d(t) \in \mathbb{R}^l\) the disturbance, \(y(t) \in \mathbb{R}^q\) the output vector and \(C \in \mathbb{R}^{q \times n}\) a constant matrix. The system (3) can be approximated or represented (depending on the number \(n_r\) of sub-models) by the Takagi-Sugeno structure:</ns0:p><ns0:formula xml:id='formula_4'>\dot{x}(t) = \sum_{i=1}^{n_r} h_i(z)\big(A_i x(t) + B_i u(t) + G d(t)\big) \quad (4)</ns0:formula><ns0:p>where \(A_i \in \mathbb{R}^{n \times n}\), \(B_i \in \mathbb{R}^{n \times m}\) and \(G \in \mathbb{R}^{n \times l}\). Here \(z \in \mathbb{R}^p\) denotes the so-called decision (premise) variables, which can be available when they depend on measurable variables such as \(u(t)\) or \(y(t)\), i.e. \(z = z(u(t), y(t))\), or unavailable when they depend on the non-measured system state \(x(t)\), i.e. \(z = z(x(t))\). The weighting functions \(h_i(z)\), called the membership functions, satisfy the convex sum property:</ns0:p><ns0:formula xml:id='formula_6'>\sum_{i=1}^{n_r} h_i(z) = 1, \qquad 0 \leq h_i(z) \leq 1</ns0:formula><ns0:p>The weighting functions \(h_i(z)\) are generally nonlinear and depend on the premise variables \(z\). Let us consider the partition \(x(t) = \begin{bmatrix} x_1(t)^T & x_2(t)^T \end{bmatrix}^T\) of the system (4), with the corresponding matrices</ns0:p><ns0:formula xml:id='formula_7'>A_i = \begin{bmatrix} A_i^{11} & A_i^{12} \\ A_i^{21} & A_i^{22} \end{bmatrix}, \qquad B_i = \begin{bmatrix} B_i^{1} \\ B_i^{2} \end{bmatrix}, \qquad G = \begin{bmatrix} G^{1} \\ G^{2} \end{bmatrix}</ns0:formula><ns0:p>where \(x_2(t)\) corresponds to the part of the state vector to be controlled and \(x_1(t)\) to the remaining state variables. The dynamical model can then be rewritten as follows:</ns0:p><ns0:formula xml:id='formula_8'>\begin{aligned} \dot{x}_1(t) &= \sum_{i=1}^{n_r} h_i(z)\big(A_i^{11} x_1(t) + A_i^{12} x_2(t) + B_i^{1} u(t) + G^{1} d(t)\big) \quad (5a) \\ \dot{x}_2(t) &= \sum_{i=1}^{n_r} h_i(z)\big(A_i^{21} x_1(t) + A_i^{22} x_2(t) + B_i^{2} u(t) + G^{2} d(t)\big) \quad (5b) \end{aligned}</ns0:formula><ns0:p>Consider a linear reference model given by the following equation:</ns0:p><ns0:formula xml:id='formula_9'>\dot{x}_{r2}(t) = A_r x_{r2}(t) + r(t) \quad (6)</ns0:formula><ns0:p>where \(x_{r2}(t)\) is the reference state which should be tracked by the subsystem (5b), \(A_r\) is a stable matrix and \(r(t)\) is a bounded input reference. Our goal is to synthesize a control law, based on the state estimate, capable of reducing the error between the reference trajectory \(x_{r2}(t)\) and the state \(x_2(t)\).</ns0:p></ns0:div>
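<ns0:p>The following Python sketch shows how the blended T-S dynamics (4) and the reference model (6) can be evaluated numerically. The sub-model matrices, the disturbance channel and the membership functions are generic placeholders: in the paper they would come from the sector-nonlinearity rewriting of the activated sludge model, which is not reproduced here.</ns0:p>
```python
# Minimal sketch of the T-S structure (4) and the reference model (6).
# Matrices and membership functions below are placeholders, not the paper's.
import numpy as np

def ts_dynamics(x, u, d, z, A, B, G, memberships):
    """Blend the n_r local linear models with convex weights h_i(z)."""
    h = memberships(z)                      # h_i >= 0 and sum_i h_i = 1
    dx = np.zeros_like(x)
    for hi, Ai, Bi in zip(h, A, B):
        dx += hi * (Ai @ x + Bi @ u + G @ d)
    return dx

def reference_model(xr2, r, Ar):
    """Reference dynamics (6): xr2_dot = Ar xr2 + r, with Ar Hurwitz."""
    return Ar @ xr2 + r

# Toy example with two sub-models and a two-dimensional state
A = [np.array([[-1.0, 0.2], [0.1, -0.8]]), np.array([[-1.2, 0.1], [0.0, -0.9]])]
B = [np.array([[0.5], [0.3]]), np.array([[0.4], [0.2]])]
G = np.array([[0.1], [0.1]])
memberships = lambda z: np.array([z, 1.0 - z])   # toy convex weights in [0, 1]

dx = ts_dynamics(np.array([1.0, 0.5]), np.array([0.1]),
                 np.array([0.0]), 0.3, A, B, G, memberships)
print(dx)
```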
<ns0:div><ns0:p>can be used to design a fuzzy controller, where the main idea is to design a local controller for each sub-model based on a local control rule that shares the same fuzzy sets with the fuzzy model.</ns0:p><ns0:p>The controller chosen in this paper is expressed by an observer-based law with a reference model as follows:</ns0:p><ns0:formula xml:id='formula_10'>u(t) = \sum_{i=1}^{n_r} h_i(\hat{z})\, K_i \big(\hat{x}_2(t) - x_{r2}(t)\big)<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where \hat{x}_2(t) is the estimate of x_2(t) and the K_i's are the local feedback gains to be determined.</ns0:p></ns0:div>
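To make the structure of the control law (7) concrete, a minimal MATLAB sketch of its evaluation is given below. This is an illustration only and is not code from the paper; the names pdc_control, hhat, K, x2hat and xr2 are assumptions introduced here.

% Illustrative sketch (not from the paper): evaluating the PDC control law (7)
% hhat  : n_r x 1 vector of membership values h_i(zhat) at the current time
% K     : cell array {K_1, ..., K_nr} of local feedback gains
% x2hat : current estimate of the controlled sub-state x_2(t)
% xr2   : current reference state x_r2(t)
function u = pdc_control(hhat, K, x2hat, xr2)
    nr = numel(K);
    e  = x2hat - xr2;                 % tracking error on the controlled sub-state
    u  = zeros(size(K{1}, 1), 1);
    for i = 1:nr
        u = u + hhat(i) * (K{i} * e); % convex blend of the local feedback laws
    end
end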
<ns0:div><ns0:head>Stability conditions</ns0:head><ns0:p>Before starting the stability analysis, some useful lemmas are recalled.</ns0:p><ns0:p>Lemma 1 <ns0:ref type='bibr' target='#b7'>(Guerra et al., 2006)</ns0:ref> For any matrices X,Y of appropriate dimensions and for any positive scalar η the following inequality holds:</ns0:p><ns0:formula xml:id='formula_11'>X T Y +Y T X ≤ ηX T X + η −1 Y T Y (8)</ns0:formula><ns0:p>Lemma 2 <ns0:ref type='bibr' target='#b7'>(Guerra et al., 2006)</ns0:ref> Considering Π < 0 a matrix X and a scalar λ , the following inequality holds:</ns0:p><ns0:formula xml:id='formula_12'>X T ΠX ≤ −λ (X T + X) − λ 2 Π −1 (9)</ns0:formula><ns0:p>Using ( <ns0:ref type='formula' target='#formula_10'>7</ns0:ref>) into ( <ns0:ref type='formula'>5</ns0:ref>), the closed loop system can then be rewritten as follows:</ns0:p><ns0:formula xml:id='formula_13'>ẋ1 (t) = n r ∑ i, j=1 h i (ẑ)h j (ẑ)(A 11 i x 1 (t) + A 12 i x 2 (t) + B 1 i K j ( x2 (t) − x r 2 (t)) + ω 1 (t) ẋ2 (t) = n r ∑ i, j=1 h i (ẑ)h j (ẑ)(A 21 i x 1 (t) + A 22 i x 2 (t) + B 2 i K j ( x2 (t) − x r 2 (t)) + ω 2 (t)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_14'>ω 1 (t) = n r ∑ i=1 (h i (z) − h i (ẑ))(A 11 i x 1 (t) + A 12 i x 2 (t) + B 1 i u(t) + G 1 d(t)) ω 2 (t) = n r ∑ i=1 (h i (z) − h i (ẑ))(A 21 i x 1 (t) + A 22 i x 2 (t) + B 2 i u(t) + G 2 d(t))</ns0:formula><ns0:p>Let us define by e r (t) = x 2 (t) − x r 2 (t) the tracking error and e o2 (t) = x 2 (t) − x2 (t) the state estimation error and consider the augmented state: x a (t) =</ns0:p><ns0:p>x 1 (t) e r (t) . Using (10), the dynamical model of the augmented system is given by:</ns0:p><ns0:formula xml:id='formula_15'>ẋa (t) = n r ∑ i=1 n r ∑ j=1 h i (ẑ)h j (ẑ)(A i j x a (t) + D i j ξ (t))<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>which can be rewritten as:</ns0:p><ns0:formula xml:id='formula_16'>ẋa (t) = n r ∑ i=1 h 2 i (ẑ)(A ii x a (t) + D ii ξ (t)) + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ)( A i j x a (t) + D i j ξ (t) 2 ) (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_17'>)</ns0:formula><ns0:p>where Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_18'>A i j = A 11 i A 12 i + B 1 i K j A 21 i A 22 i + B 2 i K j , A i j = A i j + A ji D i j = −B 1 i K j A 12 i 0 I 0 −B 2 i K j (A 22 i − A re f ) −I 0 I , D i j = D i j + D ji ξ (t) = e o2 (t) T x r 2 (t) T r(t) T ω 1 (t) T ω 2 (t)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The term ξ (t) is acting like a disturbance affecting the augmented state x a (t). Thus to attenuate its effect, we propose the use of the H ∞ technique applied to System (12). The weighted H ∞ performance to minimize can be presented as follows:</ns0:p><ns0:formula xml:id='formula_19'>t f 0 x T a (t)Qx a (t)dt ≤ γ 2 t f 0 ξ T (t)ξ (t)dt (14)</ns0:formula><ns0:p>where Q is a positive definite matrix and γ is a prescribed positive scalar that defines the attenuation level of the disturbance ξ (t).</ns0:p><ns0:p>To realize Condition ( <ns0:ref type='formula'>14</ns0:ref>), one has to use a Lyapunov function for System (12)given by:</ns0:p><ns0:formula xml:id='formula_20'>V (x a ) = x a (t) T Px a (t) (P > 0) (15)</ns0:formula><ns0:p>To achieve the performance ( <ns0:ref type='formula'>14</ns0:ref>) and ensure the stability of augmented System (12), the following condition must be realized:</ns0:p><ns0:formula xml:id='formula_21'>V (x a ) + x T a (t)Qx a (t) − γ 2 ξ (t) T ξ (t) ≤ 0 (16)</ns0:formula><ns0:p>The following result can be announced:</ns0:p><ns0:p>Lemma 3 If there exist positive definite matrices P, Q 1 and Q 2 , and positive scalars α 1 , α 2 , γ 1 and γ 2 , the augmented system in (12) is asymptotically stable, such that the following conditions are satisfied:</ns0:p><ns0:formula xml:id='formula_22'>A T ii P + PA ii + Q 1 + α −1 1 PD ii D T ii P < 0 ∀i = 1, . . . , n r α 1 − γ 1 < 0 (17) 1 2 (A T i j P + PA i j + Q 2 + α −1 2 PD i j D T i j P) < 0 ∀i < j = 1, . . . , n r 1 2 (α 2 − γ 2 ) < 0 (18)</ns0:formula><ns0:p>The H ∞ performmance criteria ( <ns0:ref type='formula'>14</ns0:ref>) is guaranteed where the scalar γ and the matrix Q are given by</ns0:p><ns0:formula xml:id='formula_23'>γ = √ γ 1 + γ 2 and Q = Q 1 + Q 2 Proof:</ns0:formula><ns0:p>Using (12), the derivative of the Lyapunov function ( <ns0:ref type='formula'>15</ns0:ref>) is:</ns0:p><ns0:formula xml:id='formula_24'>V (x a ) = n r ∑ i=1 h 2 i (ẑ)[x T a (t)(A T ii P + PA ii )x a (t) + ξ T (t)D T ii Px a (t) + x T a (t)PD ii ξ (t)] + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ) 1 2 [x T a (t)(A T i j P + PA i j )x a (t) + ξ T (t)D T i j Px a (t) + x T a (t)PD i j ξ (t)]<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>Denote by J the expression:</ns0:p><ns0:formula xml:id='formula_25'>J = V (x a (t)) + x T a (t)Qx a (t) − γ 2 ξ (t) T ξ (t)<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>by setting Q = Q 1 + Q 2 and γ 2 = γ 1 + γ 2 and using (19), J can be written as the sum of two terms:</ns0:p><ns0:formula xml:id='formula_26'>J = n r ∑ i=1 h 2 i (ẑ)[x T a (t)(A T ii P + PA ii )x a (t) + x T a (t)Q 1 x a (t) − γ 1 ξ T (t)ξ (t)) +ξ T (t)D T ii Px a (t) + x T a (t)PD ii ξ (t)] + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ) 1 2 [x T a (t)(A T i j P + PA i j )x a (t) + x T a (t)Q 2 x a (t) − γ 2 ξ T (t)ξ (t) +ξ T (t)D T i j Px a (t) + x T a (t)PD i j ξ (t)]<ns0:label>(21)</ns0:label></ns0:formula><ns0:p>Using the Lemma 1 on the crossed terms yields: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_27'>ξ T (t)D T ii Px a (t) + x T a (t)PD ii ξ (t) ≤ α −1 1 x T a (t)PD ii D T ii Px a (t) + α 1 ξ T (t)ξ (t) ξ T (t)D T i j Px a (t) + x T a (t)PD i j ξ (t) ≤ α −1 2 x T a (t)PD i j D T i j Px a (t) + α 2 ξ T (t)ξ (t)<ns0:label>(22</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>(22) into (21) leads to the following inequality:</ns0:p><ns0:formula xml:id='formula_28'>J ≤ n r ∑ i=1 h 2 i (ẑ)[x T a (t)(A T ii P + PA ii + Q 1 + α −1 1 PD ii D T ii P)x a (t) +ξ T (t)(α 1 − γ 1 )ξ (t))] + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ) 1 2 [x T a (t)(A T i j P + PA i j + Q 2 + α −1 2 PD i j D T i j P)x a (t) +ξ T (t)(α 2 − γ 2 )ξ (t)]<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>This implies that ( <ns0:ref type='formula'>16</ns0:ref>) to be satisfied if the following sufficient conditions hold:</ns0:p><ns0:formula xml:id='formula_29'>A T ii P + PA ii + Q 1 + α −1 1 PD ii D T ii P < 0 ∀i = 1, . . . , n r α 1 − γ 1 < 0 (24) 1 2 (A T i j P + PA i j + Q 2 + α −1 3 PD i j D T i j P) < 0 ∀i < j = 1, . . . , n r 1 2 (α 2 − γ 2 ) < 0 (25)</ns0:formula><ns0:p>This ends the lemma proof.</ns0:p></ns0:div>
<ns0:div><ns0:head>The main result</ns0:head><ns0:p>To determine the controller gains K i , we present new conditions in terms of LMIs. These conditions are developed through the use of separation Lemma 2, the introduction of some slack variables and other calculations leading to the following results.</ns0:p><ns0:p>Theorem 1 There exists an observer based controller (7) for the system (12) guaranteeing the H ∞ performance criteria (14) if there exists positive matrices</ns0:p><ns0:formula xml:id='formula_30'>X 1 = X T 1 , X 2 = X T 2 , matrices Y i , i = 1, 2, • • • , n r , positive matrices Q1 1 , Q2 1 , Q1</ns0:formula><ns0:p>2 and Q2 2 and prescribed positive scalars α 1 , α 2 , α 3 , α 4 , γ 1 and γ 2 such that α 1 < γ 1 and α 2 < γ 2 and that the following conditions hold:</ns0:p><ns0:formula xml:id='formula_31'>for i = 1, . . . , n r             M 1 M 2 −B 1 i Y i A 12 i 0 I 0 0 * M 3 −B 2 i Y i A 22 i − A r −I 0 I 0 * * −2α 3 X 2 0 0 0 0 −α 3 I * * * −α 1 I 0 0 0 0 * * * * −α 1 I 0 0 0 * * * * * −α 1 I 0 0 * * * * * * −α 1 I 0 * * * * * * * −α 1 I             < 0 (<ns0:label>26</ns0:label></ns0:formula><ns0:formula xml:id='formula_32'>)</ns0:formula><ns0:formula xml:id='formula_33'>for i < j = 1, . . . , n r 1 2             N 1 N 2 −B 1 i Y j − B 1 j Y i A 12 i + A 12 j 0 2I 0 0 * N 3 −B 2 i Y j − B 2 j Y i A 22 i + A 22 j − 2A r −2I 0 2I 0 * * −2α 4 X 2 0 0 0 0 −α 4 I * * * −α 2 I 0 0 0 0 * * * * −α 2 I 0 0 0 * * * * * −α 2 I 0 0 * * * * * * −α 2 I 0 * * * * * * * −α 2 I             < 0 (<ns0:label>27</ns0:label></ns0:formula><ns0:formula xml:id='formula_34'>)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_35'>M 1 = A 11 i X 1 + X 1 A 11 i T + Q1 1 M 2 = X 1 A 21 i T + A 12 i X 2 + B 1 i Y i M 3 = A 22 i X 2 + X 2 A 22 i T + B 2 i Y i + (B 2 i Y i ) T + Q2 1 N 1 = (A 11 i + A 11 j )X 1 + X 1 (A 11 i + A 11 j ) T + Q1 2 N 2 = X 1 (A 21 i + A 21 j ) T + (A 12 i + A 12 j )X 2 + B 1 i Y j + B 1 j Y i N 3 = (A 22 i + A 22 j )X 2 + X 2 (A 22 i + A 22 j ) T + (B 2 i Y j + B 2 j Y i ) + Q2 2 + (B 2 i Y j + B 2 j Y i )</ns0:formula><ns0:p>T and * stands for the symmetric term of the corresponding off-diagonal term.</ns0:p></ns0:div>
<ns0:div><ns0:head>7/18</ns0:head><ns0:p>Solving LMIs ( <ns0:ref type='formula' target='#formula_31'>26</ns0:ref>),(27) the controller gains K i , the attenuation level γ and the matrix Q are given by:</ns0:p><ns0:formula xml:id='formula_36'>K i = Y i X −1 2 , i = 1, • • • , n r and γ = √ γ 1 + γ 2 Q = Q 1 + Q 2 where Q 1 = Q 1 1 0 0 Q 2 1 and Q 2 = Q 1 2 0 0 Q 2 2 proof:</ns0:formula><ns0:p>For the proof of the theorem 1, we will start from the sufficient conditions given in the Lemma 3. Let us consider the first condition (17). Multiplying it post and prior by P −1 and using Schur lemma, the following inequality is obtained:</ns0:p><ns0:formula xml:id='formula_37'>P −1 A T ii + A ii P −1 + Q1 D ii D T ii −α 1 I < 0 (28)</ns0:formula><ns0:p>By choosing matrices P and Q 1 as follows:</ns0:p><ns0:formula xml:id='formula_38'>P = P 1 0 0 P 2 and Q1 = P −1 Q 1 P −1 = Q1 1 0 0 Q2 1 and</ns0:formula><ns0:p>Replacing D ii , A ii by their expressions in ( <ns0:ref type='formula'>13</ns0:ref>) , ( <ns0:ref type='formula'>28</ns0:ref>) becomes:</ns0:p><ns0:formula xml:id='formula_39'>          M 1 M 2 −B 1 i K i A 12 i 0 I 0 * M 3 −B 2 i K i A 22 i − A r −I 0 I * * −α 1 I 0 0 0 0 * * * −α 1 I 0 0 0 * * * * −α 1 I 0 0 * * * * * −α 1 I 0 * * * * * * −α 1 I           < 0 (<ns0:label>29</ns0:label></ns0:formula><ns0:formula xml:id='formula_40'>)</ns0:formula><ns0:p>where </ns0:p><ns0:formula xml:id='formula_41'>M 1 = A 11 i P −1 1 + P −1 1 A 11 i T + Q1 1 M 2 = P −1 1 A 21 i T + (A 12 i + B 1 i K i )P −1 2 M 3 = (A 22 i + B 2 i K i )P −1 2 + P −1 2 (A 22 i + B 2 i K i ) T + Q2</ns0:formula><ns0:formula xml:id='formula_42'>          M 1 M 2 −B 1 i K i P −1 2 A 12 i 0 I 0 * M 3 −B 2 i K i P −1 2 A 22 i − A r −I 0 I * * −P −1 2 α 1 P −1 2 0 0 0 0 * * * −α 1 I 0 0 0 * * * * −α 1 I 0 0 * * * * * −α 1 I 0 * * * * * * −α 1 I           < 0 (30)</ns0:formula><ns0:p>Using Lemma 2 we have:</ns0:p><ns0:formula xml:id='formula_43'>P −1 2 (−α 1 )P −1 2 ≤ −2α 3 P −1 2 + α 2 3 (α 1 ) −1 I (31)</ns0:formula><ns0:p>and Schur complement yields to:</ns0:p><ns0:formula xml:id='formula_44'>            M 1 M 2 −B 1 i K i P −1 2 A 12 i 0 I 0 0 * M 3 −B 2 i K i P −1 2 A 22 i − A r −I 0 I 0 * * −α 3 P −1 2 0 0 0 0 α 3 I * * * −α 1 I 0 0 0 0 * * * * −α 1 I 0 0 0 * * * * * −α 1 I 0 0 * * * * * * −α 1 I 0 * * * * * * * −α 1             < 0 (32)</ns0:formula><ns0:p>Using the following variable change X 1 = P −1 1 , X 2 = P −1 2 and Y i = K i X 2 , the conditions (26) of the theorem is fulfilled.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To carry out the second LMI of the theorem 1, we proceed in the same way with the second sufficient condition of Lemma 3. Multiplying (18) post and prior with P −1 and using the Schur lemma, the following inequality is obtained:</ns0:p><ns0:formula xml:id='formula_45'>1 2 P −1 A T i j + A i j P −1 + Q2 D i j D T i j −α 2 I < 0 (33)</ns0:formula><ns0:p>By choosing the matrix Q2</ns0:p><ns0:formula xml:id='formula_46'>= P −1 Q 2 P −1 = Q1 2 0 0 Q2</ns0:formula><ns0:p>2 and replacing D i j and A i j by their expression in ( <ns0:ref type='formula'>13</ns0:ref>), (33) becomes:</ns0:p><ns0:formula xml:id='formula_47'>1 2           N 1 N 2 −(B 1 i K j + B 1 j K i ) A 12 i + A 12 j 0 2I 0 * N 3 −(B 2 i K j + B 2 j K i ) A 22 i + A 22 j − 2A r −2I 0 2I * * −α 2 I 0 0 0 0 * * * −α 2 I 0 0 0 * * * * −α 2 I 0 0 * * * * * −α 2 I 0 * * * * * * −α 2 I           < 0 (34) N 1 = (A 11 i + A 11 j )P −1 1 + P −1 1 (A 11 i + A 11 j ) T + Q1 2 N 2 = P −1 1 (A 21 i + A 21 j ) T + (A 12 i + A 12 j )P −1 2 + (B 1 i K j + B 1 j K i )P −1 2 N 3 = (A 22 i + A 22 j + B 2 i K j + B 2 j K i )P −1 2 + P −1 2 (A 22 i + A 22 j + B 2 i K j + B 2 j K i ) T + Q2 2</ns0:formula><ns0:p>Multiplying (34) left and right respectively by: diag( I I P −1 2 I I I I ) and its transpose we get:</ns0:p><ns0:formula xml:id='formula_48'>1 2           N 1 N 2 −(B 1 i K j + B 1 j K i )P −1 2 A 12 i + A 12 j 0 2I 0 * N 3 −(B 2 i K j + B 2 j K i )P −1 2 A 22 i + A 22 j − 2A r −2I 0 2I * * −P −1 2 α 2 P −1 2 0 0 0 0 * * * −α 2 I 0 0 0 * * * * −α 2 I 0 0 * * * * * −α 2 I 0 * * * * * * −α 2 I           < 0 (35)</ns0:formula><ns0:p>using Lemma 2 we have:</ns0:p><ns0:formula xml:id='formula_49'>P −1 2 (−α 2 )P −1 2 ≤ −2α 4 P −1 2 + α 2 4 (α 2 ) −1 I (36)</ns0:formula><ns0:p>and Schur complements yiels to:</ns0:p><ns0:formula xml:id='formula_50'>1 2             N 1 N 2 −(B 1 i K j + B 1 j K i )P −1 2 A 12 i + A 12 j 0 2I 0 0 * N 3 −(B 2 i K j + B 2 j K i )P −1 2 A 22 i + A 22 j − 2A r −2I 0 2I 0 * * −2α 4 P −1 2 0 0 0 0 α 4 I * * * −α 2 I 0 0 0 0 * * * * −α 2 I 0 0 0 * * * * * −α 2 I 0 0 * * * * * * −α 2 I 0 * * * * * * * −α 2 I             < 0 (37)</ns0:formula><ns0:p>Using the following variable change X 1 = P −1 1 , X 2 = P −1 2 and Y i = K i X 2 , the conditions ( <ns0:ref type='formula' target='#formula_33'>27</ns0:ref>) of the theorem is fulfilled. This achieves the proof of the theorem.</ns0:p></ns0:div>
<ns0:div><ns0:head>A Robust T-S fuzzy observer design</ns0:head><ns0:p>In order to estimate both the state variables and the disturbance the following augmented state vector is considered x(t) = x T (t) d T (t) T . From the T-S system (4) we have:</ns0:p><ns0:formula xml:id='formula_51'>ẋ(t) = n r ∑ i=1 h i (ẑ)( A i G 0 0 x(t) + B i 0 u(t) + ω(t) ḋ(t) ) = n r ∑ i=1 h i (ẑ)( Āi x(t) + Bi u(t) + ω(t)) y(t) = C x(t)<ns0:label>(38)</ns0:label></ns0:formula><ns0:p>With:</ns0:p><ns0:formula xml:id='formula_52'>Āi = A i G 0 0 , Bi = B i 0 , C = C 0 , ω(t) = ω(t) ḋ(t)</ns0:formula><ns0:p>and</ns0:p><ns0:formula xml:id='formula_53'>ω(t) = n r ∑ i=1 (h i (z) − h i (ẑ))(A i x(t) + B i u(t) + Gd(t))</ns0:formula><ns0:p>Let consider the following fuzzy Luenberger observer for the system (38):</ns0:p><ns0:formula xml:id='formula_54'>ẋ(t) = n r ∑ i=1 h i (ẑ)( Āi x(t) + Bi u(t) + Li C( x(t) − x(t)))<ns0:label>(39)</ns0:label></ns0:formula><ns0:p>where the Li 's represent the local observer gains. Note that this observer considers that the premise variables are unknown.</ns0:p><ns0:p>The dynamical model of the estimation error is then given by:</ns0:p><ns0:formula xml:id='formula_55'>ėo (t) = ẋ(t) − ẋ(t) = n r ∑ i=1 h i (ẑ)( Āi − Li C) ēo (t) + ω(t)<ns0:label>(40)</ns0:label></ns0:formula><ns0:p>where ω(t) is acting as disturbance. To attenuate its effect, the following H ∞ performance is used:</ns0:p><ns0:formula xml:id='formula_56'>t f 0 ēT o (t)R ēo (t)dt ≤ ν 2 t f 0 ω(t) T ω(t)dt<ns0:label>(41)</ns0:label></ns0:formula><ns0:p>where R is a positive definite matrix and ν is the attenuation level of the disturbances ω(t).</ns0:p><ns0:p>Theorem 2 Prescribing the attenuation level ν, an observer (39) for the system (46) satisfying the H ∞ performance criterion (41) exist if there exists a symmetric and positive matrix P o = P T o > 0, a matrix R > 0 and Z i , i = 1, 2, • • • , r and a positive scalar η such that the following LMIs are feasible</ns0:p><ns0:formula xml:id='formula_57'>P o Āi − Z i C + (P o Āi − Z i C) T + R P o P o −ηI < 0 (42) η − ν 2 < 0<ns0:label>(43)</ns0:label></ns0:formula><ns0:p>Solving LMIs 42, the observer gains Li are given by: Li</ns0:p><ns0:formula xml:id='formula_58'>= P −1 o Z i , i = 1, • • • , n r</ns0:formula><ns0:p>The scalar verifying the H ∞ norm for the observer is given by: ν = √ ν 2 proof: See Appendix A.</ns0:p></ns0:div>
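As an illustration of how the conditions (42)-(43) of Theorem 2 could be posed numerically, a possible YALMIP sketch is given below. The variable names (Abar, Cbar, nr, nx, ny), the example attenuation level and the small tolerance constants are assumptions introduced here for illustration; they are not taken from the paper.

% Illustrative YALMIP sketch (assumed names) for the observer LMIs (42)-(43)
% Abar{i}: augmented matrices, Cbar: augmented output matrix, nr: number of sub-models
nu  = 0.5;                                % prescribed attenuation level (example value)
Po  = sdpvar(nx, nx);                     % symmetric decision variable P_o
R   = sdpvar(nx, nx);                     % symmetric decision variable R
eta = sdpvar(1, 1);
F   = [Po >= 1e-6*eye(nx), R >= 1e-6*eye(nx), eta >= 1e-6, eta <= nu^2 - 1e-6];
Z   = cell(1, nr);
for i = 1:nr
    Z{i} = sdpvar(nx, ny, 'full');        % Z_i = P_o * Lbar_i
    M = [Po*Abar{i} - Z{i}*Cbar + (Po*Abar{i} - Z{i}*Cbar)' + R, Po; ...
         Po, -eta*eye(nx)];
    F = [F, M <= -1e-9*eye(2*nx)];        % LMI (42) for sub-model i
end
optimize(F, []);                          % feasibility problem
Lbar = cellfun(@(Zi) value(Po)\value(Zi), Z, 'UniformOutput', false);  % Lbar_i = Po^{-1} Z_i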
<ns0:div><ns0:head>APPLICATION TO AN ACTIVATED SLUDGE PROCESS</ns0:head><ns0:p>This section addresses the control of an activated sludge treatment process using the Takagi-Sugeno approach developed above. The dissolved oxygen concentration in the ASP is an important parameter in the process control that has a considerable effect on the treatment effectiveness and the economic cost. The reduction of the organic substrate concentration is also crucially important and is one of the main issues in the treatment process, since it helps maintain a high effluent quality. The control objective is to force the substrate and dissolved oxygen concentrations S(t) and C_o(t) to track the states of a given reference model S_{ref}(t) and C_o^{ref}(t) under the following conditions:</ns0:p><ns0:p>• The dilution rate D(t) and the aeration flow rate K_{La}(t) are the control variables.</ns0:p><ns0:p>• The dissolved oxygen concentration C_o(t) is available.</ns0:p><ns0:p>• The biomass, substrate and recycled biomass concentrations X(t), S(t) and X_r(t) are not available online.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The TS Fuzzy Model Design</ns0:p><ns0:p>To simulate more realistic conditions, the model ( <ns0:ref type='formula'>1</ns0:ref>) is rewrited in the form of a disturbed system. It is assumed that the concentration of the influent substrate is varying during a day instead of considering a constant value. The variation ∆Sin(t) around the daily average Sin acts as a disturbance.</ns0:p><ns0:p>To build the T-S model, the classical transformation of the non-linearity sector method is used. The following non linearities (the premise variables) are considered:</ns0:p><ns0:formula xml:id='formula_59'>z 1 (x) = z 1 (S,C o ) = µ max C o S (K s +S)(K c +C o ) − S z 2 (x) = z 2 (X) = X z 3 (x) = z 3 (X r ) = X r z 4 (x) = z 4 (C o ) = C o z 5 (x) = z 5 (X, S,C o ) = µ max C o X (K s +S)(K c +C o ) z 6 (x) = z 6 (S) = S (44)</ns0:formula><ns0:p>The nonlinear model of the Activated Sludge process described by equations ( <ns0:ref type='formula'>1</ns0:ref>) can be written in the form:</ns0:p><ns0:formula xml:id='formula_60'>ẋ(t) = A(z 1 , z 2 , z 3 , z 4 , z 5 )x(t) + B(z 2 , z 3 , z 4 , z 6 )u(t) + Gd(t) y(t) = Cx(t) (<ns0:label>45</ns0:label></ns0:formula><ns0:formula xml:id='formula_61'>)</ns0:formula><ns0:p>where the state, input and output vectors are</ns0:p><ns0:formula xml:id='formula_62'>x(t) =     X X r S C o     , u(t) = D K la , y = C o , G =     0 0 1 0     , and d(t) = D∆Sin(t)</ns0:formula><ns0:p>and the obtained matrices have the general form:</ns0:p><ns0:formula xml:id='formula_63'>A(z 1 , z 2 , z 3 , z 4 , z 5 ) =     z 1 0 z 2 0 0 −z 4 0 z 3 0 0 0 − 1 Y z 5 −z 3 z 2 0 − K 0 Y z 5     ; B(z 3 , z 4 , z 6 ) =     −(1 + q r )z 2 + q r z 3 0 (1 + q r )z 2 − (β + q r )z 3 0 S in − (1 + q r )z 6 0 C oin − (1 + q r )z 4 C s − z 4   </ns0:formula><ns0:p> and C = 0 0 0 1 6 premise variables are considered. Therefore, the model ( <ns0:ref type='formula' target='#formula_60'>45</ns0:ref>) can be represented by n r = 2 6 = 64 TS submodels, which is rewritten as follows:</ns0:p><ns0:formula xml:id='formula_64'>ẋ(t) = n r ∑ i=1 h i (z)(A i x(t) + B i u(t) + Gd(t)) (<ns0:label>46</ns0:label></ns0:formula><ns0:formula xml:id='formula_65'>)</ns0:formula><ns0:p>The A i 's and B i 's corresponding matrices and the h i (z)'s membership functions are obtained from</ns0:p><ns0:formula xml:id='formula_66'>A(z 1 , z 2 , z 3 , z 4 , z 5</ns0:formula><ns0:p>) and B(z 2 , z 3 , z 4 , z 6 ) and ( <ns0:ref type='formula'>44</ns0:ref>), for more explanation on the method to obtain them the reader can refer to <ns0:ref type='bibr' target='#b12'>(Nagy et al., 2010)</ns0:ref> </ns0:p></ns0:div>
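To illustrate how the n_r = 64 weighting functions h_i(z) can be built from the 6 premise variables by the sector-nonlinearity method, a possible MATLAB sketch is given below. The function name ts_weights, the bound vectors zmin and zmax and the bit-ordering convention are assumptions made here for illustration and do not come from the paper.

% Illustrative sketch: weighting functions h_i(z) from the sector-nonlinearity method
% z          : p x 1 vector of current premise-variable values (here p = 6)
% zmin, zmax : p x 1 vectors of lower and upper bounds of each premise variable
function h = ts_weights(z, zmin, zmax)
    p  = numel(z);
    wl = (zmax - z) ./ (zmax - zmin);     % weight attached to the lower bound of each z_j
    wu = 1 - wl;                          % weight attached to the upper bound of each z_j
    h  = zeros(2^p, 1);
    for i = 1:2^p
        bits = bitget(i - 1, 1:p);        % 0/1 pattern: which bound is selected for each z_j
        h(i) = prod(bits .* wu.' + (1 - bits) .* wl.');
    end                                   % by construction, sum(h) = 1 and 0 <= h_i <= 1
end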
<ns0:div><ns0:head>Observer Based Controller Synthesis and simulation results</ns0:head><ns0:p>The following partition x(t) = [x_1(t); x_2(t)] is chosen, with the corresponding vectors:</ns0:p><ns0:formula xml:id='formula_67'>x_1(t) = \begin{bmatrix} X \\ X_r \end{bmatrix}, \qquad x_2(t) = \begin{bmatrix} S \\ C_o \end{bmatrix}</ns0:formula><ns0:formula xml:id='formula_68'>x_{r2}(t) = \begin{bmatrix} S_{ref}(t) \\ C_o^{ref}(t) \end{bmatrix}</ns0:formula><ns0:p>and the simulation parameters of Table 1 are considered. To test the robustness of the proposed method, a sinusoidal variation of the influent substrate and changes in the kinetic parameters will be introduced during the simulation.</ns0:p></ns0:div>
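A minimal sketch of how the reference trajectories of x_r2(t) could be generated by integrating the reference model (6) is given below; the matrix Ar, the piecewise set-point signal r(t), the time horizon and the initial condition are placeholders chosen for illustration, not the values used in the paper.

% Illustrative sketch: generating the reference state x_r2(t) = [S_ref; C_o_ref] from model (6)
Ar = -diag([0.5, 0.5]);                          % placeholder stable reference dynamics
r  = @(t) [0.05*(t < 50) + 0.03*(t >= 50); ...   % placeholder piecewise set-point inputs
           1.0*(t < 50)  + 1.5*(t >= 50)];
fref = @(t, xr2) Ar*xr2 + r(t);
[tt, Xr2] = ode45(fref, [0, 100], [0.1; 2]);     % arbitrary horizon and initial reference state
plot(tt, Xr2(:,1), tt, Xr2(:,2));                % S_ref(t) and C_o_ref(t) trajectories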
<ns0:div><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Simulation parameters <ns0:ref type='bibr' target='#b14'>(Nejjari et al., 1999)</ns0:ref></ns0:p><ns0:formula xml:id='formula_69'>Y,\; q_r,\; \beta,\; K_0,\; C_s,\; S_{in},\; C_{oin},\; \mu_{max},\; K_s,\; K_c</ns0:formula><ns0:p>Remark 1. The LMIs in Theorems 1 and 2 are solved using MATLAB with the YALMIP toolbox, which can be downloaded from https://yalmip.github.io/.</ns0:p></ns0:div>
<ns0:div><ns0:p>• The resolution of the LMIs in Theorem 1 for the parameters α_1 = 0.4, α_2 = 0.4, α_3 = 300, α_4 = 295, γ_1 = 0.4 and γ_2 = 0.4 leads to:</ns0:p><ns0:formula xml:id='formula_70'>X_1 = 10^3 \times \begin{bmatrix} 29435 & 27668 \\ 27668 & 27609 \end{bmatrix}, \quad X_2 = 10^3 \times \begin{bmatrix} 45062 & 3233 \\ 3233 & 21729 \end{bmatrix}, \quad \bar{Q}^1_1 = \bar{Q}^2_1 = \bar{Q}^1_2 = \bar{Q}^2_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}</ns0:formula><ns0:p>The attenuation level is given by γ = \sqrt{γ_1 + γ_2} = 0.8944; below are given some of the controller gains:</ns0:p><ns0:formula xml:id='formula_71'>K_1 = 0.</ns0:formula><ns0:formula xml:id='formula_72'>R = \begin{bmatrix} 0.0385 & -0.0278 & -0.0056 & 0.0000 & 0.0006 \\ -0.0278 & 0.0379 & -0.0066 & 0.0000 & -0.0003 \\ -0.0056 & -0.0066 & 0.0598 & 0.0000 & -0.0220 \\ 0.0000 & 0.0000 & 0.0000 & 1.0000 & 0.0000 \\ 0.0006 & -0.0003 & -0.0220 & 0.0000 & 0.0516 \end{bmatrix}</ns0:formula><ns0:p>• Below are given some of the observer gains:</ns0:p><ns0:formula xml:id='formula_74'>L_1 = 10^2 \times \begin{bmatrix} -1668.3 \\ 11.3 \\ -136.5 \\ 14.6 \\ -94.6 \end{bmatrix}, \quad L_8 = 10^2 \times \begin{bmatrix} -1668.3 \\ 11.3 \\ -136.4 \\ 14.7 \\ -94.6 \end{bmatrix}, \quad L_{16} = 10^2 \times \begin{bmatrix} -834.86 \\ 6 \end{bmatrix}</ns0:formula></ns0:div>
<ns0:div><ns0:p>• Two changes, each over a period of two days, in two kinetic parameters: µ_max (10%) at t = 10 days and K_c (10%) at t = 50 days.</ns0:p><ns0:p>The goal of the proposed control strategy is to follow the output references as closely as possible. This is well illustrated in figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>, where a comparison is given between the true simulated value, the estimate and the corresponding reference trajectory for the dissolved oxygen concentration C_o and the substrate S, respectively. The results show the ability of the obtained control law to track the reference trajectories of the controlled variables after a short transient response despite the changes in the set-points. The perturbations of the substrate and dissolved oxygen regulation due to the considered disturbances are favorably rejected by the controller, especially for the dissolved oxygen.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this article, an observer-based robust fuzzy tracking controller has first been developed and then applied to a strongly nonlinear process of a biological nature. The nonlinear system is equivalently represented by a TS multi-model. Since the states are not fully accessible, the stability analysis and design of the TS fuzzy system via an observer-based tracking controller satisfying the H ∞ performance requirement have been investigated. The controller and observer gains are obtained by solving a set of LMIs. These theoretical results have then been applied to an activated sludge process where the only measured variable available online is the dissolved oxygen concentration, which is the most realistic and economical choice. The tracking of reference trajectories for two state variables (the dissolved oxygen and the substrate) using two manipulated variables (the dilution rate and the aeration flow rate) has been addressed.</ns0:p><ns0:p>The numerical simulation results illustrate the effectiveness of the proposed method and show good state estimation and tracking performances. The extension of the proposed approach to fault-tolerant control is the focus of our future work, in which we will investigate a Takagi-Sugeno fuzzy Proportional Integral observer-based fault-tolerant control scheme for actuator and sensor faults in TS fuzzy positive systems with saturation on the inputs.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROOF OF THEOREM 2</ns0:head><ns0:p>Let consider the following Lyapunov function:</ns0:p><ns0:formula xml:id='formula_75'>V ( ēo ) = ēo (t) T P o ēo (t)<ns0:label>(47)</ns0:label></ns0:formula><ns0:p>To achieve the performance (41) and ensure the stability of system (40), the following condition must be realized:</ns0:p><ns0:formula xml:id='formula_76'>V ( ēo )) + ēT o (t)R ēo (t) − ν 2 ω(t) T ω(t) < 0 (48)</ns0:formula><ns0:p>Let consider the derivative of the Lyapunov function V ( ēo ): Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_77'>V ( ēo ) =</ns0:formula></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Schematic diagram of the activated sludge process.</ns0:figDesc><ns0:graphic coords='4,178.44,63.78,340.17,286.51' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>generate the trajectories of the reference state x r 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The evolution of the controlled variables, their estimates and their corresponding reference trajectories.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The manipulated variables.</ns0:figDesc><ns0:graphic coords='14,167.10,300.03,362.84,181.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Comparison between the biomass and recycled biomass with their estimates.</ns0:figDesc><ns0:graphic coords='14,167.10,520.18,362.84,181.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 shows the manipulated variables, namely the dilution rate D and the air flow rate K_La. Their dynamics change whenever the reference model changes, and the control objective is clearly reached.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 represents the reconstructed biomass X and recycled biomass Xr, respectively. As indicated previously, these two variables are not considered in the tracking problem. Nevertheless, it can be seen in these results that the estimated values of these state variables are in excellent agreement with their corresponding true simulated values. Despite the perturbation in the influent substrate and the changes in the kinetic parameters, the estimates are smooth and reliable.</ns0:figDesc></ns0:figure>
<ns0:div><ns0:formula>\dot{V}(\bar{e}_o) = \sum_{i=1}^{n_r} h_i(\hat{z})\Big[\bar{e}_o^T(t)\big((\bar{A}_i - \bar{L}_i\bar{C})^T P_o + P_o(\bar{A}_i - \bar{L}_i\bar{C})\big)\bar{e}_o(t) + \bar{\omega}(t)^T P_o \bar{e}_o(t) + \bar{e}_o^T(t) P_o \bar{\omega}(t)\Big] \qquad (49)</ns0:formula><ns0:p>Using Lemma 1 leads to:</ns0:p><ns0:formula>\bar{\omega}(t)^T P_o \bar{e}_o(t) + \bar{e}_o^T(t) P_o \bar{\omega}(t) \le \eta\, \bar{\omega}(t)^T \bar{\omega}(t) + \eta^{-1}\, \bar{e}_o^T(t) P_o P_o \bar{e}_o(t) \qquad (50)</ns0:formula><ns0:p>(49) and (50) lead to:</ns0:p><ns0:formula>\dot{V}(\bar{e}_o) + \bar{e}_o^T(t) R\, \bar{e}_o(t) - \nu^2 \bar{\omega}(t)^T \bar{\omega}(t) \le \sum_{i=1}^{n_r} h_i(\hat{z})\Big[\bar{e}_o^T(t)\big((\bar{A}_i - \bar{L}_i\bar{C})^T P_o + P_o(\bar{A}_i - \bar{L}_i\bar{C}) + R + \eta^{-1} P_o P_o\big)\bar{e}_o(t) + (\eta - \nu^2)\, \bar{\omega}(t)^T \bar{\omega}(t)\Big] \qquad (51)</ns0:formula></ns0:div>
</ns0:body>
" | "Cover Letter
12/01/2021
Dear Qichun Zhang,
Editor, PeerJ Computer Science.
Subject: Submission of revised paper: 'Observer based robust H∞ fuzzy tracking control:
Application to an activated sludge process'
I acknowledge receipt of your message sent on Wednesday, November 12, 2020. I would like
to thank you and the reviewers for the comments and the interest in our manuscript. The paper has
been deeply revised. This new version has been prepared taking into account the remarks raised by the
reviewers. Changes and added parts are shown in red. Responses to the specific comments are
presented in the following.
We are looking forward to your response.
Best regards,
Abdelmounaim Khallouq
PhD student
LAEPT-URAC 28, Faculty of Science Semlalia
UCA, Marrakesh, Morocco
Answers to the reviewers' comments
Paper: 'Observer based robust H∞ fuzzy tracking control:
Application to an activated sludge process'
By A. Khallouq, A. Karama, M. Abyad
Answer to the comments of reviewer 1
We very much appreciate the time you have taken in carefully reviewing our paper. Your specific
suggestions have been very helpful to us in revising our paper. The following specifies how we have
responded to the issues you have pointed out and incorporated your suggestions in the revised version of
our paper.
Basic reporting
The paper use a clear English, has a strong technical background, a relevant but not complete literature
review, with appropriate paper structure and figures, as well as clear definitions of all terms and
theorems, and detailed proofs.
The paper should be considered for publication, after solving some elements/issues detailed in the
following review.
1. The most important issue is the lack in explaining the novelty of your proposed method.
We have changed the introduction giving more details and explanations regarding the novelty of the
proposed method in the revised version of the paper.
2. The next most important item is the positioning of your paper in the recent international context.
New references were added to the introduction in the revised paper in order to situate our work in the
recent international context.
3. Some minor mathematical gaps were detected.
The mathematical gaps have been addressed in the revised paper
4. The least important points are related to the English.
The English-related issues have been fixed.
Experimental design
The paper is according to the aims and scope of the journal.
The research question is quite well defined, but needs some further investigation.
The research complies with high technical and ethical standards.
The method is sufficiently described from the reproduction aspect.
Validity of the findings
Impact and novelty not sufficiently assessed, but existing. Some refinements are recommended in the
review.
As suggested, the novelty has been explained in detail and supported with recent references.
Details on the data used are not explicitly given.
The parameters as well as the model used in our study were taken from the reference cited in the text:
Nejjari, F., Dahhou, B., Benhammou, A., and Roux, G. (1999). Non-linear multivariable adaptive
control of an activated sludge wastewater treatment process. International Journal of Adaptive Control
and Signal Processing, 13(5):347–365.
Conclusion is appropriately stated.
Comments for the author
Paper Review
As a summary of this review:
The paper uses a clear English, has a strong technical background, a relevant but not complete
literature review, with appropriate paper structure and figures, as well as clear definitions of all terms
and theorems, and detailed proofs.
The paper should be considered for publication, after solving some elements/issues detailed in the
following:
1. Your most important issue is the lack in explaining the novelty of your proposed method.
2. The next most important item is the positioning of your paper in the recent international context.
3. Some minor mathematical gaps were detected.
4. The least important points are related to the English.
Answer to the annotated manuscript:
Abstract
-Please consider deleting the part regarding the non-measurement of premise variables, as they are
used in the fuzzy modelling to incorporate the unavailable variables (states, but not only).
In the revised version we have removed the part concerning the non-measurement of the premise
variables as suggested.
-Please also refer (maybe in the introduction) to the equivalence in this article of the term “fuzzy
modelling” with the “multi-modelling approach”, as it was not originally the case. Almost all recent
research papers have adopted this equivalence.
The equivalence between the terms 'fuzzy modelling' and 'multi-modelling approach' has been
highlighted in the introduction.
Context
-The paper reflects a strong technical investigation performed to a high technical standard; however,
the description and explanations need further improvement. The paper introduction needs more detail.
I suggest that you improve the description at lines 34- 46 to provide more justification for your study.
More specifically: Impact and novelty are not explicitly assessed, even though they are technically
reflected in the paper. The novelty of the method should be better described. Please consider giving
further explanations on the power of this method comparing with the existing ones.
In the revised paper we addressed all these issues: more details have been given in the introduction to
justify our proposed method, and recent references were added at lines 72-91 (tracked-changes paper) in
order to assess the novelty, which has been described and detailed at lines 92-122 (tracked-changes
paper) of the revised paper.
-In the first paragraph some references are given, but then details on other papers are afterwards
provided. For coherence and consistency, please list all the references, especially the one(s) you are
detailing afterwards.
In the revised paper, the introduction has been revised and the references therein have been
detailed.
-Some more references to the existing published proposals on robust fuzzy tracking controller would
be relevant to prove the novelty of your approach.
The following references have been added in the revised paper in order to assess the novelty of the
proposed method:
Lin, C., Wang, Q. G., Lee, T. H., and He, Y. (2007). Output Tracking Control for T-S Fuzzy Systems.
In LMI Approach to Analysis and Control of Takagi-Sugeno Fuzzy Systems with Time Delay, pages
125–147. Springer Berlin Heidelberg, Berlin, Heidelberg.
Nachidi, M., Hajjaji, A. E., and Bosche, J. (2011). Robust output tracking control of T-S fuzzy systems
and its application to DC-DC converters.IFAC Proceedings Volumes, 44(1):10934–10939.
Bououden, S., Chadli, M., and Karimi, H. (2015). Control of uncertain highly nonlinear biological
process based on Takagi–Sugeno fuzzy models.Signal Processing, 108:195–205.
Senthilkumar, D. and Mahanta, C. (2009). Fuzzy guaranteed cost controller for trajectory tracking in
nonlinear systems.Nonlinear Analysis: Hybrid Systems, 3(4):368–379.
Yu, Y., Lam, H. K., and Chan, K. Y. (2018). T-S Fuzzy-Model-Based Output Feedback Tracking
Control with Control Input Saturation.IEEE Transactions on Fuzzy Systems, 26(6):3514–3523.
Khallouq, A., Karama, A., and Abyad, M. (2020). Robust fuzzy observer-based control with reference
state model and unmeasurable premise variables: Application to a biological process.Journal of Control
Engineering and Applied Informatics, 22(4):13–22.
-Provide future perspective on the fault tolerant control.
As a future perspective, we will investigate a Takagi-Sugeno fuzzy Proportional Integral observer-based
fault-tolerant control scheme for actuator and sensor faults in TS positive systems with
saturation on the inputs.
ACTIVATED SLUDGE PROCESS MODEL
-Lines 85 – 94: The inputs and outputs are not clearly stated. Please provide them for the sake of
clarity in the application of the estimation method afterwards.
The inputs and outputs have been stated clearly in the revised paper after line 136(trackedChanges
paper).
OBSERVER BASED FUZZY ROBUST TRACKING CONTROLLER DESIGN
Problem formulation and preliminaries
-In lines 97-100: as mentioned previously, the fuzzy modelling refers mainly to local models (that are
supposed to have more than 50% weight on the nonlinear model), which may not the case when
applying the multi-modelling approach to describe a nonlinear process. So, the correct description
should avoid using the “local” term, by only saying that the nonlinear process is transcribed into a
finite weighted sum of linear models.
In the revised paper we avoided the use of the term “local” and the sentence has been rewritten as
suggested.
-When the premise variables are introduced, before line 102, they are only depending on (t),
nevertheless in the text they are correctly explained as measurable, depending on u, y or
nonmeasurable, depending on x. Please consider explaining the dependency on the specific variables.
In the revised paper a brief explanation has been added about the dependency of the premise variables
on the specific variables.
APPLICATION TO AN ACTIVATED SLUDGE PROCESS
-Details on the data used are not explicitly given. Please provide them (real data, simulated data) are
reasons for the choice. Real data is preferable and available online.
The parameters as well as the model used in our study were taken from the reference cited in the text:
Nejjari, F., Dahhou, B., Benhammou, A., and Roux, G. (1999). Non-linear multivariable
adaptive control of an activated sludge wastewater treatment process. International Journal of
Adaptive Control and Signal Processing, 13(5):347–365.
English
-The English language should slightly be improved to ensure that an international audience can
fluently understand your text.
The English language has been improved according to the suggested recommendations.
-Line 29: approaches … “was developed” –> to be replaced by “have been/were developed”. Please
consider using the same past present form in the whole article was / were or has been /have been, as in
the draft there is a mixt.
The issue is addressed in the revised version.
-Line 57: please provide what ASM1 stands for
ASM1 stands for Activated Sludge Model No. 1; this has been added to the revised version.
-Lines 139-141: please review English
The English in lines 184-186(trackedChanges paper) has been reviewed.
-Line 188: attention to “Seguno“-> Sugeno
This is corrected in the revised version.
-Line 198: please specify the dependence of each premise variables on the states/inputs/outputs: x1…
x4…
In the revised version the dependency of each premise variables on the states/inputs/outputs has been
stated clearly.
-Line 199: the matrix B also depends on z2.
The premise variable z2 has been included in the matrix B in the revised version.
-Please reformulate as follows: “Therefore, the model (43) can be represented by nr = 26 = 64 TS
submodels, which is rewritten as follows:”
The sentence has been reformulated as suggested.
-Line 208: please provide the link where YALMIP can be downloaded.
A link from where YALMIP can be downloaded is added to the revised version.
-Observer gains: I am surprised about the values of Li: could you please explain the
x []?
These gains are obtained by solving the LMIs (41) and (42) using the YALMIP toolbox, and the resulting
observer dynamics are stable; note also that the values inside the vector [] are small (0.0…).
-I would suggest deleting the Figure 5, as it is not relevant for the study.
Figure 5 has been removed from the revised version.
-Please make a comment on the figure 4 on the fact that the proposed controller can attain the
reference trajectory after around 10 days, explaining if this is enough comparing with other tracking
methods and with international recommendations.
Figure 4 represents the reconstructed biomass X̂ and recycled biomass X̂r, respectively; they show the
effectiveness of the designed observer. We agree that 10 days may be a little long; in the revised
version we give new design parameters for Theorem 2 in order to improve the
convergence time of the observer.
Conclusion
-Line 226: same reflection as in the abstract: the states are unmeasurable, not the premises variables
they are unmeasurable consequently, as they are mathematically designed to incorporate the
nonlinearity.
This issue has been addressed in the revised version.
Answer to the comments of reviewer 2
We very much appreciate the time you have taken in carefully reviewing our paper. Your specific
suggestions have been very helpful to us in revising our paper. The following specifies how we have
responded to the issues you have pointed out and incorporated your suggestions in the revised version of
our paper.
Basic reporting
The paper is generally well written.
-The authors should improve the list of references with papers like:
https://doi.org/10.1016/j.sigpro.2014.09.011 this is needed especially since the problem of using TS
fuzzy approach is not new in the case of the wastewater treatment processes.
The suggested reference has been added to the revised version.
-The paper needs a serious proof reading. There are a lot of typos such as: Takagi Seguno fuzzy
model.
In the revised version we have addressed all these typos.
-The authors should also clearly present which parts are new and which parts are taken from the
literature. For example: Lemmas are new or not because there is no citation attached to them.
The parts taken from the literature have been referenced in the text.
Experimental design
-The authors should test the robust solution in the case of some uncertainties. I recommend
considering some variation of the maximum specific growth rate or other parameters from the model.
This is very important since this type of process is highly affected by uncertainties.
New simulations have been provided taking into account a variation of the kinetic parameters to prove the
efficiency of the proposed controller. Moreover, we converted the problem to the tracking control of
disturbed systems, where we considered that the influent substrate is affected by a disturbance. A
variation of sinusoidal form (with a period of one day and an amplitude of 5% of the daily
average value) in the influent substrate is introduced. The influent substrate is therefore
rewritten as Sin(t) = S̄in + ∆Sin(t), where S̄in
is the daily average value of the influent substrate and ∆Sin(t)
represents the variation that occurs over the simulation time.
Validity of the findings
-From my own works I didn't manage to have an observer for this model using only the dissolved
oxygen concentration as measured variable (it would interesting to linearize the model in different
operating points and test the observability of the linear model. I did that and the results are not
promising). Maybe this method can offer this possibility.
In fact, with this approach, we can control the choice of the premise variables in such a way that the
obtained model is observable, and this is another advantage of the multi-modelling approach: since
the obtained models are not unique and depend highly on the chosen premise variables, in the presented
work we managed to obtain an observable model with the chosen premise variables.
Authors in (Nagy et al., 2010) discussed this matter in the section 'Choice Criteria for Quasi-LPV
Form':
Nagy, A. M., Mourot, G., Marx, B., Ragot, J., and Schutz, G. (2010). Systematic
multimodeling methodology applied to an activated sludge reactor model.Industrial and
Engineering Chemistry Research, 49(6):2790–2799.
" | Here is a paper. Please give your review comments after reading it. |
36 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The design of an observer-based robust tracking controller is investigated and successfully applied to control an Activated Sludge Process (ASP) in this study. To this end, the Takagi-Sugeno fuzzy modeling is used to describe the dynamics of a nonlinear system with disturbance. Since the states of the system are not fully available, a fuzzy observer is designed. Based on the observed states and a reference state model, a reduced fuzzy controller for trajectory tracking purposes is then proposed. While the controller and the observer are developed, the design goal is to achieve the convergence and a guaranteed \(H_{\infty}\) performance. By using Lyapunov and \(H_{\infty}\) theories, sufficient conditions for synthesis of a fuzzy observer and a fuzzy controller for T-S fuzzy systems are derived. Using some special manipulations, these conditions are reformulated in terms of linear matrix inequalities (LMIs) problem. Finally, the robust and effective tracking performance of the proposed controller is tested through simulations to control the dissolved oxygen and the substrate concentrations in an activated sludge process.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In the last few years, the Takagi-Sugeno (TS) fuzzy modelling which is a multi-modelling approach has been emerged as a powerful tool, providing a consistent and efficient approach to handle problems related to modelling and control of nonlinear systems especially wastewater treatment plants (WWTP). These processes are classified as complex systems due to their non-linear dynamics, large uncertainties and the lack of measurements. Hence, it has become a topic of substantial interest exposed to several studies using the TS fuzzy approach. For example, in <ns0:ref type='bibr' target='#b4'>(Carlos-Hernandez et al., 2006)</ns0:ref> an application for an anaerobic digestion process has been proposed, where a linearization study involving various representative operating points is first carried out to obtain the TS model then a TS fuzzy observer is designed and experimentally validated. Authors in <ns0:ref type='bibr' target='#b15'>(Nagy Kiss et al., 2011)</ns0:ref> propose a proportional integral observer for uncertain TS fuzzy systems affected by unknown inputs using L 2 -gain to minimize the effect of the unknown input. The method has been applied on a reduced model of the Activated Sludge Model No.1 (ASM1). The authors, in <ns0:ref type='bibr' target='#b2'>(Belchior et al., 2012)</ns0:ref> propose the regulation of the dissolved oxygen concentration in WWTP through the implementation of an adaptive fuzzy controller. The article <ns0:ref type='bibr' target='#b1'>(Aouaouda et al., 2012)</ns0:ref> deals with a fault tolerant control problem of an activated sludge process where an uncertain TS model is considered for the states and faults estimation and used into a robust tracking control scheme using the L 2 gain. Recently in <ns0:ref type='bibr' target='#b11'>(Li et al., 2020)</ns0:ref>, the control of the dissolved oxygen is presented using a fuzzy predictive model and where the membership functions of the fuzzy model are obtained based on the fuzzy C-means cluster algorithm.</ns0:p><ns0:p>In parallel, besides stabilization problem, tracking control designs are also important issues for practical applications. There are very successful studies dealing with the output/state tracking control design based on the TS fuzzy approach. In <ns0:ref type='bibr' target='#b12'>(Lin et al., 2007)</ns0:ref> an observer based output tracking control is investigated for TS fuzzy systems with time-delay. Authors in <ns0:ref type='bibr' target='#b13'>(Nachidi et al., 2011)</ns0:ref> Manuscript to be reviewed Computer Science output tracking control of TS fuzzy uncertain discrete-time systems and its application in a DC-DC converters. An adaptive fuzzy control is proposed in <ns0:ref type='bibr' target='#b3'>(Bououden et al., 2015)</ns0:ref> for uncertain system subject to a pre-treatment of wastewater modeled using the TS approach. A TS fuzzy tracking control problem with respect to input saturation is adressed in <ns0:ref type='bibr' target='#b21'>(Yu et al., 2018)</ns0:ref> using an output feedback controller. In <ns0:ref type='bibr' target='#b0'>(Abyad et al., 2020)</ns0:ref> an output tracking control problem applied to a fermentation process has been scrutinized by considering the question of asymmetrical constraints on the control inputs. There are also relevant studies for the state tracking control even though its design is more general and more difficult than the design of the output tracking control. 
For example in <ns0:ref type='bibr' target='#b17'>(Senthilkumar and Mahanta, 2009)</ns0:ref>, a TS fuzzy guaranteed cost controller for trajectory tracking in nonlinear systems is investigated. A fuzzy state feedback law is used to build the controller whose performance is evaluated using a quadratic cost function.</ns0:p><ns0:p>By using observers to deal with the absence of full-state information, a robust TS fuzzy observer-based tracking controller is addressed in <ns0:ref type='bibr' target='#b6'>(Chang and Wu, 2012)</ns0:ref> where the H ∞ performance is considered to mitigate the tracking error. In our previous study <ns0:ref type='bibr' target='#b10'>(Khallouq et al., 2020)</ns0:ref>, a robust observer based tracking controller using a reference model is developed where the controller and the observer gains are obtained simultaneously in one single step by solving a set of linear matrix inequalities and where the tracking problem concerns all the state variables.</ns0:p><ns0:p>It is evident that a high number of state variables leads to high order controllers. In fact, a big problem needs to be solved for high order systems which increases numerical computations. Proceeding from the fact that for many realistic applications, it is not necessary to control all state variables. In addition, other difficulties may arise when the system is disturbed the existence of disturbance may deteriorate the performances of the system and can be a source of instability. Therefore, it is more appropriate to develop methodology which involves a low dimensional design for systems with disturbance. Motivated by the discussion above, the presented work is an extension of <ns0:ref type='bibr' target='#b10'>(Khallouq et al., 2020)</ns0:ref>. We propose to design a reduced order observer based state tracking controller in which only the key state variables has been considered and where the TS fuzzy model has been extended to deal with nonlinear systems with disturbance. Since the system states are not fully accessible to measurement a TS fuzzy observer is used to reconstruct all of them. Because of the nonlinear feature of the bioprocesses dynamics and the usually large uncertainty of some parameters, mainly the kinetic terms and the unkown inputs, the implementation of extended different versions of observers are very promising and have proved to be very successful in several applications e.g. Kalman filter to deal with gaussian disturbances <ns0:ref type='bibr' target='#b22'>(Zeng et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b18'>Silva et al., 2019)</ns0:ref> observer based on H ∞ technique <ns0:ref type='bibr' target='#b9'>(Katebi, 2001)</ns0:ref>, the minimum entropy filtering method for non-gaussian disturbances cases <ns0:ref type='bibr' target='#b23'>(Zhang et al., 2017)</ns0:ref>. The main contribution of this article can be outlined out as follows: we propose to split the TS system into two subsystems, one of which involves the part of the state variables to be controlled. Then we reformulate the problem of a robust observer based state tracking control design. The controller is then expressed by a feedback law, which is based on the classical structure of the Parallel Distributed Compensation (PDC) concept, involving the error between the estimate of the controlled state and the state of a reference model. Finally we establish sufficient conditions to guarantee tracking performance for the part to be controlled and stabilization for the remainder part. 
Unlike <ns0:ref type='bibr' target='#b10'>(Khallouq et al., 2020)</ns0:ref>, here the observer's synthesis is achieved separately from the controller synthesis. Whether for the controller or the observer, the used schemes produce a disturbance term. A performance criterion H ∞ is used in each problem. Based on H ∞ and Lyapunov theory, conditions are developed to ensure convergence and attenuate respectively the tracking error and the estimation error as small as possible. The results are formulated in terms of linear matrix inequalities (LMIs). Finally, the efficiency and the robustness of both the tracking control and estimation schemes are demonstrated via simulations on an activated sludge treatment process and are tested under a variety of operating conditions and simulated perturbations.</ns0:p><ns0:p>The paper is organised as follow: First, the modelling of the activated sludge process is described, followed by the design of an Observer-Based Fuzzy Robust Tracking Controller then the design of a robust TS fuzzy observer and end with an application of the proposed method to an activated sludge process.</ns0:p></ns0:div>
<ns0:div><ns0:head>THE ACTIVATED SLUDGE PROCESS MODEL</ns0:head><ns0:p>The activated sludge schematized in figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> is used as a biological purification in waste-water treatment, consisting essentially of flocculating microorganisms, mixed with dissolved oxygen and waste-water. Thus, the microorganisms come into contact with the organic pollutants presents in the wastewater, as well as with dissolved oxygen, and are kept in suspension. Based on the natural metabolism, These microorganisms convert the organic matter into new cells, carbon dioxide and water. The process of the purification is done into tow tanks called aerator and settler. The energy required by the process is provided by the dissolved oxygen. Subsequently, carbon dioxide is released in return. The mathematical model that represents the process is given using the mass balance around the aerator and the settler as follows <ns0:ref type='bibr' target='#b16'>(Nejjari et al., 1999)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>dX dt = µ(.)X − D(1 + q r )X + q r DX r dS dt = − 1 Y µ(.)X − (1 + q r )DS + DS in dC o dt = − K 0 Y µ(.)X − D(1 + q r )C o + DC oin + K La (C s −C o ) dX r dt = D(1 + q r )X − D(β + q r )X r (1)</ns0:formula><ns0:p>where where</ns0:p></ns0:div>
<ns0:div><ns0:head>96</ns0:head><ns0:p>-X(t), S(t) , C o (t) and X r (t) are respectively the biomass, the substrate, the dissolved oxygen and 97 the recycled biomass concentrations.</ns0:p></ns0:div>
<ns0:div><ns0:p>µ(.) corresponds to the biomass specific growth rate. It is assumed to follow the model:</ns0:p><ns0:formula xml:id='formula_1'>µ(S,C o ) = µ max S K s + S C o K c +C o</ns0:formula><ns0:p>µ max is the maximum specific growth rate, K c is the saturation constant and K s is the affinity constant.</ns0:p></ns0:div>
<ns0:div><ns0:p>-Y is a constant yield coefficient, K 0 is a constant and C s is the maximum dissolved oxygen concentration.</ns0:p><ns0:p>q r and β represent respectively the ratio of recycled flow and the ratio of waste flow to the influent flow.</ns0:p><ns0:p>For this model, the state, input and output vectors are given respectively by:</ns0:p><ns0:formula xml:id='formula_2'>x(t) = X(t) X r (t) S(t) C o (t) T u(t) = D(t) K La (t) T y(t) = C o (t) (2)</ns0:formula></ns0:div>
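To make the model above easier to reproduce, the short sketch below integrates the mass-balance equations (1) with SciPy, using the Monod-type growth rate given above. The numerical parameter values, inputs and initial conditions are illustrative placeholders only (they are not the values of Table 1).

```python
# Minimal simulation sketch of the aerator/settler model (1).
# Parameter values below are illustrative placeholders, not the paper's Table 1.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Kc = 0.15, 100.0, 2.0      # kinetic parameters (assumed)
Y, K0, Cs = 0.65, 0.5, 10.0            # yield, oxygen constant, saturation (assumed)
qr, beta = 0.6, 0.2                    # recycle and waste flow ratios (assumed)
S_in, Co_in = 200.0, 0.5               # influent concentrations (assumed)

def asp_model(t, x, D, KLa):
    X, Xr, S, Co = x
    mu = mu_max * S / (Ks + S) * Co / (Kc + Co)          # specific growth rate
    dX  = mu * X - D * (1 + qr) * X + qr * D * Xr
    dXr = D * (1 + qr) * X - D * (beta + qr) * Xr
    dS  = -mu * X / Y - (1 + qr) * D * S + D * S_in
    dCo = -K0 * mu * X / Y - D * (1 + qr) * Co + D * Co_in + KLa * (Cs - Co)
    return [dX, dXr, dS, dCo]

x0 = [200.0, 300.0, 50.0, 5.0]                           # initial [X, Xr, S, Co] (assumed)
sol = solve_ivp(asp_model, (0.0, 100.0), x0, args=(0.05, 5.0), max_step=0.1)
print(sol.y[:, -1])                                      # final X, Xr, S, Co
```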
<ns0:div><ns0:head>OBSERVER BASED FUZZY ROBUST TRACKING CONTROLLER DESIGN Problem formulation and Preliminaries</ns0:head><ns0:p>The TS fuzzy approach consists in transcribing the dynamics of a nonlinear process into a finite weighted sum of linear models. There exist three approaches in the literature to obtain the TS fuzzy model <ns0:ref type='bibr' target='#b20'>(Tanaka and Wang, 2003b)</ns0:ref>: black-box identification, the linearization technique and the sector nonlinearity method. We are interested in the third method, which gives an accurate TS fuzzy description of the nonlinear model without information loss.</ns0:p><ns0:p>Let us consider the following nonlinear disturbed system:</ns0:p><ns0:formula xml:id='formula_3'>ẋ(t) = f (x(t), u(t), d(t)) y(t) = Cx(t) (3)</ns0:formula><ns0:p>where x(t) ∈ R n is the state vector, u(t) ∈ R m is the input vector, d(t) ∈ R l is the disturbance, y(t) ∈ R q is the output vector and C ∈ R q×n is the output matrix. The system (3) can be approximated or represented (according to the number n r of sub-models) by the Takagi-Sugeno structure:</ns0:p><ns0:formula xml:id='formula_4'>ẋ(t) = n r ∑ i=1 h i (z)(A i x(t) + B i u(t) + Gd(t)) (4)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_5'>A i ∈ R n×n , B i ∈ R n×m and G ∈ R n×l .</ns0:formula><ns0:p>z ∈ R p denotes the so-called decision variables (premise variables), which can be available when they depend on measurable variables such as u(t) or y(t), i.e., z = z(u(t), y(t)), or unavailable when they depend on the non-measured system state x(t), i.e., z = z(x(t)). The weighting functions h i (z), called the membership functions, satisfy the convex sum property expressed in the following equations:</ns0:p><ns0:formula xml:id='formula_6'>n r ∑ i=1 h i (z) = 1, 0 ≤ h i (z) ≤ 1.</ns0:formula><ns0:p>The weighting functions h i (z) are generally nonlinear and depend on the premise variables z. Let us consider the following partition</ns0:p><ns0:formula xml:id='formula_7'>x(t) = x 1 (t) x 2 (t)</ns0:formula><ns0:p>of the system (4) with the corresponding matrices</ns0:p><ns0:formula xml:id='formula_8'>A i = A 11 i A 12 i A 21 i A 22 i , B i = B 1 i B 2 i , G = G 1 G 2</ns0:formula><ns0:p>where x 2 (t) corresponds to the part of the state vector to be controlled and x 1 (t) corresponds to the remaining state variables.</ns0:p><ns0:p>The dynamical model can be rewritten as follows:</ns0:p><ns0:formula xml:id='formula_9'>ẋ1 (t) = n r ∑ i=1 h i (z)(A 11 i x 1 (t) + A 12 i x 2 (t) + B 1 i u(t) + G 1 d(t)) (5a) ẋ2 (t) = n r ∑ i=1 h i (z)(A 21 i x 1 (t) + A 22 i x 2 (t) + B 2 i u(t) + G 2 d(t)) (5b)</ns0:formula><ns0:p>Consider a linear reference model given by the following equation:</ns0:p><ns0:formula xml:id='formula_10'>ẋr 2 (t) = A r x r 2 (t) + r(t) (6)</ns0:formula></ns0:div>
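To illustrate the TS structure (4)-(5), the fragment below evaluates the blended dynamics as the membership-weighted sum of the local linear sub-models; the matrices A_i, B_i, G and the membership degrees h are placeholders assumed to be available from the fuzzy modelling step.

```python
# Sketch of evaluating the TS model (4): x_dot = sum_i h_i(z) (A_i x + B_i u + G d).
# A_list, B_list, G and the membership degrees h are assumed given by the modelling step.
import numpy as np

def ts_dynamics(h, A_list, B_list, G, x, u, d):
    x_dot = np.zeros_like(x)
    for h_i, A_i, B_i in zip(h, A_list, B_list):
        x_dot += h_i * (A_i @ x + B_i @ u + G @ d)
    return x_dot
```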
<ns0:div><ns0:p>where x r 2 (t) is the reference state which should be tracked by the system (5b), A r is a stable matrix and r(t) is a bounded input reference.</ns0:p><ns0:p>Our goal is to synthesize a control law, based on the state estimation, capable of reducing the error between the reference trajectory x r 2 (t) and the state x 2 (t). The Parallel Distributed Compensation concept can be used to design a fuzzy controller, where the main idea consists in designing a local controller for each sub-model based on a local control rule, which shares the same fuzzy sets with the fuzzy model.</ns0:p><ns0:p>The controller chosen in this paper is expressed by an observer-based law with a reference model as follows:</ns0:p><ns0:formula xml:id='formula_11'>u(t) = n r ∑ i=1 h i (ẑ)K i ( x2 (t) − x r 2 (t))<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where x2 (t) is the estimate of x 2 (t) and the K i 's represent the local feedback gains that should be determined.</ns0:p></ns0:div>
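As a quick illustration of the PDC law (7), the fragment below evaluates u(t) from the fired membership degrees and the local gains; the gains K_i, the estimate x̂2 and the reference x_r2 used in the usage example are placeholders assumed to be available from the LMI design and the observer.

```python
# Sketch of evaluating the PDC control law (7): u = sum_i h_i(z_hat) K_i (x2_hat - x2_ref).
# K, h, x2_hat and x2_ref are assumed to come from the LMI design and the observer.
import numpy as np

def pdc_control(h, K, x2_hat, x2_ref):
    """h: (nr,) membership degrees, K: list of (m, n2) local gain matrices."""
    e = x2_hat - x2_ref                      # tracking error fed to the controller
    return sum(h_i * K_i @ e for h_i, K_i in zip(h, K))

# Tiny usage example with two sub-models and a 2-dimensional controlled part.
K = [np.array([[0.1, 0.0], [0.0, 0.2]]), np.array([[0.05, 0.0], [0.0, 0.1]])]
h = np.array([0.7, 0.3])                     # must sum to 1 (convex sum property)
u = pdc_control(h, K, np.array([48.0, 5.2]), np.array([50.0, 5.0]))
print(u)
```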
<ns0:div><ns0:head>Stability conditions</ns0:head><ns0:p>Before starting the stability analysis, some useful lemmas are recalled.</ns0:p><ns0:p>Lemma 1 <ns0:ref type='bibr' target='#b8'>(Guerra et al., 2006)</ns0:ref> For any matrices X,Y of appropriate dimensions and for any positive scalar η the following inequality holds:</ns0:p><ns0:formula xml:id='formula_12'>X T Y +Y T X ≤ ηX T X + η −1 Y T Y (8)</ns0:formula><ns0:p>Lemma 2 <ns0:ref type='bibr' target='#b8'>(Guerra et al., 2006)</ns0:ref> Considering Π < 0 a matrix X and a scalar λ , the following inequality holds:</ns0:p><ns0:formula xml:id='formula_13'>X T ΠX ≤ −λ (X T + X) − λ 2 Π −1 (9)</ns0:formula><ns0:p>Using ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) into (5), the closed loop system can then be rewritten as follows:</ns0:p><ns0:formula xml:id='formula_14'>ẋ1 (t) = n r ∑ i, j=1 h i (ẑ)h j (ẑ)(A 11 i x 1 (t) + A 12 i x 2 (t) + B 1 i K j ( x2 (t) − x r 2 (t)) + ω 1 (t) ẋ2 (t) = n r ∑ i, j=1 h i (ẑ)h j (ẑ)(A 21 i x 1 (t) + A 22 i x 2 (t) + B 2 i K j ( x2 (t) − x r 2 (t)) + ω 2 (t)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_15'>ω 1 (t) = n r ∑ i=1 (h i (z) − h i (ẑ))(A 11 i x 1 (t) + A 12 i x 2 (t) + B 1 i u(t) + G 1 d(t)) ω 2 (t) = n r ∑ i=1 (h i (z) − h i (ẑ))(A 21 i x 1 (t) + A 22 i x 2 (t) + B 2 i u(t) + G 2 d(t))</ns0:formula><ns0:p>Let us define by e r (t) = x 2 (t) − x r 2 (t) the tracking error and e o2 (t) = x 2 (t) − x2 (t) the state estimation error and consider the augmented state: x a (t) =</ns0:p><ns0:p>x 1 (t) e r (t) . Using (10), the dynamical model of the augmented system is given by:</ns0:p><ns0:formula xml:id='formula_16'>ẋa (t) = n r ∑ i=1 n r ∑ j=1 h i (ẑ)h j (ẑ)(A i j x a (t) + D i j ξ (t))<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>which can be rewritten as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_17'>ẋa (t) = n r ∑ i=1 h 2 i (ẑ)(A ii x a (t) + D ii ξ (t)) + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ)( A i j x a (t) + D i j ξ (t) 2 )<ns0:label>(12</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where</ns0:p><ns0:formula xml:id='formula_18'>A i j = A 11 i A 12 i + B 1 i K j A 21 i A 22 i + B 2 i K j , A i j = A i j + A ji D i j = −B 1 i K j A 12 i 0 I 0 −B 2 i K j (A 22 i − A re f ) −I 0 I , D i j = D i j + D ji ξ (t) = e o2 (t) T x r 2 (t) T r(t) T ω 1 (t) T ω 2 (t) T T (13)</ns0:formula><ns0:p>The term ξ (t) is acting like a disturbance affecting the augmented state x a (t). Thus to attenuate its effect, we propose the use of the H ∞ technique applied to System (12). The weighted H ∞ performance to minimize can be presented as follows:</ns0:p><ns0:formula xml:id='formula_19'>t f 0 x T a (t)Qx a (t)dt ≤ γ 2 t f 0 ξ T (t)ξ (t)dt (<ns0:label>14</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:p>where Q is a positive definite matrix and γ is a prescribed positive scalar that defines the attenuation level of the disturbance ξ (t).</ns0:p><ns0:p>To realize Condition ( <ns0:ref type='formula' target='#formula_19'>14</ns0:ref>), one has to use a Lyapunov function for System (12)given by:</ns0:p><ns0:formula xml:id='formula_21'>V (x a ) = x a (t) T Px a (t) (P > 0) (15)</ns0:formula><ns0:p>To achieve the performance ( <ns0:ref type='formula' target='#formula_19'>14</ns0:ref>) and ensure the stability of augmented System (12), the following condition must be realized:</ns0:p><ns0:formula xml:id='formula_22'>V (x a ) + x T a (t)Qx a (t) − γ 2 ξ (t) T ξ (t) ≤ 0 (16)</ns0:formula><ns0:p>The following result can be announced:</ns0:p><ns0:p>Lemma 3 If there exist positive definite matrices P, Q 1 and Q 2 , and positive scalars α 1 , α 2 , γ 1 and γ 2 , the augmented system in (12) is asymptotically stable, such that the following conditions are satisfied:</ns0:p><ns0:formula xml:id='formula_23'>A T ii P + PA ii + Q 1 + α −1 1 PD ii D T ii P < 0 ∀i = 1, . . . , n r α 1 − γ 1 < 0 (17) 1 2 (A T i j P + PA i j + Q 2 + α −1 2 PD i j D T i j P) < 0 ∀i < j = 1, . . . , n r 1 2 (α 2 − γ 2 ) < 0 (18)</ns0:formula><ns0:p>The H ∞ performmance criteria ( <ns0:ref type='formula' target='#formula_19'>14</ns0:ref>) is guaranteed where the scalar γ and the matrix Q are given by</ns0:p><ns0:formula xml:id='formula_24'>γ = √ γ 1 + γ 2 and Q = Q 1 + Q 2 Proof:</ns0:formula><ns0:p>Using (12), the derivative of the Lyapunov function ( <ns0:ref type='formula'>15</ns0:ref>) is:</ns0:p><ns0:formula xml:id='formula_25'>V (x a ) = n r ∑ i=1 h 2 i (ẑ)[x T a (t)(A T ii P + PA ii )x a (t) + ξ T (t)D T ii Px a (t) + x T a (t)PD ii ξ (t)] + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ) 1 2 [x T a (t)(A T i j P + PA i j )x a (t) + ξ T (t)D T i j Px a (t) + x T a (t)PD i j ξ (t)]<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>Denote by J the expression:</ns0:p><ns0:formula xml:id='formula_26'>J = V (x a (t)) + x T a (t)Qx a (t) − γ 2 ξ (t) T ξ (t)<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>by setting Q = Q 1 + Q 2 and γ 2 = γ 1 + γ 2 and using (19), J can be written as the sum of two terms: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_27'>J = n r ∑ i=1 h 2 i (ẑ)[x T a (t)(A T ii P + PA ii )x a (t) + x T a (t)Q 1 x a (t) − γ 1 ξ T (t)ξ (t)) +ξ T (t)D T ii Px a (t) + x T a (t)PD ii ξ (t)] + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ) 1 2 [x T a (t)(A T i j P + PA i j )x a (t) + x T a (t)Q 2 x a (t) − γ 2 ξ T (t)ξ (t) +ξ T (t)D T i j Px a (t) + x T a (t)PD i j ξ (t)]<ns0:label>(21</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Using the Lemma 1 on the crossed terms yields:</ns0:p><ns0:formula xml:id='formula_28'>ξ T (t)D T ii Px a (t) + x T a (t)PD ii ξ (t) ≤ α −1 1 x T a (t)PD ii D T ii Px a (t) + α 1 ξ T (t)ξ (t) ξ T (t)D T i j Px a (t) + x T a (t)PD i j ξ (t) ≤ α −1 2 x T a (t)PD i j D T i j Px a (t) + α 2 ξ T (t)ξ (t)<ns0:label>(22)</ns0:label></ns0:formula><ns0:p>(22) into (21) leads to the following inequality:</ns0:p><ns0:formula xml:id='formula_29'>J ≤ n r ∑ i=1 h 2 i (ẑ)[x T a (t)(A T ii P + PA ii + Q 1 + α −1 1 PD ii D T ii P)x a (t) +ξ T (t)(α 1 − γ 1 )ξ (t))] + 2 n r ∑ i, j=1 i< j h i (ẑ)h j (ẑ) 1 2 [x T a (t)(A T i j P + PA i j + Q 2 + α −1 2 PD i j D T i j P)x a (t) +ξ T (t)(α 2 − γ 2 )ξ (t)]<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>This implies that ( <ns0:ref type='formula'>16</ns0:ref>) to be satisfied if the following sufficient conditions hold:</ns0:p><ns0:formula xml:id='formula_30'>A T ii P + PA ii + Q 1 + α −1 1 PD ii D T ii P < 0 ∀i = 1, . . . , n r α 1 − γ 1 < 0 (24) 1 2 (A T i j P + PA i j + Q 2 + α −1 3 PD i j D T i j P) < 0 ∀i < j = 1, . . . , n r 1 2 (α 2 − γ 2 ) < 0 (25)</ns0:formula><ns0:p>This ends the lemma proof.</ns0:p></ns0:div>
<ns0:div><ns0:head>The main result</ns0:head><ns0:p>To determine the controller gains K i , we present new conditions in terms of LMIs. These conditions are developed through the use of separation Lemma 2, the introduction of some slack variables and other calculations leading to the following results.</ns0:p><ns0:p>Theorem 1 There exists an observer based controller (7) for the system (12) guaranteeing the H ∞ performance criteria (14) if there exists positive matrices</ns0:p><ns0:formula xml:id='formula_31'>X 1 = X T 1 , X 2 = X T 2 , matrices Y i , i = 1, 2, • • • , n r , positive matrices Q1 1 , Q2 1 , Q1</ns0:formula><ns0:p>2 and Q2 2 and prescribed positive scalars α 1 , α 2 , α 3 , α 4 , γ 1 and γ 2 such that α 1 < γ 1 and α 2 < γ 2 and that the following conditions hold: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_32'>for i = 1, . . . , n r             M 1 M 2 −B 1 i Y i A 12 i 0 I 0 0 * M 3 −B 2 i Y i A 22 i − A r −I 0 I 0 * * −2α 3 X 2 0 0 0 0 −α 3 I * * * −α 1 I 0 0 0 0 * * * * −α 1 I 0 0 0 * * * * * −α 1 I 0 0 * * * * * * −α 1 I 0 * * * * * * * −α 1 I             < 0 (<ns0:label>26</ns0:label></ns0:formula><ns0:formula xml:id='formula_33'>)</ns0:formula><ns0:formula xml:id='formula_34'>for i < j = 1, . . . , n r 1 2             N 1 N 2 −B 1 i Y j − B 1 j Y i A 12 i + A 12 j 0 2I 0 0 * N 3 −B 2 i Y j − B 2 j Y i A 22 i + A 22 j − 2A r −2I 0 2I 0 * * −2α 4 X 2 0 0 0 0 −α 4 I * * * −α 2 I 0 0 0 0 * * * * −α 2 I 0 0 0 * * * * * −α 2 I 0 0 * * * * * * −α 2 I 0 * * * * * * * −α 2 I             < 0<ns0:label>(</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where</ns0:p><ns0:formula xml:id='formula_35'>M 1 = A 11 i X 1 + X 1 A 11 i T + Q1 1 M 2 = X 1 A 21 i T + A 12 i X 2 + B 1 i Y i M 3 = A 22 i X 2 + X 2 A 22 i T + B 2 i Y i + (B 2 i Y i ) T + Q2 1 N 1 = (A 11 i + A 11 j )X 1 + X 1 (A 11 i + A 11 j ) T + Q1 2 N 2 = X 1 (A 21 i + A 21 j ) T + (A 12 i + A 12 j )X 2 + B 1 i Y j + B 1 j Y i N 3 = (A 22 i + A 22 j )X 2 + X 2 (A 22 i + A 22 j ) T + (B 2 i Y j + B 2 j Y i ) + Q2 2 + (B 2 i Y j + B 2 j Y i ) T</ns0:formula><ns0:p>and * stands for the symmetric term of the corresponding off-diagonal term.</ns0:p><ns0:p>Solving LMIs ( <ns0:ref type='formula' target='#formula_32'>26</ns0:ref>),( <ns0:ref type='formula'>27</ns0:ref>) the controller gains K i , the attenuation level γ and the matrix Q are given by:</ns0:p><ns0:formula xml:id='formula_36'>K i = Y i X −1 2 , i = 1, • • • , n r and γ = √ γ 1 + γ 2 Q = Q 1 + Q 2 where Q 1 = Q 1 1 0 0 Q 2 1 and Q 2 = Q 1 2 0 0 Q 2 2 proof:</ns0:formula><ns0:p>For the proof of the theorem 1, we will start from the sufficient conditions given in the Lemma 3. Let us consider the first condition (17). Multiplying it post and prior by P −1 and using Schur lemma, the following inequality is obtained:</ns0:p><ns0:formula xml:id='formula_37'>P −1 A T ii + A ii P −1 + Q1 D ii D T ii −α 1 I < 0 (28)</ns0:formula><ns0:p>By choosing matrices P and Q 1 as follows:</ns0:p><ns0:formula xml:id='formula_38'>P = P 1 0 0 P 2 and Q1 = P −1 Q 1 P −1 = Q1 1 0 0 Q2 1 and</ns0:formula><ns0:p>Replacing D ii , A ii by their expressions in ( <ns0:ref type='formula'>13</ns0:ref>) , ( <ns0:ref type='formula'>28</ns0:ref>) becomes:</ns0:p><ns0:formula xml:id='formula_39'>          M 1 M 2 −B 1 i K i A 12 i 0 I 0 * M 3 −B 2 i K i A 22 i − A r −I 0 I * * −α 1 I 0 0 0 0 * * * −α 1 I 0 0 0 * * * * −α 1 I 0 0 * * * * * −α 1 I 0 * * * * * * −α 1 I           < 0 (<ns0:label>29</ns0:label></ns0:formula><ns0:formula xml:id='formula_40'>)</ns0:formula><ns0:p>where </ns0:p><ns0:formula xml:id='formula_41'>M 1 = A 11 i P −1 1 + P −1 1 A 11 i T + Q1 1 M 2 = P −1 1 A 21 i T + (A 12 i + B 1 i K i )P −1 2 M 3 = (A 22 i + B 2 i K i )P −1 2 + P −1 2 (A 22 i + B 2 i K i ) T + Q2 1 Multiplying<ns0:label>(</ns0:label></ns0:formula><ns0:formula xml:id='formula_42'>          M 1 M 2 −B 1 i K i P −1 2 A 12 i 0 I 0 * M 3 −B 2 i K i P −1 2 A 22 i − A r −I 0 I * * −P −1 2 α 1 P −1 2 0 0 0 0 * * * −α 1 I 0 0 0 * * * * −α 1 I 0 0 * * * * * −α 1 I 0 * * * * * * −α 1 I           < 0 (30)</ns0:formula><ns0:p>Using Lemma 2 we have:</ns0:p><ns0:formula xml:id='formula_43'>P −1 2 (−α 1 )P −1 2 ≤ −2α 3 P −1 2 + α 2 3 (α 1 ) −1 I (31) 8/16</ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53532:2:0:NEW 19 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and Schur complement yields to:</ns0:p><ns0:formula xml:id='formula_44'>            M 1 M 2 −B 1 i K i P −1 2 A 12 i 0 I 0 0 * M 3 −B 2 i K i P −1 2 A 22 i − A r −I 0 I 0 * * −α 3 P −1 2 0 0 0 0 α 3 I * * * −α 1 I 0 0 0 0 * * * * −α 1 I 0 0 0 * * * * * −α 1 I 0 0 * * * * * * −α 1 I 0 * * * * * * * −α 1             < 0 (32)</ns0:formula><ns0:p>Using the following variable change X 1 = P −1 1 , X 2 = P −1 2 and Y i = K i X 2 , the conditions (26) of the theorem is fulfilled.</ns0:p><ns0:p>To carry out the second LMI of the theorem 1, we proceed in the same way with the second sufficient condition of Lemma 3. Multiplying (18) post and prior with P −1 and using the Schur lemma, the following inequality is obtained:</ns0:p><ns0:formula xml:id='formula_45'>1 2 P −1 A T i j + A i j P −1 + Q2 D i j D T i j −α 2 I < 0 (<ns0:label>33</ns0:label></ns0:formula><ns0:formula xml:id='formula_46'>)</ns0:formula><ns0:p>By choosing the matrix Q2</ns0:p><ns0:formula xml:id='formula_47'>= P −1 Q 2 P −1 = Q1 2 0 0 Q2</ns0:formula><ns0:p>2 and replacing D i j and A i j by their expression in ( <ns0:ref type='formula'>13</ns0:ref>), ( <ns0:ref type='formula' target='#formula_45'>33</ns0:ref>) becomes:</ns0:p><ns0:formula xml:id='formula_48'>1 2           N 1 N 2 −(B 1 i K j + B 1 j K i ) A 12 i + A 12 j 0 2I 0 * N 3 −(B 2 i K j + B 2 j K i ) A 22 i + A 22 j − 2A r −2I 0 2I * * −α 2 I 0 0 0 0 * * * −α 2 I 0 0 0 * * * * −α 2 I 0 0 * * * * * −α 2 I 0 * * * * * * −α 2 I           < 0 (34) N 1 = (A 11 i + A 11 j )P −1 1 + P −1 1 (A 11 i + A 11 j ) T + Q1 2 N 2 = P −1 1 (A 21 i + A 21 j ) T + (A 12 i + A 12 j )P −1 2 + (B 1 i K j + B 1 j K i )P −1 2 N 3 = (A 22 i + A 22 j + B 2 i K j + B 2 j K i )P −1 2 + P −1 2 (A 22 i + A 22 j + B 2 i K j + B 2 j K i ) T + Q2 2</ns0:formula><ns0:p>Multiplying (34) left and right respectively by: diag( I I P −1 2 I I I I ) and its transpose we get:</ns0:p><ns0:formula xml:id='formula_49'>1 2           N 1 N 2 −(B 1 i K j + B 1 j K i )P −1 2 A 12 i + A 12 j 0 2I 0 * N 3 −(B 2 i K j + B 2 j K i )P −1 2 A 22 i + A 22 j − 2A r −2I 0 2I * * −P −1 2 α 2 P −1 2 0 0 0 0 * * * −α 2 I 0 0 0 * * * * −α 2 I 0 0 * * * * * −α 2 I 0 * * * * * * −α 2 I           < 0 (35)</ns0:formula><ns0:p>using Lemma 2 we have:</ns0:p><ns0:formula xml:id='formula_50'>P −1 2 (−α 2 )P −1 2 ≤ −2α 4 P −1 2 + α 2 4 (α 2 ) −1 I (36)</ns0:formula><ns0:p>and Schur complements yiels to: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_51'>1 2             N 1 N 2 −(B 1 i K j + B 1 j K i )P −1 2 A 12 i + A 12 j 0 2I 0 0 * N 3 −(B 2 i K j + B 2 j K i )P −1 2 A 22 i + A 22 j − 2A r −2I 0 2I 0 * * −2α 4 P −1 2 0 0 0 0 α 4 I * * * −α 2 I 0 0 0 0 * * * * −α 2 I 0 0 0 * * * * * −α 2 I 0 0 * * * * * * −α 2 I 0 * * * * * * * −α 2 I             < 0 (</ns0:formula></ns0:div>
<ns0:div><ns0:p>Using the variable change X 1 = P −1 1 , X 2 = P −1 2 and Y i = K i X 2 , the conditions ( <ns0:ref type='formula'>27</ns0:ref>) of the theorem are fulfilled. This completes the proof of the theorem.</ns0:p></ns0:div>
<ns0:div><ns0:head>A Robust T-S fuzzy observer design</ns0:head><ns0:p>In order to estimate both the state variables and the disturbance, the following augmented state vector is considered: x(t) = x T (t) d T (t) T . From the T-S system (4) we have:</ns0:p><ns0:formula xml:id='formula_52'>ẋ(t) = n r ∑ i=1 h i (ẑ)( A i G 0 0 x(t) + B i 0 u(t) + ω(t) ḋ(t) ) = n r ∑ i=1 h i (ẑ)( Āi x(t) + Bi u(t) + ω(t)) y(t) = C x(t)<ns0:label>(38)</ns0:label></ns0:formula><ns0:p>with:</ns0:p><ns0:formula xml:id='formula_53'>Āi = A i G 0 0 , Bi = B i 0 , C = C 0 , ω(t) = ω(t) ḋ(t)</ns0:formula><ns0:p>and</ns0:p><ns0:formula xml:id='formula_54'>ω(t) = n r ∑ i=1 (h i (z) − h i (ẑ))(A i x(t) + B i u(t) + Gd(t))</ns0:formula><ns0:p>Let us consider the following fuzzy Luenberger observer <ns0:ref type='bibr' target='#b19'>(Tanaka and Wang, 2003a)</ns0:ref> for the system (38):</ns0:p><ns0:formula xml:id='formula_55'>ẋ(t) = n r ∑ i=1 h i (ẑ)( Āi x(t) + Bi u(t) + Li C( x(t) − x(t)))<ns0:label>(39)</ns0:label></ns0:formula><ns0:p>where the Li 's represent the local observer gains. Note that this observer assumes that the premise variables are unknown.</ns0:p><ns0:p>The dynamical model of the estimation error is then given by:</ns0:p><ns0:formula xml:id='formula_56'>ėo (t) = ẋ(t) − ẋ(t) = n r ∑ i=1 h i (ẑ)( Āi − Li C) ēo (t) + ω(t)<ns0:label>(40)</ns0:label></ns0:formula><ns0:p>where ω(t) acts as a disturbance. To attenuate its effect, the following H ∞ performance is used:</ns0:p><ns0:formula xml:id='formula_57'>t f 0 ēT o (t)R ēo (t)dt ≤ ν 2 t f 0 ω(t) T ω(t)dt (<ns0:label>41</ns0:label></ns0:formula><ns0:formula xml:id='formula_58'>)</ns0:formula><ns0:p>where R is a positive definite matrix and ν is the attenuation level of the disturbance ω(t).</ns0:p><ns0:p>Theorem 2 Prescribing the attenuation level ν, an observer (39) for the system (46) satisfying the H ∞ performance criterion (41) exists if there exist a symmetric positive definite matrix P o = P T o > 0, a matrix R > 0, matrices Z i , i = 1, 2, • • • , n r , and a positive scalar η such that the following LMIs are feasible:</ns0:p><ns0:formula xml:id='formula_59'>P o Āi − Z i C + (P o Āi − Z i C) T + R P o P o −ηI < 0 (42) η − ν 2 < 0<ns0:label>(43)</ns0:label></ns0:formula><ns0:p>Solving LMIs (42)-(43), the observer gains Li are given by: Li</ns0:p><ns0:formula xml:id='formula_60'>= P −1 o Z i , i = 1, • • • , n r</ns0:formula><ns0:p>The scalar verifying the H ∞ norm for the observer is given by: ν = √ ν 2 proof: See Appendix A.</ns0:p></ns0:div>
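The paper reports that the LMIs are solved with MATLAB and the YALMIP toolbox; as an alternative, hedged sketch, the observer feasibility problem (42)-(43) can also be written with CVXPY as below. The list of augmented matrices Ā_i and the output matrix C̄ are assumed to have been built from the TS sub-models, and the value of ν, the tolerance eps and the solver choice are illustrative.

```python
# Hedged CVXPY sketch of the observer LMIs (42)-(43); the paper itself uses MATLAB/YALMIP.
# Abar_list (the augmented A_i matrices) and Cbar are assumed to be given.
import numpy as np
import cvxpy as cp

def observer_lmi(Abar_list, Cbar, nu=1.0, eps=1e-6):
    n = Abar_list[0].shape[0]
    q = Cbar.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    R = cp.Variable((n, n), symmetric=True)
    Zs = [cp.Variable((n, q)) for _ in Abar_list]
    eta = cp.Variable(nonneg=True)

    cons = [P >> eps * np.eye(n), R >> eps * np.eye(n), eta <= nu**2 - eps]
    for Abar, Z in zip(Abar_list, Zs):
        M = P @ Abar - Z @ Cbar
        blk = cp.bmat([[M + M.T + R, P],
                       [P,           -eta * np.eye(n)]])
        blk = 0.5 * (blk + blk.T)          # enforce symmetry for the matrix inequality
        cons.append(blk << -eps * np.eye(2 * n))

    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    Pv = P.value
    L = [np.linalg.solve(Pv, Z.value) for Z in Zs]   # observer gains L_i = P^{-1} Z_i
    return L, Pv

# Toy usage: L, P = observer_lmi([np.array([[-1.0, 0.2], [0.0, -2.0]])],
#                                np.array([[1.0, 0.0]]))
```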
<ns0:div><ns0:head>APPLICATION TO AN ACTIVATED SLUDGE PROCESS</ns0:head><ns0:p>This article addresses the problem of controlling an activated sludge treatment process using the Takagi-Sugeno approach developed above. The dissolved oxygen concentration in the ASP is an important parameter in the process control that has a considerable effect on the treatment effectiveness and economic cost. The reduction of the organic substrate concentration is also crucially important and presents one of the main issues in the treatment process, as it helps to keep a high effluent quality. The control objective is to force the substrate and the dissolved oxygen concentrations S(t) and C o (t) to track the states of a given reference model S re f (t) and C re f o (t) under the following conditions:</ns0:p><ns0:p>• The dilution rate D(t) and the aeration flow rate K La (t) are the control variables.</ns0:p><ns0:p>• The dissolved oxygen concentration C o (t) is available.</ns0:p><ns0:p>• Biomass, substrate and recycled biomass concentrations X(t), S(t) and X r (t) are not available online.</ns0:p></ns0:div>
<ns0:div><ns0:head>The TS Fuzzy Model Design</ns0:head><ns0:p>To simulate more realistic conditions, the model ( <ns0:ref type='formula'>1</ns0:ref>) is rewrited in the form of a disturbed system. It is assumed that the concentration of the influent substrate is varying during a day instead of considering a constant value. The variation ∆Sin(t) around the daily average Sin acts as a disturbance.</ns0:p><ns0:p>To build the T-S model, the classical transformation of the non-linearity sector method is used. The following non linearities (the premise variables) are considered:</ns0:p><ns0:formula xml:id='formula_61'>z 1 (x) = z 1 (S,C o ) = µ max C o S (K s +S)(K c +C o ) − S z 2 (x) = z 2 (X) = X z 3 (x) = z 3 (X r ) = X r z 4 (x) = z 4 (C o ) = C o z 5 (x) = z 5 (X, S,C o ) = µ max C o X (K s +S)(K c +C o ) z 6 (x) = z 6 (S) = S (44)</ns0:formula><ns0:p>The nonlinear model of the Activated Sludge process described by equations ( <ns0:ref type='formula'>1</ns0:ref>) can be written in the form:</ns0:p><ns0:formula xml:id='formula_62'>ẋ(t) = A(z 1 , z 2 , z 3 , z 4 , z 5 )x(t) + B(z 2 , z 3 , z 4 , z 6 )u(t) + Gd(t) y(t) = Cx(t) (<ns0:label>45</ns0:label></ns0:formula><ns0:formula xml:id='formula_63'>)</ns0:formula><ns0:p>where the state, input and output vectors are</ns0:p><ns0:formula xml:id='formula_64'>x(t) =     X X r S C o     , u(t) = D K la , y = C o , G =     0 0 1 0     , and d(t) = D∆Sin(t)</ns0:formula><ns0:p>and the obtained matrices have the general form:</ns0:p><ns0:formula xml:id='formula_65'>A(z 1 , z 2 , z 3 , z 4 , z 5 ) =     z 1 0 z 2 0 0 −z 4 0 z 3 0 0 0 − 1 Y z 5 −z 3 z 2 0 − K 0 Y z 5     ; B(z 3 , z 4 , z 6 ) =     −(1 + q r )z 2 + q r z 3 0 (1 + q r )z 2 − (β + q r )z 3 0 S in − (1 + q r )z 6 0 C oin − (1 + q r )z 4 C s − z 4   </ns0:formula><ns0:p> and C = 0 0 0 1 6 premise variables are considered. Therefore, the model ( <ns0:ref type='formula' target='#formula_62'>45</ns0:ref>) can be represented by n r = 2 6 = 64 TS submodels, which is rewritten as follows:</ns0:p><ns0:formula xml:id='formula_66'>ẋ(t) = n r ∑ i=1 h i (z)(A i x(t) + B i u(t) + Gd(t)) (46)</ns0:formula><ns0:p>The A i 's and B i 's corresponding matrices and the h i (z)'s membership functions are obtained from A(z 1 , z 2 , z 3 , z 4 , z 5 ) and B(z 2 , z 3 , z 4 , z 6 ) and ( <ns0:ref type='formula'>44</ns0:ref>), for more explanation on the method to obtain them the reader can refer to <ns0:ref type='bibr' target='#b14'>(Nagy et al., 2010)</ns0:ref> 11/16</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53532:2:0:NEW 19 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
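For readers who wish to reproduce the TS construction, the following hedged sketch shows how the 2^6 = 64 membership functions h_i(z) can be generated from the premise variables (44) with the standard sector nonlinearity weights; the bounds z_min and z_max are assumed to be computed over the chosen operating domain and are given here only as placeholders.

```python
# Sketch of the sector nonlinearity construction of the 64 membership functions.
# z_min/z_max are assumed bounds of the premise variables over the operating domain.
import itertools
import numpy as np

z_min = np.array([-0.05, 50.0, 100.0, 1.0, 0.01, 10.0])    # placeholder lower bounds
z_max = np.array([ 0.05, 400.0, 600.0, 8.0, 0.50, 300.0])  # placeholder upper bounds

def membership_functions(z):
    """z: (6,) current premise variables -> (64,) membership degrees h_i(z)."""
    w1 = (z - z_min) / (z_max - z_min)     # weight of the upper sector bound
    w0 = 1.0 - w1                          # weight of the lower sector bound
    h = []
    for bits in itertools.product([0, 1], repeat=len(z)):
        h.append(np.prod([w1[j] if b else w0[j] for j, b in enumerate(bits)]))
    return np.array(h)                     # sums to 1 by construction

h = membership_functions(0.5 * (z_min + z_max))
print(h.sum())                             # -> 1.0 (convex sum property)
```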
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Simulation parameters <ns0:ref type='bibr' target='#b16'>(Nejjari et al., 1999</ns0:ref>) </ns0:p><ns0:formula xml:id='formula_67'>Y q r β K 0 C s S in C oin µ max K s K c 0.</ns0:formula></ns0:div>
<ns0:div><ns0:head>Observer Based Controller Synthesis and simulation results</ns0:head><ns0:p>The following partition x(t) =</ns0:p><ns0:p>x 1 (t)</ns0:p><ns0:p>x 2 (t) are chosen with the corresponding vectors: </ns0:p><ns0:formula xml:id='formula_68'>x 1 (t) = X X r</ns0:formula><ns0:formula xml:id='formula_69'>(t) = S re f (t) C re f o (t)</ns0:formula><ns0:p>and the simulation parameters of table 1 are considered. To test the robustness of the proposed method, a sinusoidal variation of the influent substrate and changes in the kinetic parameters will be introduced during the simulation.</ns0:p><ns0:p>Remark 1 The LMIs in the theorem 1 and 2 are solved using matlab with the YALMIP toolbox can be downloaded from https://yalmip.github.io/.</ns0:p><ns0:p>• The resolution of the LMIs in theorem 1 for the parameters α 1 = 0.4, α 2 = 0.4, α 3 = 300, α 4 = 295, γ 1 = 0.4 and γ 2 = 0.4 leads to:</ns0:p><ns0:formula xml:id='formula_70'>X 1 = 10 3 × 29435 27668 27668 27609 , X 2 = 10 3 × 45062 3233 3233 21729 Q1 1 = Q2 1 = Q1 2 = Q2 2 = 1 0 0 1</ns0:formula><ns0:p>The attenuation level is given by: γ = √ γ 1 + γ 2 = 0.8944, bellow are given some of the controller gains: Manuscript to be reviewed • bellow are given some of the observer gains: </ns0:p><ns0:formula xml:id='formula_71'>K 1 = 0.</ns0:formula><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_72'>L 1 = 10 2 ×       −1668.3 11.3 −136.5 14.6 −94.6       , L 8 = 10 2 ×       −1668.3 11.3 −136.4 14.7 −94.6       , L 16 = 10 2 ×       −834.86<ns0:label>6</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this article, an observer-based robust fuzzy tracking controller has first been developed and then applied to a strongly nonlinear process of a biological nature. The nonlinear system is equivalently represented by a TS multi-model. Considering that the states are not fully accessible, the stability analysis and design of the TS fuzzy system via an observer-based tracking controller satisfying the H ∞ performance requirement has been investigated. Controller and observer gains are obtained by solving a set of LMIs. These theoretical results have then been applied to an activated sludge process where the only measured variable available online is the concentration of dissolved oxygen, which is the most realistic and economical choice. The tracking problem of reference trajectories of two state variables (the dissolved oxygen and the substrate) making use of two manipulated variables (the dilution rate and the aeration flow rate) has been addressed.</ns0:p><ns0:p>The numerical simulation results illustrate the effectiveness of the proposed method and show good state estimation and tracking performances. Extension of the proposed approach to fault-tolerant control is the focus of our future work, in which we will investigate a Takagi-Sugeno fuzzy Proportional Integral observer-based fault-tolerant control subject to actuator and sensor faults of TS fuzzy positive systems with saturation on the inputs.</ns0:p><ns0:p>To achieve the performance (41) and ensure the stability of system (40), the following condition must be realized:</ns0:p><ns0:p>V ( ēo ) + ēT o (t)R ēo (t) − ν 2 ω(t) T ω(t) < 0 (48)</ns0:p><ns0:p>Let us consider the derivative of the Lyapunov function V ( ēo ). Consequently, (48) will be achieved if condition (51) holds for i = 1, . . . , n r . By using the variable change Z i = P o L i , the BMIs (bilinear matrix inequalities) (52) are transformed into the LMIs given by ( <ns0:ref type='formula'>42</ns0:ref>). This achieves the proof of Theorem 2.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>studied the problem of robust</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Schematic diagram of the activated sludge process.</ns0:figDesc><ns0:graphic coords='4,178.44,63.78,340.17,286.51' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>-D and K La represent respectively the dilution rate and the aeration flow rate.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>-S in and C oin are the influent substrate and dissolved oxygen concentrations, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>29) left and right respectively by diag(</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>l] [mg/l] [mg/l] [h −1 ] [mg/l] [mg/l]</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The evolution of the controlled variables, their estimates and their corresponding reference trajectories.</ns0:figDesc><ns0:graphic coords='14,167.10,63.78,362.84,181.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The manipulated variables.</ns0:figDesc><ns0:graphic coords='14,167.10,292.30,362.84,181.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>To test the robustness of the proposed method, two types of disturbances are introduced during the simulation as follows: • A variation of sinusoidal form (with a period of one day and an amplitude of 5% of the daily average value S in = 200 g/l) in the influent substrate (a small sketch of this disturbance signal is given below).</ns0:figDesc></ns0:figure>
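The sinusoidal influent-substrate variation described above can be generated, for simulation purposes, as in the short sketch below; the time unit is assumed to be hours (one day = 24 h).

```python
# Sketch of the influent substrate disturbance: 5% sinusoidal variation around S_in = 200 g/l,
# with a period of one day (time t assumed in hours).
import numpy as np

def influent_substrate(t, S_in_mean=200.0, amplitude=0.05, period_h=24.0):
    return S_in_mean * (1.0 + amplitude * np.sin(2.0 * np.pi * t / period_h))

t = np.linspace(0.0, 48.0, 200)            # two days of simulation
delta_Sin = influent_substrate(t) - 200.0  # deviation used as the disturbance term
```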
<ns0:figure xml:id='fig_15'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Comparison between the biomass and recycled biomass with their estimates.</ns0:figDesc><ns0:graphic coords='15,167.10,63.78,362.84,181.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 shows the manipulated variables, respectively the dilution rate D and the air flow rate K La . Their dynamics change whenever the reference model changes, and the control objective is clearly reached.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 represents respectively the reconstructed biomass X and recycled biomass Xr . As indicated previously, these two variables are not considered in the tracking problem. Nevertheless, it can be seen in these results that the estimated values of these state variables are in excellent agreement with their corresponding true simulated values. Despite perturbations in the influent substrate and changes in kinetic parameters, the estimates are smooth and reliable.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>Let us consider the following Lyapunov function: V ( ēo ) = ēo (t) T P o ēo (t) (47)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>n r :( Āi − Li C) T P o + P o ( Āi − Li C) + R + η −1 P o P o < 0 (52) η − ν 2 < 0(53)Applying Schur complement to (52) we get:P o ( Āi − Li C) + ( Āi − Li C) T P o + R</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>ēT o (t)(( Āi − Li C) T P o + P o ( Āi − Li C)) ēo (t) + ω(t) T P o ēo (t) + ēT o (t)P o ω(t) T P o ēo (t) + ēT o (t)P o ω(t) ≤ η ω(t) T ω(t) + η −1 ēT o (t)P o P o ēo (t) Āi − Li C) T P o +P o ( Āi − Li C) + R + η −1 P o P o ) ēo (t) +(η − ν 2 ) ω(t) T ω(t)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(49)</ns0:cell></ns0:row><ns0:row><ns0:cell>Using Lemma 1 leads to:</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>ω(t) (50)</ns0:cell></ns0:row><ns0:row><ns0:cell>(49) and (50) leads to:</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>n r ∑</ns0:cell><ns0:cell>h i (ẑ)[ ēT</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>i=1</ns0:cell><ns0:cell>(51)</ns0:cell></ns0:row></ns0:table><ns0:note>V ( ēo ) + ēT o (t)R ēo (t) − ν 2 ω(t) T ω(t) ≤ o (t)((</ns0:note></ns0:figure>
</ns0:body>
" | "Cover Letter
19/02/2021
Dear Qichun Zhang
Editor Journal of PeerJ Computer Science.
Subject: Submission of revised paper: 'Observer based robust H∞ fuzzy tracking control:
Application to an activated sludge process'
I acknowledge receipt of your message sent on Sunday, February 14, 2021. I would like to
thank you and the reviewers for the comments and the interest in our manuscript. The paper has been
revised, and this new version takes into account the remarks raised by the Editor and the
Reviewer. Responses to the specific comments are presented in the following.
We are looking forward to your next response.
Best regards,
Abdelmounaim Khallouq
PhD student
LAEPT-URAC 28, Faculty of Science Semlalia
UCA, Marrakesh, Morocco
Answer to the reviewer's comment
Paper: 'Observer based robust H∞ fuzzy tracking control:
Application to an activated sludge process'
By A. Khallouq, A. Karama, M. Abyad
We very much appreciate the time you have taken in carefully reviewing our paper. Your specific
suggestion has been very helpful to us in revising our paper.
Basic reporting
The authors provided an improved version of the manuscript. I still think that some other papers that
were dealing with estimation in the case of wastewater treatment processes (using Kalman, Hinf,
Luenberger etc.) should be included in the references section.
The following references have been added in the revised paper as suggested:
• Zeng, J., Liu, J., Zou, T., and Yuan, D. (2016). State estimation of wastewater treatment processes using distributed extended Kalman filters. In 2016 IEEE 55th Conference on Decision and Control, CDC 2016, pages 6721–6726. Institute of Electrical and Electronics Engineers Inc.
• Silva, F. J. D. S., Catunda, S. Y. C., Dorea, C. E., Haandel, A. C., and Santos, H. R. D. (2019). Oxygen Uptake Rate Measurement Using Kalman Filter and PWM Control in Activated Sludge Systems. IEEE Transactions on Instrumentation and Measurement, 68(11):4493–4501.
• Katebi, M. R. (2001). H∞ State Estimation in Activated Sludge Processes. IFAC Proceedings Volumes, 34(8):525–530.
• Zhang, S., Chen, J., and Yu, Y. (2017). Control for nonlinear system with non-Gaussian noise. In 2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017, volume 2017-Janua, pages 2099–2104. Institute of Electrical and Electronics Engineers Inc.
• Tanaka, K. and Wang, H. O. (2003a). Fuzzy Observer Design. In Fuzzy Control Systems Design and Analysis, pages 83–96. John Wiley & Sons, Inc., New York, USA.
" | Here is a paper. Please give your review comments after reading it. |
37 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Data streams can be defined as the continuous stream of data coming from different sources and in different forms. Streams are often very dynamic, and their underlying structure usually changes over time, which may result in a phenomenon called concept drift. When solving predictive problems using the streaming data, traditional machine learning models trained on historical data may become invalid when such changes occur.</ns0:p><ns0:p>Adaptive models equipped with mechanisms to reflect the changes in the data proved to be suitable to handle drifting streams. Adaptive ensemble models represent a popular group of these methods used in classification of drifting data streams. In this paper, we present a heterogeneous adaptive ensemble model for the classification of data streams, which utilizes a dynamic class weighting scheme and a mechanism to maintain the diversity of the ensemble members. Our main objective was to design a model consisting of a heterogeneous group of base learners (Naive Bayes, k-NN, Decision trees), with an adaptive mechanism which, besides the performance of the members, also takes into account the diversity of the ensemble. The model was experimentally evaluated on both real-world and synthetic datasets. We compared the presented model with other existing adaptive ensemble methods, both from the perspective of predictive performance and computational resource requirements.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Nowadays, the size of data is growing in a much faster fashion than in the past. Information is being collected from household appliances, tools, mobile devices, vehicles, sensors, websites, social networks, and many other devices. An increasingly large number of organizations are starting to analyze large volumes of data, as the information obtained from these data can provide a competitive advantage over other businesses. Data collection from devices is often continuous, and data come in the form of data streams.</ns0:p><ns0:p>Data stream classification is an active field of research, as more data sources can be considered as streaming data. When solving classification tasks using streaming data, the data generation process is not strictly stationary, and its underlying structure may change over time. The changes in the underlying data distribution within the streams may result in dynamic, non-stationary target concepts <ns0:ref type='bibr' target='#b30'>(Gama et al. (2014b)</ns0:ref>; <ns0:ref type='bibr'>Žliobaitė (2010)</ns0:ref>). This phenomenon is called concept drift, and from the perspective of the training of classification models on the drifting data, the most crucial requirement is the ability of the model to adapt and incorporate new data into the model in order to react to potential changes <ns0:ref type='bibr' target='#b4'>(Barddal et al. (2017)</ns0:ref>). In concept drift, the adaptive learning algorithms are advanced machine learning methods that can reflect the changing concepts in data streams in real time. Multiple approaches were proposed to extend the standard machine learning models with the ability to adapt to the changes in streams, including drift detectors <ns0:ref type='bibr' target='#b33'>(Gonc ¸alves et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Baena-García et al. (2006)</ns0:ref>) and various sliding window techniques <ns0:ref type='bibr' target='#b6'>(Bifet and Gavaldà (2007)</ns0:ref>).</ns0:p><ns0:p>Ensemble models are a popular classification method, often providing better performance when compared to the standard machine learning models <ns0:ref type='bibr' target='#b10'>(Breiman (1996</ns0:ref><ns0:ref type='bibr' target='#b11'>(Breiman ( , 2001))</ns0:ref>; <ns0:ref type='bibr' target='#b24'>Freund and Schapire (1996)</ns0:ref>).</ns0:p><ns0:p>• stationary data streams-the data distribution does not change over time, e.g., the stream elements are generated from a fixed probability distribution;</ns0:p><ns0:p>• non-stationary data streams-data are evolving, and the data distribution may change over time.</ns0:p><ns0:p>Usually, these changes may also affect the target concepts (classes).</ns0:p></ns0:div>
<ns0:div><ns0:head>Concept drift</ns0:head><ns0:p>When solving predictive data analytical tasks on the static data, the data distribution usually does not change, and data used for the training and testing of the model have the same distribution. When processing the data streams, we often observe the changing nature of the data. In predictive data stream analytical tasks, we experience a phenomenon called concept drift.</ns0:p><ns0:p>Concept drift is related to the data distribution P t (x, y), where x = (x 1 , x 2 . . . x n ) is a data sample represented by an n-dimensional feature vector appearing at time t, and y represents the target class. The concepts in the data are stable (or stationary) if all the data samples are generated with the same distribution.</ns0:p><ns0:p>If there is an x in the interval between t and t + ∆, which holds the expression P t (x, y) = P t+∆ (x, y), then concept drift is present (there is a change in the underlying data distribution) <ns0:ref type='bibr'>( Žliobaitė (2010)</ns0:ref>). Concept drift usually occurs in a non-stationary and dynamically changing environment, where the data distribution or relation between the input data and the target variable changes over time.</ns0:p><ns0:p>The concept drift phenomenon may occur in various real-world data and corresponding applications <ns0:ref type='bibr'>( Žliobaitė et al. (2016)</ns0:ref>):</ns0:p><ns0:p>• computer systems or networks, through network intrusion detection, where new techniques and methods may appear <ns0:ref type='bibr' target='#b47'>(Liu et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b57'>Mukkavilli and Shetty (2012)</ns0:ref>);</ns0:p><ns0:p>• industry, when dynamic data streams are produced by sensors in production equipment and machines <ns0:ref type='bibr' target='#b46'>(Lin et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b82'>Zenisek et al. (2019)</ns0:ref>);</ns0:p><ns0:p>• marketing and management, when users change their buying behaviour and their preferences <ns0:ref type='bibr' target='#b8'>(Black and Hickey (2003)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Chiang et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b49'>Lo et al. (2018)</ns0:ref>);</ns0:p><ns0:p>• medical data, e.g., in the case of antibiotic resistance <ns0:ref type='bibr' target='#b71'>(Stiglic and Kokol (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b75'>Tsymbal et al. (2006)</ns0:ref>);</ns0:p><ns0:p>• social networks, when users change their behavior and generated content <ns0:ref type='bibr' target='#b45'>(Lifna and Vijayalakshmi (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b44'>Li et al. (2016)</ns0:ref>);</ns0:p><ns0:p>• spam categorization, where spam keywords can change over time <ns0:ref type='bibr' target='#b18'>(Delany et al. (2005)</ns0:ref>; Ruano-Ordás et al. ( <ns0:ref type='formula'>2018</ns0:ref>)).</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b74'>(Tsymbal (2004)</ns0:ref>; <ns0:ref type='bibr'>Žliobaitė (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b38'>Khamassi et al. (2019)</ns0:ref>) describe a complex taxonomy of existing drift types. In general, there are several different types of concept drift, based on how the phenomenon occurs within the data stream:</ns0:p><ns0:p>• Sudden/Abrupt-In this case, the concept change occurs suddenly. A concept (e.g., a target class) is suddenly replaced by another one. 
For example, in the topic modelling domain, the main topic of interest may unexpectedly switch to a different one.</ns0:p><ns0:p>• Incremental-Changes in the data distribution are slower and proceed over time. Changes are not as visible as in a sudden drift, but they gradually emerge. The changes are usually relatively slow and can be observed when comparing the data over more extended time periods.</ns0:p><ns0:p>• Gradual-In this drift type, both concepts are present, but over time, one of them decreases, while the other one increases. For example, such a change may reflect the evolution of points of interest, e.g., when a point of interest is gradually being replaced by a newer one.</ns0:p><ns0:p>• Re-Occurring-A previously active concept reappears after some time. Re-occurrence may appear in cycles or not (e.g., reappearing fashion trends).</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Besides the mentioned drift types, some publications <ns0:ref type='bibr' target='#b29'>(Gama et al. (2014a)</ns0:ref>) distinguish between two kinds of concept drift: real drift and virtual drift. Virtual concept drift is defined by the changes in data distribution but does not affect the target concept. Real concept drift (also called concept shift) represents a change in the target concept, which may modify the decision boundaries.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> visualizes the drift types. In concept drift detection, it is also necessary to distinguish between the drift and outlier occurrence. Outliers may produce false alarms when detecting the concept drift. When processing the non-stationary drifting streams, the necessary feature of the predictive algorithms is their ability to adapt. Some of the algorithms are naturally incremental (e.g. Naive Bayes), while others require significant changes in the algorithm structure to enable incremental processing. Therefore, the learning algorithms applied on the drifting streams are usually extended with a set of mechanisms, which enhance the models with the ability of continuously forgetting the obsolete learned concepts and updating the model with the newly arrived data in the stream. There are several types of models used to handle concept drift. To detect the concept drift in data streams, we can use drift detectors. These can detect possible concept drift by analyzing the incoming data or by monitoring the classifier performance.</ns0:p><ns0:p>Detectors process the signal from the data about changes in data stream distribution. Drift detectors usually signalize drift occurrence and trigger the updating/replacement of the classifier. There are several drift detection methods available (Gonc ¸alves et al. ( <ns0:ref type='formula'>2014</ns0:ref>)), and the Drift Detection Method (DDM) <ns0:ref type='bibr' target='#b28'>(Gama et al. (2004)</ns0:ref>), the Early Drift Detection Method (EDDM) <ns0:ref type='bibr' target='#b2'>(Baena-García et al. (2006)</ns0:ref>), and ADWIN <ns0:ref type='bibr' target='#b6'>(Bifet and Gavaldà (2007)</ns0:ref>) are the most popular.</ns0:p><ns0:p>For predictive data modelling applied on the drifting streams, advanced adaptive supervised machine learning methods are used. Supervised learning methods used for drifting stream classification could be categorized from several perspectives, depending on how they approach the adaptation (Ditzler et al. <ns0:ref type='bibr' target='#b68'>(Sagi and Rokach (2018)</ns0:ref>). The main idea of the ensemble model is based on the assumption that a set of classifiers together can achieve better performance than individual classifiers <ns0:ref type='bibr' target='#b42'>(Kuncheva and Whitaker (2003)</ns0:ref>). The selection of the ensemble experts is a crucial factor, as the ideal ensemble consists of a set of diverse base learners. Ensemble models are also suitable for data stream classification, where target concepts change over time. The following section summarizes the use of ensembles in the classification of drifting streams.</ns0:p></ns0:div>
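To make the drift detection idea concrete, the sketch below implements the warning/drift rule used by DDM (Gama et al. (2004)): the running error rate p_i and its standard deviation s_i are monitored, and a drift is signalled when p_i + s_i exceeds p_min + 3 s_min. This is a minimal illustration only, not the detector implementation used in any of the cited experiments.

```python
# Minimal sketch of the DDM rule (Gama et al., 2004): monitor the error rate of a
# classifier and signal 'warning'/'drift' when it degrades significantly.
import math

class SimpleDDM:
    def __init__(self):
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 1.0                 # running error-rate estimate
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the classifier misclassified the sample, 0 otherwise."""
        self.n += 1
        self.p += (error - self.p) / self.n
        s = math.sqrt(self.p * (1.0 - self.p) / self.n)
        if self.n > 30 and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s
        if self.p + s >= self.p_min + 3.0 * self.s_min:
            self.reset()             # concept drift detected: restart the statistics
            return "drift"
        if self.p + s >= self.p_min + 2.0 * self.s_min:
            return "warning"
        return "stable"

detector = SimpleDDM()
for err in [0, 0, 1, 0, 0, 1, 1, 1, 1, 1] * 20:
    state = detector.update(err)
```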
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>There are several approaches to the design of adaptive ensemble models. Some of them use the same technique as approaches to static data processing, such as Online Boosting <ns0:ref type='bibr' target='#b79'>(Wang and Pineau (2013)</ns0:ref>), which is based on the classic Boosting method, extended with online processing capabilities. For Further studies have focused on the design of the ensemble models that would be simple (in terms of their run time) and able to adapt to the concept drift dynamically. For example, the AWE (Accuracy Weighted Ensemble) <ns0:ref type='bibr' target='#b12'>(Brzeziński and Stefanowski (2011)</ns0:ref>) uses the assignment of weight to the base classifiers based on a prediction error. Old and weak members are gradually being replaced by the new ones, with a lower error rate. The update mechanism is based on the assumption that the latest training chunk will better represent the current test chunk. Another model, DWM (Dynamic Weighted Majority) <ns0:ref type='bibr' target='#b39'>(Kolter and Maloof (2007)</ns0:ref>), dynamically changes the weights of the base classifiers in the case of incorrect classification. A new classifier is added if the model incorrectly classifies the training example, and old classifiers are discarded if their weights fall below a threshold value. Online Bagging and Boosting algorithms were recently used as a basis for more advanced streaming ensembles, such as Adaptive Ensemble Size (AES) <ns0:ref type='bibr' target='#b60'>(Olorunnimbe et al. (2018)</ns0:ref>), which dynamically adapts the ensemble size, or an approach <ns0:ref type='bibr' target='#b37'>(Junior and Nicoletti (2019)</ns0:ref>), where boosting is applied to the new batches of data and maintains the ensemble by adding the base learners according to the ensemble accuracy rate. Learn++ (inspired by AdaBoost) is an incremental learning ensemble approach consisting of base learners trained on a subset of training data and able to learn the new classes <ns0:ref type='bibr' target='#b64'>(Polikar et al. (2001)</ns0:ref>). Several modifications of this approach exist, focused on improvement of the number of generated ensemble members <ns0:ref type='bibr' target='#b56'>(Muhlbaier et al. (2004)</ns0:ref>). The Random Forests method is probably the most popular ensemble method on static data at present. Its adaptive version for stream classification, Adaptive Random Forests (ARF), was introduced in <ns0:ref type='bibr' target='#b32'>(Gomes et al. (2017)</ns0:ref>) and has shown a high learning performance on streaming data. More recently, multiple adaptive versions of popular ensemble methods gaining improved performance or achieving speedup in execution have been introduced, e.g., the adaptive eXtreme Gradient Boosting method <ns0:ref type='bibr' target='#b54'>(Montiel et al. (2020)</ns0:ref>), the streaming Active Deep Forest method <ns0:ref type='bibr' target='#b50'>(Luong et al. (2020)</ns0:ref>), or Random Forests with an implemented resource-aware elastic swap mechanism <ns0:ref type='bibr' target='#b52'>(Marrón et al. (2019)</ns0:ref>).</ns0:p><ns0:p>All ensemble models work with the assumption of the diversity of the individual classifiers in the ensemble, while the diversity is achieved in different ways. Diversity can help in evolving data streams, as the most suitable method may also change as a result of the stream evolution <ns0:ref type='bibr' target='#b63'>Pesaranghader et al. 
(2018)</ns0:ref>.</ns0:p><ns0:p>Diverse ensembles by themselves cannot guarantee faster recovery from drifts, but can help to reduce the initial increase in error caused by a drift <ns0:ref type='bibr' target='#b53'>Minku et al. (2010)</ns0:ref>. There are several ways to achieve diversity in the ensemble. Either the classifiers are trained on different data samples, or the model is composed of a set of heterogeneous classifiers. Recently, <ns0:ref type='bibr' target='#b38'>Khamassi et al. (Khamassi et al. (2019)</ns0:ref>) studied the influence of diversity techniques (block-based, weighting-data, and filtering-data) on adaptive ensemble models and designed a new ensemble approach that combines the three diversity techniques. The authors in <ns0:ref type='bibr' target='#b70'>(Sidhu and Bhatia (2018)</ns0:ref>) experimented with a diversified, dynamic weighted majority voting approach consisting of two ensembles (with low and high diversity, achieved by replacing the Poisson (1) with Poisson (κ) distribution in online bagging <ns0:ref type='bibr' target='#b62'>(Oza and Russell (2001)</ns0:ref>). The Kappa Updated Ensemble (KUE) <ns0:ref type='bibr' target='#b14'>Cano and Krawczyk (2020)</ns0:ref> trains its base learners using different subsets of features and updates them with new instances with a given probability following a Poisson distribution. Such an approach results in a higher ensemble diversity and outperforms most of the current adaptive ensembles. However, there are not many studies where the model uses the model diversity score as a criterion for the base Manuscript to be reviewed Computer Science classifiers in the ensemble <ns0:ref type='bibr' target='#b40'>(Krawczyk et al. (2017)</ns0:ref>) as opposed to static data processing, where such a complex model exists <ns0:ref type='bibr' target='#b51'>(Lysiak et al. (2014)</ns0:ref>). According to <ns0:ref type='bibr' target='#b81'>(Yang (2011)</ns0:ref>), diversity correlates with model accuracy. A suitable diversity metric used in ensembles is a paired diversity Q statistic <ns0:ref type='bibr' target='#b41'>(Kuncheva (2006)</ns0:ref>), which provides information about differences between two base classifiers in the ensemble.</ns0:p><ns0:p>Another aspect of the ensemble classifiers is the composition of the base classifiers in the model. The most common are homogeneous ensemble methods, which use the same algorithm to train the ensemble members <ns0:ref type='bibr' target='#b22'>(Fernandez-Aleman et al. (2019)</ns0:ref>). On the other hand, heterogeneous approaches are based on the utilization of multiple algorithms to generate ensemble members. Such an approach could lead to the creation of more diverse ensembles. For the data stream classification, a HEFT-Stream (Heterogeneous Ensemble with Feature drift for Data Streams) <ns0:ref type='bibr' target='#b58'>Nguyen et al. (2012)</ns0:ref> builds a heterogeneous ensemble composed of different online classifiers (e.g., Online Naive Bayes). Adaptive modifications of the heterogeneous ensembles were also successfully applied on the drifting data streams (Van Rijn et al. <ns0:ref type='formula'>2020</ns0:ref>)). The approach described in this paper aims to combine the construction of the adaptive heterogeneous ensemble with a diversity-based update of the ensemble members. This approach could result in a robust model, with the adaptation mechanism ensuring that newly added members are as diverse as possible during the ensemble updates. 
Maintaining the diversity of the overall model can also lead to a reduction of model updates and therefore faster execution during the run time.</ns0:p></ns0:div>
<ns0:div><ns0:head>DDCW ENSEMBLE METHOD</ns0:head><ns0:p>In the following section, we introduce the design of the Diversified Dynamic Class Weighted (DDCW) ensemble model. The design of the model is based on the assumption that a robust model consists of a collection of heterogeneous base classifiers that are very diverse. When applied to static data, the diversity is used within the ensemble models to tune the combination rule for voting and the aggregation of component classifier predictions. We propose the design of a heterogeneous ensemble model, which combines the dynamic weighting of the ensemble members with the mutual diversity score criterion. The diversity measures are used to rank the members within the ensemble and update their weights according to the diversity value, so the model prefers experts with higher mutual diversity, thereby creating a more robust ensemble. When ranking the base classifiers, the diversity measurement is combined with the lifetime of individual base classifiers in the model. The criterion is expected to cause the importance of the long-lasting base classifiers to gradually fade, which should ensure the relevance of the whole ensemble to evolving and changing data streams.</ns0:p><ns0:p>The model is composed of m ensemble members e 1 , . . . , e m , trained using each chunk of incoming samples in the stream, as depicted in Fig. <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>. Each of those experts e 1 , . . . , e m , have assigned weights for each target class. The weights are tuned after each period (after each chunk is processed) based on individual base classifier performance. First, for each class that a base classifier predicts correctly, the weight is increased. Second, after each chunk is processed, the model calculates Q pairwise diversity between each of the ensemble members and uses this value to modify model weights.</ns0:p><ns0:p>Pairwise Q diversity metric is calculated as follows <ns0:ref type='bibr' target='#b42'>(Kuncheva and Whitaker (2003)</ns0:ref>): let Z = z 1 , ..., z N be a labeled data set, z j ∈ R n coming from the classification problem. The output of a classifier D i is an N-dimensional binary vector y i = [y 1,i , ..., y N,i ] T , such that y j,i = 1, if D i recognizes correctly z j , and 0 otherwise, i = 1, ..., L. Q statistic for two classifiers, D i and D k , is then computed as:</ns0:p><ns0:formula xml:id='formula_0'>Q i,k = N 11 N 00 −N 01 N 10 N 11 N 00 +N 01 N 10</ns0:formula><ns0:p>where N ab is the number of elements z j of Z for which y j,i = a and y j,k = b. Q varies between -1 and 1; classifiers that tend to recognize the same samples correctly will have positive values of Q, and those that commit errors on different objects will render Q negative. For statistically independent classifiers, the value of Q i,k is 0.</ns0:p><ns0:p>The value for each member div e i in DDCW model is obtained as the average of contributions of individual pair diversities and is calculated as follows:</ns0:p><ns0:formula xml:id='formula_1'>div e i = 1 m−1 ∑ m k=1,k =i Q i,k .</ns0:formula><ns0:p>Then, after each period, the lifetime coefficient T i of each ensemble member is increased. Afterwards, the weights of each of the base classifiers are modified using the lifetime coefficient. After this step, the weights are normalized for each class, and a score is calculated for each target class by the classifier predictions. 
In the last step, the global ensemble model prediction is selected as the target class with the highest score.</ns0:p></ns0:div>
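<ns0:p>To make the diversity computation concrete, the following minimal NumPy sketch computes the pairwise Q statistic from the binary correctness vectors and averages the pairwise values into a per-member diversity score, following the definitions above. It is an illustrative re-implementation rather than the authors' code; the function and variable names are ours.</ns0:p>
<ns0:p>
import numpy as np

def q_statistic(y_i, y_k):
    # y_i, y_k: 0/1 correctness vectors (1 = sample classified correctly)
    n11 = int(np.sum(y_i * y_k))
    n00 = int(np.sum((1 - y_i) * (1 - y_k)))
    n01 = int(np.sum((1 - y_i) * y_k))
    n10 = int(np.sum(y_i * (1 - y_k)))
    denom = n11 * n00 + n01 * n10
    # convention: treat an undefined Q (zero denominator) as 0, i.e. independence
    return 0.0 if denom == 0 else (n11 * n00 - n01 * n10) / denom

def member_diversities(correctness):
    # correctness: (m, N) array, row i = correctness vector of ensemble member i
    m = correctness.shape[0]
    return np.array([
        np.mean([q_statistic(correctness[i], correctness[k])
                 for k in range(m) if k != i])
        for i in range(m)
    ])
</ns0:p>
<ns0:p>Low or negative average Q values indicate a member whose errors differ from those of the rest of the ensemble, i.e. a more diverse member.</ns0:p>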
<ns0:div><ns0:head>6/22</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_2'>C 1 C 2 ... C c</ns0:formula><ns0:p>Classifier score</ns0:p><ns0:formula xml:id='formula_3'>e 1 w 1,1 w 1,2 ... w 1,c ∑ c j=1 w 1, j e 2 w 2,1 w 2,2 ... w 2,c ∑ c j=1 w 2, j ... ... ... ... ... ... e m w m,1 w m,2 ... w m,c ∑ c j=1 w m, j</ns0:formula><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Weight matrix.</ns0:p><ns0:p>The proposed ensemble model belongs to the category of passive chunk-based adaptive models. In the presented approach, the size of the model dynamically changes according to the global model performance.</ns0:p><ns0:p>The minimum size of the model is set by the parameter k, and the maximum size is set by the parameter l.</ns0:p><ns0:p>Each base classifier in the ensemble is assigned with a weight vector for each target class. If a new target class appears in the chunk used in training, the new target will be added to the weight vector for each base classifier. Initially, the ensemble consists of a randomly initialized set from a list of defined base algorithms (Hoeffding Tree or Naive Bayes). Other experts can be added in the following periods (interval representing the chunk of arriving data, where base learners and their weights are modified) until the minimum size of the model is reached, i.e. either the model size is smaller than the defined minimum size or the particular member weight falls under the defined threshold.</ns0:p><ns0:p>In each iteration, the experts are used to predict the target class of incoming samples in the processed chunk (Lines 3-5). If a prediction of an expert is correct, the weights of the particular expert and target class are multiplied by a coefficient β (Line 7). In the case period p has occurred, Q statistic diversity is Manuscript to be reviewed</ns0:p><ns0:p>Computer Science calculated for each pair of experts in the ensemble, and the weights of each expert is modified using the particular expert's diversity (Line 12 and 16). This mechanism enables the construction of more robust ensembles, by preferring the diverse base models. The weights of base classifiers are also reduced by the exponential value of their lifetime in the ensemble (Line 15). In this case, the lifetime of the expert represents the number of periods since its addition to the ensemble. The exponential function is used, so the experts are influenced minimally during their initial periods in the ensemble but become more significant for the long-lasting members. This implementation works as a gradual forgetting mechanism of the ensemble model, as the weakest experts are gradually removed from the model and replaced by the new ones.</ns0:p><ns0:p>After the update, the weights are normalized for each target class (Line 24). Afterwards, if the maximum size of the model is reached and the global prediction is incorrect, the weakest expert is removed from the ensemble (Line 27). A new random expert can then be added to the ensemble (Lines 30-31). In each period, all experts where the sum of weights is lower than defined threshold θ are removed from the ensemble. In the end, each sample is weighted by a random uniform value m times, where m represents the actual size of ensemble (Line 41 </ns0:p></ns0:div>
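<ns0:p>As a rough illustration of the per-period update described above (reward by β for correct predictions, a diversity-based adjustment, lifetime-based fading, and per-class normalisation), the sketch below shows where each factor enters. The exact functional forms of the diversity and lifetime adjustments are simplified assumptions for illustration only and do not reproduce Algorithm 1 exactly; ensemble growth and pruning are omitted, and all names are ours.</ns0:p>
<ns0:p>
import numpy as np

def update_weight_matrix(W, correct_per_class, div, lifetime,
                         beta=3.0, alpha=0.002):
    # W                 -- (m, c) weight matrix, W[i, j] = weight of expert i for class j
    # correct_per_class -- (m, c) boolean, True if expert i predicted class j correctly
    # div               -- (m,) averaged pairwise Q-based diversity per expert
    # lifetime          -- (m,) number of periods each expert has spent in the ensemble
    W = W * np.where(correct_per_class, beta, 1.0)        # reward correct predictions
    W = W * (1.0 + np.abs(div))[:, None]                  # diversity-based adjustment (illustrative form)
    W = W * np.exp(-alpha * lifetime ** 2)[:, None]       # fading that grows with lifetime (illustrative form)
    return W / W.sum(axis=0, keepdims=True)               # normalise the weights per class
</ns0:p>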
<ns0:div><ns0:head>Real datasets</ns0:head><ns0:p>In our study, we used 12 real datasets, including the frequently used ELEC dataset <ns0:ref type='bibr' target='#b35'>(Harries et al. (1999)</ns0:ref>), </ns0:p></ns0:div>
<ns0:div><ns0:head>Synthetic datasets</ns0:head><ns0:p>Besides the real-world datasets, we used synthetic data streams containing generators with various types of drifts. In most cases (except LED and STAGGER data), we used streams of 1,000,000 samples, with three simulated drifts. We used the Agrawal generator <ns0:ref type='bibr' target='#b1'>(Agrawal et al. (1993)</ns0:ref>) and SEA (Nick <ns0:ref type='bibr' target='#b59'>Street and Kim (2001)</ns0:ref>) generators with abrupt and gradual drifts, RBF and Waveform streams without any drift and with simulated gradual drift, a Stagger Concepts Generator <ns0:ref type='bibr' target='#b69'>(Schlimmer and Granger (1986)</ns0:ref>) with abrupt drift, an LED <ns0:ref type='bibr' target='#b34'>(Gordon et al. (1984)</ns0:ref>) stream with gradual drift, and a Mixed stream with an abrupt drift with balanced and imbalanced target attributes.</ns0:p></ns0:div>
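<ns0:p>For readers who want to reproduce similar streams, generators of this kind are available in scikit-multiflow; the snippet below builds an Agrawal-based stream with one simulated abrupt drift. The generator parameters and drift position are illustrative and are not claimed to match the exact configurations used in the experiments (class names follow scikit-multiflow 0.5).</ns0:p>
<ns0:p>
from skmultiflow.data import AGRAWALGenerator, ConceptDriftStream

# Base concept and a drifted concept using a different classification function
base = AGRAWALGenerator(classification_function=0, random_state=1)
drifted = AGRAWALGenerator(classification_function=2, random_state=1)

# Abrupt drift (width=1) placed at sample 250,000 of the stream
stream = ConceptDriftStream(stream=base, drift_stream=drifted,
                            position=250_000, width=1, random_state=1)

X, y = stream.next_sample(1000)  # draw one chunk of 1,000 samples
# older scikit-multiflow versions may additionally require stream.prepare_for_use()
</ns0:p>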
<ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>The purpose of the experiments was to determine the performance of the DDCW model in a series of different tests. We used Python implementations of all models, and the experiments were performed using the Scikit-multiflow framework <ns0:ref type='bibr' target='#b55'>(Montiel et al. (2018)</ns0:ref>). All experiments were performed on a virtual server equipped with 6 CPU cores and 8 GB RAM.</ns0:p><ns0:p>During the first series of experiments, we aimed to examine the impact of different settings of the chunk-size parameter, which determines how often the model is updated and the diversity is calculated. The model was tested with varying chunk sizes, with values set to 100, 200, 300, 400, 500, and 1000 samples on all considered datasets. The primary motivation of these experiments was to find the most suitable chunk-size parameter for the different datasets, which was then used to compare the DDCW with the other ensemble models. To evaluate the models, we used prequential evaluation (or interleaved test-then-train evaluation), in which testing is performed on new data before they are used to train the model.</ns0:p><ns0:p>In the second set of experiments, the main goal was to compare the performance of the DDCW model with other streaming ensemble-based classifiers.</ns0:p></ns0:div>
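<ns0:p>The prequential protocol mentioned above can be written as a simple test-then-train loop over chunks; scikit-multiflow's EvaluatePrequential automates the same procedure. The helper below is a generic sketch (the model can be any stream classifier exposing partial_fit and predict); the names and default values are ours.</ns0:p>
<ns0:p>
from sklearn.metrics import accuracy_score

def prequential_evaluation(stream, model, chunk_size=1000, pretrain_size=200):
    # Pre-train on a small initial batch so the first predictions are meaningful
    X, y = stream.next_sample(pretrain_size)
    model.partial_fit(X, y, classes=stream.target_values)

    chunk_accuracies = []
    while stream.has_more_samples():
        X, y = stream.next_sample(chunk_size)
        y_pred = model.predict(X)                    # test on the new chunk first ...
        chunk_accuracies.append(accuracy_score(y, y_pred))
        model.partial_fit(X, y)                      # ... then train on the same chunk
    return chunk_accuracies
</ns0:p>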
<ns0:div><ns0:head>Performance with the different chunk sizes</ns0:head><ns0:p>In this experiment, we explored the influence of the chunk window on the classifier's performance on different datasets. The main goal was to find the optimal chunk window size for a particular dataset. We set different chunk sizes and measured the model performance using selected metrics in defined periods (e.g., every 100 samples). We computed the average model performance on the entire dataset using the above-mentioned classification metrics. A comparison of the DDCW classifier accuracy and F1 with different sizes of the chunks is shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>.</ns0:p><ns0:p>Besides setting the chunk size, we fine-tuned the model hyper-parameters. Our main objective was to estimate the suitable combinations of the parameters α, β , and θ . As the experiments were computationally intensive, most of the models were trained using the default hyper-parameter settings, with particular values set to α = 0.002, β = 3, and θ = 0.02. Regarding the model behavior with different hyper-parameter settings, α influences the lifetime of an expert in the ensemble and the speed of degradation of the expert score with increasing lifetime. Increasing values led usually to a more dynamic model, able to adapt rapidly, while lower values led to a more stable composition of the model.</ns0:p><ns0:p>β influences the preference of the experts, which classified the samples correctly. Higher values of this parameter can suppress the poor-performing experts and raise the probability of updating them in the following iteration. θ serves as a threshold for the expert update. Lower values usually lead to more weak experts in the ensemble, a marginal contribution to the performance , but raise the model complexity significantly. Higher values force the updating of weak experts. However, some of them may be missing later on, after drift occurrence and the reappearance of previous concepts.</ns0:p><ns0:p>The results proved, that the chunk size does have an impact on the model performance, and there are mostly minor differences in the performance metrics with different chunk size parameter setting. Although accuracy is not affected much, F1 metric improves significantly with larger chunk sizes, especially on the BNG Zoo and BNG Lymph datasets. In general, we can observe, that on the larger data (with more than 100,000 samples in the stream), larger windows resulted in slightly better performance. On the other hand, smaller chunk sizes enable the model to react more quickly to concept drift. In some cases, the accuracy metric proved to be not very useful, as the target class is strongly unbalanced or multi-class. It is evident mostly on the KDD99 or BNG Lypmh datasets, where high accuracy values are caused mainly by the classification into the majority class, while other minor classes do not influence this metric very much.</ns0:p><ns0:p>A much better perspective on the actual model performance could be given by F1 measure.</ns0:p><ns0:p>The experiments summarized averaged results that the models achieved on the entire stream, but it is also important to explore how the performance progressed during the stream processing and observe their reactions to concept drift.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_10'>4</ns0:ref> visualizes the accuracy achieved by the DDCW models on the real datasets with both chunk sizes. 
The performance of the method with both settings is overall similar; however, we can see a difference in cases when a change (possible drift) occurs. On the KDD99 dataset, there is a significant decrease in the accuracy of the model after around 52,000 samples. Shorter chunk windows resulted in a much earlier reaction to the change, without any significant decrease in performance. On the Elec and Covtype datasets, the earlier reactions are also present and visible, resulting in higher performance metrics. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>On the synthetic streams, the larger chunk windows meant that the model was able to use more data to update the ensemble, which led to improved performance.</ns0:p></ns0:div><ns0:div><ns0:head>Comparison with other ensemble models</ns0:head><ns0:p>In these experiments, we compared the DDCW model with the selected ensemble models. The comparison included the AWE, DWM, Online Boosting, OzaBagging, ARF, and KUE models. Each of the ensemble models was tested with different base learners: Naive Bayes, Hoeffding Tree, and k-NN. As in the previous set of experiments, we used the accuracy, precision, recall, and F1 metrics, computed in the prequential fashion, for the comparison.</ns0:p><ns0:p>For the DDCW model, we included a variant with a combination of Naive Bayes and Hoeffding Trees as base learners, as well as a tree-based homogeneous ensemble alone.</ns0:p></ns0:div>
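<ns0:p>For orientation, the compared ensembles can be instantiated with different base learners roughly as follows. The class names follow scikit-multiflow 0.5 and may differ in other versions; the hyper-parameters shown are illustrative defaults, not the exact experimental settings, and KUE is omitted because only its Java/MOA implementation was available.</ns0:p>
<ns0:p>
from skmultiflow.bayes import NaiveBayes
from skmultiflow.trees import HoeffdingTreeClassifier
from skmultiflow.meta import (AccuracyWeightedEnsembleClassifier,
                              AdaptiveRandomForestClassifier,
                              DynamicWeightedMajorityClassifier,
                              OnlineBoostingClassifier,
                              OzaBaggingClassifier)

models = {
    'DWM-NB': DynamicWeightedMajorityClassifier(base_estimator=NaiveBayes()),
    'DWM-HT': DynamicWeightedMajorityClassifier(base_estimator=HoeffdingTreeClassifier()),
    'AWE-HT': AccuracyWeightedEnsembleClassifier(base_estimator=HoeffdingTreeClassifier()),
    'Oza-HT': OzaBaggingClassifier(base_estimator=HoeffdingTreeClassifier(), n_estimators=10),
    'OB-NB':  OnlineBoostingClassifier(base_estimator=NaiveBayes(), n_estimators=10),
    'ARF':    AdaptiveRandomForestClassifier(n_estimators=20),
}
# Each model can then be evaluated with the prequential loop shown earlier.
</ns0:p>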
<ns0:div><ns0:head>Performance comparison</ns0:head><ns0:p>To summarize the experiments, Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> compares the performance of all evaluated models on the real datasets and Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> provides a similar comparison on the synthetic data streams. As in the previous experiments, the tables consist of overall averaged results that the models achieved on the entire stream. While most studies focus only on a comparison of accuracy, we decided to analyze other classification metrics as well. Especially in the case of multi-class or heavily imbalanced data (e.g., KDD99), accuracy might not be the best choice to truly evaluate the performance of the model; therefore, we also report the F1 metric. Please note that we were unable to properly obtain F1 values from the KUE model on some of the datasets.</ns0:p><ns0:p>The DDCW model proved to be suitable for data streams with different concept drifts and either binary or multi-class classification tasks. When considering the composition of the DDCW ensemble, the fact that the model relies on different base learners enables it to utilize the strengths of particular learners. The dynamic composition of the ensemble enables it to adapt to the particular stream by preferring a base learner that is more suitable for the given data. In general, the DDCW performs very well on the generated streams, gaining at least competitive results compared to the other models on the real-world datasets. The method appears to struggle more with some of the imbalanced datasets, as is apparent from the F1 results achieved on the KDD 99 or Airlines dataset. During this experiment, we also used two different DDCW method setups to compare the effect of the base learner selection.</ns0:p><ns0:p>Although the Online Boosting or OzaBagging models perform with higher accuracy on some of the datasets, their computational intensiveness and longer training and scoring times may be a reason to consider a simpler model. Similarly, the ARF and KUE models provide superior performance on the majority of the datasets. Compared to those state-of-the-art methods, the DDCW method produced mostly comparable results but needed less training time and lower memory requirements (especially on the larger synthetic data streams) than the ARF method. In this case, the DDCW ensemble may offer a reasonable alternative, providing a well-performing model while maintaining reasonable run-time requirements.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper, we propose a heterogeneous adaptive ensemble classifier with a dynamic weighting scheme based on the diversity of its base classifiers. The algorithm was evaluated on a wide range of datasets, including real and synthetic ones, with different types of concept drift. During the experiments, we compared the proposed method with several other adaptive ensemble methods. The results proved that the proposed model is able to adapt to drift occurrence relatively fast and achieves at least comparable performance to the existing approaches on both real and synthetically generated datasets. While still performing well, the model also manages to maintain reasonable resource requirements in terms of memory consumption and the time needed to score unknown samples.</ns0:p><ns0:p>An interesting direction for future work is the integration of adaptation mechanisms with semantic models of the application domain. Such a knowledge model could be used to extract new expert features not previously contained in the data or to extract interesting trends present in the data stream. These extensions could represent expert knowledge and could thus be leveraged to detect frequent patterns leading to concept drift while reducing the time normally needed to adapt the models with that knowledge.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>2015); Krawczyk et al. (2017)): • Active/Passive-Active methods usually utilize drift detection methods to detect the drift and to trigger the model update. Passive methods periodically update the model, without any knowledge of the drift occurrence. • Chunk-Based/Online-Chunk-based methods process the streaming data in batches (each batch consists of a specified fixed number of stream elements). Online methods process the stream elements separately when they appear. Ensemble models represent a popular solution for the classification of drifting data streams. An ensemble classification model is composed of a collection of classifiers (also called base learners, 4/22 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021) Manuscript to be reviewed Computer Science ensemble members, or experts) whose individual decisions are combined (most often by voting) to classify the new samples</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>adaptation to concept drift, it uses concept drift detection. If a concept drift occurs, the entire model is discarded and replaced by a new model. Another well-known model is the OzaBagging (Oza (2005)) ensemble. Unlike Bagging for static data, OzaBagging does not use random sampling from the training data, but each of the samples is trained k times, which leads to a Poisson distribution.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>2016); Frías-Blanco et al. (2016); van Rijn et al. (2018); Idrees et al. (2020)), and many of them proved suitable to address issues such as class imbalance (Large et al. (2017); Fernández et al. (2018); Ren et al. (2018); Wang et al. (2018); Ghaderi Zefrehi and Altınc ¸ay (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Overall scheme of the proposed ensemble model.</ns0:figDesc><ns0:graphic coords='8,183.09,63.78,330.85,143.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>378the KDD 99 challenge dataset<ns0:ref type='bibr' target='#b72'>(Tavallaee et al. (2009)</ns0:ref>), Covtype<ns0:ref type='bibr' target='#b9'>(Blackard (1998)</ns0:ref>), the Airlines dataset379 introduced by Ikonomovska 1 , and data streams from the OpenML platform Bischl et al. (2019) generated 380 from a real-world dataset using a Bayesian Network Generator (BNG) van Rijn et al. (2014). We included 381 a wide range of datasets to evaluate the performance of the DDCW model on datasets with both binary 382 and multi-class targets or with balanced and imbalanced classes, especially some of them, such as KDD99 383 and Shuttle are heavily imbalanced. To examine the imbalance degree of the datasets, we included the 384 class ratios in the documentation on the GitHub repository 2 . As it is difficult to determine the type of a 385 real drift contained in such data, we tried to estimate and visualize possible drift occurrences. Multiple 386 techniques for concept drift visualization exist<ns0:ref type='bibr' target='#b65'>(Pratt and Tschapek (2003)</ns0:ref>). We used visualization based 387 on feature importance (using the Gini impurity) and the respective changes within the datasets, as they388 may signalize changes in concepts in the data. Based on the work described in (Cassidy and Deviney 389 (2015)), we used feature importance scores derived from the Online Random Forest model trained on 390 the datasets. Such an approach can help to visualize the so-called feature drift, which occurs when 391 certain features stop (or start) being relevant to the learning task. Fig. 3 shows such visualizations for the 392 real-world datasets used in the experiments. The visualization depicts how the importance of the features 393 1 http: //kt.ijs.si/elena ikonomovska/data.html 2 https://github.com/Miso-K/DDCW/blob/master/class ratios.txt 9/22 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021) Manuscript to be reviewed Computer Science changes over time in the data. x-axis represents number of samples, y-axis feature indices and the size of the dots correspond to a feature importance in the given chunk.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Feature importance progress in the real-world datasets. (A) ELEC, (B) Airlines, (C) KDD 99, (D) Covtype, (E) Shuttle, (F) Connect-4. [x axis] number of samples, [y axis] feature indices, the size of the dots correspond to a feature importance in the given chunk.</ns0:figDesc><ns0:graphic coords='11,143.80,100.29,409.44,464.04' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>the second set of the experiments, the main goal was to compare the performance of the DDCW model with the selected other streaming ensemble-based classifiers. We considered multiple ensemble models: DWM, AWE, Online Boosting, OzaBagging, and the currently best performing streaming ensembles such as ARF and KUE. To analyze the performance of these models, standard classification metrics were used (accuracy, precision, recall, and F1). Besides the comparison of the model performance, we measured the metrics related to resource consumption, such as total time required for training, scoring time of an instance, and memory requirements of the models. During these experiments, we set the chunk size to 1000 samples (although we included DDCW with a chunk size of 100 samples for comparison) and set the model's hyper-parameters to create similar-sized ensemble models (min. 5 and max. 20 members in the ensemble).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig. 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig.5depicts the DDCW model performance on the synthetic datasets with both chunk sizes. In the</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Performance of the DDCW model on the real datasets. (A) ELEC, (B) KDD 99, (C) Airlines, (D) Covtype, (E) Shuttle, (F) Powersupply, (G) Connect-4, (H) BNG Bridges, (I) BNG Bridges1vsAll, (J) BNG Hepatitis, (K) BNG Zoo, (L) BNG Lymph. [y axis] accuracy, [x axis] number of samples.</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,413.56,517.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Performance of the DDCW model on the synthetic datasets. (A) AGR a, (B) AGR g, (C) SEA a, (D) SEA g, (E) Stagged, (F) LED, (G) Mixed-balanced, (H) Mixed-imbalanced, (I) RBF, (J) RBF Drift, (K) Waveform, (L) Waveform drift. [y axis] accuracy, [x axis] number of samples.</ns0:figDesc><ns0:graphic coords='15,141.73,91.62,413.56,517.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>different DDCW method setups to compare the effect of the base learner selection. We used DDCW with only Hoeffding trees as a base learner and DDCW with a combination of Naive Bayes and Hoeffding trees. Although the homogeneous ensemble mostly performed slightly better, the heterogeneous one was usually faster to train and score and maintained a similar performance, which was a result of the inclusion of fast Naive Bayes ensemble members. In a similar fashion, we experimented with an integration of k-NN into the DDCW model, but as expected, k-NN base learners raised the resource requirements of the model and failed to provide a sufficient performance boost, so we did not include k-NN base learners in further experiments.Performance comparison showed that the DDCW method can produce results that are comparable to16/22 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021)Manuscript to be reviewedComputer Sciencethe related ensemble models on both, real and synthetic streams. In many cases, it is able to outperform existing algorithms (e.g., AWE, DWM, OZA, and OB) in both of the explored metrics. Current state-ofthe-art methods such as KUE and ARF usually produce slightly better results, but the DDCW method showed a fairly competitive performance, surpassing on several datasets one of those methods in both, accuracy and F1 scores. However, the evaluation metrics represent only one aspect of the adaptive models' performance. During the experiments, we tried to evaluate another aspect of the studied models that may influence the run time of the models during deployment in real-world scenarios. We focused mostly on monitoring the model performance in terms of their demand on resources and resource consumption during the process. During the experiments, we collected data about the overall run-time aspects of the model. The following section compares the models from the perspective of training/scoring times and memory requirements.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Training time and memory usageWe analyzed the training and scoring times and the memory consumption during the training process to provide a different view of the models' performance, comparing performance metrics with resource consumption requirements. We measured the overall training and scoring times on the entire data by summing up all partial re-training and scoring times over the stream. Table6summarizes the results of all evaluated models. The table compares the total training time consumed in the training and re-training of the models, the total scoring time of all processed instances, and the average size of the model in memory during the entire stream processing. The results represent an averaged value of the total of five separate runs of each experiment. It is important to note that the KUE model was not included in this comparison. We used Python implementations of all evaluated models and the scikit-multiflow library during the experiments. The KUE model was available only in its Java implementation using the MOA library, therefore using different underlying technologies could influence the objective comparison of resource consumption. At the same time, it is essential to note that the Java implementation of the KUE model was significantly effective than all Python-based models, mostly in training times, which were remarkably shorter.The choice of a base classifier heavily influences the overall run-time requirements of all models.Most apparently, the choice of k-NNADWIN as a base learner for OnlineBagging and OzaBagging methods leads to a massive increase of memory consumption and data processing times. Nearestneighbour classifiers require to store the training data in memory, which could lead to increased memory consumption and therefore increased training times. On the other hand, ensembles which consist of Naive Bayes and Hoeffding Tree classifiers as the base learners are much faster to train and require significantly lower memory during the run-time. However, Online Boosting and OzaBagging methods are much more sensitive to the number of features of the processed data. It can be observed on KDD99 and Covtype datasets, where these relatively faster models, required significantly longer times to either train or score the instances. DDCW ensemble training time and memory consumption requirements reflect the fact that the model consists of a mixture of Hoeffding Tree and Naive Bayes classifiers. When experimenting with the homogeneous DDCW ensemble, the performance results were better on many of the datasets. On the other hand, heterogeneous DDCW model provided a small decrease of the performance, but in most of the cases, inclusion of a Naive Bayes base learner led to a shorter training times and reduced memory usage (most significantly on e.g., Waveform data, where the training time was reduced to a half of the total training time of the homogeneous DDCW ensemble). When taking into consideration both aspects, DDCW model can, in some cases present a compromise between performance and resource requirements.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>consumption and time needed to score the unknown samples.The proposed approach is also dependent on chunk size parameter setting, as the performance of the model on certain datasets change significantly with different chunk sizes. Further research with adaptive heterogeneous ensemble models may lead to an exploration of modifications to weighting schemes that improve performance in multi-class classification problems or classifications of heavy imbalanced data. Another interesting field for future work is the integration of adaptation mechanisms with semantic models of the application 18/22 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021) Manuscript to be reviewed Computer Science domain. A domain knowledge model could provide a description of the data, the essential domain concepts, and their relationships. Such a model could also be used to improve classification performance by capturing expert domain knowledge and utilizing it in the process of classification of unknown samples.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,141.73,111.68,413.59,297.78' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>). Each expert is than trained with a new set of incoming samples, with individual weights from the last chunk of data (Line 42). After training, the global predictions of actual samples are retrieved, and the algorithm then continues back to Line 3. Datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithm 1. Diversified Dynamic Class Weighted ensemble</ns0:cell></ns0:row></ns0:table><ns0:note>Procedure: DDCW({X, Y }, p, k, l, α, β , θ ) Input: Data and labels {x,y}, chunk size p, min experts k, max experts l, fading factor α, multiplier β , threshold θ Output: Global predictions G 1: Experts ← create random experts(k); 2: initialize class weights w i, j 3: for s = 0, ..., n do 4: for i = 1, ..., num experts(Experts) do and synthetically generated datasets. We tried to include multiple drifting datasets that contain different 375 types of concept drift. Datasets used in the experiments are summarized in Table 2. used in the experiments. [Dataset type] R: real, S: synthetic. [Drift type] A: abrupt, G: gradual, -: none, ?: unknown.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Performance of the DDCW model with different chunk sizes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Chunk size</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell>300</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>1000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='6'>accuracy accuracy accuracy accuracy accuracy accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>ELEC</ns0:cell><ns0:cell>0.849</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.831</ns0:cell><ns0:cell>0.847</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.810</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD99</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.996</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.991</ns0:cell></ns0:row><ns0:row><ns0:cell>AIRL</ns0:cell><ns0:cell>0.636</ns0:cell><ns0:cell>0.641</ns0:cell><ns0:cell>0.644</ns0:cell><ns0:cell>0.645</ns0:cell><ns0:cell>0.645</ns0:cell><ns0:cell>0.649</ns0:cell></ns0:row><ns0:row><ns0:cell>COVT</ns0:cell><ns0:cell>0.849</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.831</ns0:cell><ns0:cell>0.847</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.810</ns0:cell></ns0:row><ns0:row><ns0:cell>SHUTTLE</ns0:cell><ns0:cell>0.953</ns0:cell><ns0:cell>0.955</ns0:cell><ns0:cell>0.965</ns0:cell><ns0:cell>0.975</ns0:cell><ns0:cell>0.953</ns0:cell><ns0:cell>0.992</ns0:cell></ns0:row><ns0:row><ns0:cell>POWERSUPLY</ns0:cell><ns0:cell>0.156</ns0:cell><ns0:cell>0.155</ns0:cell><ns0:cell>0.156</ns0:cell><ns0:cell>0.153</ns0:cell><ns0:cell>0.155</ns0:cell><ns0:cell>0.157</ns0:cell></ns0:row><ns0:row><ns0:cell>CONNECT4</ns0:cell><ns0:cell>0.686</ns0:cell><ns0:cell>0.693</ns0:cell><ns0:cell>0.687</ns0:cell><ns0:cell>0.669</ns0:cell><ns0:cell>0.674</ns0:cell><ns0:cell>0.705</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES</ns0:cell><ns0:cell>0.689</ns0:cell><ns0:cell>0.703</ns0:cell><ns0:cell>0.711</ns0:cell><ns0:cell>0.715</ns0:cell><ns0:cell>0.718</ns0:cell><ns0:cell>0.725</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES1vsAll</ns0:cell><ns0:cell>0.962</ns0:cell><ns0:cell>0.963</ns0:cell><ns0:cell>0.964</ns0:cell><ns0:cell>0.965</ns0:cell><ns0:cell>0.965</ns0:cell><ns0:cell>0.966</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG HEPATITIS</ns0:cell><ns0:cell>0.858</ns0:cell><ns0:cell>0.867</ns0:cell><ns0:cell>0.870</ns0:cell><ns0:cell>0.873</ns0:cell><ns0:cell>0.874</ns0:cell><ns0:cell>0.897</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG ZOO</ns0:cell><ns0:cell>0.876</ns0:cell><ns0:cell>0.892</ns0:cell><ns0:cell>0.900</ns0:cell><ns0:cell>0.905</ns0:cell><ns0:cell>0.908</ns0:cell><ns0:cell>0.921</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG LYMPH</ns0:cell><ns0:cell>0.796</ns0:cell><ns0:cell>0.811</ns0:cell><ns0:cell>0.819</ns0:cell><ns0:cell>0.824</ns0:cell><ns0:cell>0.830</ns0:cell><ns0:cell>0.879</ns0:cell></ns0:row><ns0:row><ns0:cell>AGR a</ns0:cell><ns0:cell>0.857</ns0:cell><ns0:cell>0.871</ns0:cell><ns0:cell>0.874</ns0:cell><ns0:cell>0.876</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.879</ns0:cell></ns0:row><ns0:row><ns0:cell>AGR g</ns0:cell><ns0:cell>0.827</ns0:cell><ns0:cell>0.847</ns0:cell><ns0:cell>0.852</ns0:cell><ns0:cell>0.856</ns0:cell><ns0:cell>0.859</ns0:cell><ns0:cell>0.874</ns0:cell></ns0:row><ns0:row><ns0:cell>SEA 
a</ns0:cell><ns0:cell>0.873</ns0:cell><ns0:cell>0.876</ns0:cell><ns0:cell>0.877</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.881</ns0:cell><ns0:cell>0.888</ns0:cell></ns0:row><ns0:row><ns0:cell>SEA g</ns0:cell><ns0:cell>0.868</ns0:cell><ns0:cell>0.874</ns0:cell><ns0:cell>0.873</ns0:cell><ns0:cell>0.874</ns0:cell><ns0:cell>0.877</ns0:cell><ns0:cell>0.884</ns0:cell></ns0:row><ns0:row><ns0:cell>STAGGER</ns0:cell><ns0:cell>0.946</ns0:cell><ns0:cell>0.946</ns0:cell><ns0:cell>0.933</ns0:cell><ns0:cell>0.938</ns0:cell><ns0:cell>0.912</ns0:cell><ns0:cell>0.923</ns0:cell></ns0:row><ns0:row><ns0:cell>LED</ns0:cell><ns0:cell>0.892</ns0:cell><ns0:cell>0.888</ns0:cell><ns0:cell>0.882</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.860</ns0:cell></ns0:row><ns0:row><ns0:cell>MIXED BALANCED</ns0:cell><ns0:cell>0.927</ns0:cell><ns0:cell>0.934</ns0:cell><ns0:cell>0.935</ns0:cell><ns0:cell>0.939</ns0:cell><ns0:cell>0.943</ns0:cell><ns0:cell>0.964</ns0:cell></ns0:row><ns0:row><ns0:cell>MIXED IMBALANCED</ns0:cell><ns0:cell>0.924</ns0:cell><ns0:cell>0.930</ns0:cell><ns0:cell>0.932</ns0:cell><ns0:cell>0.936</ns0:cell><ns0:cell>0.939</ns0:cell><ns0:cell>0.964</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF</ns0:cell><ns0:cell>0.855</ns0:cell><ns0:cell>0.872</ns0:cell><ns0:cell>0.877</ns0:cell><ns0:cell>0.879</ns0:cell><ns0:cell>0.881</ns0:cell><ns0:cell>0.882</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF DRIFT</ns0:cell><ns0:cell>0.546</ns0:cell><ns0:cell>0.562</ns0:cell><ns0:cell>0.573</ns0:cell><ns0:cell>0.585</ns0:cell><ns0:cell>0.592</ns0:cell><ns0:cell>0.601</ns0:cell></ns0:row><ns0:row><ns0:cell>WAVEFORM</ns0:cell><ns0:cell>0.819</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.830</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.835</ns0:cell><ns0:cell>0.837</ns0:cell></ns0:row><ns0:row><ns0:cell>WAVEFORM DRIFT</ns0:cell><ns0:cell>0.820</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.829</ns0:cell><ns0:cell>0.834</ns0:cell><ns0:cell>0.835</ns0:cell><ns0:cell>0.836</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>F1</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>ELEC</ns0:cell><ns0:cell>0.815</ns0:cell><ns0:cell>0.807</ns0:cell><ns0:cell>0.791</ns0:cell><ns0:cell>0.811</ns0:cell><ns0:cell>0.791</ns0:cell><ns0:cell>0.760</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD99</ns0:cell><ns0:cell>0.570</ns0:cell><ns0:cell>0.591</ns0:cell><ns0:cell>0.602</ns0:cell><ns0:cell>0.581</ns0:cell><ns0:cell>0.582</ns0:cell><ns0:cell>0.596</ns0:cell></ns0:row><ns0:row><ns0:cell>AIRL</ns0:cell><ns0:cell>0.531</ns0:cell><ns0:cell>0.538</ns0:cell><ns0:cell>0.533</ns0:cell><ns0:cell>0.537</ns0:cell><ns0:cell>0.529</ns0:cell><ns0:cell>0.535</ns0:cell></ns0:row><ns0:row><ns0:cell>COVT</ns0:cell><ns0:cell>0.815</ns0:cell><ns0:cell>0.807</ns0:cell><ns0:cell>0.791</ns0:cell><ns0:cell>0.811</ns0:cell><ns0:cell>0.791</ns0:cell><ns0:cell>0.760</ns0:cell></ns0:row><ns0:row><ns0:cell>SHUTTLE</ns0:cell><ns0:cell>0.506</ns0:cell><ns0:cell>0.510</ns0:cell><ns0:cell>0.536</ns0:cell><ns0:cell>0.606</ns0:cell><ns0:cell>0.672</ns0:cell><ns0:cell>0.702</ns0:cell></ns0:row><ns0:row><ns0:cell>POWERSUPLY</ns0:cell><ns0:cell>0.109</ns0:cell><ns0:cell>0.110</ns0:cell><ns0:cell>0.109</ns0:cell><ns0:cell>0.105</ns0:cell><ns0:cell>0.108</ns0:cell><ns0:cell>0.110</ns0:cell></ns0:row><ns0:row><ns0:cell>CONNECT4</ns0:cell><ns0:cell>0.483</ns0:cell><ns0:cell>0.486</ns0:cell><ns0:cell>0.470</ns0:cell><ns0:cell>0.466</ns0:cell><ns0:cell>0.481</ns0:cell><ns0:cell>0.464</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES</ns0:cell><ns0:cell>0.588</ns0:cell><ns0:cell>0.609</ns0:cell><ns0:cell>0.620</ns0:cell><ns0:cell>0.627</ns0:cell><ns0:cell>0.632</ns0:cell><ns0:cell>0.635</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES1vsAll</ns0:cell><ns0:cell>0.866</ns0:cell><ns0:cell>0.872</ns0:cell><ns0:cell>0.873</ns0:cell><ns0:cell>0.877</ns0:cell><ns0:cell>0.877</ns0:cell><ns0:cell>0.883</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG HEPATITIS</ns0:cell><ns0:cell>0.907</ns0:cell><ns0:cell>0.913</ns0:cell><ns0:cell>0.915</ns0:cell><ns0:cell>0.918</ns0:cell><ns0:cell>0.918</ns0:cell><ns0:cell>0.935</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG ZOO</ns0:cell><ns0:cell>0.784</ns0:cell><ns0:cell>0.806</ns0:cell><ns0:cell>0.818</ns0:cell><ns0:cell>0.825</ns0:cell><ns0:cell>0.831</ns0:cell><ns0:cell>0.853</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG LYMPH</ns0:cell><ns0:cell>0.548</ns0:cell><ns0:cell>0.577</ns0:cell><ns0:cell>0.595</ns0:cell><ns0:cell>0.607</ns0:cell><ns0:cell>0.618</ns0:cell><ns0:cell>0.752</ns0:cell></ns0:row><ns0:row><ns0:cell>AGR a</ns0:cell><ns0:cell>0.842</ns0:cell><ns0:cell>0.858</ns0:cell><ns0:cell>0.864</ns0:cell><ns0:cell>0.868</ns0:cell><ns0:cell>0.868</ns0:cell><ns0:cell>0.869</ns0:cell></ns0:row><ns0:row><ns0:cell>AGR g</ns0:cell><ns0:cell>0.808</ns0:cell><ns0:cell>0.830</ns0:cell><ns0:cell>0.835</ns0:cell><ns0:cell>0.840</ns0:cell><ns0:cell>0.845</ns0:cell><ns0:cell>0.874</ns0:cell></ns0:row><ns0:row><ns0:cell>SEA a</ns0:cell><ns0:cell>0.895</ns0:cell><ns0:cell>0.898</ns0:cell><ns0:cell>0.899</ns0:cell><ns0:cell>0.900</ns0:cell><ns0:cell>0.902</ns0:cell><ns0:cell>0.909</ns0:cell></ns0:row><ns0:row><ns0:cell>SEA 
g</ns0:cell><ns0:cell>0.892</ns0:cell><ns0:cell>0.896</ns0:cell><ns0:cell>0.896</ns0:cell><ns0:cell>0.897</ns0:cell><ns0:cell>0.899</ns0:cell><ns0:cell>0.906</ns0:cell></ns0:row><ns0:row><ns0:cell>STAGGER</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.937</ns0:cell><ns0:cell>0.941</ns0:cell><ns0:cell>0.919</ns0:cell><ns0:cell>0.928</ns0:cell></ns0:row><ns0:row><ns0:cell>LED</ns0:cell><ns0:cell>0.892</ns0:cell><ns0:cell>0.888</ns0:cell><ns0:cell>0.882</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.860</ns0:cell></ns0:row><ns0:row><ns0:cell>MIXED BALANCED</ns0:cell><ns0:cell>0.928</ns0:cell><ns0:cell>0.934</ns0:cell><ns0:cell>0.935</ns0:cell><ns0:cell>0.939</ns0:cell><ns0:cell>0.943</ns0:cell><ns0:cell>0.964</ns0:cell></ns0:row><ns0:row><ns0:cell>MIXED IMBALANCED</ns0:cell><ns0:cell>0.930</ns0:cell><ns0:cell>0.935</ns0:cell><ns0:cell>0.936</ns0:cell><ns0:cell>0.940</ns0:cell><ns0:cell>0.943</ns0:cell><ns0:cell>0.966</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF</ns0:cell><ns0:cell>0.850</ns0:cell><ns0:cell>0.868</ns0:cell><ns0:cell>0.873</ns0:cell><ns0:cell>0.875</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.881</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF DRIFT</ns0:cell><ns0:cell>0.528</ns0:cell><ns0:cell>0.550</ns0:cell><ns0:cell>0.562</ns0:cell><ns0:cell>0.570</ns0:cell><ns0:cell>0.576</ns0:cell><ns0:cell>0.618</ns0:cell></ns0:row><ns0:row><ns0:cell>WAVEFORM</ns0:cell><ns0:cell>0.814</ns0:cell><ns0:cell>0.822</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.828</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.834</ns0:cell></ns0:row><ns0:row><ns0:cell>WAVEFORM DRIFT</ns0:cell><ns0:cell>0.814</ns0:cell><ns0:cell>0.822</ns0:cell><ns0:cell>0.824</ns0:cell><ns0:cell>0.830</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.834</ns0:cell></ns0:row></ns0:table><ns0:note>11/22PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Comparison with other ensemble models</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>482</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>DDCWHT</ns0:cell><ns0:cell>DDCWHTNB</ns0:cell><ns0:cell>DWMNB</ns0:cell><ns0:cell>AWENB</ns0:cell><ns0:cell>DWMHT</ns0:cell><ns0:cell>AWEHT</ns0:cell><ns0:cell>OBkNN</ns0:cell><ns0:cell>OzakNN</ns0:cell><ns0:cell>OBHT</ns0:cell><ns0:cell>OzaHT</ns0:cell><ns0:cell>OBNB</ns0:cell><ns0:cell>OzaNB</ns0:cell><ns0:cell>ARFHT</ns0:cell><ns0:cell>KUEHT</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ELEC</ns0:cell><ns0:cell>0.853</ns0:cell><ns0:cell>0.810</ns0:cell><ns0:cell>0.800</ns0:cell><ns0:cell>0.756</ns0:cell><ns0:cell>0.869</ns0:cell><ns0:cell>0.788</ns0:cell><ns0:cell>0.765</ns0:cell><ns0:cell>0.780</ns0:cell><ns0:cell>0.858</ns0:cell><ns0:cell>0.793</ns0:cell><ns0:cell>0.792</ns0:cell><ns0:cell>0.734</ns0:cell><ns0:cell>0.857</ns0:cell><ns0:cell>0.668</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD99</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.991</ns0:cell><ns0:cell>0.983</ns0:cell><ns0:cell>0.420</ns0:cell><ns0:cell>0.989</ns0:cell><ns0:cell>0.103</ns0:cell><ns0:cell>0.999</ns0:cell><ns0:cell>0.998</ns0:cell><ns0:cell>0.998</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.995</ns0:cell><ns0:cell>0.946</ns0:cell><ns0:cell>0.999</ns0:cell><ns0:cell>0.999</ns0:cell></ns0:row><ns0:row><ns0:cell>AIRL</ns0:cell><ns0:cell>0.662</ns0:cell><ns0:cell>0.649</ns0:cell><ns0:cell>0.640</ns0:cell><ns0:cell>0.618</ns0:cell><ns0:cell>0.620</ns0:cell><ns0:cell>0.575</ns0:cell><ns0:cell>0.587</ns0:cell><ns0:cell>0.639</ns0:cell><ns0:cell>0.634</ns0:cell><ns0:cell>0.653</ns0:cell><ns0:cell>0.619</ns0:cell><ns0:cell>0.644</ns0:cell><ns0:cell>0.666</ns0:cell><ns0:cell>0.663</ns0:cell></ns0:row><ns0:row><ns0:cell>COVT</ns0:cell><ns0:cell>0.853</ns0:cell><ns0:cell>0.810</ns0:cell><ns0:cell>0.823</ns0:cell><ns0:cell>0.592</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.215</ns0:cell><ns0:cell>0.927</ns0:cell><ns0:cell>0.918</ns0:cell><ns0:cell>0.876</ns0:cell><ns0:cell>0.871</ns0:cell><ns0:cell>0.783</ns0:cell><ns0:cell>0.871</ns0:cell><ns0:cell>0.941</ns0:cell><ns0:cell>0.904</ns0:cell></ns0:row><ns0:row><ns0:cell>SHUTTLE</ns0:cell><ns0:cell>0.996</ns0:cell><ns0:cell>0.992</ns0:cell><ns0:cell>0.896</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.946</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.990</ns0:cell><ns0:cell>0.991</ns0:cell><ns0:cell>0.982</ns0:cell><ns0:cell>0.978</ns0:cell><ns0:cell>0.950</ns0:cell><ns0:cell>0.922</ns0:cell><ns0:cell>0.998</ns0:cell><ns0:cell>0.997</ns0:cell></ns0:row><ns0:row><ns0:cell>POWERSUPLY</ns0:cell><ns0:cell>0.158</ns0:cell><ns0:cell>0.158</ns0:cell><ns0:cell>0.074</ns0:cell><ns0:cell>0.186</ns0:cell><ns0:cell>0.074</ns0:cell><ns0:cell>0.187</ns0:cell><ns0:cell>0.029</ns0:cell><ns0:cell>0.167</ns0:cell><ns0:cell>0.013</ns0:cell><ns0:cell>0.162</ns0:cell><ns0:cell>0.007</ns0:cell><ns0:cell>0.161</ns0:cell><ns0:cell>0.156</ns0:cell><ns0:cell>0.218</ns0:cell></ns0:row><ns0:row><ns0:cell>CONNECT4</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.705</ns0:cell><ns0:cell>0.671</ns0:cell><ns0:cell>0.286</ns
0:cell><ns0:cell>0.693</ns0:cell><ns0:cell>0.613</ns0:cell><ns0:cell>0.702</ns0:cell><ns0:cell>0.728</ns0:cell><ns0:cell>0.675</ns0:cell><ns0:cell>0.696</ns0:cell><ns0:cell>0.636</ns0:cell><ns0:cell>0.572</ns0:cell><ns0:cell>0.739</ns0:cell><ns0:cell>0.650</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES</ns0:cell><ns0:cell>0.737</ns0:cell><ns0:cell>0.725</ns0:cell><ns0:cell>0.611</ns0:cell><ns0:cell>0.698</ns0:cell><ns0:cell>0.612</ns0:cell><ns0:cell>0.698</ns0:cell><ns0:cell>0.621</ns0:cell><ns0:cell>0.670</ns0:cell><ns0:cell>0.699</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.687</ns0:cell><ns0:cell>0.684</ns0:cell><ns0:cell>0.756</ns0:cell><ns0:cell>0.738</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES1vsAll</ns0:cell><ns0:cell>0.970</ns0:cell><ns0:cell>0.966</ns0:cell><ns0:cell>0.962</ns0:cell><ns0:cell>0.967</ns0:cell><ns0:cell>0.962</ns0:cell><ns0:cell>0.967</ns0:cell><ns0:cell>0.936</ns0:cell><ns0:cell>0.958</ns0:cell><ns0:cell>0.970</ns0:cell><ns0:cell>0.973</ns0:cell><ns0:cell>0.957</ns0:cell><ns0:cell>0.962</ns0:cell><ns0:cell>0.973</ns0:cell><ns0:cell>0.958</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG HEPATITIS</ns0:cell><ns0:cell>0.909</ns0:cell><ns0:cell>0.897</ns0:cell><ns0:cell>0.854</ns0:cell><ns0:cell>0.877</ns0:cell><ns0:cell>0.856</ns0:cell><ns0:cell>0.913</ns0:cell><ns0:cell>0.839</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.913</ns0:cell><ns0:cell>0.920</ns0:cell><ns0:cell>0.868</ns0:cell><ns0:cell>0.853</ns0:cell><ns0:cell>0.922</ns0:cell><ns0:cell>0.923</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG ZOO</ns0:cell><ns0:cell>0.928</ns0:cell><ns0:cell>0.921</ns0:cell><ns0:cell>0.812</ns0:cell><ns0:cell>0.913</ns0:cell><ns0:cell>0.806</ns0:cell><ns0:cell>0.913</ns0:cell><ns0:cell>0.909</ns0:cell><ns0:cell>0.927</ns0:cell><ns0:cell>0.931</ns0:cell><ns0:cell>0.915</ns0:cell><ns0:cell>0.903</ns0:cell><ns0:cell>0.889</ns0:cell><ns0:cell>0.942</ns0:cell><ns0:cell>0.939</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG LYMPH</ns0:cell><ns0:cell>0.883</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.801</ns0:cell><ns0:cell>0.825</ns0:cell><ns0:cell>0.802</ns0:cell><ns0:cell>0.825</ns0:cell><ns0:cell>0.787</ns0:cell><ns0:cell>0.828</ns0:cell><ns0:cell>0.846</ns0:cell><ns0:cell>0.809</ns0:cell><ns0:cell>0.818</ns0:cell><ns0:cell>0.806</ns0:cell><ns0:cell>0.871</ns0:cell><ns0:cell>0.903</ns0:cell></ns0:row><ns0:row><ns0:cell>F1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>ELEC</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.760</ns0:cell><ns0:cell>0.749</ns0:cell><ns0:cell>0.694</ns0:cell><ns0:cell>0.852</ns0:cell><ns0:cell>0.757</ns0:cell><ns0:cell>0.721</ns0:cell><ns0:cell>0.730</ns0:cell><ns0:cell>0.828</ns0:cell><ns0:cell>0.744</ns0:cell><ns0:cell>0.722</ns0:cell><ns0:cell>0.595</ns0:cell><ns0:cell>0.825</ns0:cell><ns0:cell>0.668</ns0:cell></ns0:row><ns0:row><ns0:cell>KDD99</ns0:cell><ns0:cell>0.587</ns0:cell><ns0:cell>0.549</ns0:cell><ns0:cell>0.502</ns0:cell><ns0:cell>0.045</ns0:cell><ns0:cell>0.397</ns0:cell><ns0:cell>0.029</ns0:cell><ns0:cell>0.712</ns0:cell><ns0:cell>0.642</ns0:cell><ns0:cell>0.593</ns0:cell><ns0:cell>0.645</ns0:cell><ns0:cell>0.577</ns0:cell><ns0:cell>0.530</ns0:cell><ns0:cell>0.649</ns0:cell><ns0:cell>NaN</ns0:cell></ns0:row><ns0:row><ns0:cell>AIRL</ns0:cell><ns0:cell>0.556</ns0:cell><ns0:cell>0.535</ns0:cell><ns0:cell>0.320</ns0:cell><ns0:cell>0.293</ns0:cell><ns0:cell>0.534</ns0:cell><ns0:cell>0.425</ns0:cell><ns0:cell>0.436</ns0:cell><ns0:cell>0.430</ns0:cell><ns0:cell>0.563</ns0:cell><ns0:cell>0.522</ns0:cell><ns0:cell>0.534</ns0:cell><ns0:cell>0.259</ns0:cell><ns0:cell>0.577</ns0:cell><ns0:cell>NaN</ns0:cell></ns0:row><ns0:row><ns0:cell>COVT</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.760</ns0:cell><ns0:cell>0.594</ns0:cell><ns0:cell>0.156</ns0:cell><ns0:cell>0.569</ns0:cell><ns0:cell>0.085</ns0:cell><ns0:cell>0.740</ns0:cell><ns0:cell>0.718</ns0:cell><ns0:cell>0.721</ns0:cell><ns0:cell>0.675</ns0:cell><ns0:cell>0.626</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell>0.779</ns0:cell><ns0:cell>NaN</ns0:cell></ns0:row><ns0:row><ns0:cell>SHUTTLE</ns0:cell><ns0:cell>0.611</ns0:cell><ns0:cell>0.702</ns0:cell><ns0:cell>0.397</ns0:cell><ns0:cell>0.398</ns0:cell><ns0:cell>0.446</ns0:cell><ns0:cell>0.399</ns0:cell><ns0:cell>0.517</ns0:cell><ns0:cell>0.435</ns0:cell><ns0:cell>0.623</ns0:cell><ns0:cell>0.452</ns0:cell><ns0:cell>0.530</ns0:cell><ns0:cell>0.621</ns0:cell><ns0:cell>0.677</ns0:cell><ns0:cell>NaN</ns0:cell></ns0:row><ns0:row><ns0:cell>POWERSUPLY</ns0:cell><ns0:cell>0.113</ns0:cell><ns0:cell>0.110</ns0:cell><ns0:cell>0.066</ns0:cell><ns0:cell>0.134</ns0:cell><ns0:cell>0.066</ns0:cell><ns0:cell>0.134</ns0:cell><ns0:cell>0.034</ns0:cell><ns0:cell>0.161</ns0:cell><ns0:cell>0.106</ns0:cell><ns0:cell>0.014</ns0:cell><ns0:cell>0.009</ns0:cell><ns0:cell>0.107</ns0:cell><ns0:cell>0.149</ns0:cell><ns0:cell>NaN</ns0:cell></ns0:row><ns0:row><ns0:cell>CONNECT4</ns0:cell><ns0:cell>0.470</ns0:cell><ns0:cell>0.464</ns0:cell><ns0:cell>0.466</ns0:cell><ns0:cell>0.272</ns0:cell><ns0:cell>0.437</ns0:cell><ns0:cell>0.382</ns0:cell><ns0:cell>0.587</ns0:cell><ns0:cell>0.517</ns0:cell><ns0:cell>0.431</ns0:cell><ns0:cell>0.406</ns0:cell><ns0:cell>0.420</ns0:cell><ns0:cell>0.381</ns0:cell><ns0:cell>0.496</ns0:cell><ns0:cell>0.506</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG BRIDGES</ns0:cell><ns0:cell>0.649</ns0:cell><ns0:cell>0.635</ns0:cell><ns0:cell>0.515</ns0:cell><ns0:cell>0.601</ns0:cell><ns0:cell>0.515</ns0:cell><ns0:cell>0.601</ns0:cell><ns0:cell>0.527</ns0:cell><ns0:cell>0.567</ns0:cell><ns0:cell>0.611</ns0:cell><ns0:cell>0.669</ns0:cell><ns0:cell>0.599</ns0:cell><ns0:cell>0.596</ns0:cell><ns0:cell>0.673</ns0:cell><ns0:cell>0.656</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG 
BRIDGES1vsAll</ns0:cell><ns0:cell>0.895</ns0:cell><ns0:cell>0.883</ns0:cell><ns0:cell>0.871</ns0:cell><ns0:cell>0.886</ns0:cell><ns0:cell>0.869</ns0:cell><ns0:cell>0.886</ns0:cell><ns0:cell>0.790</ns0:cell><ns0:cell>0.857</ns0:cell><ns0:cell>0.899</ns0:cell><ns0:cell>0.906</ns0:cell><ns0:cell>0.852</ns0:cell><ns0:cell>0.872</ns0:cell><ns0:cell>0.906</ns0:cell><ns0:cell>0.915</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG HEPATITIS</ns0:cell><ns0:cell>0.943</ns0:cell><ns0:cell>0.935</ns0:cell><ns0:cell>0.904</ns0:cell><ns0:cell>0.921</ns0:cell><ns0:cell>0.907</ns0:cell><ns0:cell>0.922</ns0:cell><ns0:cell>0.898</ns0:cell><ns0:cell>0.929</ns0:cell><ns0:cell>0.945</ns0:cell><ns0:cell>0.950</ns0:cell><ns0:cell>0.914</ns0:cell><ns0:cell>0.902</ns0:cell><ns0:cell>0.952</ns0:cell><ns0:cell>0.887</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG ZOO</ns0:cell><ns0:cell>0.860</ns0:cell><ns0:cell>0.853</ns0:cell><ns0:cell>0.714</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.704</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell>0.824</ns0:cell><ns0:cell>0.855</ns0:cell><ns0:cell>0.865</ns0:cell><ns0:cell>0.844</ns0:cell><ns0:cell>0.821</ns0:cell><ns0:cell>0.821</ns0:cell><ns0:cell>0.887</ns0:cell><ns0:cell>0.882</ns0:cell></ns0:row><ns0:row><ns0:cell>BNG LYMPH</ns0:cell><ns0:cell>0.740</ns0:cell><ns0:cell>0.752</ns0:cell><ns0:cell>0.610</ns0:cell><ns0:cell>0.603</ns0:cell><ns0:cell>0.609</ns0:cell><ns0:cell>0.603</ns0:cell><ns0:cell>0.602</ns0:cell><ns0:cell>0.584</ns0:cell><ns0:cell>0.735</ns0:cell><ns0:cell>0.658</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell>0.659</ns0:cell><ns0:cell>0.636</ns0:cell><ns0:cell>0.827</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of accuracy and F1 metrics of evaluated ensemble models on the real data streams.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>15/22 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Comparison of accuracy and F1 metrics of evaluated ensemble models on the synthetic data streams.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of average total training and scoring times (in seconds) and average model size in memory (in kB).</ns0:figDesc><ns0:table /><ns0:note>17/22PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54484:2:0:NEW 26 Feb 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Editors,
We would like to thank the reviewers for the valuable comments on the manuscript. We have edited the manuscript to address their concerns and remarks.
We believe that addressing the remarks from the reviews has improved the manuscript.
Martin Sarnovsky
(on behalf of both authors)
To address the particular comments:
Reviewer 2
C1: The paper still requires proofreading. There are still some sentences that are somewhat unclear and some words that should be replaced. For instance, there are many places where “the” should be replaced either by “a” or “an”, and vice-versa.
- The paper has undergone another round of editing by an external language editing service. We hope that the remaining grammar errors and sentence structure have now been improved.
C2: The description provided in lines 255 to 269 must be clarified. What is Q statistic? How is it computed? This might be common ground for researchers in the area, but authors cannot assume all readers are familiar with ensemble diversity quantification.
- We added a description of the Q statistic and how it is computed in general (lines 263-270), and also described how it is computed and used in our model (lines 271-276).
C3: Following my previous comment, line 265 is unclear as it mixes mathematical notation with textual descriptions to declare how weights are computed. Again, please be specific and clear on these notations.
- This was corrected as part of the rewritten paragraphs mentioned in the previous comments.
C4: Line 10 of Algorithm 1 is unclear.
- The lines were rephrased. Our intention was to describe that the loop iterates over all items in a given chunk, and that the mentioned condition is applied after all samples in the chunk have been processed. We hope this is now clear.
C5: Authors state in lines 369-371 that Gini index was used to compute feature importance. This is an interesting approach, yet, the computation of Gini index is not clarified. Please provide a mathematical formulation for this. This is relevant as Gini index is several times confused with Gini impurity.
- We apologize for the misunderstanding here - we use the Gini impurity measure, as it is used in the Random Forests model. In this case, as we work with streaming data, we used the Gini impurity computed in a window-based fashion in the Online Random Forests model, as referenced in the paper. To evaluate the models, we did not use this particular RF implementation, but rather ARF (which is more recent and better performing). In any case, the use of Gini impurity is now clarified (lines 387-388).
C6: Figure 3 and how it should be read has been clarified in its caption (x and y axis, bullet sizes, etc), yet, this information should accompany lines 373 and 374.
- The information was added to the text.
C7: Regarding the y axis of Figure 3, should it be “Feature Indices” instead of “Features”?
- We were concerned that 'Feature Indices' would be too wordy for the multi-panel figure; instead, we refer to the feature indices in the figure caption.
C8: Please increase the font and legend sizes in Figures 4 to 6.
- Legend size in all of these Figures was increased.
Experimental design
C1: In addition to providing the number of classes available in each dataset, please clarify the class ratio in each dataset. This is relevant as accuracy may not be appropriate for all datasets. For instance, KDD99 is imbalanced (without mentioning that it is highly biased according to [1], and thus, the assessment is flawed. It is important to make sure that the same does not happen with other datasets.
- Class ratios for the multi-class datasets (especially those with 20+ classes) would be difficult to fit inside the table. However, we added a sentence explaining that some of the real-world datasets are imbalanced (lines 384-385). We also added a file with the class ratios for all datasets to the GitHub repository and added a footnote in the text.
Reviewer 3
Experimental design
It is ok, that you added a few reference methods, but Learn++ should be also added, especially because you deal with online learning.
- We did not include the Learn++ method in the experiments, as we did not find a Python/Java implementation that could be used properly in the train/test framework applied in the other experiments. However, we found in the literature that the AdaBoost algorithm used in the experimental comparison is inspired by the Learn++ method, so a similar (or rather inspired) approach is included in the comparison. We also added a mention of the Learn++ method and its modifications in the Related Works section (lines 197-200).
Validity of the findings
I still cannot find the discussion on the weights accumulations - why it is so important, how this will impact the quality of the proposed solution. The authors did not answer this.
- We apologize; we must have missed this in the first round of review. We added a paragraph briefly describing how the weighting mechanism works (lines 286-293).
" | Here is a paper. Please give your review comments after reading it. |
38 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Drug repositioning methods attempt to identify novel therapeutic indications for marketed drugs. Strategies include the use of side-effects to assign new disease indications, based on the premise that both therapeutic effects and side-effects are measurable physiological changes resulting from drug intervention. Drugs with similar side-effects might share a common mechanism of action linking side-effects with disease treatment, or may serve as a treatment by 'rescuing' a disease phenotype on the basis of their side-effects; therefore it may be possible to infer new indications based on the similarity of side-effect profiles.</ns0:p><ns0:p>While existing methods leverage side-effect data from clinical studies and drug labels, evidence suggests this information is often incomplete due to under-reporting. Here, we describe a novel computational method that uses side-effect data mined from social media to generate a sparse undirected graphical model using inverse covariance estimation with l 1 -norm regularization. Results show that known indications are well recovered while current trial indications can also be identified, suggesting that sparse graphical models generated using side-effect data mined from social media may be useful for computational drug repositioning.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Drug repositioning is the process of identifying novel therapeutic indications for marketed drugs. Compared to traditional drug development, repositioned drugs have the advantage of decreased development time and costs given that significant pharmacokinetic, toxicology and safety data will have already been accumulated, drastically reducing the risk of attrition during clinical trials. In addition to marketed drugs, it is estimated that drug libraries may contain upwards of 2000 failed drugs that have the potential to be repositioned, with this number increasing at a rate of 150-200 compounds per year <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Repositioning of marketed or failed drugs has opened up new sources of revenue for pharmaceutical companies with estimates suggesting the market could generate multi-billion dollar annual sales in coming years <ns0:ref type='bibr' target='#b1'>[2,</ns0:ref><ns0:ref type='bibr' target='#b2'>3]</ns0:ref>. While many of the current successes of drug repositioning have come about through serendipitous clinical observations, systematic data-driven approaches are now showing increasing promise given their ability to generate repositioning hypotheses for multiple drugs and diseases simultaneously using a wide range of data sources, while also incorporating prioritisation information to further accelerate development time <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Existing computational repositioning strategies generally use similar approaches but attempt to link different concepts. Manuscript to be reviewed Computer Science the use of transcriptomics methods which compare drug response gene-expression with disease gene-expression signatures <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref><ns0:ref type='bibr' target='#b5'>[6]</ns0:ref><ns0:ref type='bibr' target='#b6'>[7]</ns0:ref><ns0:ref type='bibr' target='#b7'>[8]</ns0:ref><ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, genetics-based methods which connect a known drug target with a genetically associated phenotype <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref><ns0:ref type='bibr' target='#b10'>[11]</ns0:ref><ns0:ref type='bibr' target='#b11'>[12]</ns0:ref><ns0:ref type='bibr' target='#b13'>[13]</ns0:ref><ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>, network-based methods which link drugs or diseases in a network based on shared features <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref><ns0:ref type='bibr' target='#b16'>[16]</ns0:ref><ns0:ref type='bibr' target='#b17'>[17]</ns0:ref><ns0:ref type='bibr' target='#b18'>[18]</ns0:ref><ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>, and methods that use side-effect similarity to infer novel indications <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref><ns0:ref type='bibr' target='#b21'>[21]</ns0:ref><ns0:ref type='bibr' target='#b22'>[22]</ns0:ref><ns0:ref type='bibr' target='#b23'>[23]</ns0:ref><ns0:ref type='bibr' target='#b24'>[24]</ns0:ref><ns0:ref type='bibr' target='#b25'>[25]</ns0:ref><ns0:ref type='bibr' target='#b26'>[26]</ns0:ref><ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>.</ns0:p><ns0:p>Drug side-effects can be attributed to a number of molecular interactions including on or off-target binding, drug-drug interactions <ns0:ref type='bibr' target='#b28'>[28,</ns0:ref><ns0:ref type='bibr' target='#b30'>29]</ns0:ref>, dose-dependent pharmacokinetics, metabolic activities, downstream pathway perturbations, aggregation effects, and irreversible target binding <ns0:ref type='bibr' 
target='#b20'>[20,</ns0:ref><ns0:ref type='bibr' target='#b31'>30]</ns0:ref>. While side-effects are considered the unintended consequence of drug intervention, they can provide valuable insight into the physiological changes caused by the drug that are difficult to predict using pre-clinical or animal models. This relationship between drugs and side-effects has been exploited and used to identify shared target proteins between chemically dissimilar drugs, allowing new indications to be inferred based on the similarity of side-effect profiles <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>. One rationale behind this and related approaches is that drugs sharing a significant number of side-effects might share a common mechanism of action linking side-effects with disease treatment -side-effects essentially become a phenotypic biomarker for a particular disease <ns0:ref type='bibr' target='#b21'>[21,</ns0:ref><ns0:ref type='bibr' target='#b25'>25]</ns0:ref>. Repositioned drugs can also be said to 'rescue' a disease phenotype, on the basis of their side-effects; for example, drugs which cause hair growth as a side-effect can potentially be repositioned for the treatment of hair loss, while drugs which cause hypotension as a side-effect can be used to treat hypertension <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. Examples of drugs successfully repositioned based on phenotypic rescue that have made it to market include exenatide, which was shown to cause significant weight loss as a side-effect of type 2 diabetes treatment, leading to a trial of its therapeutic effect in non-diabetic obese subjects <ns0:ref type='bibr' target='#b32'>[31,</ns0:ref><ns0:ref type='bibr' target='#b33'>32]</ns0:ref>, minoxidil which was originally developed for hypertension but found to cause hair growth as a side-effect, leading to its repositioning for the treatment of hair loss and androgenetic alopecia <ns0:ref type='bibr' target='#b34'>[33,</ns0:ref><ns0:ref type='bibr' target='#b35'>34]</ns0:ref>, and, perhaps most famously, sildenafil citrate which was repositioned while being studied for the primary indication of angina to the treatment of erectile dysfunction <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>.</ns0:p><ns0:p>Existing repositioning methods based on side-effects, such as the work of Campillos et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> and Yang and Agarwal <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>, have used data from the SIDER database <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref>, which contains side-effect data extracted from drug labels, largely collected from clinical trials during the pre-marketing phase of drug development. Other resources include Meyler's Side Effects of Drugs <ns0:ref type='bibr' target='#b38'>[37]</ns0:ref>, which is updated annually in the Side Effects of Drugs Annual <ns0:ref type='bibr' target='#b39'>[38]</ns0:ref>, and the Drugs@FDA database <ns0:ref type='bibr' target='#b40'>[39]</ns0:ref>, while pharmacovigilance authorities attempt to detect, assess and monitor reported drug side-effects post-market. 
Despite regular updates to these resources and voluntary reporting systems, there is evidence to suggest that side-effects are substantially under-reported, with some estimates indicating that up to 86% of adverse drug reactions go unreported for reasons that include lack of incentives, indifference, complacency, workload and lack of training among healthcare professionals <ns0:ref type='bibr' target='#b41'>[40]</ns0:ref><ns0:ref type='bibr' target='#b42'>[41]</ns0:ref><ns0:ref type='bibr' target='#b43'>[42]</ns0:ref><ns0:ref type='bibr' target='#b44'>[43]</ns0:ref>. Side-effects reported from clinical trials also have limitations due to constraints on scale and time, as well as pharmacogenomic effects <ns0:ref type='bibr' target='#b45'>[44]</ns0:ref>. A number of cancer drug studies have also observed that women are often significantly under-represented in clinical trials, making it difficult to study the efficacy, dosing and side-effects of treatments which can work differently in women and men; similar problems of under-representation also affect paediatrics, as many drugs are only ever tested on adults <ns0:ref type='bibr' target='#b46'>[45]</ns0:ref>.</ns0:p><ns0:p>between the frequency of side-effects extracted from unlabelled data and the frequency of documented adverse drug reactions <ns0:ref type='bibr' target='#b48'>[46]</ns0:ref>. Despite this success, a number of significant natural language processing challenges remain. These include dealing with idiomatic expressions, linguistic variability of expression and creativity, ambiguous terminology, spelling errors, word shortenings, and distinguishing between the symptoms that a drug is treating and the side-effects it causes. Some of the solutions proposed to deal with these issues include the use of specialist lexicons, appropriate use of semantic analysis, and improvements to approximate string matching, modeling of spelling errors, and contextual analysis surrounding the mentions of side-effects <ns0:ref type='bibr' target='#b48'>[46,</ns0:ref><ns0:ref type='bibr' target='#b49'>47]</ns0:ref>, while maintaining a list of symptoms for which a drug is prescribed can help to eliminate them from the list of side-effects identified <ns0:ref type='bibr' target='#b50'>[48]</ns0:ref>. Although much of the focus has explored the use of online forums where users discuss their experience with pharmaceutical drugs and report side-effects <ns0:ref type='bibr' target='#b51'>[49]</ns0:ref>, the growing popularity of Twitter <ns0:ref type='bibr' target='#b52'>[50]</ns0:ref>, which at the time of writing has over 300 million active monthly users, provides a novel resource upon which to perform large-scale mining of reported drug side-effects in near real-time from the 500 millions tweets posted daily <ns0:ref type='bibr'>[51]</ns0:ref>. While only a small fraction of these daily tweets are related to health issues, the sheer volume of data available presents an opportunity to bridge the gap left by conventional side-effects reporting strategies. Over time, the accumulation of side-effect data from social media may become comparable or even exceed the volume of traditional resources, and at the very least should be sufficient to augment existing databases. Additionally, the cost of running such a system continuously is relatively cheap compared to existing pharmacovigilance monitoring, presenting a compelling economic argument supporting the use of social media for such purposes. 
Furthermore, the issues related to under-representation described above may be addressed.</ns0:p><ns0:p>Freifeld et al. <ns0:ref type='bibr' target='#b53'>[52]</ns0:ref> presented a comparison study between drug side-effects found on Twitter and adverse events reported in the FDA Adverse Event Reporting System (FAERS). Starting with 6.9 million tweets, they used a set of 23 drug names and a list of symptoms to reduce that data to a subset of 60,000 tweets. After manual examination, there were 4,401 tweets identified as mentioning a side-effect, with a Spearman rank correlation found to be 0.75. Nikfarjam et al. <ns0:ref type='bibr' target='#b54'>[53]</ns0:ref> introduce a method based on Conditional Random Fields (CRF) to tag mentions of drug side-effects in social media posts from Twitter or the online health community DailyStrength. They use features based on the context of tokens, a lexicon of adverse drug reactions, Part-Of-Speech (POS) tags and a feature indicating whether a token is negated or not. They also used embedding clusters learned with Word2Vec <ns0:ref type='bibr' target='#b55'>[54]</ns0:ref>. They reported an F1 score of 82.1% for data from DailyStrength and 72.1% for Twitter data. Sarker and Gonzalez <ns0:ref type='bibr' target='#b56'>[55]</ns0:ref> developed classifiers to detect side-effects using training data from multiple sources, including tweets <ns0:ref type='bibr' target='#b57'>[56]</ns0:ref>, DailyStrength, and a corpus of adverse drug events obtained from medical case reports. They reported an F1 score of 59.7% when training a Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel on all three datasets. Recently, Karimi et al. <ns0:ref type='bibr' target='#b58'>[57]</ns0:ref> presented a survey of the field of surveillance for adverse drug events with automatic text and data mining techniques.</ns0:p><ns0:p>In this study, we describe a drug repositioning methodology that uses side-effect data mined from social media to infer novel indications for marketed drugs. We use data from a pharmacovigilance system for mining Twitter for drug side-effects <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref>. The system uses a set of cascading filters to eliminate large quantities of irrelevant messages and identify the most relevant data for further processing, before applying a SVM classifier to identify tweets that mention suspected adverse drug reactions. Using this data we apply sparse inverse covariance estimation to construct an undirected graphical model, which offers a way to describe the relationship between all drug pairs <ns0:ref type='bibr' target='#b61'>[59]</ns0:ref><ns0:ref type='bibr' target='#b62'>[60]</ns0:ref><ns0:ref type='bibr' target='#b64'>[61]</ns0:ref>. This is Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>achieved by solving a maximum likelihood problem using ℓ1-norm regularization to make the resulting graph as sparse as possible, in order to generate the simplest graphical model which fully explains the data. Results from testing the method on known and proposed trial indication recovery suggest that side-effect data mined from social media in combination with a regularized sparse graphical model can be used for systematic drug repositioning.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Mining Twitter for drug side-effects</ns0:head><ns0:p>We used the SoMeDoSEs pharmacovigilance system <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref> to extract reports of drug sideeffects from Twitter over a 6 month period between January and June 2014. SoMeDoSEs works by first applying topic and volume filters to identify Tweets that contain keywords relating to drugs, are written in English, are not re-tweets and do not contain a hyperlink to a web page, since these posts are typically commercial offerings. Side-effect were then mapped to an entry in the FDA Adverse Event Reporting System. Tweets that pass these filters are then classified by a linear SVM to distinguish those that mention a drug side-effect from those that do not. The SVM classifier uses a number of natural language features including unigrams and bigrams, part-of-speech tags, sentiment scores, text surface features, and matches to gazetteers related to human body parts, side-effect synonyms, side-effect symptoms, causality indicators, clinical trials, medical professional roles, side effect-triggers and drugs.</ns0:p><ns0:p>For each gazetteer, three features were created: a binary feature, which is set to 1 if a tweet contains at least one sequence of tokens matching an entry from the gazetteer, the number of tokens matching entries from the gazetteer, and the fraction of characters in tokens matching entries from the gazetteer. For side-effect synonyms we used the Consumer Health Vocabulary (CHV) <ns0:ref type='bibr' target='#b65'>[62]</ns0:ref>, which maps phrases to Unified Medical Language System concept universal identifiers (CUI) and partially addresses the issue of misspellings and informal language used to discuss medical issues in tweets. The matched CUIs were also used as additional features.</ns0:p><ns0:p>To develop the system, 10,000 tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not. Using a split of 8,000 tweets for training, 1,000 for development, and 1,000 for testing, the SVM classifier that used all the features achieved a precision of 55.0%, recall of 66.9%, and F1 score of 60.4% when evaluated using the 1,000 test tweets. This is statistically significantly higher than the results achieved by a linear SVM classifier using only unigrams and bigrams as features (precision of 56.0%, recall of 54.0% and F1 score of 54.9%). One of the sources of false negatives was the use of colloquial and indirect expressions by Twitter users to express that they have experienced a side-effect. We also observed that a number of false positives discuss the efficacy of drugs rather than side-effects.</ns0:p></ns0:div>
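To make the classification step easier to follow, the sketch below shows a minimal linear-SVM pipeline restricted to unigram and bigram features. It is an illustration rather than the SoMeDoSEs implementation: it omits the POS, sentiment, surface and gazetteer features described above, and the example tweets, labels and parameter values are invented for demonstration.

```python
# Minimal sketch of the tweet classification step: a linear SVM over unigram
# and bigram features only. The full system adds POS, sentiment, surface and
# gazetteer features; `tweets` and `labels` below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "this new med gave me a terrible headache all day",   # mentions a side-effect
    "been so dizzy since i started these pills",          # mentions a side-effect
    "just picked up my prescription from the pharmacy",   # no side-effect mention
    "my doctor switched me to a new drug today",          # no side-effect mention
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),  # unigrams + bigrams
    LinearSVC(C=1.0),
)
classifier.fit(tweets, labels)

print(classifier.predict(["this drug is making me so nauseous"]))
```

In practice the gazetteer matches, CUIs and other features described above would be appended to the n-gram representation before training.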
<ns0:div><ns0:head>Twitter data</ns0:head><ns0:p>Over the 6 month period, SoMeDoSEs typically identified ∼700 tweets per day that mentioned a drug side-effect, resulting in a data set of 620 unique drugs and 2196 unique side-effects from 108,009 tweets, once drug synonyms had been resolved to a common name using World Drug Index <ns0:ref type='bibr' target='#b66'>[63]</ns0:ref> and drugs with only a single side-effect were excluded. We were also careful to remove indications that were falsely identified as side-effects
<ns0:div><ns0:head>Calculating the sample covariance matrix</ns0:head><ns0:p>Using this data, we are able to form the sample covariance matrix S for binary variables as follows <ns0:ref type='bibr' target='#b68'>[65]</ns0:ref>, such that element S i,j gives the covariance of drug i with drug j :</ns0:p><ns0:formula xml:id='formula_0'>S i,j = 1 n − 1 n k=1 (x ki − xi )(x kj − xj ) = 1 n − 1 n k=1 x ki x kj − xi xj (1)</ns0:formula><ns0:p>where xi = 1 n n k=1 x ki and x ki is the k-th observation (side-effect) of variable (drug) X i . It can be shown than the average product of two binary variables is equal to their observed joint probabilities such that:</ns0:p><ns0:formula xml:id='formula_1'>1 n − 1 n k=1 x ki x kj = P (X j = 1|X i = 1)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where P (X j = 1|X i = 1) refers to the conditional probability that variable X j equals one given that variable X i equals one. Similarly, the product of the means of two binary variables is equal to the expected probability that both variables are equal to one, under the assumption of statistical independence:</ns0:p><ns0:formula xml:id='formula_2'>xi xj = P (X i = 1)P (X j = 1)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Consequently, the covariance of two binary variables is equal to the difference between the observed joint probability and the expected joint probability:</ns0:p><ns0:formula xml:id='formula_3'>S i,j = P (X j = 1|X i = 1) − P (X i = 1)P (X j = 1)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Our objective is to find the precision or concentration matrix θ by inverting the sample covariance matrix S. Using θ, we can obtain the matrix of partial correlation coefficients ρ for all pairs of variables as follows:</ns0:p><ns0:formula xml:id='formula_4'>ρ i,j = − θ i,j θ i,i θ j,j<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>The partial correlation between two variables X and Y given a third, Z, can be defined as the correlation between the residuals R x and R y after performing least-squares regression of X with Z and Y with Z, respectively. This value, denotated ρ x,y|z , provides a measure Manuscript to be reviewed</ns0:p></ns0:div>
of the correlation between two variables when conditioned on the third, with a value of zero implying conditional independence if the input data distribution is multivariate Gaussian. The partial correlation matrix ρ, however, gives the correlations between all pairs of variables conditioning on all other variables. Off-diagonal elements in ρ that are significantly different from zero will therefore be indicative of pairs of drugs that show unique covariance between their side-effect profiles when taking into account (i.e. removing) the variance of the side-effect profiles amongst all the other drugs.</ns0:p></ns0:div>
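A minimal sketch of these two steps is given below, assuming a small made-up binary incidence matrix in place of the 2196 × 620 Twitter matrix; the precision matrix is obtained here by direct (ridge-stabilised) inversion purely for illustration, whereas the actual pipeline uses the shrinkage and graphical lasso steps described in the following sections.

```python
import numpy as np

# Invented binary incidence matrix: rows are side-effects, columns are drugs,
# X[k, i] = 1 if drug i was reported to cause side-effect k.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)

# Sample covariance between the drug columns (Equation 1); np.cov uses the
# 1/(n-1) normalisation by default.
S = np.cov(X, rowvar=False)

# For illustration only, invert directly with a small ridge term; Equation 5
# then converts the precision matrix into partial correlations.
theta = np.linalg.inv(S + 1e-3 * np.eye(S.shape[0]))
d = np.sqrt(np.diag(theta))
rho = -theta / np.outer(d, d)
np.fill_diagonal(rho, 1.0)  # diagonal set to 1 by convention

print(np.round(rho, 3))
```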
<ns0:div><ns0:head>Shrinkage estimation</ns0:head><ns0:p>For the sample covariance matrix to be easily invertible, two desirable characteristics are that it should be positive definite, i.e. all eigenvalues should be distinct from zero, and well-conditioned, i.e. the ratio of its maximum and minimum singular value should not be too large. This can be particularly problematic when the sample size is small and the number of variables is large (n < p) and estimates of the covariance matrix become singular. To ensure these characteristics, and speed up convergence of the inversion, we condition the sample covariance matrix by shrinking towards an improved covariance estimator T, a process which tends to pull the most extreme coefficients towards more central values thereby systematically reducing estimation error <ns0:ref type='bibr' target='#b69'>[66]</ns0:ref>, using a linear shrinkage approach to combine the estimator and sample matrix in a weighted average:</ns0:p><ns0:formula xml:id='formula_5'>S \leftarrow \alpha T + (1 - \alpha) S<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where α ∈ [0, 1] denotes the analytically determined shrinkage intensity. We apply the approach of Schäfer and Strimmer, which uses a distribution-free, diagonal, unequal variance model which shrinks off-diagonal elements to zero but leaves diagonal entries intact, i.e. it does not shrink the variances <ns0:ref type='bibr' target='#b70'>[67]</ns0:ref>. Shrinkage is actually applied to the correlations rather than the covariances, which has two distinct advantages: the off-diagonal elements determining the shrinkage intensity are all on the same scale, while the partial correlations derived from the resulting covariance estimator are independent of scale.</ns0:p></ns0:div>
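The fragment below sketches linear shrinkage in this spirit. For brevity it uses a fixed, hand-picked shrinkage intensity and an identity correlation target rather than the analytically determined intensity of the Schäfer and Strimmer estimator, so it should be read as an illustration of Equation 6 applied on the correlation scale, not as their method.

```python
import numpy as np

def shrink_towards_diagonal(S, alpha):
    """Illustrative linear shrinkage (Equation 6) on the correlation scale:
    off-diagonal correlations are shrunk towards zero while the variances on
    the diagonal are left untouched. `alpha` is a fixed constant here, not the
    analytically determined Schafer-Strimmer intensity."""
    sd = np.sqrt(np.diag(S))
    R = S / np.outer(sd, sd)            # sample correlation matrix
    R_shrunk = (1.0 - alpha) * R        # pull off-diagonal entries towards 0
    np.fill_diagonal(R_shrunk, 1.0)     # keep unit diagonal
    return R_shrunk * np.outer(sd, sd)  # rescale back to covariances

rng = np.random.RandomState(0)
S = np.cov(rng.rand(10, 4), rowvar=False)
S_shrunk = shrink_towards_diagonal(S, alpha=0.2)
print(np.linalg.cond(S), np.linalg.cond(S_shrunk))  # compare conditioning
```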
<ns0:div><ns0:head>Graphical lasso for sparse inverse covariance estimation</ns0:head><ns0:p>A useful output from the covariance matrix inversion is a sparse ρ matrix containing many zero elements, since, intuitively, we know that relatively few drug pairs will share a common mechanism of action, so removing any spurious correlations is desirable and results in a more parsimonious relationship model, while the non-zero elements will typically reflect the correct positive correlations in the true inverse covariance matrix more accurately <ns0:ref type='bibr' target='#b71'>[68]</ns0:ref>. However, elements of ρ are unlikely to be zero unless many elements of the sample covariance matrix are zero. The graphical lasso <ns0:ref type='bibr' target='#b62'>[60,</ns0:ref><ns0:ref type='bibr' target='#b64'>61,</ns0:ref><ns0:ref type='bibr' target='#b72'>69]</ns0:ref> provides a way to induce zero partial correlations in ρ by penalizing the maximum likelihood estimate of the inverse covariance matrix using an ℓ1-norm penalty function. The estimate can be found by maximizing the following log-likelihood using the block coordinate descent approach described by Friedman et al. <ns0:ref type='bibr' target='#b62'>[60]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_6'>\log \det \theta - \mathrm{tr}(S\theta) - \lambda \|\theta\|_{1}<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Here, the first term is the Gaussian log-likelihood of the data, tr denotes the trace
operator and \|\theta\|_{1} is the ℓ1-norm - the sum of the absolute values of the elements of θ, weighted by the non-negative tuning parameter λ. The specific use of the ℓ1-norm penalty has the desirable effect of setting elements in θ to zero, resulting in a sparse matrix, while the parameter λ effectively controls the sparsity of the solution. This contrasts with the use of an ℓ2-norm penalty which will shrink elements but will never reduce them to zero. While this graphical lasso formulation is based on the assumption that the input data distribution is multivariate Gaussian, Banerjee et al. showed that the dual optimization solution also applies to binary data, as is the case in our application <ns0:ref type='bibr' target='#b64'>[61]</ns0:ref>.</ns0:p><ns0:p>It has been noted that the graphical lasso produces an approximation of θ that is not symmetric, so we update it as follows <ns0:ref type='bibr' target='#b73'>[70]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_7'>\theta \leftarrow \frac{\theta + \theta^{T}}{2}<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>The matrix ρ is then calculated according to Equation <ns0:ref type='formula' target='#formula_4'>5</ns0:ref>, before repositioning predictions for drug i are determined by ranking all other drugs according to their absolute values in ρ_i and assigning their indications to drug i.</ns0:p></ns0:div>
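As a sketch of the estimation and ranking steps, the fragment below uses the graphical lasso implementation from scikit-learn on a tiny invented binary matrix; the drug names, indications and penalty weight are all hypothetical, and the shrinkage step described above is omitted for brevity.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

# Hypothetical drugs, indications and binary side-effect matrix (rows are
# side-effects, columns are drugs); none of these values come from the paper.
drugs = ["drug_a", "drug_b", "drug_c", "drug_d"]
indications = {"drug_a": "hypertension", "drug_b": "insomnia",
               "drug_c": "asthma", "drug_d": "migraine"}
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 1]], dtype=float)

emp_cov = np.cov(X, rowvar=False)

# l1-penalised maximum likelihood estimate of the precision matrix
# (Equation 7); the penalty weight here is illustrative, not the paper's lambda.
_, theta = graphical_lasso(emp_cov, alpha=0.05)

theta = (theta + theta.T) / 2.0           # symmetrise (Equation 8)
d = np.sqrt(np.diag(theta))
rho = -theta / np.outer(d, d)             # partial correlations (Equation 5)

# Rank the remaining drugs for one query drug by |rho| and borrow indications.
query = 0
order = [j for j in np.argsort(-np.abs(rho[query])) if j != query]
print([(drugs[j], indications[drugs[j]]) for j in order])
```

Larger values of the penalty weight drive more entries of θ, and hence of ρ, to exactly zero, which is how the sparsity levels discussed in the results are obtained.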
<ns0:div><ns0:head>Results and Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Recovering known indications</ns0:head><ns0:p>To evaluate our method we have attempted to predict repositioning targets for indications that are already known. If, by exploiting hindsight, we can recover these, then our method should provide a viable strategy with which to augment existing approaches that adopt an integrated approach to drug repositioning <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. Figure <ns0:ref type='figure'>1a</ns0:ref> shows the performance of the method at identifying co-indicated drugs at a range of λ values, resulting in different sparsity levels in the resulting ρ matrix. We measured the percentage at which a co-indicated drug was ranked amongst the top 5, 10, 15, 20 and 25 predictions for the target drug, respectively. Of the 620 drugs in our data set, 595 had a primary indication listed in Cortellis Clinical Trials Intelligence, with the majority of the remainder being made up of dietary supplements (e.g. methylsulfonylmethane) or plant extracts (e.g. Agaricus brasiliensis extract) which have no approved therapeutic effect. Rather than removing these from the data set, they were left in as they may contribute to the partial correlation between pairs of drugs that do have approved indications.</ns0:p><ns0:p>Results indiciate that the method achieves its best performance with a λ value of 10 −9 where 42.41% (243/595) of targets have a co-indicated drug returned amongst the top 5 ranked predictions (Figure <ns0:ref type='figure'>1a</ns0:ref>). This value compares favourably with both a strategy in which drug ranking is randomized (13.54%, standard error ±0.65), and another in which drugs are ranked according to the Jaccard index (28.75%). In Ye et al. <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>, a related approach is used to construct a repositioning network based on side-effects extracted from the SIDER database, Meyler's Side Effects of Drugs, Side Effects of Drugs Annual, and the Drugs@FDA database <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref><ns0:ref type='bibr' target='#b38'>[37]</ns0:ref><ns0:ref type='bibr' target='#b39'>[38]</ns0:ref><ns0:ref type='bibr' target='#b40'>[39]</ns0:ref>, also using the Jaccard index as the measure of drug-drug similarity. Here, they report an equivilent value of 32.77% of drugs having their indication correctly predicted amongst the top 5 results. While data sets and underlying statistical models clearly differ, these results taken together suggest that the use of side-effect data mined from social media can certainly offer comparable performance to methods using side-effect data extracted from more conventional resources, while the use of a global statistical model such as the graphical lasso does result in improved performance compared to a pairwise similarity coefficient such as the Jaccard index.</ns0:p><ns0:p>To further investigate the influence of the provenance of the data, we mapped our data set of drugs to ChEMBL identifiers <ns0:ref type='bibr' target='#b74'>[71,</ns0:ref><ns0:ref type='bibr' target='#b75'>72]</ns0:ref> which we then used to query SIDER for side-effects extracted from drug labels. This resulted in a reduced data set of 229 drugs, in part due to the absence of many combination drugs from SIDER (e.g. the antidepressant Symbyax which contains olanzapine and fluoxetine). Using the same protocol described above, best performance of 53.67% (117/229) was achieved with a slightly higher λ value of 10 −6 . 
Best performance on the same data set using side-effects derived from Twitter was 38.43% (88/229), again using a λ value of 10 −9 , while the randomized strategy achieved 12.05% (standard error ±1. <ns0:ref type='bibr' target='#b14'>14)</ns0:ref>, indicating that the use of higher quality side-effect data from SIDER allows the model to achieve better performance than is possible using Twitter data. Perhaps more interestingly, combining the correct predictions between the two datasources reveals that 30 are unique to the Twitter model, 59 are unique to the SIDER model, with 58 shared, supporting the use side-effect data mined from social media to augment conventional resources.</ns0:p><ns0:p>We also investigated whether our results were biased by the over-representation of particular drug classes within our data set. Using Using Cortellis Clinical Trials Intelligence, we were able to identify the broad class for 479 of the drugs (77.26%) in our data set. The five largest classes were benzodiazepine receptor agonists (3/14 drugs returned amongst the top 5 ranked predictions), analgesics (6/12), H 1 -antihistamines (8/11), cyclooxygenase inhibitors (9/11), and anti-cancer (2/11). This indicates that the over-representation of H 1 -antihistamines and cyclooxygenase inhibitors did result in a bias, and to a lesser extent analgesics, but that the overall effect of these five classes was more subtle (28/59 returned amongst the top 5 ranked predictions, 47.46%).</ns0:p><ns0:p>The best performance of our approach at the top 5 level is achieved when the resulting ρ matrix has a sparsity of 35.59% (Figure <ns0:ref type='figure'>1b</ns0:ref> and Figure <ns0:ref type='figure'>2</ns0:ref>) which both justifies the use of the 1 -norm penalized graphical lasso, and generates a graphical model with approximately a third of the parameters of a fully dense matrix, while the comparable performance at λ values between 10 −12 and 10 −7 also indicates a degree of robustness to Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>. The overall layout of the side-effect network. Drugs are yellow, connecting edges are green. The layout is performed using a relative entropy optimization-based method <ns0:ref type='bibr' target='#b76'>[73]</ns0:ref>. In total, there are 616 connected nodes, with each having an average of 267 neighbours. Painkillers such as paracetamol and ibuprofen have the highest number of connections (587 and 585, respectively), which corresponds to them having the largest number of unique side-effects (256 and 224) reported on Twitter. The strongest connection is between chondroitin and glucosamine (partial correlation coefficient (PCC) 0.628), both of which are dietary supplements used to treat osteoarthritis, closely followed by the antidepressant and anxiolytic agents phenelzine and tranylcypromine (PCC 0.614). the choice of this parameter. Beyond the top 5 ranked predictions, results are encouraging as the majority of targets (56.02%) will have a co-indicated drug identified by considering only the top 10 predictions, suggesting the method is a feasible strategy for prioritisation of repositioning candidates.</ns0:p></ns0:div>
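A simplified sketch of this evaluation protocol is shown below; the indication labels are invented, the Jaccard-index baseline stands in for the full Cortellis look-up, and the helper names are ours rather than part of the original pipeline.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index between two binary side-effect profiles."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def top_k_recovery(similarity, indications, k=5):
    """Fraction of drugs whose top-k neighbours (ranked by absolute similarity,
    e.g. |rho| or a Jaccard matrix) include at least one drug sharing the same
    indication. A toy stand-in for the Cortellis-based evaluation."""
    n = similarity.shape[0]
    hits = 0
    for i in range(n):
        order = [j for j in np.argsort(-np.abs(similarity[i])) if j != i][:k]
        hits += any(indications[j] == indications[i] for j in order)
    return hits / n

# Invented inputs: a tiny binary side-effect matrix and indication labels.
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 1, 1, 0]], dtype=float)
indications = ["hypertension", "hypertension", "asthma", "asthma"]

J = np.array([[jaccard(X[:, i], X[:, j]) for j in range(X.shape[1])]
              for i in range(X.shape[1])])
print(top_k_recovery(J, indications, k=1))
```

Substituting the partial correlation matrix ρ for the Jaccard matrix J gives the graphical-model ranking, allowing the two strategies to be compared on the same footing.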
<ns0:div><ns0:head>Predicting proposed indications of compounds currently in clinical trials</ns0:head><ns0:p>While the previous section demonstrated our approach can effectively recover known indications, predictions after the fact are -while useful -best supported by more forwardlooking evidence. In this section, we use clinical trial data to support our predictions where the ultimate success of our target drug is still unknown. Using Cortellis Clinical Trials Intelligence, we extracted drugs present in our Twitter data set that were currently undergoing clinical trials (ending after 2014) for a novel indication (i.e. for which they were not already indicated), resulting in a subset of 277 drugs currently in trials for 397 indications. Figure <ns0:ref type='figure'>3</ns0:ref> shows the percentage at which a co-indicated drug was ranked amongst the top 5, 10, 15, 20 and 25 predictions for the target. Similar to the recovery of known indications, best performance when considering the top 5 ranked predictions was achieved with a λ value of 10 −9 , resulting in 16.25% (45/277) of targets having a co-indicated drug, which again compares well to a randomized strategy (5.42%, standard error ±0.32) or a strategy using the Jaccard index (10.07%).</ns0:p><ns0:p>Recovery of proposed clinical trial indications is clearly more challenging than known indications, possibly reflecting the fact that a large proportion of drugs will fail during trials and therfore many of the 397 proposed indications analysed here will in time prove false, although the general trend in performance as the sparsity parameter λ is adjusted tends to mirror the recovery of known indications. Despite this, a number of interesting predictions with a diverse range of novel indications are made that are supported by Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:p>. Predicted repositioning of oxytocin (red) for the treatment of schizophrenia based on its proximity to the schizophrenia drug chlorpromazine (grey). Drugs in the graph are sized according to their degree (number of edges), while the thickness of a connecting edge is proportional to the partial correlation coefficient between the two drugs. The graph layout is performed by Cytoscape <ns0:ref type='bibr' target='#b84'>[81]</ns0:ref> which applies a force-directed approach based on the partial correlation coefficient. Nodes are arranged so that edges are of more or less equal length and there are as few edge crossings as possible. For clarity, only the top ten drugs ranked by partial correlation coefficient are shown.</ns0:p></ns0:div>
<ns0:div><ns0:head>Oxytocin</ns0:head><ns0:p>Oxytocin is a nonapeptide hormone that acts primarily as a neuromodulator in the brain via the specific, high-affinity oxytocin receptor -a class I (Rhodopsin-like) G-proteincoupled receptor (GPCR) <ns0:ref type='bibr' target='#b77'>[74]</ns0:ref>. Currently, oxytocin is used for labor induction and the treatment of Prader-Willi syndrome, but there is compelling pre-clinical evidence to suggest that it may play a crucial role in the regulation of brain-mediated processes that are highly relevant to many neuropsychiatric disorders <ns0:ref type='bibr' target='#b78'>[75]</ns0:ref>. A number of animal studies have revealed that oxytocin has a positive effect as an antipsychotic <ns0:ref type='bibr' target='#b79'>[76,</ns0:ref><ns0:ref type='bibr' target='#b80'>77]</ns0:ref>, while human trials have revealed that intranasal oxytocin administered to highly symptomatic schizophrenia patients as an adjunct to their antipsychotic drugs improves positive and negative symptoms significantly more than placebo <ns0:ref type='bibr' target='#b81'>[78,</ns0:ref><ns0:ref type='bibr' target='#b82'>79]</ns0:ref>. These therapeutic findings are supported by growing evidence of oxytocin's role in the manifestation of schizophrenia symptoms such as a recent study linking higher plasma oxytocin levels with increased pro-social behavior in schizophrenia patients and with less severe psychopathology in female patients <ns0:ref type='bibr' target='#b83'>[80]</ns0:ref>. The mechanisms underlying oxytocin's therapeutic effects on schizophrenia symptoms are poorly understood, but its ability to regulate mesolimbic dopamine pathways are thought to be responsible <ns0:ref type='bibr' target='#b78'>[75]</ns0:ref>. Here, our method predicts schizophrenia as a novel indication for oxytocin based on its proximity to chlorpromazine, which is currently used to treat schizophrenia (Figure <ns0:ref type='figure'>4</ns0:ref>). Chlorpromazine also modulates the dopamine pathway by acting as an antagonist of the dopamine receptor, another class I GPCR. Interestingly, the subgraph indicates that dopamine also has a high partial correlation coefficient with oxytocin, adding further support to the hypothesis that oxytocin, chlorpromazine and dopamine all act on the same pathway and therefore have similar side-effect profiles. Side-effects shared by oxytocin and chlorpromazine include hallucinations, excessive salivation and anxiety, while shivering, weight gain, abdominal pain, nausea, and constipation are common side-effects also shared by other drugs within the subgraph. Currently, larger scale clinical trials of intranasal oxytocin in schizophrenia are underway. If the early positive results hold up, it may signal the beginning of an new era in the treatment of schizophrenia, a field which has seen little progress in the development of novel efficacious treatments over recent years. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head>Ramelteon</ns0:head><ns0:p>Ramelteon, currently indicated for the treatment of insomnia, is predicted to be useful for the treatment of bipolar depression (Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>). Ramelteon is the first in a new class of sleep agents that selectively binds the MT 1 and MT 2 melatonin receptors in the suprachiasmatic nucleus, with high affinity over the MT 3 receptor <ns0:ref type='bibr' target='#b85'>[82]</ns0:ref>. It is believed that the activity of ramelteon at MT 1 and MT 2 receptors contributes to its sleep-promoting properties, since these receptors are thought to play a crucial role in the maintenance of the circadian rhythm underlying the normal sleep-wake cycle upon binding of endogenous melatonin. Abnormalities in circadian rhythms are prominent features of bipolar I disorder, with evidence suggesting that disrupted sleep-wake circadian rhythms are associated with an increased risk of relapse in bipolar disorder <ns0:ref type='bibr' target='#b86'>[83]</ns0:ref>. As bipolar patients tend to exhibit shorter and more variable circadian activity, it has been proposed that normalisation of the circadian rhythm pattern may improve sleep and consequently lead to a reduction in mood exacerbations. Melatonin receptor agonists such as ramelteon may have a potential therapeutic effect in depression due to their ability to resynchronize the suprachiasmatic nucleus <ns0:ref type='bibr' target='#b87'>[84]</ns0:ref>. In Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>, evidence supporting the repositioning of ramelteon comes from ziprasidone, an atypical antipsychotic used to treat bipolar I disorder and schizophrenia <ns0:ref type='bibr' target='#b88'>[85]</ns0:ref>. Ziprasidone is the second-ranked drug by partial correlation coefficient; a number of other drugs used to treat mood disorders can also be located in the immediate vicinity including phenelzine, a non-selective and irreversible monoamine oxidase inhibitor (MAOI) used as an antidepressant and anxiolytic, milnacipran, a serotonin-norepinephrine reuptake inhibitor used to treat major depressive disorder, and tranylcypromine, another MAOI used as an antidepressant and anxiolytic agent. The co-location of these drugs in the same region of the graph suggests a degree of overlap in their respective mechanistic pathways, resulting in a high degree of similarity between their side-effect profiles. Nodes in this subgraph also have a relatively large degree indicating a tighter association than for other predictions, with common shared side-effects including dry mouth, sexual dysfunction, migraine, and orthostatic hypotension, while weight gain is shared between ramelteon and ziprasidone.</ns0:p></ns0:div>
<ns0:div><ns0:head>Meloxicam</ns0:head><ns0:p>Meloxicam, a nonsteroidal anti-inflammatory drug (NSAID) used to treat arthritis, is predicted to be a repositioning candidate for the treatment of non-Hodgkin lymphoma, Manuscript to be reviewed via the mobilisation of autologous peripheral blood stem cells from bone marrow. By inhibiting cyclooxygenase 2, meloxicam is understood to inhibit generation of prostaglandin E 2 , which is known to stimulate osteoblasts to release osteopontin, a protein which encourages bone resorption by osteoclasts <ns0:ref type='bibr' target='#b89'>[86,</ns0:ref><ns0:ref type='bibr' target='#b90'>87]</ns0:ref>. By inhibiting prostaglandin E 2 and disrupting the production of osteopontin, meloxicam may encourage the departure of stem cells, which otherwise would be anchored to the bone marrow by osteopontin <ns0:ref type='bibr' target='#b91'>[88]</ns0:ref>. In Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>, rituximab, a B-cell depleting monoclonal antibody that is currently indicated for treatment of non-Hodgkin lymphoma, is the top ranked drug by partial correlation, which provides evidence for repositioning to this indication. Interestingly, depletion of Bcells by rituximab has recently been demonstrated to result in decreased bone resorption in patients with rheumatoid arthritis, possibly via a direct effect on both osteoblasts and osteoclasts <ns0:ref type='bibr' target='#b92'>[89,</ns0:ref><ns0:ref type='bibr' target='#b93'>90]</ns0:ref>, suggesting a common mechanism of action between meloxicam and rituximab. Further evidence is provided by the fifth-ranked drug clopidogrel, an antiplatelet agent used to inhibit blood clots in coronary artery disease, peripheral vascular disease, cerebrovascular disease, and to prevent myocardial infarction. Clopidogrel works by irreversibly inhibiting the adenosine diphosphate receptor P2Y12, which is known to increase osteoclast activity <ns0:ref type='bibr'>[91]</ns0:ref>. Similarly to the ramelteon subgraph, many drugs in the vicinity of meloxicam are used to treat inflammation including diclofenac, naproxen (both NSAIDs) and betamethasone, resulting in close association between these drugs, with shared side-effects in the subgraph including pain, cramping, flushing and fever, while swelling, indigestion, inflammation and skin rash are shared by meloxicam and rituximab.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>While the side-effects shared within the subgraphs of our three examples are commonly associated with a large number of drugs, some of the side-effects shared by the three drug pairs such as hallucinations, excessive salivation and anxiety are somewhat less common. To investigate this relationship for the data set as a whole, we calculated log frequencies for all side-effects and compared these values against the normalized average rank of pairs where the side-effect was shared by both the query and target drug. If we assume that a higher ranking in our model indicates a higher likelihood of drugs sharing a protein target, this relationship demonstrates similar properties to the observations of Campillos et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> in that there is a negative correlation between the rank and frequency of a side-effect. 
The correlation coefficient has a value of -0.045 which is significantly different from zero at the 0.001 level, although the linear relationship appears to break down where the frequency of the side-effect is lower than about 0.025.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we have used side-effect data mined from social media to generate a sparse graphical model, with nodes in the resulting graph representing drugs, and edges between them representing the similarity of their side-effect profiles. We demonstrated that known indications can be inferred based on the indications of neighbouring drugs in the network, with 42.41% of targets having their known indication identified amongst the top 5 ranked predictions, while 16.25% of drugs that are currently in a clinical trial have their proposed trial indication correctly identified. These results indicate that the volume and diversity of drug side-effects reported using social media is sufficient to be of use in side-effect-based drug repositioning, and this influence is likely to increase as the audience of platforms such as Twitter continues to see rapid growth. It may also help to address the problem of side-effect under-reporting. We also demonstrate that global statistical models such as the graphical lasso are well-suited to the analysis of large multivariate systems such as drug-drug networks. They offer significant advantages over conventional pairwise similarity methods by incorporating indirect relationships between all variables, while the use of the lasso penalty allows a sparse, parsimonious model to be generated with fewer spurious connections resulting in a simpler theory of relationships.</ns0:p><ns0:p>While our method shows encouraging results, it is more likely to play a role in drug repositioning as a component in an integrated approach. Whether this is achieved by combining reported side-effects with those mined from resources such as SIDER, or by using predictions as the inputs to a supervised learning algorithm, a consensus approach is likely to achieve higher performance by incorporating a range of different data sources in addition to drug side-effects, while also compensating for the weaknesses of any single method <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. Limitations of our method largely stem from the underlying Twitter data <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref>. Only a small fraction of daily tweets contain reports of drug sideeffects, therefore restricting the number of drugs we are able to analyse. However, given that systems such as SoMeDoSEs are capable of continuously monitoring Twitter, the numbers of drugs and reported side-effects should steadily accumulate over time.</ns0:p><ns0:p>To address this, in the future it may be possible to extend monitoring of social media to include additional platforms. For example, Weibo is a Chinese microblogging site akin to Twitter, with over 600 million users as of 2013. Clearly, tools will have to be adapted to deal with multilingual data processing or translation issues, while differences in cultural attitudes to sharing medical information may present further challenges. Extensions to the statistical approach may also result in improved performance. Methods such as the joint graphical lasso allow the generation of a graphical model using data with observations belonging to distinct classes <ns0:ref type='bibr' target='#b95'>[92]</ns0:ref>. For example, two covariances matrices generated using data from Twitter and SIDER could be combined in this way, resulting in a single model that best represents both sources. 
An extension to the graphical lasso also allows the decomposition of the sample covariance graph into smaller connected components via a thresholding approach [93]. This leads not only to large performance gains, but significantly increases the scalability of the graphical lasso approach.</ns0:p><ns0:p>Another caveat to consider, common to many other repositioning strategies based on side-effect similarity, is that there is no evidence to suggest whether a repositioning candidate will be a better therapeutic than the drug from which the novel indication was inferred. While side-effects can provide useful information for inferring novel indications, they are in general undesirable and need to be balanced against any therapeutic benefits. Our model does not attempt to quantify efficacy or side-effect severity, but it might be possible to modify the natural language processing step during Twitter mining in order to capture comparative mentions of side-effects, since tweets discussing both the Manuscript to be reviewed Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>They include 1 / 22 PeerJ</ns0:head><ns0:label>122</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2015:09:6894:1:1:CHECK 18 Dec 2015)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>3 / 22 PeerJ</ns0:head><ns0:label>322</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2015:09:6894:1:1:CHECK 18 Dec 2015)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>4 / 22 PeerJ</ns0:head><ns0:label>422</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2015:09:6894:1:1:CHECK 18 Dec 2015)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Recovery of known indications (a) Percentage at which a co-indicated drug is returned amongst the top 5, 10, 15, 20 and 25 ranked predictions for a given target, at different λ values - the parameter that weights the ℓ1-norm penalty in the graphical lasso (Equation 7) (b) Sparsity of ρ matrix at different λ values, i.e. the number of non-zero elements in the upper triangle divided by (n^2 − n)/2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Recovery of proposed clinical trial indications. Percentage at which a co-indicated drug is returned amongst the top 5, 10, 15, 20 and 25 ranked predictions for a given target, at different λ values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Predicted repositioning of ramelteon (red) for the treatment of bipolar I disorder based on its proximity to ziprasidone (grey). Along with ziprasidone, phenelzine, milnacipran and tranylcypromine are all used to treat mood disorders.</ns0:figDesc><ns0:graphic coords='14,253.65,61.20,266.70,160.20' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Predicted repositioning of meloxicam (red) for the treatment of non-Hodgkin lymphoma based on its proximity to rituximab (grey).</ns0:figDesc><ns0:graphic coords='15,284.38,61.20,205.25,130.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Predicted indications for drugs currently in clinical trials. A selection of drugs which are currently in clinical trials for a new indication, and have a co-indicated drug ('Evidence') ranked amongst the top 5 predictions. 'PCC' is the absolute partial correlation coefficient, 'ID' is the Cortellis Clinical Trials Intelligence identifier. Average PCC scores for co-indicated drugs ranked amongst the top 5, 10, 15, 20 and 25 positions were 0.162, 0.0804, 0.0620, 0.0515, and 0.0468, respectively.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell>Drug</ns0:cell><ns0:cell>Current indication</ns0:cell><ns0:cell>New indication</ns0:cell><ns0:cell>Evidence</ns0:cell><ns0:cell>PCC</ns0:cell><ns0:cell>Rank</ns0:cell><ns0:cell>ID</ns0:cell><ns0:cell>Title</ns0:cell></ns0:row>
<ns0:row><ns0:cell>ramelteon</ns0:cell><ns0:cell>insomnia</ns0:cell><ns0:cell>bipolar I disorder</ns0:cell><ns0:cell>ziprasidone</ns0:cell><ns0:cell>0.197</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>6991</ns0:cell><ns0:cell>Ramelteon for the Treatment of Insomnia and Mood Stability in Patients With Euthymic Bipolar Disorder</ns0:cell></ns0:row>
<ns0:row><ns0:cell>denosumab</ns0:cell><ns0:cell>osteoporosis</ns0:cell><ns0:cell>breast cancer</ns0:cell><ns0:cell>capecitabine</ns0:cell><ns0:cell>0.133</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>85503</ns0:cell><ns0:cell>Pilot Study to Evaluate the Impact of Denosumab on Disseminated Tumor Cells (DTC) in Patients With Early Stage Breast Cancer</ns0:cell></ns0:row>
<ns0:row><ns0:cell>meloxicam</ns0:cell><ns0:cell>inflammation</ns0:cell><ns0:cell>non-Hodgkin lymphoma</ns0:cell><ns0:cell>rituximab</ns0:cell><ns0:cell>0.131</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>176379</ns0:cell><ns0:cell>A Phase II Trial Using Meloxicam Plus Filgrastim in Patients With Multiple Myeloma and Non-Hodgkins Lymphoma</ns0:cell></ns0:row>
<ns0:row><ns0:cell>sulfasalazine</ns0:cell><ns0:cell>rheumatoid arthritis</ns0:cell><ns0:cell>diarrhea</ns0:cell><ns0:cell>loperamide</ns0:cell><ns0:cell>0.106</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>155516</ns0:cell><ns0:cell>Sulfasalazine in Preventing Acute Diarrhea in Patients With Cancer Who Are Undergoing Pelvic Radiation Therapy</ns0:cell></ns0:row>
<ns0:row><ns0:cell>pyridostigmine</ns0:cell><ns0:cell>myasthenia gravis</ns0:cell><ns0:cell>cardiac failure</ns0:cell><ns0:cell>digitoxin</ns0:cell><ns0:cell>0.100</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>190789</ns0:cell><ns0:cell>Safety Study of Pyridostigmine in Heart Failure</ns0:cell></ns0:row>
<ns0:row><ns0:cell>alprazolam</ns0:cell><ns0:cell>anxiety disorder</ns0:cell><ns0:cell>epilepsy</ns0:cell><ns0:cell>clonazepam</ns0:cell><ns0:cell>0.097</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>220920</ns0:cell><ns0:cell>Staccato Alprazolam and EEG Photoparoxysmal Response</ns0:cell></ns0:row>
<ns0:row><ns0:cell>oxytocin</ns0:cell><ns0:cell>Prader-Willi syndrome</ns0:cell><ns0:cell>schizophrenia</ns0:cell><ns0:cell>chlorpromazine</ns0:cell><ns0:cell>0.096</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>163871</ns0:cell><ns0:cell>Antipsychotic Effects of Oxytocin</ns0:cell></ns0:row>
<ns0:row><ns0:cell>interferon alfa</ns0:cell><ns0:cell>leukemia</ns0:cell><ns0:cell>thrombocythemia</ns0:cell><ns0:cell>hydroxyurea</ns0:cell><ns0:cell>0.094</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>73064</ns0:cell><ns0:cell>Pegylated Interferon Alfa-2a Salvage Therapy in High-risk Polycythemia Vera (PV) or Essential Thrombocythemia (ET)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>etomidate</ns0:cell><ns0:cell>general anesthesia</ns0:cell><ns0:cell>depression</ns0:cell><ns0:cell>trazodone</ns0:cell><ns0:cell>0.091</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>157982</ns0:cell><ns0:cell>Comparison of Effects of Propofol and Etomidate on Rate Pressure Product and Oxygen Saturation in Patients Undergoing Electroconvulsive Therapy</ns0:cell></ns0:row>
<ns0:row><ns0:cell>guaifenesin</ns0:cell><ns0:cell>respiratory tract infections</ns0:cell><ns0:cell>rhinitis</ns0:cell><ns0:cell>ipratropium</ns0:cell><ns0:cell>0.090</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>110111</ns0:cell><ns0:cell>The Effect of Oral Guaifenesin on Pediatric Chronic Rhinitis: a Pilot Study</ns0:cell></ns0:row>
</ns0:table><ns0:note>We further investigated three repositioning candidates with interesting pharmacology to understand their predicted results.</ns0:note></ns0:figure>
</ns0:body>
" | "Reviewer Comments
Reviewer 1 (Anonymous)
Basic reporting
The article is well written. There is sufficient background and introduction. The literature is well cited; a few additional references might be appropriate on drug repositioning (21849665, 21849664) and drug-drug interactions (25122524, 22422992).
We thank the reviewer for pointing out these references, they have been added.
Experimental design
I'm not familiar with the scope of the journal but the experimental design is well described and easy to understand.
Validity of the findings
The authors make the following assumption, which is inherently problematic. “drug X causes a distinct profile of side-effects, and this side-effect profile is typical of drugs used to treat a certain disease Y, then drug X should be evaluated for repositioning for the treatment of disease Y”. Repositioning shouldn’t be done on the basis of similar side effects, but side effects “rescuing” a disease phenotype.
We have removed this sentence and replaced it with the following description of phenotypic rescue, which ties in well with the examples we then give:
Repositioned drugs can also be said to “rescue” a disease phenotype, on the basis of their side-effects; for example, drugs which cause hair growth as a side-effect can potentially be repositioned for the treatment of hair loss, while drugs which cause hypotension as a side-effect can be used to treat hypertension [21]. Examples of drugs successfully repositioned based on phenotypic rescue that have made it to market include exenatide...
The abstract has been modified as follows:
Drugs with similar side-effects might share a common mechanism of action linking side-effects with disease treatment, or may serve as a treatment by “rescuing” a disease phenotype on the basis of their side-effects;
Side effects are a negative consequence and if one drug with a certain side effect profile is used to treat a disease, there is no evidence that another drug with a similarly bad side effect profile will be a better therapeutic. This should either be discussed in depth as a limitation of the approach or the study shouldn’t be framed as a drug repositioning study, but a way to identify drugs with similar mechanism.
We agree that this is a significant caveat that should be discussed. We have added the following paragraph to the conclusion:
Another caveat to consider, common to many other repositioning strategies based on side-effect similarity, is that there is no evidence to suggest whether a repositioning candidate will be a better therapeutic than the drug from which the novel indication was inferred. While side-effects can provide useful information for inferring novel indications, they are in general undesirable and need to be balanced against any therapeutic benefits. Our model does not attempt to quantify efficacy or side-effect severity, but it might be possible to modify the natural language processing step during Twitter mining in order to capture comparative mentions of side-effects, since tweets discussing both the therapeutic and side-effects of two related drugs are not uncommon. Incorporating this information into our model may allow a more quantitative assessment of the trade-off between therapeutic and side-effects to be made as part of the prediction.
In addition to mining Twitter for drug information, I would suggest mining it for disease symptoms and try to link drugs and diseases based on a directional inverse association of the side effects and symptoms.
We are currently working on another method which will address the mining of Twitter for disease symptoms and drug efficacy mentions. While this is beyond the scope of the current manuscript, we hope to publish it in the near future.
Comments for the author
The authors describe a novel computational method that uses side-effect data mined from social media to show that known indications are well recovered and current trial indications can also be identified. The work relies on the assumption that drugs with similar side-effects might share a common mechanism of action; therefore it may be possible to infer new indications based on the similarity of side-effect profiles. While the approach of mining social media in the context of drug repositioning is innovative and the computational methods applied are robust, there is an underlying flaw in the study.
In addition to the issue raised above, there are several other concerns:
• Figure 2 is a hairball and doesn’t offer much value, it would be more useful to provide the statistics on the network – how many nodes, significant edges, which is the node with the highest number of connections? What are the strongest relationships? I would also incorporate side effect information here - how many different side effects were included? Which drug has the highest number of side effects?
We have added the following description of the network to the figure legend:
In total, there are 616 connected nodes, with each having an average of 267 neighbours. Painkillers such as paracetamol and ibuprofen have the highest number of connections (587 and 585, respectively), which corresponds to them having the largest number of unique side-effects (256 and 224) reported on Twitter. The strongest connection is between chondroitin and glucosamine (partial correlation coefficient (PCC) 0.628), both of which are dietary supplements used to treat osteoarthritis, closely followed by the antidepressant and anxiolytic agents phenelzine and tranylcypromine (PCC 0.614).
In total there are 2196 unique side-effects – these are mentioned in the methods section on page 4.
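These summary statistics can be read directly off the drug–drug graph once the non-zero partial correlations are known. A brief sketch is given below, assuming a square pandas DataFrame `pcc` of absolute partial correlation coefficients indexed by drug name; the function and variable names are ours, not from the manuscript.

```python
import networkx as nx
import pandas as pd

def network_summary(pcc: pd.DataFrame):
    """pcc: square DataFrame of absolute partial correlation coefficients,
    indexed by drug name, with zeros where no edge was estimated."""
    g = nx.Graph()
    drugs = list(pcc.index)
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            weight = float(pcc.loc[a, b])
            if weight != 0.0:
                g.add_edge(a, b, weight=weight)
    degrees = dict(g.degree())                       # connected nodes only
    avg_degree = sum(degrees.values()) / g.number_of_nodes()
    most_connected = sorted(degrees, key=degrees.get, reverse=True)[:2]
    strongest = max(g.edges(data=True), key=lambda e: e[2]["weight"])
    return g.number_of_nodes(), avg_degree, most_connected, strongest
```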
• Table 1 – how were these chosen? Are these the top 10? Also should include original indication
These were chosen as follows:
Despite this, a number of interesting predictions with a diverse range of novel indications are made that are supported by experimental and clinical evidence; a selection of 10 of the 45 drugs where the trial indication was correctly predicted are presented in Table 1.
We have added the original indications to Table 1.
• It would be interesting to see what are the underlying “tweets/side effects” for each of the drug pairs.
These have been added to the text as follows:
Side-effects shared by oxytocin and chlorpromazine include hallucinations, excessive salivation and anxiety, while shivering, weight gain, abdominal pain, nausea, and constipation are common side-effects also shared by other drugs within the subgraph.
Nodes in this subgraph also have a relatively large degree indicating a tighter association than for other predictions, with common shared side-effects including dry mouth, sexual dysfunction, migraine, and orthostatic hypotension, while weight gain is shared between ramelteon and ziprasidone.
Similarly to the ramelteon subgraph, many drugs in the vicinity of meloxicam are used to treat inflammation including diclofenac, naproxen (both NSAIDs) and betamethasone, resulting in close association between these drugs, with shared side-effects in the subgraph including pain, cramping, flushing and fever, while swelling, indigestion, inflammation and skin rash are shared by meloxicam and rituximab.
• How were the three candidates chosen to discuss further? Maybe list in Table 1 and expand Figure 4?
These were chosen to demonstrate the diversity of indications and due to their interesting pharmacology. The text has been updated as follows:
...a number of interesting predictions with a diverse range of novel indications are made that are supported by experimental and clinical evidence; a selection of 10 of the 45 drugs where the trial indication was correctly predicted are presented in Table 1. We further investigated three repositioning candidates with interesting pharmacology to understand their predicted results.
• The dataset used for validation, namely “Cortellis Clinical Trials Intelligence”, is not a resource available to those who would like to replicate the findings. At least for the compounds investigated here, that data should be provided as a supplement.
We have now included these PDFs in supplementary info.
While the method shows encouraging results, I agree with the authors that it is more likely to play a role in drug repositioning as a component in an integrated approach, potentially with databases like SIDER and molecular predictions.
We have added a section in which we use SIDER data to generate predictions. The results indicate that better performance is possible using SIDER data. Interestingly, a number of predictions are unique to the Twitter-based model, which supports the use of side-effect data mined from social media to augment conventional resources:
To further investigate the influence of the provenance of the data, we mapped our data set of drugs to ChEMBL identifiers [64, 65] which we then used to query SIDER for side-effects extracted from drug labels. This resulted in a reduced data set of 229 drugs, in part due to the absence of many combination drugs from SIDER (e.g. the antidepressant Symbyax which contains olanzapine and fluoxetine). Using the same protocol described above, best performance of 53.67% (117/229) was achieved with a slightly higher λ value of 10^−6. Best performance on the same data set using side-effects derived from Twitter was 38.43% (88/229), again using a λ value of 10^−9, while the randomized strategy achieved 12.05% (standard error ±1.14), indicating that the use of higher quality side-effect data from SIDER allows the model to achieve better performance than is possible using Twitter data. Perhaps more interestingly, combining the correct predictions between the two data sources reveals that 30 are unique to the Twitter model, 59 are unique to the SIDER model, with 58 shared, supporting the use of side-effect data mined from social media to augment conventional resources.
Reviewer 2 (Anonymous)
Basic reporting
The INTRODUCTION section provides a good overview of drug repurposing and the different pharmacological aspects used by different researchers, including side-effects similarities. However, in my opinion there is information that should be described in detail in this section:
• Page 2: The authors mention that existing repositioning methods based on side-effects have used data from the SIDER database and other similar resources, but they do not reference the corresponding papers.
Thank you for pointing this out. We have updated the manuscript to include this information:
Existing repositioning methods based on side-effects, such as the work of Campillos et al. [20] and Yang and Agarwal [21], have used data from the SIDER database [37]
• As an alternative to the lack of completeness of these traditional resources, they propose using side-effects mined from social media. However, the only limitation described by the authors is that only a small fraction of daily tweets contain reports of drug side-effects, restricting the number of drugs analysed in the study. However, text mining of side effects is still an open research area confronting several challenges such as use of idiomatic expressions, spelling errors, use of ambiguous terms, or exaggerated information that might provide false positives, among others (Leaman et al., 2010; Sampathkumar, Chen, & Luo, 2014; Segura-Bedmar, Martínez, Revert, & Moreno-Schneider, 2015).
We have now added details of some of these challenges, and have included a number of additional references:
Despite this success, a number of significant natural language processing challenges remain. These include dealing with idiomatic expressions, linguistic creativity, ambiguous terminology, spelling errors, word shortenings, and distinguishing between the symptoms that a drug is treating and the side-effects it causes. Some of the solutions proposed to deal with these issues include the use of specialist lexicons, appropriate use of semantic analysis, and improvements to approximate string matching, modeling of spelling errors, and contextual analysis surrounding the mentions of side-effects [47,48], while maintaining a list of symptoms for which a drug is prescribed can help to eliminate them from the list of side-effects identified [49].
• In addition to this, side-effects were collected from social media during a 6 months period. In my opinion, it is probable that the number of side effects reported for a drug in social media during that period of time would be smaller than the number of side-effects collected in traditional resources − such as, for example, the Summary of Product Characteristics. If this is the case, using data from social media would not address the limitation of traditional resources – i.e., their lack of completeness and difficulty to be updated with new side-effects information.
While we agree that the number of side effects collected in 6 months is lower than traditional sources, we envisage this type of system running continuously and gradually accumulating side-effects to the point at which the numbers are comparable to traditional resources. We have updated the manuscript as follows:
Over time, the accumulation of side-effect data from social media may become comparable to traditional resources, and at the very least should be sufficient to augment existing databases. Additionally, the cost of running such a system continuously is relatively cheap compared to existing pharmacovigilance monitoring, presenting a compelling economic argument supporting the use of social media for such purposes. Furthermore, the issues related to under-representation described above may be addressed.
The use of social media may also address a number of biases (the issues of under-representation just mentioned) that have been observed in clinical studies. The following text has also been added:
A number of cancer drug studies have also observed that women are often significantly under-represented in clinical trials, making it difficult to study the efficacy, dosing and side-effects of treatments which can work differently in women and men; similar problems of under-representation also affect paediatrics, as many drugs are only ever tested on adults [46].
• (Page 6) In RESULTS AND DISCUSSION section it is asserted that “our method should provide a viable alternative to existing approaches”. However, these other approaches are not discussed in the paper. Moreover, their current limitations, and how they could be overcome by the method proposed by the authors, are not described.
We have toned-down this language and replaced it as follows, as we see our approach as a way of augmenting existing approaches rather than replacing them:
If, by exploiting hindsight, we can recover these, then our method should provide a viable strategy with which to augment existing approaches that adopt an integrated approach to drug repositioning [19].
The limitations we refer to relate to the issues associated with side-effect under-reporting, limitations due to trial scale, biases due to pharmacogenomic effects, under-representation of women, and similar problems affecting paediatric studies. These issues are discussed at the end of page 2.
• (Page 2) The authors describe the hypothesis that “drugs sharing a significant number of side-effects might share a common mechanism of action linking side-effects with disease treatment”. However, the examples they use to illustrate this (exenatide, minoxidil and sildenafil) are not based on this hypothesis (i.e., drugs sharing a similar side-effects profile could share a common indication, which is not necessarily related to any of the side-effects), but in the identification of a potential indication directly related to one specific side effect. I would recommend to point this out, or to illustrate these examples in a different part of the text to avoid confusion.
Thank you for pointing this out. We have added a section discussing phenotypic rescue which these examples illustrate:
Repositioned drugs can also be said to “rescue” a disease phenotype, on the basis of their side-effects. For example, drugs which cause hypotension as a side-effect can potentially be repositioned for the treatment of hypertension [21], while the immunosuppressant drug rapamycin appears to be a promising treatment for elastin haploinsufficiency observed in the neurodevelopmental disorder Williams-Beuren syndrome, due to its antiproliferative properties [31]. Examples of drugs successfully repositioned based on phenotypic rescue that have made it to market include exenatide...
Leaman, R., Wojtulewicz, L., Sullivan, R., Skariah, A., Yang, J., & Gonzalez, G. (2010). Towards Internet-Age Pharmacovigilance : Extracting Adverse Drug Reactions from User Posts to Health-Related Social Networks. In Association for Computational Linguistics (Ed.), Proceedings of the 2010 Workshop on Biomedical Natural Language Processing. BioNLP ’10 (pp. 117–125). Stroudsburg, PA, USA.
Sampathkumar, H., Chen, X., & Luo, B. (2014). Mining Adverse Drug Reactions from online healthcare forums using Hidden Markov Model. BMC Medical Informatics & Decision Making, 14(91),1–18.
Segura-Bedmar, I., Martínez, P., Revert, R., & Moreno-Schneider, J. (2015). Exploring Spanish health social media for detecting drug effects. BMC Medical Informatics and Decision Making, 15(Suppl 2), S6.
These references have all been added.
Experimental design
My main concern is that the paper describing the methods followed to obtain drug side-effects from Twitter is described in a paper that is not published, but under preparation (reference [45] in the manuscript). I suggest that this document should be provided to the reviewers as supplementary material for this review. Otherwise, a better description of the methods − especially the evaluation and limitations − should be provided in this paper.
We have now included this manuscript in supplementary data for the purposes of this review. We intend to submit this work to a conference in January 2016. In addition, we have expanded the description of this approach in the methods section as follows:
For each gazetteer, three features were created: a binary feature, which is set to 1 if a tweet contains at least one sequence of tokens matching an entry from the gazetteer, the number of tokens matching entries from the gazetteer, and the fraction of characters in tokens matching entries from the gazetteer. For side-effect synonyms we used the Consumer Health Vocabulary (CHV) [63], which maps phrases to UMLS concept universal identifiers (CUI) and partially addresses the issue of misspellings and informal language in tweets. The matched CUIs were also used as additional features.
To develop the system, 10,000 tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not. Using a split of 8,000 tweets for training, 1,000 for development, and 1,000 for testing, the SVM classifier that used all the features achieved precision of 55.0%, recall of 66.9%, and F1 score of 60.4% when evaluated using the 1,000 test tweets. This is statistically significantly higher than the results achieved by a linear SVM classifier using only unigrams and bigrams as features (precision of 56.0%, recall of 54.0% and F1 score of 54.9%). One of the sources of false negatives was the use of colloquial and indirect expressions by Twitter users to express that they have experienced a side-effect. We also observed that a number of false positives discuss the efficacy of drugs rather than side-effects.
In addition to this, there is one possible bias in the experiment “RECOVERING KNOWN INDICATIONS” that is not discussed by the authors. Drugs belonging to the same group of drugs (e.g., statins) could have similar side-effect profiles. Therefore, it would be expected that two drugs, such as simvastatin and lovastatin, would be ranked as similar, and therefore the main indication would be correctly “predicted” through this method. The authors should explain whether this could have influenced these results, or how they handled this problem.
This is a very valid point which we have addressed in the manuscript. The text has been updated as follows:
We also investigated whether our results were biased by the over-representation of particular drug classes within our data set. Using Cortellis Clinical Trials Intelligence, we were able to identify the broad class for 479 drugs (77.26%) in our data set. The five largest classes were benzodiazepine receptor agonists (3/14 drugs returned amongst the top 5 ranked predictions), analgesics (6/12), H1-antihistamines (8/11), cyclooxygenase inhibitors (9/11), and anti-cancer (2/11). This indicates that the over-representation of H1-antihistamines and cyclooxygenase inhibitors did result in a bias, and to a lesser extent analgesics, but that the overall effect of these five classes was more subtle (28/59 returned amongst the top 5 ranked predictions, 47.46%).
There were 3 statins (HMG CoA reductase inhibitors) in the data set but none had their indication correctly recovered.
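The class-level check described above reduces to grouping drugs by class and counting how many had a co-indicated drug among the top 5 predictions. A minimal sketch, assuming two dictionaries mapping each drug to its class and to the best rank of a co-indicated drug (the names are illustrative, not from the manuscript):

```python
from collections import defaultdict

def class_recovery_rates(drug_class, best_rank, top_n=5):
    """drug_class: {drug: broad class label};
    best_rank: {drug: rank of the first co-indicated drug, or None}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for drug, cls in drug_class.items():
        totals[cls] += 1
        rank = best_rank.get(drug)
        if rank is not None and rank <= top_n:
            hits[cls] += 1
    # e.g. {'H1-antihistamines': (8, 11), 'analgesics': (6, 12), ...}
    return {cls: (hits[cls], totals[cls]) for cls in totals}
```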
Validity of the findings
The authors say that “While data sets and underlying statistical models clearly differ, these results taken together suggest that the use of side-effect data mined from social media can certainly offer comparable performance to methods using side-effect data extracted from more conventional resources” (page 6). However, this conclusion cannot be extracted from the present study. It is possible that the method described by the authors could obtain better results using side-effect data from the same resources as other researchers (e.g., Ye et al.). The influence of the provenance of the data on the results obtained with this new method has not been evaluated.
This is an interesting point which we have addressed in the manuscript. The text has been updated as follows:
To further investigate the influence of the provenance of the data, we mapped our data set of drugs to ChEMBL identifiers [64, 65] which we then used to query SIDER for side-effects extracted from drug labels. This resulted in a reduced data set of 229 drugs, in part due to the absence of many combination drugs from SIDER (e.g. the antidepressant Symbyax which contains olanzapine and fluoxetine). Using the same protocol described above, best performance of 53.67% (117/229) was achieved with a slightly higher λ value of 10^−6 . Best performance on the same data set using side-effects derived from Twitter was 38.43% (88/229), again using a λ value of 10^−9 , while the randomized strategy achieved 12.05% (standard error ±1.14), indicating that the use of higher quality side-effect data from SIDER allows the model to achieve better performance than is possible using Twitter data. Perhaps more interestingly, combining the correct predictions between the two datasources reveals that 30 are unique to the Twitter model, 59 are unique to the SIDER model, with 58 shared, supporting the use side-effect data mined from social media to augment conventional resources.
Common side-effects shared by drugs within the subgraphs in the three examples (oxytocin, ramelteon and meloxicam) are side-effects commonly associated with a large number of drugs. However, other authors have observed an inverse correlation between side-effect frequency and the likelihood that two drugs share a protein target (Campillos, Kuhn, Gavin, Jensen, & Bork, 2008). In my opinion, this is an interesting observation that should be discussed in the paper.
We have addressed this observation in the manuscript as follows:
While the side-effects shared within the subgraphs of our three examples are commonly associated with a large number of drugs, some of the side-effects shared by the three drug pairs such as hallucinations, excessive salivation and anxiety are somewhat less common. To investigate this relationship for the data set as a whole, we calculated log frequencies for all side-effects and compared these values against the normalized average rank of pairs where the side-effect was shared by both the query and target drug. If we assume that a higher ranking in our model indicates a higher likelihood of drugs sharing a protein target, this relationship demonstrates similar properties to the observations of Campillos et al. in that there is a negative correlation between the rank and frequency of a side-effect. The correlation coefficient has a value of -0.045 which is significantly different from zero at the 0.001 level, although the linear relationship appears to break down where the frequency of the side-effect is lower than about 0.025.
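The relationship reported above can be computed as a correlation between log side-effect frequency and the normalized average rank of drug pairs sharing that side-effect. A short sketch follows, assuming `freq` and `avg_rank` are equal-length arrays with one entry per side-effect; Pearson correlation is used here as a stand-in, since the manuscript does not specify the exact estimator.

```python
import numpy as np
from scipy import stats

def frequency_rank_correlation(freq, avg_rank):
    """freq: fraction of drugs reporting each side-effect;
    avg_rank: normalized average rank of drug pairs sharing that side-effect."""
    log_freq = np.log(np.asarray(freq, dtype=float))
    r, p_value = stats.pearsonr(log_freq, np.asarray(avg_rank, dtype=float))
    return r, p_value
```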
Limitations of the study refer mainly to a paper that is not published. Therefore, a better description of the limitations should be provided.
As discussed above, we have now included the unpublished manuscript in supplementary data for the review in addition to significantly expanding the description of it in the methods section.
Campillos, M., Kuhn, M., Gavin, A., Jensen, L. J., & Bork, P. (2008). Drug Target Identification Using Side-Effect Similarity. Science, 321(5886), 263–266.
Comments for the author
All my comments are covered by the 3 areas above.
Reviewer 3 (Anonymous)
Basic reporting
The article is generally well-written but there are some typographical mistakes as well as writing style problems that I have listed in the comments for the authors.
Experimental design
The research explained in the paper is of value and useful. However, experiments and evaluations should be clarified by the authors as explained in the comments to the authors.
Validity of the findings
The data used here is Twitter data, which cannot be shared (only Tweet IDs can be shared). Even so, the authors make no comment on whether they are sharing those to make their experiments repeatable by other researchers.
We have included the Twitter data containing the raw drugs and side-effects extracted in supplementary data.
The evaluations explained in the paper have some ambiguity which has to be cleared before any decision is made on the publication of this work.
Comments for the author
The paper investigates the usability of social media (Twitter) for repositioning of the drugs based on their side effects. The motivation is clear and the problem itself is interesting and worth pursuing. The paper itself basically uses an existing method with limited modification to a new problem and does not propose a novel algorithm/method. Novelty is mostly the application. One main issue here is that the evaluations are weak and mostly based on anecdotes. How this system is useful in practice is something that the presented evaluations are not showing much. Below are more detailed comments:
- The paper hardly touches on the data it uses and its main issue arising from the nature of tweets: data quality. It is assumed that the tweets it gets (with the given criteria) are really talking about side effects. However, they could easily be misleading. Also, how did you deal with drug misspellings and the informal language of the tweets? The conclusions discuss the limitations of Twitter data but not this issue. I'd like to see this issue clearly analyzed or discussed in this work. Also, it would help if a Materials or Data section came before or right after Methods.
We have modified the manuscript as follows, to include a discussion on misspellings, informal language, and data quality. We mention some of the sources of false positives and negatives.
For each gazetteer, three features were created: a binary feature, which is set to 1 if a tweet contains at least one sequence of tokens matching an entry from the gazetteer, the number of tokens matching entries from the gazetteer, and the fraction of characters in tokens matching entries from the gazetteer. For side-effect synonyms we used the Consumer Health Vocabulary (CHV) [63], which maps phrases to UMLS concept universal identifiers (CUI) and partially addresses the issue of misspellings and informal language in tweets. The matched CUIs were also used as additional features.
To develop the system, 10,000 tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not. Using a split of 8,000 tweets for training, 1,000 for development, and 1,000 for testing, the SVM classifier that used all the features achieved precision of 55.0%, recall of 66.9%, and F1 score of 60.4% when evaluated using the 1,000 test tweets. This is statistically significantly higher than the results achieved by a linear SVM classifier using only unigrams and bigrams as features (precision of 56.0%, recall of 54.0% and F1 score of 54.9%). One of the sources of false negatives was the use of colloquial and indirect expressions by Twitter users to express that they have experienced a side-effect. We also observed that a number of false positives discuss the efficacy of drugs rather than side-effects.
We follow this paragraph with a data section discussing our Twitter data set.
- Page 2, lines 75-85: A number of references to the 'recent' work in extracting adverse reactions from social media/medical forums are given. These studies are no longer recent, and in fact many more studies have been published in 2015 alone. I'd suggest the authors have a look at more advanced work than a 2010 workshop paper. You can look at a 2015 survey published in ACM Computing Surveys for some of the later work (Text and Data Mining Techniques in Adverse Drug Reaction Detection/the social media section).
We have updated the manuscript with the following section discussing a number of more recent publications:
Freifeld et al. [53] presented a comparison study between drug side-effects found on Twitter and adverse events reported in FDA Adverse Event Reporting System (FAERS). Starting with 6.9 million tweets, they used a set of 23 drug names and a list of symptoms to reduce that data to a subset of 60,000 tweets. After manual examination, there were 4,401 tweets identified as mentioning a side-effect, with a Spearman rank correlation found to be 0.75. Nikfarjam et al. [54] introduce a method based on Conditional Random Fields (CRF) to tag mentions of drug side-effects in social media posts from Twitter or the online health community DailyStrength. They use features based on the context of tokens, a lexicon of adverse drug reactions, Part-Of-Speech (POS) tags and a feature indicating whether a token is negated or not. They also used embedding clusters learned with Word2Vec [55]. They reported an F1 score of 82.1% for data from DailyStrength and 72.1% for Twitter data. Sarker and Gonzalez [56] developed classifiers to detect side-effects using training data from multiple sources, including tweets [57], DailyStrength, and a corpus of adverse drug events obtained from medical case reports. They reported an F1 score of 59.7% when training a Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel on all three datasets. Recently, Karimi et al. [58] presented a survey of the field of surveillance for adverse drug events with automatic text and data mining techniques.
- Page 3, line 90: you cannot cite a work that is in preparation. If you want to use its material you either have to include it in this paper so it gets reviewed here or wait till your other paper gets published.
We contacted PeerJ (Pete Binfield) and have been advised that we can cite this work as unpublished data. We have included it in supplementary data for the purposes of the review, but we intend to submit this work to a conference in January 2016. We have expanded our discussion of the work in the methods section, as discussed above.
- page 3, line 113: how is clinical trial information used as a feature for the SVM?
We used a number of gazetteer lists, one of which contained terms related to clinical trials. All matches to these gazetteer lists were encoded using the following approach, which has been added to the manuscript:
...and matches to gazetteers related to human body parts, side-effect synonyms, side-effect symptoms, causality indicators, clinical trials, medical professional roles, side effect-triggers and drugs.
For each gazetteer, three features were created: a binary feature, which is set to 1 if a tweet contains at least one sequence of tokens matching an entry from the gazetteer, the number of tokens matching entries from the gazetteer, and the fraction of characters in tokens matching entries from the gazetteer. For side-effect synonyms we used the Consumer Health Vocabulary (CHV) [63], which maps phrases to Unified Medical Language System concept universal identifiers (CUI) and partially addresses the issue of misspellings and informal language in tweets. The matched CUIs were also used as additional features.
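In code, the three per-gazetteer features described above are straightforward to derive from a tokenized tweet. The simplified sketch below handles only single-token gazetteer entries; the real system also matches multi-token sequences and adds CHV concept identifiers as features, and the gazetteer contents shown are hypothetical.

```python
def gazetteer_features(tokens, gazetteer):
    """tokens: lower-cased tokens of one tweet; gazetteer: set of entries,
    e.g. terms related to clinical trials."""
    matched = [t for t in tokens if t in gazetteer]
    has_match = 1 if matched else 0
    n_matched_tokens = len(matched)
    char_fraction = (sum(len(t) for t in matched) /
                     max(1, sum(len(t) for t in tokens)))
    return [has_match, n_matched_tokens, char_fraction]

# Example: the three features for a hypothetical clinical-trials gazetteer.
clinical_trials = {"trial", "placebo", "randomised", "randomized"}
print(gazetteer_features("the placebo made me dizzy".split(), clinical_trials))
```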
- page 3: what was the training data for the SVM classifier?
We have added the following section that describe the training data for the SVM classifier:
To develop the system, 10,000 tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not. Using a split of 8,000 tweets for training, 1,000 for development, and 1,000 for testing, the SVM classifier that used all the features achieved precision of 55.0%, recall of 66.9%, and F1 score of 60.4% when evaluated using the 1,000 test tweets. This is statistically significantly higher than the results achieved by a linear SVM classifier using only unigrams and bigrams as features (precision of 56.0%, recall of 54.0% and F1 score of 54.9%). One of the sources of false negatives was the use of colloquial and indirect expressions by Twitter users to express that they have experienced a side-effect. We also observed that a number of false positives discuss the efficacy of drugs rather than side-effects.
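The training regime described above corresponds to a standard supervised text-classification setup. An illustrative scikit-learn sketch of the unigram/bigram baseline is shown below; it is not the authors' exact pipeline, the full system concatenates the additional gazetteer, POS, sentiment and surface features, and the development split is used only for hyper-parameter tuning.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.svm import LinearSVC

def train_and_evaluate(tweets, labels):
    """tweets: 10,000 annotated strings; labels: 1 = mentions a side-effect.
    Uses the 8,000/1,000/1,000 train/development/test split described above."""
    train_x, dev_x, test_x = tweets[:8000], tweets[8000:9000], tweets[9000:]
    train_y, dev_y, test_y = labels[:8000], labels[8000:9000], labels[9000:]

    vectorizer = CountVectorizer(ngram_range=(1, 2))   # unigrams and bigrams
    clf = LinearSVC(C=1.0)   # C would be tuned on dev_x/dev_y (omitted here)
    clf.fit(vectorizer.fit_transform(train_x), train_y)

    predictions = clf.predict(vectorizer.transform(test_x))
    precision, recall, f1, _ = precision_recall_fscore_support(
        test_y, predictions, average="binary")
    return precision, recall, f1
```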
- Page 3: how was normalization to World Drug bank done? This step is important if done automatically, as a large error rate is often associated with it.
We used exact string matching to resolve common drug names using World Drug Index. This worked for approximately half of the data set; the remainder were matched manually. This is obviously a time-consuming process but is essential to minimise this error rate.
- page 6: Figure 1(a): How are the frequencies measured such that they are all less than one? Not sure if that figure is correct given the explanation (top-n, etc.). This needs to be clarified.
We have changed this to a percentage and updated the figure legends and text accordingly. It is the number of drugs that are correctly ranked in the top N positions, divided by 595, multiplied by 100.
- In general it would be nicer to have figures and tables top- or bottom-aligned on a page.
We have modified the manuscript and all figures are now aligned at the top of the page.
- Page 6, line 223: what does 'extracted from [32-35]' mean? Why not state exactly what is meant here rather than ambiguously referring to others' work and expecting your reader to go and read those works to understand yours.
These sources were mentioned in the introduction so we just used the references here for brevity. We have updated the sentence as follows:
...extracted from the SIDER database, Meyler’s Side Effects of Drugs, Side Effects of Drugs Annual, and the Drugs@FDA database [37–40]
- A discussion is needed on what the performance/effectiveness values mean for the experiment on pages 6-7. If we have a system that finds some correct indications but many more false ones, how is this helpful in a realistic setting? That is, what human workload would you expect to make use of this?
Computational drug repositioning methods such as ours aim to provide a means of prioritising candidate drugs. We envisage them being used by an analyst who would draw on multiple sources of information, with ranking data derived from side-effects contributing to the decision to perform a (potentially expensive) clinical study to assess the drug for use in the predicted indication. As mentioned on page 9 (line 332), more than half of drugs have a co-indicated drug identified in the top 10 positions, a level of performance which we believe should be useful in significantly reducing the workload of an analyst and increasing the likelihood of prioritising the most promising candidates:
...the majority of targets (56.02%) will have a co-indicated drug identified by considering only the top 10 predictions, suggesting the method is a feasible strategy for prioritisation of repositioning candidates.
- Page 8: what does the value of PCC tell us? What is the range, and what values are good or bad?
The PCC is used only for ranking and as such cannot be interpreted as good or bad. As a general guide to their range, the average PCC scores for co-indicated drugs ranked amongst the top 5, 10, 15, 20 and 25 positions were 0.162, 0.0804, 0.0620, 0.0515, and 0.0468, respectively. We have added this information to Table 1's legend.
Minor issues:
some proof-reading is required, for example:
- page 6: remove --> removing
- Page 7: know indications --> known indications
These have been corrected, many thanks for pointing them out.
- Page 7: Figure 2 is not showing much and can be removed. The colors are also not helpful.
At the request of reviewer 1 we have added some more detail describing the network in the figure legend as follows:
In total, there are 616 nodes, with each having an average of 267 neighbours. Painkillers such as paracetamol and ibuprofen have the highest number of connections (587 and 585, respectively), which corresponds to them having the largest number of unique side-effects (256 and 224) reported on Twitter. The strongest connection is between chondroitin and glucosamine (partial correlation coefficient (PCC) 0.628), both of which are dietary supplements used to treat osteoarthritis, closely followed by the antidepressant and anxiolytic agents phenelzine and tranylcypromine (PCC 0.614).
We feel this justifies the inclusion of the figure. We have updated the colours and used orange instead of yellow for clarity.
" | Here is a paper. Please give your review comments after reading it. |
39 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Drug repositioning methods attempt to identify novel therapeutic indications for marketed drugs. Strategies include the use of side-effects to assign new disease indications, based on the premise that both therapeutic effects and side-effects are measurable physiological changes resulting from drug intervention. Drugs with similar side-effects might share a common mechanism of action linking side-effects with disease treatment, or may serve as a treatment by 'rescuing' a disease phenotype on the basis of their side-effects; therefore it may be possible to infer new indications based on the similarity of side-effect profiles.</ns0:p><ns0:p>While existing methods leverage side-effect data from clinical studies and drug labels, evidence suggests this information is often incomplete due to under-reporting. Here, we describe a novel computational method that uses side-effect data mined from social media to generate a sparse undirected graphical model using inverse covariance estimation with l 1 -norm regularization. Results show that known indications are well recovered while current trial indications can also be identified, suggesting that sparse graphical models generated using side-effect data mined from social media may be useful for computational drug repositioning.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Drug repositioning is the process of identifying novel therapeutic indications for marketed drugs. Compared to traditional drug development, repositioned drugs have the advantage of decreased development time and costs given that significant pharmacokinetic, toxicology and safety data will have already been accumulated, drastically reducing the risk of attrition during clinical trials. In addition to marketed drugs, it is estimated that drug libraries may contain upwards of 2000 failed drugs that have the potential to be repositioned, with this number increasing at a rate of 150-200 compounds per year <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Repositioning of marketed or failed drugs has opened up new sources of revenue for pharmaceutical companies with estimates suggesting the market could generate multi-billion dollar annual sales in coming years <ns0:ref type='bibr' target='#b1'>[2,</ns0:ref><ns0:ref type='bibr' target='#b2'>3]</ns0:ref>. While many of the current successes of drug repositioning have come about through serendipitous clinical observations, systematic data-driven approaches are now showing increasing promise given their ability to generate repositioning hypotheses for multiple drugs and diseases simultaneously using a wide range of data sources, while also incorporating prioritisation information to further accelerate development time <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Existing computational repositioning strategies generally use similar approaches but attempt to link different concepts. They include 1/22 PeerJ Comput. Sci. reviewing PDF | (CS-2015:09:6894:2:0:NEW 18 Jan 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science the use of transcriptomics methods which compare drug response gene-expression with disease gene-expression signatures <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref><ns0:ref type='bibr' target='#b5'>[6]</ns0:ref><ns0:ref type='bibr' target='#b6'>[7]</ns0:ref><ns0:ref type='bibr' target='#b7'>[8]</ns0:ref><ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, genetics-based methods which connect a known drug target with a genetically associated phenotype <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref><ns0:ref type='bibr' target='#b10'>[11]</ns0:ref><ns0:ref type='bibr' target='#b11'>[12]</ns0:ref><ns0:ref type='bibr' target='#b13'>[13]</ns0:ref><ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>, network-based methods which link drugs or diseases in a network based on shared features <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref><ns0:ref type='bibr' target='#b16'>[16]</ns0:ref><ns0:ref type='bibr' target='#b17'>[17]</ns0:ref><ns0:ref type='bibr' target='#b18'>[18]</ns0:ref><ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>, and methods that use side-effect similarity to infer novel indications <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref><ns0:ref type='bibr' target='#b21'>[21]</ns0:ref><ns0:ref type='bibr' target='#b22'>[22]</ns0:ref><ns0:ref type='bibr' target='#b23'>[23]</ns0:ref><ns0:ref type='bibr' target='#b24'>[24]</ns0:ref><ns0:ref type='bibr' target='#b25'>[25]</ns0:ref><ns0:ref type='bibr' target='#b26'>[26]</ns0:ref><ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>.</ns0:p><ns0:p>Drug side-effects can be attributed to a number of molecular interactions including on or off-target binding, drug-drug interactions <ns0:ref type='bibr' target='#b28'>[28,</ns0:ref><ns0:ref type='bibr' target='#b30'>29]</ns0:ref>, dose-dependent pharmacokinetics, metabolic activities, 
downstream pathway perturbations, aggregation effects, and irreversible target binding <ns0:ref type='bibr' target='#b20'>[20,</ns0:ref><ns0:ref type='bibr' target='#b31'>30]</ns0:ref>. While side-effects are considered the unintended consequence of drug intervention, they can provide valuable insight into the physiological changes caused by the drug that are difficult to predict using pre-clinical or animal models. This relationship between drugs and side-effects has been exploited and used to identify shared target proteins between chemically dissimilar drugs, allowing new indications to be inferred based on the similarity of side-effect profiles <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>. One rationale behind this and related approaches is that drugs sharing a significant number of side-effects might share a common mechanism of action linking side-effects with disease treatment -side-effects essentially become a phenotypic biomarker for a particular disease <ns0:ref type='bibr' target='#b21'>[21,</ns0:ref><ns0:ref type='bibr' target='#b25'>25]</ns0:ref>. Repositioned drugs can also be said to 'rescue' a disease phenotype, on the basis of their side-effects; for example, drugs which cause hair growth as a side-effect can potentially be repositioned for the treatment of hair loss, while drugs which cause hypotension as a side-effect can be used to treat hypertension <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. Examples of drugs successfully repositioned based on phenotypic rescue that have made it to market include exenatide, which was shown to cause significant weight loss as a side-effect of type 2 diabetes treatment, leading to a trial of its therapeutic effect in non-diabetic obese subjects <ns0:ref type='bibr' target='#b32'>[31,</ns0:ref><ns0:ref type='bibr' target='#b33'>32]</ns0:ref>, minoxidil which was originally developed for hypertension but found to cause hair growth as a side-effect, leading to its repositioning for the treatment of hair loss and androgenetic alopecia <ns0:ref type='bibr' target='#b34'>[33,</ns0:ref><ns0:ref type='bibr' target='#b35'>34]</ns0:ref>, and, perhaps most famously, sildenafil citrate which was repositioned while being studied for the primary indication of angina to the treatment of erectile dysfunction <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>.</ns0:p><ns0:p>Existing repositioning methods based on side-effects, such as the work of Campillos et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> and Yang and Agarwal <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>, have used data from the SIDER database <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref>, which contains side-effect data extracted from drug labels, largely collected from clinical trials during the pre-marketing phase of drug development. Other resources include Meyler's Side Effects of Drugs <ns0:ref type='bibr' target='#b38'>[37]</ns0:ref>, which is updated annually in the Side Effects of Drugs Annual <ns0:ref type='bibr' target='#b39'>[38]</ns0:ref>, and the Drugs@FDA database <ns0:ref type='bibr' target='#b40'>[39]</ns0:ref>, while pharmacovigilance authorities attempt to detect, assess and monitor reported drug side-effects post-market. 
Despite regular updates to these resources and voluntary reporting systems, there is evidence to suggest that side-effects are substantially under-reported, with some estimates indicating that up to 86% of adverse drug reactions go unreported for reasons that include lack of incentives, indifference, complacency, workload and lack of training among healthcare professionals <ns0:ref type='bibr' target='#b41'>[40]</ns0:ref><ns0:ref type='bibr' target='#b42'>[41]</ns0:ref><ns0:ref type='bibr' target='#b43'>[42]</ns0:ref><ns0:ref type='bibr' target='#b44'>[43]</ns0:ref>. Side-effects reported from clinical trials also have limitations due to constraints on scale and time, as well as pharmacogenomic effects <ns0:ref type='bibr' target='#b45'>[44]</ns0:ref>. A number of cancer drug studies have also observed that women are often significantly under-represented in clinical trials, making it difficult to study the efficacy, dosing and side-effects of treatments which can work differently in women and men; similar problems of under-representation also affect paediatrics, as many drugs are only ever tested on adults <ns0:ref type='bibr' target='#b46'>[45]</ns0:ref>.</ns0:p><ns0:p>between the frequency of side-effects extracted from unlabelled data and the frequency of documented adverse drug reactions <ns0:ref type='bibr' target='#b48'>[46]</ns0:ref>. Despite this success, a number of significant natural language processing challenges remain. These include dealing with idiomatic expressions, linguistic variability of expression and creativity, ambiguous terminology, spelling errors, word shortenings, and distinguishing between the symptoms that a drug is treating and the side-effects it causes. Some of the solutions proposed to deal with these issues include the use of specialist lexicons, appropriate use of semantic analysis, and improvements to approximate string matching, modeling of spelling errors, and contextual analysis surrounding the mentions of side-effects <ns0:ref type='bibr' target='#b48'>[46,</ns0:ref><ns0:ref type='bibr' target='#b49'>47]</ns0:ref>, while maintaining a list of symptoms for which a drug is prescribed can help to eliminate them from the list of side-effects identified <ns0:ref type='bibr' target='#b50'>[48]</ns0:ref>. Although much of the focus has explored the use of online forums where users discuss their experience with pharmaceutical drugs and report side-effects <ns0:ref type='bibr' target='#b51'>[49]</ns0:ref>, the growing popularity of Twitter <ns0:ref type='bibr' target='#b52'>[50]</ns0:ref>, which at the time of writing has over 300 million active monthly users, provides a novel resource upon which to perform large-scale mining of reported drug side-effects in near real-time from the 500 millions tweets posted daily <ns0:ref type='bibr'>[51]</ns0:ref>. While only a small fraction of these daily tweets are related to health issues, the sheer volume of data available presents an opportunity to bridge the gap left by conventional side-effects reporting strategies. Over time, the accumulation of side-effect data from social media may become comparable or even exceed the volume of traditional resources, and at the very least should be sufficient to augment existing databases. Additionally, the cost of running such a system continuously is relatively cheap compared to existing pharmacovigilance monitoring, presenting a compelling economic argument supporting the use of social media for such purposes. 
Furthermore, the issues related to under-representation described above may be addressed.</ns0:p><ns0:p>Freifeld et al. <ns0:ref type='bibr' target='#b53'>[52]</ns0:ref> presented a comparison study between drug side-effects found on Twitter and adverse events reported in the FDA Adverse Event Reporting System (FAERS). Starting with 6.9 million tweets, they used a set of 23 drug names and a list of symptoms to reduce that data to a subset of 60,000 tweets. After manual examination, 4,401 tweets were identified as mentioning a side-effect, with a Spearman rank correlation of 0.75. Nikfarjam et al. <ns0:ref type='bibr' target='#b54'>[53]</ns0:ref> introduced a method based on Conditional Random Fields (CRF) to tag mentions of drug side-effects in social media posts from Twitter or the online health community DailyStrength. They used features based on the context of tokens, a lexicon of adverse drug reactions, Part-Of-Speech (POS) tags and a feature indicating whether a token is negated or not, as well as embedding clusters learned with Word2Vec <ns0:ref type='bibr' target='#b55'>[54]</ns0:ref>. They reported an F1 score of 82.1% for data from DailyStrength and 72.1% for Twitter data. Sarker and Gonzalez <ns0:ref type='bibr' target='#b56'>[55]</ns0:ref> developed classifiers to detect side-effects using training data from multiple sources, including tweets <ns0:ref type='bibr' target='#b57'>[56]</ns0:ref>, DailyStrength, and a corpus of adverse drug events obtained from medical case reports. They reported an F1 score of 59.7% when training a Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel on all three datasets. Recently, Karimi et al. <ns0:ref type='bibr' target='#b58'>[57]</ns0:ref> presented a survey of the field of surveillance for adverse drug events with automatic text and data mining techniques.</ns0:p><ns0:p>In this study, we describe a drug repositioning methodology that uses side-effect data mined from social media to infer novel indications for marketed drugs. We use data from a pharmacovigilance system for mining Twitter for drug side-effects <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref>. The system uses a set of cascading filters to eliminate large quantities of irrelevant messages and identify the most relevant data for further processing, before applying an SVM classifier to identify tweets that mention suspected adverse drug reactions. Using this data, we apply sparse inverse covariance estimation to construct an undirected graphical model, which offers a way to describe the relationship between all drug pairs <ns0:ref type='bibr' target='#b61'>[59]</ns0:ref><ns0:ref type='bibr' target='#b62'>[60]</ns0:ref><ns0:ref type='bibr' target='#b64'>[61]</ns0:ref>. This is achieved by solving a maximum likelihood problem with ℓ1-norm regularization to make the resulting graph as sparse as possible, in order to generate the simplest graphical model which fully explains the data. Results from testing the method on known and proposed trial indication recovery suggest that side-effect data mined from social media, in combination with a regularized sparse graphical model, can be used for systematic drug repositioning.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Mining Twitter for drug side-effects</ns0:head><ns0:p>We used the SoMeDoSEs pharmacovigilance system <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref> to extract reports of drug side-effects from Twitter over a 6 month period between January and June 2014. SoMeDoSEs works by first applying topic filters to identify tweets that contain keywords related to drugs, before applying volume filters which remove tweets that are not written in English, are re-tweets, or contain a hyperlink to a web page, since these posts are typically commercial offerings. Side-effects were then mapped to an entry in the FDA Adverse Event Reporting System. Tweets that pass these filters are then classified by a linear SVM to distinguish those that mention a drug side-effect from those that do not. The SVM classifier uses a number of natural language features including unigrams and bigrams, part-of-speech tags, sentiment scores, text surface features, and matches to gazetteers related to human body parts, side-effect synonyms, side-effect symptoms, causality indicators, clinical trials, medical professional roles, side-effect triggers and drugs.</ns0:p><ns0:p>For each gazetteer, three features were created: a binary feature, which is set to 1 if a tweet contains at least one sequence of tokens matching an entry from the gazetteer; the number of tokens matching entries from the gazetteer; and the fraction of characters in tokens matching entries from the gazetteer. For side-effect synonyms we used the Consumer Health Vocabulary (CHV) <ns0:ref type='bibr' target='#b65'>[62]</ns0:ref>, which maps phrases to Unified Medical Language System Concept Universal Identifiers (CUI) and partially addresses the issue of misspellings and informal language used to discuss medical issues in tweets. The matched CUIs were also used as additional features.</ns0:p><ns0:p>To develop the system, 10,000 tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not; Cohen's Kappa for inter-annotator agreement, computed on a sample of 404 tweets annotated by two non-healthcare professionals, was 0.535. Using a split of 8,000 tweets for training, 1,000 for development, and 1,000 for testing, the SVM classifier that used all the features achieved a precision of 55.0%, recall of 66.9%, and F1 score of 60.4% when evaluated on the 1,000 test tweets. This is statistically significantly higher than the results achieved by a linear SVM classifier using only unigrams and bigrams as features (precision of 56.0%, recall of 54.0% and F1 score of 54.9%). One source of false negatives was the use of colloquial and indirect expressions by Twitter users to express that they have experienced a side-effect. We also observed that a number of false positives discuss the efficacy of drugs rather than side-effects.</ns0:p></ns0:div>
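To make the classification step concrete, the following is a minimal baseline sketch in Python, not the SoMeDoSEs implementation: a linear SVM over unigram/bigram counts plus a single hand-rolled gazetteer feature. The names train_tweets, train_labels, test_tweets, test_labels and side_effect_gazetteer are assumed inputs (labels are 0/1 for "no side-effect"/"side-effect").

# Minimal baseline sketch (not the SoMeDoSEs system): linear SVM over
# unigram/bigram counts plus two simple gazetteer-derived features.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_recall_fscore_support

def gazetteer_features(texts, gazetteer):
    # For each tweet: [contains any gazetteer term, number of matching tokens]
    feats = []
    for t in texts:
        tokens = t.lower().split()
        hits = sum(tok in gazetteer for tok in tokens)
        feats.append([1.0 if hits else 0.0, float(hits)])
    return csr_matrix(np.array(feats))

vec = CountVectorizer(ngram_range=(1, 2), lowercase=True)
X_train = hstack([vec.fit_transform(train_tweets),
                  gazetteer_features(train_tweets, side_effect_gazetteer)])
X_test = hstack([vec.transform(test_tweets),
                 gazetteer_features(test_tweets, side_effect_gazetteer)])

clf = LinearSVC(C=1.0)
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)
p, r, f1, _ = precision_recall_fscore_support(test_labels, pred, average='binary')
print('precision=%.3f recall=%.3f F1=%.3f' % (p, r, f1))

In practice the richer feature set described above (POS tags, sentiment scores, surface features, CUI matches) would be appended to the feature matrix in the same way.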
<ns0:div><ns0:head>Twitter data</ns0:head><ns0:p>Over the 6 month period, SoMeDoSEs typically identified ∼700 tweets per day that mentioned a drug side-effect, resulting in a data set of 620 unique drugs and 2196 unique side-effects from 108,009 tweets, once drugs with only a single side-effect were excluded and drug synonyms had been resolved to a common name using exact string matches to entries in World Drug Index <ns0:ref type='bibr' target='#b66'>[63]</ns0:ref>, which worked for approximately half of the data set with the remainder matched manually. We were also careful to remove indications that were falsely identified as side-effects using drug indications from Cortellis Clinical Trials Intelligence <ns0:ref type='bibr' target='#b67'>[64]</ns0:ref>. We used this data to construct a 2196 row by 620 column matrix of binary variables X, where x ∈ {0, 1}, indicating whether each drug was reported to cause each side-effect in the Twitter data set.</ns0:p></ns0:div>
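As an illustration of how the binary matrix X can be assembled (a sketch with assumed variable names, not the authors' code), starting from a list of (drug, side-effect) report pairs:

# Sketch: build the side-effect x drug binary matrix X from (drug, side_effect)
# pairs extracted from tweets. `pairs` is an assumed input list of tuples.
import numpy as np

def build_matrix(pairs):
    drugs = sorted({d for d, _ in pairs})
    effects = sorted({e for _, e in pairs})
    d_idx = {d: j for j, d in enumerate(drugs)}
    e_idx = {e: i for i, e in enumerate(effects)}
    X = np.zeros((len(effects), len(drugs)), dtype=np.int8)
    for d, e in pairs:
        X[e_idx[e], d_idx[d]] = 1   # side-effect e reported for drug d
    return X, drugs, effects

# For the data set described above, X would be 2196 x 620.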
<ns0:div><ns0:head>Calculating the sample covariance matrix</ns0:head><ns0:p>Using this data, we are able to form the sample covariance matrix S for binary variables as follows <ns0:ref type='bibr' target='#b68'>[65]</ns0:ref>, such that element $S_{i,j}$ gives the covariance of drug i with drug j:</ns0:p><ns0:formula xml:id='formula_0'>$S_{i,j} = \frac{1}{n-1}\sum_{k=1}^{n}(x_{ki} - \bar{x}_i)(x_{kj} - \bar{x}_j) = \frac{1}{n-1}\sum_{k=1}^{n} x_{ki}x_{kj} - \bar{x}_i\bar{x}_j$<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where $\bar{x}_i = \frac{1}{n}\sum_{k=1}^{n} x_{ki}$ and $x_{ki}$ is the k-th observation (side-effect) of variable (drug) $X_i$. It can be shown that the average product of two binary variables is equal to their observed joint probability, such that:</ns0:p><ns0:formula xml:id='formula_1'>$\frac{1}{n-1}\sum_{k=1}^{n} x_{ki}x_{kj} = P(X_i = 1, X_j = 1)$<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where $P(X_i = 1, X_j = 1)$ refers to the joint probability that variables $X_i$ and $X_j$ both equal one. Similarly, the product of the means of two binary variables is equal to the expected probability that both variables are equal to one, under the assumption of statistical independence:</ns0:p><ns0:formula xml:id='formula_2'>$\bar{x}_i\bar{x}_j = P(X_i = 1)P(X_j = 1)$<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Consequently, the covariance of two binary variables is equal to the difference between the observed joint probability and the expected joint probability:</ns0:p><ns0:formula xml:id='formula_3'>$S_{i,j} = P(X_i = 1, X_j = 1) - P(X_i = 1)P(X_j = 1)$<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Our objective is to find the precision or concentration matrix θ by inverting the sample covariance matrix S. Using θ, we can obtain the matrix of partial correlation coefficients ρ for all pairs of variables as follows:</ns0:p><ns0:formula xml:id='formula_4'>$\rho_{i,j} = -\frac{\theta_{i,j}}{\sqrt{\theta_{i,i}\,\theta_{j,j}}}$<ns0:label>(5)</ns0:label></ns0:formula></ns0:div>
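A brief numerical sketch of Equations 1 and 4, assuming the binary matrix X from the previous step:

# Sketch: sample covariance of the binary drug columns (Equation 1) and the
# joint-probability identity of Equation 4 for one pair of drugs.
import numpy as np

S = np.cov(X, rowvar=False)                  # 620 x 620, uses 1/(n-1) normalisation

n = X.shape[0]
i, j = 0, 1                                  # any two drug columns
p_joint = np.mean(X[:, i] * X[:, j])         # observed P(Xi = 1, Xj = 1)
p_indep = X[:, i].mean() * X[:, j].mean()    # P(Xi = 1) * P(Xj = 1)
# S[i, j] equals (p_joint - p_indep) up to the n/(n-1) normalisation factor.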
<ns0:div><ns0:p>The partial correlation between two variables X and Y given a third, Z, can be defined as the correlation between the residuals $R_X$ and $R_Y$ after performing least-squares regression of X with Z and Y with Z, respectively. This value, denoted $\rho_{x,y|z}$, provides a measure of the correlation between two variables when conditioned on the third, with a value of zero implying conditional independence if the input data distribution is multivariate Gaussian. The partial correlation matrix ρ, however, gives the correlations between all pairs of variables conditioning on all other variables. Off-diagonal elements in ρ that are significantly different from zero will therefore be indicative of pairs of drugs that show unique covariance between their side-effect profiles when taking into account (i.e. removing) the variance of side-effect profiles amongst all the other drugs.</ns0:p></ns0:div>
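The equivalence between the residual-based definition and the precision-matrix formula of Equation 5 can be checked numerically; the sketch below uses simulated Gaussian data purely for illustration.

# Sketch: residual-based partial correlation versus Equation 5,
# on simulated data (illustration only).
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=5000)
x = 0.7 * z + rng.normal(size=5000)
y = -0.4 * z + rng.normal(size=5000)

# Correlation of least-squares residuals of x~z and y~z
rx = x - np.polyval(np.polyfit(z, x, 1), z)
ry = y - np.polyval(np.polyfit(z, y, 1), z)
r_resid = np.corrcoef(rx, ry)[0, 1]

# Precision-matrix formula of Equation 5
theta = np.linalg.inv(np.cov(np.column_stack([x, y, z]), rowvar=False))
r_prec = -theta[0, 1] / np.sqrt(theta[0, 0] * theta[1, 1])
# r_resid and r_prec agree up to numerical precision.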
<ns0:div><ns0:head>Shrinkage estimation</ns0:head><ns0:p>For the sample covariance matrix to be easily invertible, two desirable characteristics are that it should be positive definite, i.e. all eigenvalues should be distinct from zero, and well-conditioned, i.e. the ratio of its maximum and minimum singular value should not be too large. This can be particularly problematic when the sample size is small and the number of variables is large (n < p), where estimates of the covariance matrix become singular. To ensure these characteristics, and to speed up convergence of the inversion, we condition the sample covariance matrix by shrinking it towards an improved covariance estimator T, a process which tends to pull the most extreme coefficients towards more central values and thereby systematically reduces estimation error <ns0:ref type='bibr' target='#b69'>[66]</ns0:ref>, using a linear shrinkage approach to combine the estimator and sample matrix in a weighted average:</ns0:p><ns0:formula xml:id='formula_5'>$S^{*} = \alpha T + (1 - \alpha)S$<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where α ∈ [0, 1] denotes the analytically determined shrinkage intensity. We apply the approach of Schäfer and Strimmer, which uses a distribution-free, diagonal, unequal variance model that shrinks off-diagonal elements towards zero but leaves diagonal entries intact, i.e. it does not shrink the variances <ns0:ref type='bibr' target='#b70'>[67]</ns0:ref>. Shrinkage is actually applied to the correlations rather than the covariances, which has two distinct advantages: the off-diagonal elements determining the shrinkage intensity are all on the same scale, while the partial correlations derived from the resulting covariance estimator are independent of scale.</ns0:p></ns0:div>
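A minimal sketch of the linear shrinkage step of Equation 6 follows; the diagonal target and fixed intensity are illustrative assumptions, whereas the paper uses the analytically determined Schäfer-Strimmer intensity applied on the correlation scale.

# Sketch of Equation 6 with a diagonal target T; the fixed alpha is
# illustrative, not the analytically derived intensity.
import numpy as np

def shrink_covariance(S, alpha=0.2):
    T = np.diag(np.diag(S))                  # target: keep variances, zero covariances
    return alpha * T + (1.0 - alpha) * S

S_shrunk = shrink_covariance(S)
# A related off-the-shelf alternative with an automatically estimated
# intensity is sklearn.covariance.LedoitWolf().fit(X).covariance_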
<ns0:div><ns0:head>Graphical lasso for sparse inverse covariance estimation</ns0:head><ns0:p>A useful output from the covariance matrix inversion is a sparse ρ matrix containing many zero elements since, intuitively, we know that relatively few drug pairs will share a common mechanism of action, so removing any spurious correlations is desirable and results in a more parsimonious relationship model, while the non-zero elements will typically reflect the correct positive correlations in the true inverse covariance matrix more accurately <ns0:ref type='bibr' target='#b71'>[68]</ns0:ref>. However, elements of ρ are unlikely to be zero unless many elements of the sample covariance matrix are zero. The graphical lasso <ns0:ref type='bibr' target='#b62'>[60,</ns0:ref><ns0:ref type='bibr' target='#b64'>61,</ns0:ref><ns0:ref type='bibr' target='#b72'>69]</ns0:ref> provides a way to induce zero partial correlations in ρ by penalizing the maximum likelihood estimate of the inverse covariance matrix using an ℓ1-norm penalty function. The estimate can be found by maximizing the following log-likelihood using the block coordinate descent approach described by Friedman et al. <ns0:ref type='bibr' target='#b62'>[60]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_6'>$\log \det \theta - \mathrm{tr}(S\theta) - \lambda \lVert\theta\rVert_{1}$<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Here, the first term is the Gaussian log-likelihood of the data, tr denotes the trace operator, and $\lVert\theta\rVert_{1}$ is the ℓ1-norm, i.e. the sum of the absolute values of the elements of θ, weighted by the non-negative tuning parameter λ. The specific use of the ℓ1-norm penalty has the desirable effect of setting elements in θ to zero, resulting in a sparse matrix, while the parameter λ effectively controls the sparsity of the solution. This contrasts with the use of an ℓ2-norm penalty, which will shrink elements but will never reduce them to zero. While this graphical lasso formulation is based on the assumption that the input data distribution is multivariate Gaussian, Banerjee et al. showed that the dual optimization solution also applies to binary data, as is the case in our application <ns0:ref type='bibr' target='#b64'>[61]</ns0:ref>.</ns0:p><ns0:p>It has been noted that the graphical lasso produces an approximation of θ that is not symmetric, so we update it as follows <ns0:ref type='bibr' target='#b73'>[70]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_7'>$\theta \leftarrow (\theta + \theta^{T})/2$<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>The matrix ρ is then calculated according to Equation <ns0:ref type='formula' target='#formula_4'>5</ns0:ref>, before repositioning predictions for drug i are determined by ranking all other drugs according to their absolute values in ρ_i and assigning their indications to drug i.</ns0:p></ns0:div>
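A compact sketch of the estimation and ranking steps is given below; it uses scikit-learn's graphical_lasso solver as a stand-in for the block coordinate descent implementation, and assumes S_shrunk and the drugs list from the earlier sketches.

# Sketch: sparse precision matrix via the graphical lasso, partial
# correlations (Equation 5), symmetrisation (Equation 8) and ranking.
import numpy as np
from sklearn.covariance import graphical_lasso

_, theta = graphical_lasso(S_shrunk, alpha=1e-3, max_iter=200)  # alpha plays the role of lambda
theta = (theta + theta.T) / 2.0              # Equation 8: enforce symmetry
d = np.sqrt(np.diag(theta))
rho = -theta / np.outer(d, d)                # Equation 5
np.fill_diagonal(rho, 1.0)

def rank_neighbours(drug, drugs, rho, top_k=5):
    # Rank all other drugs by absolute partial correlation with `drug`;
    # their known indications become repositioning hypotheses for `drug`.
    i = drugs.index(drug)
    order = np.argsort(-np.abs(rho[i]))
    return [drugs[j] for j in order if j != i][:top_k]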
<ns0:div><ns0:head>Results and Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Recovering known indications</ns0:head><ns0:p>To evaluate our method we have attempted to predict repositioning targets for indications that are already known. If, by exploiting hindsight, we can recover these, then our method should provide a viable strategy with which to augment existing approaches that adopt an integrated approach to drug repositioning <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. Figure <ns0:ref type='figure'>1a</ns0:ref> shows the performance of the method at identifying co-indicated drugs at a range of λ values, which produce different sparsity levels in the resulting ρ matrix. We measured the percentage at which a co-indicated drug was ranked amongst the top 5, 10, 15, 20 and 25 predictions for the target drug, respectively. Of the 620 drugs in our data set, 595 had a primary indication listed in Cortellis Clinical Trials Intelligence, with the majority of the remainder being made up of dietary supplements (e.g. methylsulfonylmethane) or plant extracts (e.g. Agaricus brasiliensis extract) which have no approved therapeutic effect. Rather than removing these from the data set, they were left in as they may contribute to the partial correlation between pairs of drugs that do have approved indications.</ns0:p><ns0:p>Results indicate that the method achieves its best performance with a λ value of 10⁻⁹, where 42.41% (243/595) of targets have a co-indicated drug returned amongst the top 5 ranked predictions (Figure <ns0:ref type='figure'>1a</ns0:ref>). This value compares favourably with both a strategy in which drug ranking is randomized (13.54%, standard error ±0.65), and another in which drugs are ranked according to the Jaccard index (28.75%). In Ye et al. <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>, a related approach is used to construct a repositioning network based on side-effects extracted from the SIDER database, Meyler's Side Effects of Drugs, Side Effects of Drugs Annual, and the Drugs@FDA database <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref><ns0:ref type='bibr' target='#b38'>[37]</ns0:ref><ns0:ref type='bibr' target='#b39'>[38]</ns0:ref><ns0:ref type='bibr' target='#b40'>[39]</ns0:ref>, also using the Jaccard index as the measure of drug-drug similarity. Here, they report an equivalent value of 32.77% of drugs having their indication correctly predicted amongst the top 5 results. While data sets and underlying statistical models clearly differ, these results taken together suggest that the use of side-effect data mined from social media can certainly offer comparable performance to methods using side-effect data extracted from more conventional resources, while the use of a global statistical model such as the graphical lasso does result in improved performance compared to a pairwise similarity coefficient such as the Jaccard index.</ns0:p><ns0:p>To further investigate the influence of the provenance of the data, we mapped our data set of drugs to ChEMBL identifiers <ns0:ref type='bibr' target='#b74'>[71,</ns0:ref><ns0:ref type='bibr' target='#b75'>72]</ns0:ref> which we then used to query SIDER for side-effects extracted from drug labels. This resulted in a reduced data set of 229 drugs, in part due to the absence of many combination drugs from SIDER (e.g. the antidepressant Symbyax which contains olanzapine and fluoxetine). Using the same protocol described above, best performance of 53.67% (117/229) was achieved with a slightly higher λ value of 10⁻⁶.
Best performance on the same data set using side-effects derived from Twitter was 38.43% (88/229), again using a λ value of 10⁻⁹, while the randomized strategy achieved 12.05% (standard error ±1.14), indicating that the use of higher quality side-effect data from SIDER allows the model to achieve better performance than is possible using Twitter data. Perhaps more interestingly, combining the correct predictions between the two data sources reveals that 30 are unique to the Twitter model, 59 are unique to the SIDER model, with 58 shared, supporting the use of side-effect data mined from social media to augment conventional resources.</ns0:p><ns0:p>We also investigated whether our results were biased by the over-representation of particular drug classes within our data set. Using Cortellis Clinical Trials Intelligence, we were able to identify the broad class for 479 of the drugs (77.26%) in our data set. The five largest classes were benzodiazepine receptor agonists (3/14 drugs returned amongst the top 5 ranked predictions), analgesics (6/12), H1-antihistamines (8/11), cyclooxygenase inhibitors (9/11), and anti-cancer (2/11). This indicates that the over-representation of H1-antihistamines and cyclooxygenase inhibitors, and to a lesser extent analgesics, did result in a bias, but that the overall effect of these five classes was more subtle (28/59 returned amongst the top 5 ranked predictions, 47.46%).</ns0:p></ns0:div>
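The evaluation protocol used throughout this section (top-k recovery of known indications, with a Jaccard-index baseline) can be outlined as follows; indications is an assumed mapping from drug name to its set of known indications, and drugs/rho come from the earlier sketches. Passing a matrix of pairwise Jaccard similarities of the columns of X instead of rho gives the baseline.

# Sketch: top-k hit rate for known-indication recovery, plus a Jaccard measure.
import numpy as np

def jaccard(a, b):
    # Similarity of two drugs' binary side-effect columns.
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def topk_hit_rate(drugs, similarity, indications, k=5):
    hits = evaluated = 0
    for i, drug in enumerate(drugs):
        if drug not in indications:
            continue
        evaluated += 1
        ranked = [drugs[j] for j in np.argsort(-np.abs(similarity[i])) if j != i][:k]
        if any(indications[drug] & indications.get(d, set()) for d in ranked):
            hits += 1
    return hits / evaluated if evaluated else 0.0

# topk_hit_rate(drugs, rho, indications, k=5) for the graphical model;
# a matrix of pairwise jaccard() values over X's columns gives the baseline.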
<ns0:div><ns0:p>Figure 2. The overall layout of the side-effect network. Drugs are yellow, connecting edges are green. The layout is performed using a relative entropy optimization-based method <ns0:ref type='bibr' target='#b76'>[73]</ns0:ref>. In total, there are 616 connected nodes, with each having an average of 267 neighbours. Painkillers such as paracetamol and ibuprofen have the highest number of connections (587 and 585, respectively), which corresponds to them having the largest number of unique side-effects (256 and 224) reported on Twitter. The strongest connection is between chondroitin and glucosamine (partial correlation coefficient (PCC) 0.628), both of which are dietary supplements used to treat osteoarthritis, closely followed by the antidepressant and anxiolytic agents phenelzine and tranylcypromine (PCC 0.614).</ns0:p><ns0:p>The best performance of our approach at the top 5 level is achieved when the resulting ρ matrix has a sparsity of 35.59% (Figure <ns0:ref type='figure'>1b</ns0:ref> and Figure <ns0:ref type='figure'>2</ns0:ref>), which both justifies the use of the ℓ1-norm penalized graphical lasso and generates a graphical model with approximately a third of the parameters of a fully dense matrix, while the comparable performance at λ values between 10⁻¹² and 10⁻⁷ also indicates a degree of robustness to the choice of this parameter. Beyond the top 5 ranked predictions, results are encouraging as the majority of targets (56.02%) will have a co-indicated drug identified by considering only the top 10 predictions, suggesting the method is a feasible strategy for prioritisation of repositioning candidates.</ns0:p></ns0:div>
<ns0:div><ns0:head>Predicting proposed indications of compounds currently in clinical trials</ns0:head><ns0:p>While the previous section demonstrated that our approach can effectively recover known indications, predictions after the fact are, while useful, best supported by more forward-looking evidence. In this section, we use clinical trial data to support our predictions where the ultimate success of our target drug is still unknown. Using Cortellis Clinical Trials Intelligence, we extracted drugs present in our Twitter data set that were currently undergoing clinical trials (ending after 2014) for a novel indication (i.e. for which they were not already indicated), resulting in a subset of 277 drugs currently in trials for 397 indications. Figure <ns0:ref type='figure'>3</ns0:ref> shows the percentage at which a co-indicated drug was ranked amongst the top 5, 10, 15, 20 and 25 predictions for the target. Similar to the recovery of known indications, best performance when considering the top 5 ranked predictions was achieved with a λ value of 10⁻⁹, resulting in 16.25% (45/277) of targets having a co-indicated drug, which again compares well to a randomized strategy (5.42%, standard error ±0.32) or a strategy using the Jaccard index (10.07%). Recovery of proposed clinical trial indications is clearly more challenging than recovery of known indications, possibly reflecting the fact that a large proportion of drugs will fail during trials and therefore many of the 397 proposed indications analysed here will in time prove false, although the general trend in performance as the sparsity parameter λ is adjusted tends to mirror the recovery of known indications. Despite this, a number of interesting predictions with a diverse range of novel indications are made that are supported by experimental and clinical evidence; a selection of 10 of the 45 drugs where the trial indication was correctly predicted is presented in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. We further investigated three repositioning candidates with interesting pharmacology to understand their predicted results.</ns0:p><ns0:p>Figure 4. Predicted repositioning of oxytocin (red) for the treatment of schizophrenia based on its proximity to the schizophrenia drug chlorpromazine (grey). Drugs in the graph are sized according to their degree (number of edges), while the thickness of a connecting edge is proportional to the partial correlation coefficient between the two drugs. The graph layout is performed by Cytoscape <ns0:ref type='bibr' target='#b84'>[81]</ns0:ref>, which applies a force-directed approach based on the partial correlation coefficient. Nodes are arranged so that edges are of more or less equal length and there are as few edge crossings as possible. For clarity, only the top ten drugs ranked by partial correlation coefficient are shown.</ns0:p></ns0:div>
<ns0:div><ns0:head>Oxytocin</ns0:head><ns0:p>Oxytocin is a nonapeptide hormone that acts primarily as a neuromodulator in the brain via the specific, high-affinity oxytocin receptor, a class I (Rhodopsin-like) G-protein-coupled receptor (GPCR) <ns0:ref type='bibr' target='#b77'>[74]</ns0:ref>. Currently, oxytocin is used for labor induction and the treatment of Prader-Willi syndrome, but there is compelling pre-clinical evidence to suggest that it may play a crucial role in the regulation of brain-mediated processes that are highly relevant to many neuropsychiatric disorders <ns0:ref type='bibr' target='#b78'>[75]</ns0:ref>. A number of animal studies have revealed that oxytocin has a positive effect as an antipsychotic <ns0:ref type='bibr' target='#b79'>[76,</ns0:ref><ns0:ref type='bibr' target='#b80'>77]</ns0:ref>, while human trials have revealed that intranasal oxytocin administered to highly symptomatic schizophrenia patients as an adjunct to their antipsychotic drugs improves positive and negative symptoms significantly more than placebo <ns0:ref type='bibr' target='#b81'>[78,</ns0:ref><ns0:ref type='bibr' target='#b82'>79]</ns0:ref>. These therapeutic findings are supported by growing evidence of oxytocin's role in the manifestation of schizophrenia symptoms, such as a recent study linking higher plasma oxytocin levels with increased pro-social behavior in schizophrenia patients and with less severe psychopathology in female patients <ns0:ref type='bibr' target='#b83'>[80]</ns0:ref>. The mechanisms underlying oxytocin's therapeutic effects on schizophrenia symptoms are poorly understood, but its ability to regulate mesolimbic dopamine pathways is thought to be responsible <ns0:ref type='bibr' target='#b78'>[75]</ns0:ref>. Here, our method predicts schizophrenia as a novel indication for oxytocin based on its proximity to chlorpromazine, which is currently used to treat schizophrenia (Figure <ns0:ref type='figure'>4</ns0:ref>). Chlorpromazine also modulates the dopamine pathway by acting as an antagonist of the dopamine receptor, another class I GPCR. Interestingly, the subgraph indicates that dopamine also has a high partial correlation coefficient with oxytocin, adding further support to the hypothesis that oxytocin, chlorpromazine and dopamine all act on the same pathway and therefore have similar side-effect profiles. Side-effects shared by oxytocin and chlorpromazine include hallucinations, excessive salivation and anxiety, while shivering, weight gain, abdominal pain, nausea, and constipation are common side-effects also shared by other drugs within the subgraph. Currently, larger scale clinical trials of intranasal oxytocin in schizophrenia are underway. If the early positive results hold up, it may signal the beginning of a new era in the treatment of schizophrenia, a field which has seen little progress in the development of novel efficacious treatments over recent years.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ramelteon</ns0:head><ns0:p>Ramelteon, currently indicated for the treatment of insomnia, is predicted to be useful for the treatment of bipolar depression (Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>). Ramelteon is the first in a new class of sleep agents that selectively binds the MT 1 and MT 2 melatonin receptors in the suprachiasmatic nucleus, with high affinity over the MT 3 receptor <ns0:ref type='bibr' target='#b85'>[82]</ns0:ref>. It is believed that the activity of ramelteon at MT 1 and MT 2 receptors contributes to its sleep-promoting properties, since these receptors are thought to play a crucial role in the maintenance of the circadian rhythm underlying the normal sleep-wake cycle upon binding of endogenous melatonin. Abnormalities in circadian rhythms are prominent features of bipolar I disorder, with evidence suggesting that disrupted sleep-wake circadian rhythms are associated with an increased risk of relapse in bipolar disorder <ns0:ref type='bibr' target='#b86'>[83]</ns0:ref>. As bipolar patients tend to exhibit shorter and more variable circadian activity, it has been proposed that normalisation of the circadian rhythm pattern may improve sleep and consequently lead to a reduction in mood exacerbations. Melatonin receptor agonists such as ramelteon may have a potential therapeutic effect in depression due to their ability to resynchronize the suprachiasmatic nucleus <ns0:ref type='bibr' target='#b87'>[84]</ns0:ref>. In Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>, evidence supporting the repositioning of ramelteon comes from ziprasidone, an atypical antipsychotic used to treat bipolar I disorder and schizophrenia <ns0:ref type='bibr' target='#b88'>[85]</ns0:ref>. Ziprasidone is the second-ranked drug by partial correlation coefficient; a number of other drugs used to treat mood disorders can also be located in the immediate vicinity including phenelzine, a non-selective and irreversible monoamine oxidase inhibitor (MAOI) used as an antidepressant and anxiolytic, milnacipran, a serotonin-norepinephrine reuptake inhibitor used to treat major depressive disorder, and tranylcypromine, another MAOI used as an antidepressant and anxiolytic agent. The co-location of these drugs in the same region of the graph suggests a degree of overlap in their respective mechanistic pathways, resulting in a high degree of similarity between their side-effect profiles. Nodes in this subgraph also have a relatively large degree indicating a tighter association than for other predictions, with common shared side-effects including dry mouth, sexual dysfunction, migraine, and orthostatic hypotension, while weight gain is shared between ramelteon and ziprasidone.</ns0:p></ns0:div>
<ns0:div><ns0:head>Meloxicam</ns0:head><ns0:p>Meloxicam, a nonsteroidal anti-inflammatory drug (NSAID) used to treat arthritis, is predicted to be a repositioning candidate for the treatment of non-Hodgkin lymphoma via the mobilisation of autologous peripheral blood stem cells from bone marrow. By inhibiting cyclooxygenase 2, meloxicam is understood to inhibit generation of prostaglandin E2, which is known to stimulate osteoblasts to release osteopontin, a protein which encourages bone resorption by osteoclasts <ns0:ref type='bibr' target='#b89'>[86,</ns0:ref><ns0:ref type='bibr' target='#b90'>87]</ns0:ref>. By inhibiting prostaglandin E2 and disrupting the production of osteopontin, meloxicam may encourage the departure of stem cells, which otherwise would be anchored to the bone marrow by osteopontin <ns0:ref type='bibr' target='#b91'>[88]</ns0:ref>. In Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>, rituximab, a B-cell depleting monoclonal antibody that is currently indicated for treatment of non-Hodgkin lymphoma, is the top ranked drug by partial correlation, which provides evidence for repositioning to this indication. Interestingly, depletion of B-cells by rituximab has recently been demonstrated to result in decreased bone resorption in patients with rheumatoid arthritis, possibly via a direct effect on both osteoblasts and osteoclasts <ns0:ref type='bibr' target='#b92'>[89,</ns0:ref><ns0:ref type='bibr' target='#b93'>90]</ns0:ref>, suggesting a common mechanism of action between meloxicam and rituximab. Further evidence is provided by the fifth-ranked drug clopidogrel, an antiplatelet agent used to inhibit blood clots in coronary artery disease, peripheral vascular disease, cerebrovascular disease, and to prevent myocardial infarction. Clopidogrel works by irreversibly inhibiting the adenosine diphosphate receptor P2Y12, which is known to increase osteoclast activity <ns0:ref type='bibr'>[91]</ns0:ref>. Similarly to the ramelteon subgraph, many drugs in the vicinity of meloxicam are used to treat inflammation, including diclofenac, naproxen (both NSAIDs) and betamethasone, resulting in close association between these drugs, with shared side-effects in the subgraph including pain, cramping, flushing and fever, while swelling, indigestion, inflammation and skin rash are shared by meloxicam and rituximab.</ns0:p><ns0:p>While the side-effects shared within the subgraphs of our three examples are commonly associated with a large number of drugs, some of the side-effects shared by the three drug pairs, such as hallucinations, excessive salivation and anxiety, are somewhat less common. To investigate this relationship for the data set as a whole, we calculated log frequencies for all side-effects and compared these values against the normalized average rank of pairs where the side-effect was shared by both the query and target drug. If we assume that a higher ranking in our model indicates a higher likelihood of drugs sharing a protein target, this relationship demonstrates similar properties to the observations of Campillos et al. <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>, in that there is a negative correlation between the rank and frequency of a side-effect.
The correlation coefficient has a value of -0.045 which is significantly different from zero at the 0.001 level, although the linear relationship appears to break down where the frequency of the side-effect is lower than about 0.025.</ns0:p></ns0:div>
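The frequency-rank analysis can be reproduced in outline as follows; freq and ranks are assumed dictionaries derived from the data and model above, and Pearson's correlation is used here for illustration.

# Sketch: correlate log side-effect frequency with the normalised average
# rank of drug pairs sharing that side-effect, and test against zero.
import numpy as np
from scipy import stats

shared = sorted(set(freq) & set(ranks))      # side-effects present in both dicts
x = np.log([freq[s] for s in shared])
y = np.array([ranks[s] for s in shared])
r, p_value = stats.pearsonr(x, y)            # a negative r reflects the reported trend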
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we have used side-effect data mined from social media to generate a sparse graphical model, with nodes in the resulting graph representing drugs, and edges between them representing the similarity of their side-effect profiles. We demonstrated that known indications can be inferred based on the indications of neighbouring drugs in the network, with 42.41% of targets having their known indication identified amongst the top 5 ranked predictions, while 16.25% of drugs that are currently in a clinical trial have their proposed trial indication correctly identified. These results indicate that the volume and diversity of drug side-effects reported using social media is sufficient to be of use in side-effect-based drug repositioning, and this influence is likely to increase as the audience of platforms such as Twitter continues to see rapid growth. It may also help to address the problem of side-effect under-reporting. We also demonstrate that global statistical models such as the graphical lasso are well-suited to the analysis of large multivariate systems such as drug-drug networks. They offer significant advantages over conventional pairwise similarity methods by incorporating indirect relationships between all variables, while the use of the lasso penalty allows a sparse, parsimonious model to be generated with fewer spurious connections, resulting in a simpler theory of relationships.</ns0:p><ns0:p>While our method shows encouraging results, it is more likely to play a role in drug repositioning as a component in an integrated approach. Whether this is achieved by combining reported side-effects with those mined from resources such as SIDER, or by using predictions as the inputs to a supervised learning algorithm, a consensus approach is likely to achieve higher performance by incorporating a range of different data sources in addition to drug side-effects, while also compensating for the weaknesses of any single method <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. Limitations of our method largely stem from the underlying Twitter data <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref>. Only a small fraction of daily tweets contain reports of drug side-effects, therefore restricting the number of drugs we are able to analyse. However, given that systems such as SoMeDoSEs are capable of continuously monitoring Twitter, the numbers of drugs and reported side-effects should steadily accumulate over time.</ns0:p><ns0:p>To address this, in the future it may be possible to extend monitoring of social media to include additional platforms. For example, Weibo is a Chinese microblogging site akin to Twitter, with over 600 million users as of 2013. Clearly, tools will have to be adapted to deal with multilingual data processing or translation issues, while differences in cultural attitudes to sharing medical information may present further challenges. Extensions to the statistical approach may also result in improved performance. Methods such as the joint graphical lasso allow the generation of a graphical model using data with observations belonging to distinct classes <ns0:ref type='bibr' target='#b95'>[92]</ns0:ref>. For example, two covariance matrices generated using data from Twitter and SIDER could be combined in this way, resulting in a single model that best represents both sources.
An extension to the graphical lasso also allows the decomposition of the sample covariance graph into smaller connected components via a thresholding approach [93]. This not only leads to large performance gains but also significantly increases the scalability of the graphical lasso approach.</ns0:p><ns0:p>Another caveat to consider, common to many other repositioning strategies based on side-effect similarity, is that there is no evidence to suggest whether a repositioning candidate will be a better therapeutic than the drug from which the novel indication was inferred. While side-effects can provide useful information for inferring novel indications, they are in general undesirable and need to be balanced against any therapeutic benefits. Our model does not attempt to quantify efficacy or side-effect severity, but it might be possible to modify the natural language processing step during Twitter mining in order to capture comparative mentions of side-effects, since tweets discussing both the </ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Recovery of known indications. (a) Percentage at which a co-indicated drug is returned amongst the top 5, 10, 15, 20 and 25 ranked predictions for a given target, at different λ values, the parameter that weights the ℓ1-norm penalty in the graphical lasso (Equation 7). (b) Sparsity of the ρ matrix at different λ values, i.e. the number of non-zero elements in the upper triangle divided by (n² − n)/2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Recovery of proposed clinical trial indications. Percentage at which a co-indicated drug is returned amongst the top 5, 10, 15, 20 and 25 ranked predictions for a given target, at different λ values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Predicted repositioning of ramelteon (red) for the treatment of bipolar I disorder based on its proximity to ziprasidone (grey). Along with ziprasidone, phenelzine, milnacipran and tranylcypromine are all used to treat mood disorders.</ns0:figDesc><ns0:graphic coords='14,253.65,61.20,266.70,160.20' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Predicted repositioning of meloxicam (red) for the treatment of non-Hodgkin lymphoma based on its proximity to rituximab (grey).</ns0:figDesc><ns0:graphic coords='15,284.38,61.20,205.25,130.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Predicted indications for drugs currently in clinical trials. A selection of drugs which are currently in clinical trials for a new indication and have a co-indicated drug ('Evidence') ranked amongst the top 5 predictions. 'PCC' is the absolute partial correlation coefficient, 'ID' is the Cortellis Clinical Trials Intelligence identifier. Average PCC scores for co-indicated drugs ranked amongst the top 5, 10, 15, 20 and 25 positions were 0.162, 0.0804, 0.0620, 0.0515, and 0.0468, respectively.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear PeerJ,
We thank the reviewers for their efforts in assessing our manuscript. We have corrected the typos identified by reviewer 2, and have addressed the comments of reviewer 3. Responses are in bold and black, while changes to the text are in bold and blue. In the manuscript PDF, these changes are highlighted in yellow. Please do not hesitate to get in touch if there are further questions.
Your sincerely,
Timothy Nugent, Vassilis Plachouras and Jochen L. Leidner
Reviewer Comments
Reviewer 2 (Anonymous)
The authors have addressed all my comments and have updated the manuscript in a way that the description of methods and results, as well as limitations of their approach, are clearly exposed. Therefore, I believe that the paper meets the PeerJ criteria and should be accepted as is.
However, I would like to point out two minor mistakes:
page 4, 151: Side effect – Side effects
Corrected
page 8, 318: Using Using -- Using
Corrected
Reviewer 3 (Anonymous)
Basic reporting
The paper has improved since its original version addressing concerns mentioned the previous review. Thanks.
Experimental design
While authors have included more details on their experimental setting in the revised version, I still have a few more questions on the settings and methodology:
1- Line 149 mentions topic and volume filters. Could you please clarify in the text what are these filters. If that's already there it is not very clear.
These filters operate on the Twitter firehose. The topic filter requires tweets to contain a drug-related keyword for it to pass, while the volume filter reduces overall volume by removing tweets that are non-English, contain a link or are re-tweets. The text has been updated as follows:
SoMeDoSEs works by first applying topic filters to identify tweets that contain keywords related to drugs, before applying volume filters which remove tweets that are not written in English, are re-tweets or contain a hyperlink to a web page, since these posts are typically commercial offerings
2- CHV is very limited. Do you have any idea how much its limitation may have affected your data collection?
Consumer Health Vocabulary (CHV) was used to generate features for machine learning, rather than used as part of a filter, so its limitation will not have affected data collection. CHV matches were mapped to Unified Medical Language System Concept Universal Identifiers, which were then encoded as binary features.
3- How the annotation of the tweets were done? Who annotated them and if they were annotated by multiple annotators, did you calculate inter-annotator agreement?
Cohen’s Kappa for inter-annotator agreement on a sample of 404 tweets annotated by two non-healthcare professional was 0.535. We consider this a fair level of agreement, given that the annotators were not trained in healthcare, the often ambiguous text of tweets, and the lack of context or conversations in which a tweet was posted. The text has been updated as follows:
To develop the system, 10,000 tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not, resulting in a Cohen’s Kappa for inter-annotator agreement on a sample of 404 tweets annotated by two non-healthcare professional of 0.535.
4- Did you randomly divide the data into training, testing and development or did you sort them based on a filter such as time? Past work has shown that it is possible to overfit a SVM classifier if tweet data is randomly divided because of the dependency of the tweets (look at: Reading the markets: Forecasting public opinion of political candidates by news analysis, in COLING 2008, and Evaluation Methods for Statistically Dependent Text, Computational Linguistics journal 2015).
If your data is in fact divided randomly, I'd be interested to know if you get similar results by sorting them and using old tweets as training/dev and new ones as testing.
In any case, please clarify this in your manuscript for your readers.
No - the data was not divided randomly. Tweets were ordered by time, ensuring that all tweets used for testing (and development) came after tweets used for training. Test tweets came after development tweets.
Validity of the findings
Please see above.
Comments for the author
- I still think that Figure 2 could be removed and the corresponding information being shown in the text. The caption does include interesting information, but the figure itself not.
We have further modified the image and included a zoomed-in section; we believe the graph as a whole is useful as it makes a conceptual link to the inverse covariance matrix, which can be interpreted as this graph. We also describe three subgraphs so believe there is some value in illustrating where these subgraphs originate from.
- Normalization method has to be explained in the text the way you explained in response to my comment before. Also, mention this to the limitation of your work.
This has been added to the text as follows:
... and drug synonyms had been resolved to a common name using exact string matches to entries in World Drug Index [63], which worked for approximately half of the data set with the remainder matched manually.
" | Here is a paper. Please give your review comments after reading it. |
40 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background Any contamination in the human body can prompt changes in blood cell morphology and various parameters of cells. The minuscule images of blood cells are examined for recognizing the contamination inside the body with an expectation of maladies and variations from the norm. Appropriate segmentation of these cells makes the detection of a disease progressively exact and vigorous. Microscopic blood cell analysis is a critical activity in the pathological analysis. It highlights the investigation of appropriate malady after exact location followed by an order of abnormalities, which assumes an essential job in the analysis of various disorders, treatment arranging, and assessment of results of treatment.</ns0:p><ns0:p>Methodology A survey of different areas where microscopic imaging of blood cells is used for disease detection is done in this paper. Research papers from this area are obtained from a popular search engine, google scholar. The articles are searched considering the basics of blood such as its composition followed by staining of blood, that is most important and mandatory before microscopic analysis. Different methods for classification, segmentation of blood cells are reviewed. Microscopic analysis using image processing, computer vision and machine learning are the main focus of the analysis and the review here. Methodologies employed by different researchers for blood cells analysis in terms of these mentioned algorithms is the key point of review considered in the study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>Different methodologies used for microscopic analysis of blood cells are analyzed and are compared according to different performance measures. From the extensive review the conclusion is made.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>There are different machine learning and deep learning algorithms employed by researchers for segmentation of blood cell components and disease detection considering microscopic analysis. There is a scope of improvement in terms of different performance evaluation parameters. Different bioinspired optimization algorithms can be used for improvement. Explainable AI can analyze the features of AI implemented system and will make the system more trusted and commercially suitable.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Blood, the most integral part of the body, is composed of white blood cells (WBC), red blood cells (RBC), platelets, and plasma. There are different pathological procedures for the detection of diseases. In most cases, microscopic imaging plays a vital role in predicting and detecting abnormalities and occurrences of diseases within the body. Typically, the health of a person is judged by analyzing different features of blood cells and their counts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Why the study needed</ns0:head><ns0:p>Previously manual methods of blood cells analysis were used by pathologists. This might cause error in disease prediction since manual methods are dependent on experience and skills of pathologists. Hence, it is proposed that an automated system of image processing be developed using different algorithms.</ns0:p><ns0:p>A simplified, automated and cost effective system is required for detection of diseases. Thus the above components explained are analyzed for knowing health indication of human being and thereby detecting abnormalities, if any. Though many researchers contributed in the study, there is a need to explore the research in many perspectives.</ns0:p><ns0:p>1. Segmentation of different blood components is still having some shortcomings, such as overlapping cells during the staining.</ns0:p><ns0:p>2. There are different parasitic components in blood cells those need to be identified. So that an existence of a particular malady could be highlighted.</ns0:p><ns0:p>3. There are many challenging diseases like leukemia that have many sub-types depending upon the cell morphology. To detect the exact type of leukemia is still challenging. <ns0:ref type='bibr' target='#b4'>4</ns0:ref>. In medical imaging, the use of artificial intelligence will have a critical issue that it is used as a black box. Hence it could not be considered full proof and trusted at all times. The technique known as explainable artificial intelligence is the need of study in relation to these analysis concepts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Who it is intended for</ns0:head><ns0:p>It is always a critical and crucial job for diagnosing diseases in the medical field Since these decisions are further related to a patient's life. To provide significant contributions in the current diagnostic system is to be intended in many ways. This area is to be studied by a variety of disciplinary and inter-disciplinary researchers. Following are details that will show to whom the study is intended for:</ns0:p></ns0:div>
<ns0:div><ns0:head>Academic Researchers</ns0:head><ns0:p>Artificial intelligence, machine learning, and deep learning are prime research areas in academics. A disease detection system's performance utilizing microscopic imaging could be improved by applying these algorithms to the current system.</ns0:p></ns0:div>
<ns0:div><ns0:head>Doctors</ns0:head><ns0:p>For diagnosis of diseases, doctors rely on the analysis of blood in many cases, in addition to the symptoms of a patient. Blood cell analysis proves to be the most accurate method of diagnosis in the medical field in most disorders. This study provides diagnostic assistance to doctors for further treatment of the patients.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pathologists</ns0:head><ns0:p>Blood cells analysis and diagnosis is the leading job of a pathologist. He is responsible for statistical analysis of blood, urine, etc. In some instances, the morphology of blood is important, which is analyzed by microscopic methods. The predictions are dependent upon the experience and skill set of the pathologist in some critical diagnostic conditions. The automated and sophisticated methods of diagnosis via microscopic analysis will prove an assisted diagnostic system for them.</ns0:p></ns0:div>
<ns0:div><ns0:head>Commercial Use</ns0:head><ns0:p>Pathological analysis is either equipment-based, chemical-based, or morphology-based. Currently, equipment-based systems are costlier, and there is a need to develop a cost-effective automated system for the same. Morphology-based systems can be studied and employed for commercial use that will prove to be cost-effective.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bioinformations</ns0:head><ns0:p>A bioinformatician is a person with research in biology, medicine, and health-related studies. These fields are studied with information technology. It involves the collection and interpretation of data covering a range of fields, including genetics or pharmaceutics.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54359:1:2:NEW 21 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Haematologics</ns0:head><ns0:p>Hematology is the science that deals with the study of blood and its related disorders. There are some methods related to blood disorders that contribute a fair amount of suggestive diagnosis in hematology.</ns0:p></ns0:div>
<ns0:div><ns0:head>machine learning experts</ns0:head><ns0:p>A machine and deep learning are the branches that prove to be the future of technology in the medical field. There are different methods of machine and deep learning those could improve the decisions in medical imaging. This article provides the guidelines and challenges towards the utilization of machine learning in macroscopic imaging.</ns0:p></ns0:div>
<ns0:div><ns0:head>Technicians in laboratories</ns0:head><ns0:p>The use of AI in laboratories can guide the technician with less experience.</ns0:p></ns0:div>
<ns0:div><ns0:head>SURVEY METHODOLOGY</ns0:head><ns0:p>Different approaches are used for searching the articles.</ns0:p><ns0:p>1. Articles are searched through keywords on one of the popular platforms, google scholar. The popular keyword considered are white blood cell, red blood cell, disease, machine learning, deep learning, image processing, and explainability 2. The search is refined by re-arranging the keywords to make the search article specific in-lined with the area. Papers are considered only from the English language.</ns0:p><ns0:p>3. After getting a large number of articles, their abstract is read to finalize them for the reviewing process.</ns0:p><ns0:p>4. Some papers are finalized for the review; those proved to be significantly contributed to the research subject.</ns0:p><ns0:p>5. Cross-references are also searched by scrutinizing some papers having good research contributions.</ns0:p><ns0:p>The following figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the co-occurrences of different keywords, considering 02 keywords per article as a threshold. The co-occurrence is analyzed by VOSviewer 1.65. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>REVIEW OVERVIEW</ns0:head><ns0:p>This overview is divided into different sections. The first section includes some terminologies related to blood analysis-blood composition and staining process. The second section comprises the microscopic imaging applied to various diseases and their analysis. This section is followed by the generalized methodology for the detection of disease by utilizing image processing and computer vision algorithms.</ns0:p><ns0:p>The methods and algorithms adopted by different researchers are discussed in this section. A note on publicly available databases is presented further. Staining of Blood Smear For analysis of microscopic blood images, the blood film needs to be prepared.</ns0:p></ns0:div>
<ns0:div><ns0:head>Some basic terminology related to blood</ns0:head><ns0:p>Glass slide is used for making the blood film. For examination and analysis of this film under the microscope, staining is required. Preparation of blood film requires a slide, a tube, and a blood spreader.</ns0:p><ns0:p>Generally, weldge method is used for this purpose. On a base slide, a drop of blood is placed. A spreader slide is moved over this blood drop backward to touch the blood to get the blood spread over the slide uniformly. To get perfection and accuracy in the smear, the spreader slide should be inclined at an angle of 30 degrees to 45 degrees to the blood base slide. The prepared blood smear is dried using an air dryer, and then staining is performed. The dried smear is fixed by absolute methanol or ethyl alcohol. Afterward, it is stained using any of the staining methods -rewmanosky stain, leishmon stain, may-grawald giema or wright-giemsa stain, which differs in the liquid used for staining purpose. These stained slides are then used for analysis under the microscope.(26)(42) <ns0:ref type='bibr' target='#b62'>(60)</ns0:ref>. Generally, laboratories are used to their respective homebrew technics, and therefore, as peripheral blood differential is aphenotypical method, technicians and doctors but also machine learning tools may have problems translating their experience for other laboratories.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54359:1:2:NEW 21 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science preferably it analyzes the morphological abnormality of different components of blood(49) <ns0:ref type='bibr' target='#b35'>(34)</ns0:ref>. The following are various areas where image processing and computer vision could be utilized for blood cell analysis(63)(9)(Lavanya et al.) <ns0:ref type='bibr' target='#b64'>(61)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Blood Cell Counts</ns0:head><ns0:p>RBC and WBC counts are characteristics of a patient's well-being. It is seen that in a large portion of the cases, the absolute blood cell count is influenced because of an infection within the body. Typically machines are present to count the blood cells and other components in the blood. Nevertheless, when required to get certain particular kinds of observations and abnormalities, there is a need for microscopic analysis. Also, the counting of RBC and WBC is possible by automated computer vision methods considering the cell morphologies. The blood smear is formed after staining that outcomes the film of blood. These films are observed under the microscope, and the photographs of these images are used for counting. The cells are preprocessed after the microscopic photograph and then segmented to get the required region of interest for counting. Image processing and computer vision methods are utilized for counting purposes. Isolated blood cells are counted via automated algorithms rather than manual, which enhances accuracy(1)(15) <ns0:ref type='bibr' target='#b38'>(37)</ns0:ref>.</ns0:p></ns0:div>
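As a simple illustration of such automated counting (a sketch only, not the pipeline of any particular study cited here), a stained smear image can be thresholded and its connected components counted with scikit-image; the input image path and size threshold are assumptions.

# Sketch: rough cell counting on a stained blood smear image using
# Otsu thresholding and connected-component labelling.
from skimage import io, color, filters, measure, morphology

image = io.imread('smear.png')               # assumed input image path
gray = color.rgb2gray(image)
threshold = filters.threshold_otsu(gray)     # global threshold separating cells
mask = gray < threshold                       # stained cells appear darker
mask = morphology.remove_small_objects(mask, min_size=50)
labels = measure.label(mask)
regions = [r for r in measure.regionprops(labels) if r.area > 50]
print('approximate cell count:', len(regions))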
<ns0:div><ns0:head>Detection of viral diseases such as malaria, dengue, chikungunya and hepatitis</ns0:head><ns0:p>A decrease in RBCs and platelets is observed during viral infections. Moreover, the parasites causing the infection are also identified to recognize viral diseases such as malaria, dengue, chikungunya, or hepatitis. The pathologist distinguishes these infections by microscopic blood cell analysis. This identification of different parasites can be performed by automated techniques involving computer vision, image processing, and machine learning algorithms <ns0:ref type='bibr' target='#b47'>(46)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Leukemia Detection</ns0:head><ns0:p>Malignancies such as leukemia seriously affect the body's blood-forming tissues and lymphatic system. In leukemia, the white blood cells created by the bone marrow are abnormal. Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows the distinction between normal and leukemia cells. It has two major subtypes, acute leukemia and chronic leukemia. Leukemia can further be classified into the following types, namely, acute lymphocytic (ALL), acute myelogenous (AML), chronic lymphocytic (CLL), and chronic myelogenous (CML). Recognition of these malignant cells is done manually by microscopic image analysis and requires a competent pathologist. Improved automated systems for leukemia detection have been designed based on image processing and machine learning techniques, which prove to be proficient when compared with manual detection. (41)(17)(56)(54)(48)(29) <ns0:ref type='bibr' target='#b41'>(40)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Anemia and Sickle Cell Detection</ns0:head><ns0:p>A decrease in hemoglobin or a lack of healthy RBCs leads to anemia. It can make the patient prone to viral infections and to problems related to breathing. Anemia detection is mostly done by identifying sickle cells in the blood. These sickle cells have a typical crescent-moon shape. The cells are recognized and categorized as ordinary cells or sickle cells via automated algorithms involving computer vision and machine learning(11)(22)(10) <ns0:ref type='bibr' target='#b29'>(28)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>For the diagnosis of certain neurological disorders, such as Alzheimer's and Parkinson's diseases, there are no exact clinical criteria. For improvements in these kinds of diagnoses, emerging metabolomics serves as a powerful technique, which includes the study of novel biomarkers and biochemical pathways <ns0:ref type='bibr' target='#b67'>(64)</ns0:ref>. One more challenge is to identify proteins from the complex biological network that interact with each other as well as with the cell's environment.</ns0:p><ns0:p>A Pseudo-Guided Multi-Objective Genetic Algorithm (PGMOGA) was proposed that reconstitutes pathways by assigning orientations to weighted network edges <ns0:ref type='bibr' target='#b28'>(27)</ns0:ref>. A gene encoder was presented that incorporates two-stage feature selection with unsupervised clustering for the classification of cancer samples <ns0:ref type='bibr' target='#b7'>(7)</ns0:ref>. There is a requirement of finding the correct DNA sequence to get the desired information about the genetic makeup of an organism; a hybrid approach utilized a Restarting and Recentering Genetic Algorithm (RRGA) with integrated PALS <ns0:ref type='bibr' target='#b24'>(24)</ns0:ref>. For working on different datasets, there is a need for a set of visualization metrics used to quantify visualization techniques - an approach of visualization metrics based on effectiveness, expressiveness, readability, and interactivity, with an evolutionary algorithm (EA) used as a case study; this methodology can also be utilized for other visualization techniques <ns0:ref type='bibr' target='#b25'>(25)</ns0:ref>. There is also a requirement for the extraction of information from larger datasets: Frequent Itemset (FI) mining is a popular task to find itemsets in a transactional database, and a graph-based approach is used for the representation of a complete transactional database <ns0:ref type='bibr' target='#b23'>(23)</ns0:ref>.</ns0:p><ns0:p>Out of these different diseases, leukemia is one of the most dangerous in its later stages <ns0:ref type='bibr' target='#b19'>(19)</ns0:ref>. It develops blast cells in the blood, which later affect the generation of normal white blood cells. As the number of these blast cells increases, the body runs short of healthy cells, leading to frequent infections. The type of leukemia defines the way to treat it, so it is always necessary that the type of leukemia be detected with great accuracy. The morphological differences between normal and leukemia cells are shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. The image is taken either by a camera directly or from an adaptor connected to a microscope and is considered for further analysis. Acquired images may have some unwanted regions and overlapping cells.</ns0:p></ns0:div>
<ns0:div><ns0:head>Generalized Methodology</ns0:head></ns0:div>
<ns0:div><ns0:p>The segmented blood components are further classified into their respective sub-classes. This helps to select a particular sub-class image for extracting features for the analysis of blood cells, and, depending upon the analysis, detection of disease is done. After segmentation, different features are extracted by considering the different components of blood. Features include size, shape, color, and the counts of different blood components such as WBCs and RBCs. Analysis of these features further detects the disease or counts the cells. Depending upon the different features extracted, a decision about the disease can be taken; to make these decisions, different classifiers can be designed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Image Pre-Processing</ns0:head><ns0:p>Different methods used for pre-processing include the self-dual multi-scale morphological toggle (SMTT) block <ns0:ref type='bibr' target='#b12'>(12)</ns0:ref> and the Wiener filter <ns0:ref type='bibr' target='#b44'>(43)</ns0:ref>.</ns0:p></ns0:div>
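<ns0:p>The snippet below is an illustrative pre-processing chain (grayscale conversion, Wiener filtering for noise removal and histogram stretching). It is a generic sketch of typical steps, not the specific SMTT-based method cited above; the input file name is an assumption.</ns0:p>

```python
# Generic pre-processing sketch: grayscale -> Wiener denoising -> contrast stretch.
import numpy as np
from scipy.signal import wiener
from skimage import io, color, exposure

image = io.imread('blood_smear.png')              # hypothetical input image
gray = color.rgb2gray(image)

denoised = wiener(gray, mysize=5)                 # Wiener filter (5x5 window)
stretched = exposure.rescale_intensity(denoised,  # histogram/contrast stretching
                                       in_range='image', out_range=(0.0, 1.0))
```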
<ns0:div><ns0:head>Feature Extraction</ns0:head><ns0:p>There are a number of features that can be considered for feature extraction. Some of them are given below; a short code sketch computing several of these features follows the list.</ns0:p><ns0:p>1. Color Features: the color of the cell can be one of the features that separates a cell from other types. For example, the color of plasma (yellow) is very different from that of the other blood components. In many cases, the color of the cell says much about abnormalities.</ns0:p><ns0:p>2. Geometric Features: these are features based on the geometry or shape of the cell and include the following.</ns0:p><ns0:formula xml:id='formula_0'>Elongation = Major axis length / Minor axis length<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>Compactness = (4 × π × Area) / Perimeter²<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>3. Statistical Features: statistical moments such as the mean and standard deviation give information about the appearance of the distribution, while skewness and kurtosis describe its shape, along with the area and perimeter of the cell. The following are the different statistical features.</ns0:p><ns0:formula xml:id='formula_2'>Mean, x̄ = (1/N) ∑_{i=1..N} x_i<ns0:label>(6)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>Standard deviation, σ = √( (1/(N−1)) ∑_{i=1..N} (x_i − x̄)² )<ns0:label>(7)</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>Skewness, SK = (1/N) ∑_{i=1..N} (x_i − x̄)³ / σ³ (8)   Kurtosis, K = (1/N) ∑_{i=1..N} (x_i − x̄)⁴ / σ⁴ (9)</ns0:formula><ns0:p>4. Texture Features: different texture features are defined, such as entropy, correlation, energy, contrast, homogeneity, and so on. Entropy generally describes the randomness in the texture of an image; when the co-occurrence elements are equal, entropy reaches its maximum value. The equation of entropy is as follows.</ns0:p><ns0:formula xml:id='formula_5'>Entropy, Ent = ∑_{i=0..N−1} ∑_{j=0..N−1} M(i, j)(−ln M(i, j))<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Contrast measures the intensity variations between neighboring pixels in an image.</ns0:p><ns0:formula xml:id='formula_7'>Con = ∑_{i=0..N−1} ∑_{j=0..N−1} (i − j)² M(i, j)<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>Energy (E) measures the extent of repetition of pixel pairs. It expresses the uniformity of the image and takes larger values for similar pixels.</ns0:p><ns0:formula xml:id='formula_8'>Energy, E = ∑_{i=0..N−1} ∑_{j=0..N−1} M²(i, j)<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>5. Correlation Features: the repetitive nature of the positions of texture elements in the image is important. An auto-correlation function measures the coarseness of an image.</ns0:p><ns0:formula xml:id='formula_9'>Auto-correlation, P(x, y) = ( ∑_{u=0..N} ∑_{v=0..N} I(u, v) I(u + x, v + y) ) / ( ∑_{u=0..N} ∑_{v=0..N} I²(u, v) )<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>6. Inverse Difference Moment (IDM) or Homogeneity gauges the local homogeneity of an image. IDM measures the closeness of the distribution of the GLCM elements to the GLCM diagonal and can be used to classify an image as textured or non-textured.</ns0:p><ns0:formula xml:id='formula_10'>IDM = ∑_{i=0..N−1} ∑_{j=0..N−1} M(i, j) / (1 + (i − j)²)<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>7. Directional Moment: here the alignment of the image is considered with respect to the angle.</ns0:p><ns0:formula xml:id='formula_11'>DM = ∑_{i=0..N−1} ∑_{j=0..N−1} M(i, j) |i = j|<ns0:label>(15)</ns0:label></ns0:formula></ns0:div>
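<ns0:p>The sketch below illustrates, under stated assumptions, how some of the geometric, statistical and GLCM texture features above can be computed with scikit-image and SciPy. The inputs gray_patch (a grayscale cell patch scaled to [0, 1]) and mask (a binary cell mask) are assumed to come from the segmentation stage; note that scikit-image's 'ASM' property corresponds to the energy of eq. (12), and older scikit-image versions name the GLCM functions greycomatrix/greycoprops.</ns0:p>

```python
# Illustrative feature extraction for one segmented cell (not the authors' code).
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def cell_features(gray_patch, mask):
    region = regionprops(label(mask > 0))[0]
    pixels = gray_patch[mask > 0]

    # Geometric features (eqs. 4-5).
    elongation = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    compactness = 4 * np.pi * region.area / max(region.perimeter, 1e-6) ** 2

    # Statistical features (eqs. 6-9).
    mean, std = pixels.mean(), pixels.std(ddof=1)
    sk, ku = skew(pixels), kurtosis(pixels, fisher=False)

    # GLCM texture features (eqs. 10-14); gray_patch assumed in [0, 1].
    glcm = graycomatrix((gray_patch * 255).astype(np.uint8),
                        distances=[1], angles=[0], levels=256, normed=True)
    contrast = graycoprops(glcm, 'contrast')[0, 0]       # eq. (11)
    energy = graycoprops(glcm, 'ASM')[0, 0]              # eq. (12)
    homogeneity = graycoprops(glcm, 'homogeneity')[0, 0] # eq. (14)
    m = glcm[:, :, 0, 0]
    entropy = -np.sum(m[m > 0] * np.log(m[m > 0]))       # eq. (10)

    return [elongation, compactness, mean, std, sk, ku,
            contrast, energy, homogeneity, entropy]
```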
<ns0:div><ns0:head>Classifier for Disease Detection</ns0:head><ns0:p>Different classifiers are employed for the classification of microscopic blood cell images. These include the machine learning algorithms described below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Decision Tree Classifier</ns0:head><ns0:p>It falls under the supervised learning type and is employed for regression as well as classification. A decision tree has roots, branches, nodes, and leaves. Decision trees may be affected by the problem of overfitting, which has to be analyzed and taken care of; regularization (for example, pruning) is adopted during the generation of decision trees to compensate for the overfitting issue.</ns0:p></ns0:div>
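<ns0:p>A minimal scikit-learn sketch of a decision tree with cost-complexity pruning (one way to limit overfitting) is given below; the feature matrix X and disease labels y are assumed to come from the feature-extraction stage, and the hyperparameter values are illustrative only.</ns0:p>

```python
# Decision-tree sketch with pruning; X and y are assumed to exist.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# ccp_alpha > 0 prunes branches that contribute little impurity reduction.
tree = DecisionTreeClassifier(criterion='gini', max_depth=8, ccp_alpha=0.001, random_state=0)
tree.fit(X_train, y_train)
print('test accuracy:', tree.score(X_test, y_test))
```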
<ns0:div><ns0:head>Random Forest</ns0:head><ns0:p>It consists of many decision trees, with each tree trained on a different training set. Hence this algorithm is more effective in solving classification problems. Important issues in this algorithm are the selection of a pruning method and of the branching criteria. Popular gain measurement techniques here are the gain ratio and the Gini index. The algorithm is parameterized by the number of trees to be developed and the samples used by a particular node. Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> shows the random forest algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head>K-Nearest Neighbours (KNN)</ns0:head><ns0:p>This classifier employs the nearest training instances in the attribute space. According to the chosen value of k, the class of a new sample is decided: to obtain the class of the new vector, the closest k samples from the training data are selected. There are specific methods for calculating the distances between samples, illustrated as follows.</ns0:p><ns0:formula xml:id='formula_13'>Euclidean distance = √( ∑_{i=1..k} (x_i − y_i)² )<ns0:label>(16)</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>Minkowski distance = ( ∑_{i=1..k} |x_i − y_i|^q )^(1/q)<ns0:label>(18)</ns0:label></ns0:formula></ns0:div>
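<ns0:p>A brief sketch of KNN classification using the Minkowski metric of eq. (18) is shown below; p=2 reduces to the Euclidean distance of eq. (16) and p=1 to the Manhattan distance. X_train, y_train and X_test are assumed to exist, and k=5 is an arbitrary choice.</ns0:p>

```python
# KNN sketch with an explicit distance metric; data arrays are assumed inputs.
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)  # p=2 -> Euclidean
knn.fit(X_train, y_train)
predicted_classes = knn.predict(X_test)
```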
<ns0:div><ns0:head>Logistic Regression</ns0:head><ns0:p>Regression can be either linear regression or logistic regression. Linear regression is a supervised regression algorithm, while logistic regression is a supervised classification algorithm. Logistic regression is categorized into different types: binary, multinomial, and ordinal.</ns0:p><ns0:p>Binary Logistic Regression model: this is the simplest type, where the dependent variable can be either 0 or 1, showing only two possible classes. Here there may be multiple predictor variables, but the target variable is binary or binomial.</ns0:p><ns0:formula xml:id='formula_15'>h_θ(x) = g(θᵀx), where 0 ≤ h_θ ≤ 1<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>g is the logistic or sigmoid function, given below.</ns0:p><ns0:formula xml:id='formula_16'>g(z) = 1 / (1 + e^(−z)), where z = θᵀx<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>A loss function is defined to measure the performance of the algorithm for the current weights, where h = g(Xθ):</ns0:p><ns0:formula xml:id='formula_17'>J(θ) = (1/m)(−yᵀ log(h) − (1 − y)ᵀ log(1 − h))<ns0:label>(21)</ns0:label></ns0:formula>
<ns0:div><ns0:p>After this, the loss function is minimized by adjusting (decreasing or increasing) the weights. The gradient tells us how the loss changes with the modification of the parameters:</ns0:p><ns0:formula xml:id='formula_18'>∂J(θ)/∂θ_j = (1/m) Xᵀ(g(Xθ) − y)<ns0:label>(22)</ns0:label></ns0:formula></ns0:div>
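<ns0:p>The small NumPy sketch below implements the sigmoid of eq. (20), the loss of eq. (21) and one gradient-descent update based on eq. (22). It is only an illustration; X (an m × n feature matrix), y (0/1 labels) and the learning rate are assumed inputs.</ns0:p>

```python
# Binary logistic regression primitives corresponding to eqs. (20)-(22).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(theta, X, y):                       # eq. (21)
    h = sigmoid(X @ theta)
    m = len(y)
    return (1.0 / m) * (-y @ np.log(h) - (1 - y) @ np.log(1 - h))

def gradient_step(theta, X, y, learning_rate=0.1):
    m = len(y)
    grad = (1.0 / m) * X.T @ (sigmoid(X @ theta) - y)   # eq. (22)
    return theta - learning_rate * grad
```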
<ns0:div><ns0:head>Multinomial logistic regression</ns0:head><ns0:formula xml:id='formula_19'>π_j(x_i) = e^(α_j + β_1j x_1i + β_2j x_2i + ... + β_pj x_pi) / ( 1 + ∑_{j=1..k−1} e^(α_j + β_1j x_1i + β_2j x_2i + ... + β_pj x_pi) )<ns0:label>(24)</ns0:label></ns0:formula><ns0:p>Here j_1, j_2, ..., j_k are the k categories and i_1, i_2, ..., i_n are the possible independent observation levels.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ordinal regression</ns0:head><ns0:p>It deals with dependent variables having 3 or more ordered categories with quantitative significance. As an example, the categories 'poor', 'good', 'very good' and 'excellent' can be represented as scores from 0 to 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Naïve Bayes Algorithm</ns0:head><ns0:p>It is a probabilistic algorithm based on Bayes' theorem. It assumes that the occurrence of a feature is independent of the other features. Bayes' theorem forms its basis:</ns0:p><ns0:formula xml:id='formula_21'>P(Y|X) = P(X|Y)P(Y) / P(X)<ns0:label>(25)</ns0:label></ns0:formula><ns0:p>It gives the relation between an observed event X and an outcome Y arising from some random process. Here P(X) is the probability of the input, P(Y) is the prior probability of the output, and P(Y|X) is the probability of the output Y given the input X.</ns0:p><ns0:p>The major disadvantage of this algorithm is that it assumes all features to be independent; hence it is not possible to capture relationships between the features.</ns0:p><ns0:p>It has three types, namely Gaussian, Multinomial, and Bernoulli.</ns0:p><ns0:p>Gaussian Naive Bayes: in this type, features have continuous values and are assumed to follow a Gaussian distribution.</ns0:p><ns0:p>Multinomial Naive Bayes: in this case, events are assumed to follow a multinomial distribution. The feature vector is formed by the frequencies of occurrence of events; a typical use is document classification.</ns0:p><ns0:p>Bernoulli Naive Bayes: here, the input describes the features with binary variables independent of each other. In this model, the features used are binary term occurrences instead of frequencies.</ns0:p></ns0:div>
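<ns0:p>For illustration, the three Naive Bayes variants mentioned above can be instantiated in scikit-learn as sketched below. The arrays X_train (continuous features), X_counts (non-negative count features), X_binary (binary features) and y_train are hypothetical placeholders for data prepared in earlier stages.</ns0:p>

```python
# Naive Bayes variants (sketch only); input arrays are assumed to exist.
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

gaussian_nb = GaussianNB().fit(X_train, y_train)        # continuous features
multinomial_nb = MultinomialNB().fit(X_counts, y_train) # frequency/count features
bernoulli_nb = BernoulliNB().fit(X_binary, y_train)     # binary term occurrence
```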
<ns0:div><ns0:head>Support Vector Machine (SVM)</ns0:head><ns0:p>The support vector machine (SVM) is a supervised machine learning algorithm that finds application in classification as well as regression. It has two types: linear SVM, used for linearly separable data, and non-linear SVM, used for data that are not linearly separable. Figure 7 illustrates the SVM and its main concepts; the following are important terms in SVMs.</ns0:p><ns0:p>Support Vectors: these are the data points that are closest to the hyperplane; they define the separating line.</ns0:p><ns0:p>Margin: the gap between the two lines through the closest data points of the different classes defines the margin. It is computed as the perpendicular distance between the separating line and the support vectors. Larger margins are considered good, and smaller ones bad.</ns0:p></ns0:div>
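<ns0:p>A short sketch of linear and non-linear (RBF-kernel) SVMs follows; X_train and y_train are assumed feature/label arrays, and C controls the trade-off between margin width and misclassification.</ns0:p>

```python
# Linear vs. RBF-kernel SVM sketch; training data are assumed inputs.
from sklearn.svm import SVC

linear_svm = SVC(kernel='linear', C=1.0).fit(X_train, y_train)
rbf_svm = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X_train, y_train)
print('support vectors per class:', rbf_svm.n_support_)
```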
<ns0:div><ns0:head>Convolutional Neural Networks</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_14'>8</ns0:ref> shows the generalized architecture of a convolutional neural network <ns0:ref type='bibr' target='#b46'>(45)</ns0:ref>. The CNN has two basic building blocks:</ns0:p><ns0:p>The Convolution Block - consists of two layers, namely the convolution and the pooling layer. These layers prove to be the essential components for feature extraction.</ns0:p><ns0:p>The Fully Connected Block - has a simple, fully connected neural network architecture. This block performs the classification based on the input coming from the convolution block.</ns0:p></ns0:div>
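<ns0:p>A compact Keras sketch of this two-block structure (convolution + pooling followed by fully connected layers) is given below. The input size of 128×128×3 and the four output classes are illustrative assumptions, not values prescribed by the cited works.</ns0:p>

```python
# Minimal CNN sketch mirroring the convolution block + fully connected block.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),   # convolution layer
    layers.MaxPooling2D((2, 2)),                    # pooling layer
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),           # fully connected block
    layers.Dense(4, activation='softmax'),          # e.g., 4 blood-cell sub-classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```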
<ns0:div><ns0:head>Publicly Available Databases</ns0:head><ns0:p>There are different databases publicly available for the analysis of microscopic blood images. These datasets contain microscopic images of the different blood cell types, including white blood cells (WBC) and red blood cells (RBC), along with their sub-classes.</ns0:p></ns0:div>
<ns0:div><ns0:head>BCCD Database</ns0:head><ns0:p>The total number of images in this dataset is 12,500, covering different sub-types of blood cells (see Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Atlas of Hematology by Nivaldo Mediros</ns0:head><ns0:p>This database contains different types of microscopic blood cell images. The image types include maturation sequences, alterations of erythrocytes, anemia images, leukemia images, parasites, fungus images, and some miscellaneous images. The photographs were taken at magnifications of 200, 400, 360, and 1000 with Zeiss and Nikon photo-microscopes(3).</ns0:p></ns0:div>
<ns0:div><ns0:head>ASH Image Bank</ns0:head><ns0:p>This database is from the American Society of Hematology (ASH). It is an online, web-based image bank containing leukemia cell images. It offers a wide collection of hematology categories and provides images at various resolutions(4).</ns0:p></ns0:div>
<ns0:div><ns0:head>Leukocyte Images for Segmentation and Classification (LISC)</ns0:head><ns0:p>This dataset contains images of healthy subjects. The total number of images is 400, obtained from 100 different blood slides of eight healthy subjects, with an image size of 720×576 pixels. The images are from the Hematology-Oncology and BMT Research Center of Imam Khomeini Hospital in Tehran, Iran.</ns0:p><ns0:p>A total of 250 images are available for experimentation purposes. The images contain the different leukocyte types, namely eosinophils, basophils, lymphocytes, monocytes, and neutrophils (50) <ns0:ref type='bibr' target='#b21'>(21)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>C-NMC Dataset</ns0:head><ns0:p>This database is provided with public access by The Cancer Imaging Archive (TCIA). It covers a total of 118 participants, with 15,135 images, all in bitmap (BMP) format. The data is divided into a training set, a preliminary test set, and a final test set containing different cancerous and normal images. Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref> shows a comparison of the different publicly available datasets, and Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref> gives the availability information of the datasets along with their URLs.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_7'>3</ns0:ref> summarizes the different methods used by researchers along with the performance measures, datasets, and number of images used for disease detection.</ns0:p></ns0:div>
<ns0:div><ns0:head>GAP ANALYSIS</ns0:head><ns0:p>After having reviewed the related literature, the following research gaps were identified.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.'>Gaps in Segmentation of cells</ns0:head><ns0:p>Overlapping cells are not considered at the segmentation stage by many researchers, even though many practical cases involve overlapping cells introduced during the staining procedure(Duan et al.) <ns0:ref type='bibr' target='#b53'>(51)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Gaps in algorithms and methodology</ns0:head><ns0:p>There are different bio-inspired optimization algorithms that have not been used by most researchers for detection purposes. Hybrid systems combining different machine learning and deep learning algorithms, which may give improved results, have also been used by only a limited number of researchers(30).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Gaps in system performance</ns0:head><ns0:p>Performance measurement parameters in most of the cases are limited to accuracy only <ns0:ref type='bibr' target='#b45'>(44)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Gaps in detected diseases</ns0:head><ns0:p>Many diseases are not detected and diagnosed together with their sub-types <ns0:ref type='bibr' target='#b66'>(63)</ns0:ref>. This includes leukemia, which has many sub-types: the sub-types L1, L2 and L3 are not considered in the case of acute lymphoblastic leukemia in most of the cases, and in the case of acute myelogenous leukemia its different subtypes, T1 to T7, are not clearly detected in most of the cases(37) <ns0:ref type='bibr' target='#b56'>(54)</ns0:ref>. There is also little investigation of the stages of diseases, which would give a threshold for determining the severity of diseases such as anemia and thalassemia <ns0:ref type='bibr' target='#b14'>(14)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Gaps in databases</ns0:head><ns0:p>The accuracy of the different stages of blood cell analysis is typically tested on only a limited number of databases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.'>Gaps in critical decisions</ns0:head><ns0:p>In medical imaging, artificial intelligence and deep learning are employed by most researchers. They are less acceptable for critical decisions because the implementations of these algorithms behave as a black box. Hence their features cannot be analyzed, and there is always the risk of obtaining the right detection for the wrong reasons.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion of explainable AI for critical analysis</ns0:head><ns0:p>In medical imaging, artificial intelligence is mostly used for detection and diagnosis(6) <ns0:ref type='bibr' target='#b61'>(59)</ns0:ref>. It is generally not preferred for the final decision about the diagnosis. The main reason for this is that it works as a black box, where only the inputs and outputs are known. Right decisions related to the diagnosis might occur, but they might be due to wrong reasons. Algorithms such as decision trees explain their decision process to a good extent, but they limit parameters such as the accuracy of diagnosis; advanced AI and deep learning algorithms assure good accuracy of diagnosis but are unable to explain the inside of the implementation - the black box <ns0:ref type='bibr' target='#b36'>(35)</ns0:ref>. Hence, explainable AI came into the picture to justify the trust placed in a diagnosis in medical imaging <ns0:ref type='bibr' target='#b57'>(55)</ns0:ref>. It analyzes the features and characteristics of the black box in the implementation of AI or deep learning algorithms. In order for such systems to be used with more trust for commercial purposes and for the general public, explainability will prove to be most suitable and appropriate.</ns0:p></ns0:div>
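<ns0:p>As one simple, model-agnostic illustration of inspecting a trained black box (not a full explainable-AI framework), permutation feature importance can reveal which extracted features drive the predictions. In the sketch below, trained_model, X_test, y_test and feature_names are assumed to exist from earlier stages.</ns0:p>

```python
# Permutation importance sketch for a trained classifier (illustrative only).
from sklearn.inspection import permutation_importance

result = permutation_importance(trained_model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f'{name}: {importance:.3f}')
```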
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>Blood cell analysis plays a crucial role in the detection and prediction of various problems and diseases affecting a person. The pathological methods currently used for this purpose prove to be costly and also require long experience for correct detection. Image processing and computer vision strategies have been developed for the analysis of blood cells and the detection of diseases. The microscopic blood cell analysis framework has several stages, namely,</ns0:p></ns0:div>
<ns0:div><ns0:p>techniques. There is always a need for proper and trusted diagnoses in the medical science field. Machine learning and computer vision can provide systems of suggestive diagnosis that achieve better accuracy than existing approaches. In most cases where morphology is concerned in microscopic imaging, the pathologists' experience and skill set always matter; there is a need for software frameworks utilizing machine learning and artificial intelligence to overcome this problem. Also, there is a problem of explainability of AI algorithms with respect to particular diagnostic decisions. To take care of this, explainable AI frameworks could be utilized in the future. For the treatment of different diseases, AI can also play its role in a very trusted manner.</ns0:p></ns0:div>
<ns0:div><ns0:head>ACTIVE RESEARCH QUESTIONS AND DIRECTIONS TO FUTURE RESEARCH</ns0:head><ns0:p>After an extensive literature survey, the following research questions were raised, motivating further research in this field.</ns0:p><ns0:p>• RQ.1 How to increase the performance of the different stages of microscopic analysis for disease detection?</ns0:p><ns0:p>• RQ.2 How to increase the accuracy of detection of critical diseases such as leukemia and its subtypes by using morphological diagnosis?</ns0:p><ns0:p>• RQ.3 What is the societal impact of analyzing critical diseases at their early stages and defining the stage of diseases?</ns0:p><ns0:p>• RQ.4 Why is there a need to apply hybridized algorithms for the classification of microscopic cells for disease detection?</ns0:p><ns0:p>• RQ.5 What are the different performance measures and validation techniques for analyzing the designed framework's performance?</ns0:p><ns0:p>• RQ.6 Which validation techniques are to be employed for system validation of AI-implemented systems?</ns0:p><ns0:p>• RQ.7 How could AI be trusted and used commercially for medical diagnosis by analyzing the features of the black box in the AI implementation?</ns0:p><ns0:p>The following are future perspectives of the work to improve contributions in this area. A robust segmentation of white and red cells in microscopic blood smear images could be implemented to achieve better accuracy. Overcoming the overlapping-cell problem at the time of segmentation will likewise be an important extension. Effective feature extraction utilizing different image transforms will also prove significant. Different optimization algorithms could be utilized efficiently for the classification of blood cells. Different deep learning algorithms may prove efficient and might give high accuracy in the various phases of blood cell analysis. The designed algorithms must be tested on various publicly accessible databases; their accuracy should be comparable across all the databases. Another parameter, such as robustness, can be introduced for this purpose, and the relative accuracy across the various databases can be determined. The performance of a framework should be gauged with various measures such as true positives, true negatives, faults, sensitivity, specificity, precision, F1-score and J-score in addition to accuracy. Contributions are still needed for the detection of various ailments, such as diabetes, viral diseases such as chikungunya and dengue, anemia-related diseases such as pancytopenia and thalassemia, and leukemia.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Co-occurrence of keywords. Source: VOSviewer 1.65</ns0:figDesc><ns0:graphic coords='4,141.73,420.03,413.56,262.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Components of Blood</ns0:figDesc><ns0:graphic coords='5,141.73,201.90,413.57,179.74' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. a)Normal Cells, b)Leukemia Cells, c)Different Components</ns0:figDesc><ns0:graphic coords='6,152.79,63.78,83.68,86.39' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 indicates the different blood cell components through microscopic examination of a Wright-stained smear of normal blood.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Generalized Methodology</ns0:figDesc><ns0:graphic coords='7,178.95,397.00,339.14,168.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 shows the different components of the decision tree. The bottom and the upper part are termed roots. A node represents a feature in the dataset. A branch connects two nodes. Different decision tree learning algorithms exist, including ID3 (Iterative Dichotomiser 3), C4.5, C5.0 and CART. These algorithms have different characteristics that decide their use in a particular application.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Decision tree Source:https://pianalytix.com/decision-tree-algorithm/</ns0:figDesc><ns0:graphic coords='10,141.73,63.78,413.57,275.71' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Random Forest Algorithm. Source https://www.javatpoint.com/machine-learning-random-forest-algorithm</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.57,275.71' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 explains the SVM and its different concepts.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. SVM and its Different Concepts Source: https://www.javatpoint.com/machine-learning-support-vector-machine-algorithm</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,413.57,268.49' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Pooling Layer - a process in which a single value, generally the maximum or the average, is extracted from a set of values; this reduces the size of the output matrix. Some popular CNN architectures are the classic network architectures LeNet-5, AlexNet and VGG 16, and the modern architectures Inception, ResNeXt, ResNet and DenseNet.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Generalized CNN Architecture (45)</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,413.56,246.10' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>pre-processing, segmentation, feature extraction, classification, and disease identification. Pre-processing comprises improving the acquired image quality and removing noise; this includes gray-scale conversion, thresholding, filtering, histogram stretching and morphological operations. The pre-processed image is segmented to get the region of interest for further processing; here WBCs, RBCs and platelets are isolated. Different computer vision techniques utilized for segmentation are edge detection, watershed transformation, mathematical morphology, the Zack algorithm, k-means clustering, SDM, HSV thresholding and Otsu's algorithm. Overlapping cells occur at the time of staining of the blood smear, and removal of these overlapping cells at the time of segmentation is a difficult task; the Hough transform removes certain overlaps, however it makes the framework slower. Segmented images are classified by algorithms like SVM, ANN classifiers, ELM classifiers and the circular Hough transform. There are various databases accessible for experimentation and investigation of microscopic blood cells, such as the BCCD (Kaggle) database, ALL-IDB1, ALL-IDB2, the Atlas of Hematology by Nivaldo Mediros, Leukocyte Images for Segmentation and Classification (LISC), the ASH image bank, and the C-NMC dataset. There are different application areas where microscopic blood cell examination plays a crucial role: RBC and WBC counting, blood group identification, leukemia detection, sickle cell detection, the separation of the various WBC sub-classes and malaria parasite detection can be performed utilizing complex image processing and computer vision</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Here 3 or more possible un-ordered types of the dependent variable are present, with no quantitative significance, such as different types A, B or C. The MLR model is an extension of the LR (logistic regression) model. LR model: π(x) = e^(α + β_1 x_1 + β_2 x_2 + ... + β_p x_p) / (1 + e^(α + β_1 x_1 + β_2 x_2 + ... + β_p x_p)) (23), where π(x) is the event probability, α is the intercept term, x_1, x_2, ... are the independent variables, β_1, β_2, ... are the regression coefficients, p is the number of independent variables and e denotes the exponential. From this LR model, the MLR model can be obtained as an extended version, as below.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Different databases available</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Image</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>Color depth</ns0:cell><ns0:cell>Remark</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>For-</ns0:cell><ns0:cell>images</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mats</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>BCCD</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>JPEG,</ns0:cell><ns0:cell>12500</ns0:cell><ns0:cell>Not mentioned Different sub-types</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Database(Yildirim and C ¸inar)</ns0:cell><ns0:cell>xml,</ns0:cell><ns0:cell /><ns0:cell>of blood cells</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>meta-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>data</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>ALL-IDB (Acute Lym-</ns0:cell><ns0:cell>JPEG</ns0:cell><ns0:cell>109 (510 lym-</ns0:cell><ns0:cell>24-bit</ns0:cell><ns0:cell>Cancerous</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>phoblastic</ns0:cell><ns0:cell cols='2'>Leukemia</ns0:cell><ns0:cell /><ns0:cell>phoblast)</ns0:cell><ns0:cell>2592×1944</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Database ALL-IDB-1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>ALL-IDB-2(32)(18)</ns0:cell><ns0:cell /><ns0:cell>JPEG</ns0:cell><ns0:cell>260 (130 lym-</ns0:cell><ns0:cell>24-bit</ns0:cell><ns0:cell>Cancerous</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>phoblast)</ns0:cell><ns0:cell>257×257</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Atlas of Hematology by Ni-</ns0:cell><ns0:cell>JPEG</ns0:cell><ns0:cell>300</ns0:cell><ns0:cell>Not mentioned Visceral leishma-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>valdo Mediros(3)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>niasis,</ns0:cell><ns0:cell>cellular</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>simlilarity, morpho-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>logic similarities</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>ASH Image Bank(4)</ns0:cell><ns0:cell /><ns0:cell>JPEG</ns0:cell><ns0:cell>5084</ns0:cell><ns0:cell>Not mentioned Cancerous and other</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>different types of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>images</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Leukocyte</ns0:cell><ns0:cell cols='2'>Images</ns0:cell><ns0:cell cols='2'>(LISC) BMP</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>Healthy subjects</ns0:cell></ns0:row><ns0:row><ns0:cell>for</ns0:cell><ns0:cell cols='2'>Segmentation</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>720×576</ns0:cell><ns0:cell>with</ns0:cell><ns0:cell>different</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Classification.(50) (21)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>sub-types of blood</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>cells.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>C-NMC Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>BMP</ns0:cell><ns0:cell>15135</ns0:cell><ns0:cell>Not mentioned Normal and cancer-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>ous images of blood</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>cells</ns0:cell></ns0:row></ns0:table><ns0:note>13/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54359:1:2:NEW 21 Feb 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>://github.com/Shenggan/BCCD D ataset, htt ps : //www.kaggle.com/paultimothymooney/blood − cells ALL-IDB-1 and 2(32)<ns0:ref type='bibr' target='#b18'>(18)</ns0:ref> https://homes.di.unimi.it/scotti/all/ Atlas of Hematology by Ni-Different databases available</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>14/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54359:1:2:NEW 21 Feb 2021)https</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Manuscript to be reviewed Comparison of different methods for disease detection</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>16/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54359:1:2:NEW 21 Feb 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "REBUTTAL LETTER
Dear Editor,
We thank you and all the reviewers for their generous comments and suggestions. It helped us to improve our paper that will add value to our research and the community. All the changes are made and are stated below.
Reviewer 1 Comments
1. I could see a few linguistic issues here and there. The authors are advised to have an independent round of proofreading.
Fixed: proofreading is done.
2. Availability of datasets is always an issue. The authors must add a section listing the available public datasets for the problem at hand with their URLs.
Fixed: A table is added with the URLs of dataset availability in the section "Publicly available databases" (Table 2).
3. Fig. 4 is too generic plus it gets blurred with zooming. Kindly revise.
Fixed: Figure is revised with standard draw.io tool.
4. The authors have missed a few recent and important referenced. Pls. consider adding following
4a. Recent advances in metabolomics in neurological disease, and future perspectives
4b. Orienting conflicted graph edges using genetic algorithms to discover pathways in protein-protein interaction networks
4c. Gene encoder: a feature selection technique through unsupervised deep learning-based clustering for large gene expression data
4d. Optimizing the DNA fragment assembly using metaheuristic-based overlap layout consensus approach
4e. Quantifying and optimizing visualization: An evolutionary computing-based approach
4f. On the efficient representation of datasets as graphs to mine maximal frequent itemsets
4g. Coronavirus disease pandemic (COVID-19): challenges and a global perspective.
Fixed: All above references are added and cited at the end of the section-Different areas where microscopic blood analysis is used.
Reviewer 2 Comments:
1. In figure two, please do not use the word agranulocytes, this is not a good way to discriminate white blood cells, please follow the WHO classification, book published 2017.
Fixed: The agranulocytes term is changed in figure 2.
2. For staining such as Romanowsky stain: please also prefer already in this paragraph that laboratories are used to their respective homebrew technics and therefor, as peripheral blood differential is aphenotypical method, technicians and doctors but also machine learning tools may have problems to translate their experience for other laboratories.
Fixed: Added the description in the section-Staining of Blood Smear
3. The paragraph starting with line 138 is not clear, what the authors want to say here? Where do they get this information from? Please also mention that normally machines are doing cell counting of peripheral blood and any unknown result by flagging will start the process of preparing peripheral blood smears. Please also refer to the aspect that (line 146) that there may be several technics to take the pictures not only 'segmentation.
Fixed: Paragraph is revised and citations are added
4. Figure 3 is only an idea what is going on, however one should really try to paint the lymphocytes and the monocytes a little bit more according to reality, as showing here they look much too similar.
Fixed: One more figure is added to indicate reality in morphologies of lymphocytes and monocytes
5. The sentence line 160 is quite funny, please delete the sentence 'it is found to be…' as AML is already killing patients several days after diagnoses, the authors see problems in future areas of interest to devide white and red blood cells on smears, for sure this is a problem, however the reviewer does not see the problem to devide those, but to approach those in different ways with respect to machine learning.
Fixed: The sentence 160 is deleted. The future of dividing RBC and WBC is discussed.
6. It would be also helpful to understand the expected audience and readers for this paper: bioinformations, haematologics, machine learning experts, technicians in laboratories with less experience or personal …, please speculate for the future perspectives in 'conclusions' about the unmet medical needs and what machine learning can contribute for better diagnostics and better treatment and also to guide algorithms that follow the investigation of peripheral blood in the next 5 to 10 years.
Fixed: Audiences are added as per the suggestions in section-Who it is intended for. Tried to revise the conclusion as per the suggestions.
Note: For changes done, blue color is used in LATEX PDF to indicate the tracked changes.
" | Here is a paper. Please give your review comments after reading it. |
41 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>resources to D2D and cellular user equipment (CUE) users in the uplink of a 5G based network, considering the estimation of a delay bound value. The proposed algorithm considers the minimization of the total delay of users in the uplink and solves the problem by forming a conflict graph and by finding the maximal weight independent set. For the user delay estimation, an approach is proposed that considers a multifractal traffic envelope process and a service curve for the uplink. The performance of the algorithm is evaluated through computer simulations and compared to that of other algorithms in the literature in terms of throughput, delay, fairness and computational complexity, in a scenario with channel modeling that describes the propagation of millimeter waves at frequencies above 6 GHz. Simulation results show that the proposed allocation algorithm outperforms other algorithms in the literature, being highly efficient for 5G systems.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Wireless communication applications have become very popular and several technologies were developed to improve the quality of service (QoS) during the last decades. The increasing demand for wireless traffic through various applications such as augmented reality, machine-to-machine (M2M) communication and the internet of things (IoT), among others, has driven telecommunications operators to increase the bandwidth of their systems and provide service with lower latency and higher throughput. However, there are several system limitations that hinder the increase in bandwidth, making it necessary to think about more advanced cellular communication systems with better spectral efficiency to support the growing number of devices in the network <ns0:ref type='bibr' target='#b11'>(Mishra et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Among the strands of 5G communication networks, one that stands out is device-to-device (D2D) communication. D2D communication was introduced in LTE-A (long term evolution advanced). However, this technology was in the background for a long time, not being used much by telecommunications operators. In this scenario, the D2D devices communicate with each other with low interference from the base station (BS), relieving the BS of traffic overload <ns0:ref type='bibr' target='#b11'>(Mishra et al., 2016)</ns0:ref>. Most works related to D2D communication deal with sharing resources in the uplink instead of the downlink, due to the interest in taking advantage of the asymmetric characteristic of data traffic in Internet networks. More specifically, Internet traffic is much higher on the downlink, thus it seems reasonable to suppose that there is a greater availability of idle resources to be explored in the uplink than in the downlink <ns0:ref type='bibr' target='#b14'>(Pan et al., 2018)</ns0:ref>.</ns0:p><ns0:p>For resource sharing on the uplink, <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> developed a heuristic that guarantees a maximum number of device pairs communicating with each other with minimal interference, denoted as the greedy resource allocation (GRA) algorithm. The authors show that the problem is essentially an allocation problem and proposed an optimal algorithm based on the Hungarian method. However, the algorithm proposed by <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> was not developed for 5G networks, where a large number of devices can exist in the network. Moreover, the authors considered a single-sharing scenario where each allocated resource block (RB) can be reused by only one pair of devices. In 5G networks, it is desirable that any RB allocated to a device can be reused by several devices, which characterizes multi-sharing communication. <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> then proposed a method based on the algorithm described by <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> to solve the multi-sharing resource allocation problem in order to increase the throughput of the system and to guarantee the signal-to-interference-plus-noise ratio (SINR) requirements, denoted as greedy throughput maximization plus (GTM+). The authors formulated the multi-sharing allocation problem and proved that it is a non-deterministic polynomial-time hard problem.
More specifically, <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> proposed a fast and efficient algorithm that explores conflict graphs and maximal weight independent sets, showing that it outperforms other existing schemes.</ns0:p><ns0:p>This paper proposes a resource allocation algorithm for multi-sharing communication that considers the estimation of uplink delay information in order to optimize overall system performance through system delay minimization. In this context of multi-sharing communication, delay is an essential QoS parameter, especially for real-time applications with variable transmission rates and specified bandwidth requirements, such as videoconferencing services. To this end, an approach is first proposed to estimate the delay bound using an envelope process for traffic flows and a service curve for the uplink transmission. The proposed algorithm uses concepts such as conflict graphs and maximal weight independent sets. However, different from the algorithms of the previously mentioned works, the proposed algorithm solves the multi-sharing resource allocation problem by minimizing the estimated uplink delay.</ns0:p><ns0:p>The implemented simulation scenario considers the propagation of millimeter waves (mmWaves) above 6 GHz, an important characteristic of 5G networks which leads to improvements in throughput and latency for network users. Simulation results show that the proposed algorithm provides considerable gains in throughput, delay and fairness in the considered 5G scenario, presenting lower computational complexity compared to some optimization heuristics.</ns0:p><ns0:p>The remainder of the paper is organized as follows: Section 2 describes the system model and formulates the multi-sharing resource allocation problem. Section 3 presents related works. Section 4 presents the proposed algorithm. Section 5 presents the performance evaluation of the algorithms. Section 6 presents the final considerations.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>SYSTEM MODEL AND PROBLEM FORMULATION</ns0:head><ns0:p>The system is composed of M CUEs, including real and virtual CUEs, N DUEs, K idle resource blocks (RBs) scheduled for transmission during a transmission time interval (TTI) and M − K real CUEs. The RBs correspond to the minimum allocation unit in a wireless communication system, and each idle RB, not used by any real CUE, is regarded as a virtual CUE with zero transmit power and no signal-to-interference-plus-noise ratio (SINR) requirement. The CUEs are denoted by C 1 ,C 2 , ...,C M and the DUEs by D 1 , D 2 , ..., D N . The DUE pair transmitter is denoted by D n,T x and the receiver by D n,Rx . In order to facilitate the comprehension of the proposed approach and of the considered system model, Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> presents the variables and parameters considered in this paper.</ns0:p><ns0:p>The received SINR of CUE m at the BS must satisfy:</ns0:p><ns0:formula xml:id='formula_0'>P m G mB σ 2 m + ∑ n∈∆ m P n G nB ≥ γ m ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where P m is the transmit power for CUE m, G mB is the channel gain between CUE m and the BS, σ 2 m is the noise power for CUE m, P n is the transmit power for DUE pair n, G nB is the channel gain between DUE pair n transmitter and the BS and γ m is the SINR threshold required by CUE m.</ns0:p><ns0:p>There are also SINR requirements for the DUE pairs. Certain RBs can be reused by a DUE pair n only if the received SINR is higher than:</ns0:p><ns0:formula xml:id='formula_1'>P n G nn σ 2 n + P m G mn + ∑ n ′ ∈∆ m −{n} P n ′ G n ′ n ≥ γ n , ∀m ∈ Θ n ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where P n is the transmit power for DUE pair n, G nn is the channel gain between the two ends of DUE pair n, σ 2 n is the noise power for DUE pair n, G mn is the channel gain between CUE m and the DUE pair n receiver, G n ′ n is the channel gain between the DUE pair n ′ transmitter and the DUE pair n receiver, and γ n is the SINR threshold required by DUE pair n.</ns0:p></ns0:div>
<ns0:div><ns0:p>The multi-sharing resource allocation problem objective is to determine which DUEs reuse RBs from CUEs such that the total system throughput F is maximized. The total system throughput F is defined as the sum of all CUEs' and DUEs' Shannon capacities and is formulated as follows:</ns0:p><ns0:formula xml:id='formula_3'>F = max ∆ 1 ,∆ 2 ,...,∆ M M ∑ m=1 W m log2 1 + P m G mB σ 2 m + ∑ n∈∆ m P n G nB + ∑ n∈∆ m W m log2 1 + P n G nn σ 2 n + P m G mn + ∑ n ′ ∈∆ m −{n} P n ′ G n ′ n ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>subject to:</ns0:p><ns0:formula xml:id='formula_4'>∑ n∈∆ m P n G nB ≤ P m G mB γ m − σ 2 m , ∀m ∈ {1, 2, ..., M − K} ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>P m G mn + ∑ n ′ ∈∆ m −{n} P n ′ G n ′ n ≤ P n G nn γ n − σ 2 n , ∀n ∈ {1, 2, ..., N} , ∀m ∈ Θ n ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>n ∈ ∆ m , ∀n ∈ {1, 2, ..., N} , ∀m ∈ Θ n ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where constraint (4) represents the maximum tolerable interference on CUE m's transmission and constraint (5) represents the maximum tolerable interference on DUE pair n's transmission.</ns0:p><ns0:p>The multi-sharing resource allocation problem described by equation (3) under constraints (4), (5) and (6) is non-deterministic polynomial-time hard, that is, it is NP-hard, as proved in <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> and <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>, which justifies the need to find an efficient and fast algorithm to solve the problem, such as the one presented in this paper.</ns0:p></ns0:div>
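<ns0:p>For illustration only (this is not the paper's allocation algorithm), the short NumPy sketch below evaluates the CUE SINR constraint of eq. (1) and the corresponding CUE rate term of the objective in eq. (3) for one RB and a candidate set of reusing DUE pairs. All powers, gains, the RB bandwidth and the SINR threshold are toy values chosen only to make the computation concrete.</ns0:p>

```python
# Toy evaluation of eq. (1) and one CUE term of eq. (3) for a candidate reuse set.
import numpy as np

W = 180e3                      # RB bandwidth in Hz (illustrative value)
P_m, G_mB = 0.2, 1e-7          # CUE transmit power and channel gain to the BS
noise = 1e-13                  # noise power (sigma_m^2)
P_n = np.array([0.1, 0.1])     # transmit powers of the reusing DUE pairs in Delta_m
G_nB = np.array([2e-9, 5e-9])  # DUE-transmitter-to-BS gains (interference at the BS)
gamma_m = 3.0                  # SINR threshold required by CUE m

sinr_m = P_m * G_mB / (noise + np.sum(P_n * G_nB))   # left-hand side of eq. (1)
print('CUE SINR constraint satisfied:', sinr_m >= gamma_m)

cue_rate = W * np.log2(1 + sinr_m)                   # CUE contribution to eq. (3)
print('CUE rate on this RB (bit/s):', cue_rate)
```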
<ns0:div><ns0:head n='3'>RELATED WORKS</ns0:head><ns0:p>In this section, we briefly comment on some works related to D2D communications and multi-sharing resource allocation. The greedy throughput maximization plus (GTM+) algorithm is an iterative algorithm proposed by <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> which attempts to find a solution to the multi-sharing resource allocation problem. This algorithm is based on the maximization of a utility function given in terms of the system throughput considering that RBs are reused <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>. The authors in <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> use the heuristic algorithm proposed by <ns0:ref type='bibr' target='#b1'>(Basagni, 2001)</ns0:ref> to obtain a maximal weight independent set, with time complexity O(n 3 ). The worst-case complexity of GTM+ is O(n 4 ), because in each iteration at least one DUE pair is granted to reuse the RBs. Contrary to our proposal, the GTM+ algorithm does not consider the system delay as an optimization metric.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b19'>(Zhang et al., 2016)</ns0:ref>, it is proposed to use a distance based power control scheme for D2D communication in order to achieve the expected performance gain without causing performance degradation to the primary cellular users due to system interference. The authors apply the Poisson point process (PPP) model, a stochastic geometry model, to get tractable analysis results. Numeric results presented in simulations show that the proposed scheme benefits both CUEs and DUEs. The initial results demonstrate the advantages of using the power control scheme, although comparisons are not made with state-of-the-art schemes in terms of throughput and delay for resource allocation in D2D communication.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b11'>(Mishra et al., 2016)</ns0:ref>, the authors propose a resource allocation scheme for D2D communication in the uplink which consists of two phases. In the first phase, if multihop (two-hop) communication is required, a relay is selected from the available relays between the cell edge device and the BS. The relay selection scheme considers parameters such as battery power and reliability to minimize packet loss. In the second phase, an efficient resource allocation scheme is proposed that reduces the upload time and optimizes the number of resource blocks. In this work, we also address a resource allocation scheme for D2D communication in the uplink, but the throughput maximization occurs via a delay minimization based algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In <ns0:ref type='bibr' target='#b14'>(Pan et al., 2018)</ns0:ref>, the authors study power control and the problem of resource allocation in D2D communication underlaying a non-orthogonal multiple access (NOMA) cellular network. The objective of the work is to maximize the throughput sum of all D2D pairs by meeting the minimum rate constraints of users and using techniques of successive interference cancellation. The optimal conditions for power control of cellular users in each subchannel are derived first. Then, it is proposed a dual-based iterative algorithm to solve the resource allocation problem. The results show that the proposed scheme outperforms the conventional scheme for the network with high data requirements. Different from <ns0:ref type='bibr' target='#b14'>(Pan et al., 2018)</ns0:ref>, we consider in this work an uplink of CP-OFDM (cyclic prefix -orthogonal frequencydivision multiple access) based communication system. However, we also aim to enhance network QoS parameters such as throughput and delay through the application of our resource allocation approach.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b8'>(Khuntia and Hazra, 2019)</ns0:ref>, it is proposed by the authors a Deep Q-learning with an extended Kalman filter (EKF) scheme to solve the channel and power allocation issue for D2D communication devices when the prior traffic information is not known to the BS. They explore an optimal policy for resource and power allocation with the aim of maximizing the total throughput of the system. The and Rao T, 2019) and <ns0:ref type='bibr' target='#b8'>(Khuntia and Hazra, 2019)</ns0:ref> produce interesting results, we alternatively propose an algorithm in this paper that is not based on reinforcement learning that can solve the problem with a lower computational complexity.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b16'>(Song et al., 2019)</ns0:ref> it is proposed a joint uplink and downlink (JUAD) resource allocation scheme which maximizes system capacity and guarantees the SINR constraint for CUEs and DUEs. The authors formulate the optimization problem as a mixed integer nonlinear programming problem (MINLP) and divide it into two sub-problems, the power allocation and channel assignment. At first, the optimal transmission power is obtained through the convex objective function. Then, it is developed the Hungarian algorithm to achieve joint uplink and downlink channel assignment, improving system capacity performance and increasing spectrum efficiency. Authors show through simulations that the performance of the algorithm is better than that of schemes for independent allocation. The JUAD algorithm has among its advantages the fact that it works with allocation in the uplink and downlink.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b10'>(Li et al., 2019)</ns0:ref>, the authors propose to use a resource allocation scheme that integrates a probabilistic approach to a quasi-convex optimization algorithm based on channel probability statistical characteristics for D2D communication mode selection and resource optimization. Authors propose also a sub-optimal allocation algorithm when number of users is too large, in order to save costs and improve efficiency. The presented results show that the algorithm optimizes total throughput of the system and reduces communication interference between users. 
However, in the same way as JUAD and the other mentioned algorithms, the algorithm proposed in <ns0:ref type='bibr' target='#b10'>(Li et al., 2019)</ns0:ref> does not consider the user's system delay in the optimization process, an important parameter in any resource allocation system, especially if we consider that D2D communication takes advantage of the proximity between users to increase spectral efficiency. That is, by decreasing communication delays of the links, spectral efficiency can be increased.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DELAY MINIMIZATION BASED D2D RESOURCE ALLOCATION ALGORITHM</ns0:head><ns0:p>In this section, a multi-sharing resource allocation algorithm for D2D communication is proposed which considers the estimated delay information as a utility function to be optimized. To estimate the delay, an approach based on deterministic network calculus concepts is also proposed.</ns0:p><ns0:p>Deterministic network calculus can be used to estimate resources in order to provide quality of service (QoS) in networks and has provided powerful tools for estimating backlog and delay in a network with guaranteed service for individual traffic flows. Using the notion of envelope process, arrival curves and service curves, several studies have shown that the backlog and delay bound can be concisely expressed by the Min-Plus algebra (Le <ns0:ref type='bibr' target='#b9'>Boudec and Thiran, 2004)</ns0:ref>. Network calculus can also be seen as the systems theory that applies to computer networks, but the main difference is that it considers another algebra (Gonzaga Ferreira and Teles Vieira, 2020).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>MFBAP envelope process</ns0:head><ns0:p>In general, fractals are described in the literature according to a set of behaviors and characteristics, such as self-similarity, phenomena with patterns that are repeated at different scales and with irregular structures. Multifractals are characterized by a set of fractal dimensions and are used to treat phenomena that occur in multiple scales and dimensions <ns0:ref type='bibr' target='#b4'>(Feldmann et al., 1998)</ns0:ref>.</ns0:p><ns0:p>Network traffic traces captured at small scales in general tend to be multifractal. This means that they present highly dependent structure between samples with burst incidences at various scales. These characteristics can degrade network performance in relation to traffic flows considered Gaussian and short-dependent <ns0:ref type='bibr' target='#b4'>(Feldmann et al., 1998)</ns0:ref>. In this work, we propose to use a multifractal envelope process once it was shown that it can better describe real traffic envelope processes at the time scale considered in this paper than monofractal based or short-range based envelope processes <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>.</ns0:p><ns0:p>The envelope process for incoming packet traffic is an upper bound for the actual accumulated packet traffic process. For a deterministic envelope process, the function Â(t) corresponds to the maximum value of a flow A(t) in the time slot [s, s + t], and is defined by equation as follows (Le <ns0:ref type='bibr' target='#b9'>Boudec and Thiran, 2004)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_7'>Â(t) = sup s≥0 A [s, s + t] ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where sup is an operator that returns the maximum value of A[s, s +t] without establishing an upper bound value s ≥ 0 in this case.</ns0:p><ns0:p>The multifractal bounded arrival process (MFBAP) is a deterministic alternative to obtain the envelope process that limits the volume of traffic in a given time interval, calculated as follows <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_8'>ÂMFBAP (t) = āt + kσt H(t) + B,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where H(t) is the Hölder exponent <ns0:ref type='bibr'>(P et al., 2003)</ns0:ref>, that represents the degree of the singularity of the function, t is the instant of time, ā and σ are respectively the mean and standard deviation of incoming traffic, k is the constant related to the probability of violation (for ε = 10 −6 ) of the envelope process and B is the buffer size.</ns0:p></ns0:div>
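A minimal numerical sketch of the MFBAP envelope of equation (8) is given below. It assumes a constant Hölder exponent purely for illustration (in practice H(t) comes from a multifractal analysis of the trace), and the constant k is taken as the Gaussian-style choice sqrt(-2 ln eps); the authors' exact constant and trace are not reproduced here, and the synthetic lognormal trace only stands in for measured traffic.

```python
import numpy as np

def mfbap_envelope(traffic, holder_exponent, buffer_size, epsilon=1e-6):
    """Sketch of the MFBAP deterministic envelope of equation (8):
        A_hat(t) = a_bar * t + k * sigma * t**H(t) + B
    `traffic` is a per-slot arrival series (e.g., bytes per time slot)."""
    a_bar = np.mean(traffic)              # mean arrival rate per slot
    sigma = np.std(traffic)               # standard deviation of arrivals
    k = np.sqrt(-2.0 * np.log(epsilon))   # assumed violation-probability constant
    t = np.arange(1, len(traffic) + 1, dtype=float)
    return a_bar * t + k * sigma * t ** holder_exponent + buffer_size

# Toy usage: a bursty synthetic trace standing in for captured traffic.
rng = np.random.default_rng(0)
trace = rng.lognormal(mean=8.0, sigma=1.0, size=500)          # bytes per slot
envelope = mfbap_envelope(trace, holder_exponent=0.7, buffer_size=5e4)
cumulative = np.cumsum(trace)                                 # actual accumulated traffic A(0, t)
print("envelope exceeded in", int(np.sum(cumulative > envelope)), "of", len(trace), "slots")
```

The comparison against the cumulative arrivals illustrates the role of the envelope as an upper bound on the accumulated traffic process over the observation window.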
<ns0:div><ns0:head n='4.2'>Delay Bound Estimation</ns0:head><ns0:p>The service curve concept has been explored as estimation tool involving various technologies and scenarios, mainly in the area of deterministic and statistical network calculus <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>.</ns0:p><ns0:p>The advantage of the network calculus theory is due to the very intuitive convolution formulas that can be used to determine traffic output processes of a communication system from its arrival envelope process and service curve (Le <ns0:ref type='bibr' target='#b9'>Boudec and Thiran, 2004)</ns0:ref>.</ns0:p><ns0:p>The upper bound on delay, denoted by d, is given by <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_9'>d = inf {d ≥ 0|∀t ≥ 0 : A * (t − d) ≤ S(t)} , (<ns0:label>9</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>)</ns0:formula><ns0:p>where inf is an operator that returns, in this case, the lowest value of d ≥ 0 obeying A * (t − d) ≤ S(t). A * is the MFBAP envelope process, calculated according to equation ( <ns0:ref type='formula' target='#formula_8'>8</ns0:ref>) and S is the generalized service curve of an OFDM (orthogonal frequency-division multiple access) system for any user served for the same time interval T , and can be denoted as <ns0:ref type='bibr' target='#b3'>(Costa, 2013)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_11'>S n S (t) = cT P + cT min max [t − PN S T − (n S − 1)T ; 0] T ; 1 ,<ns0:label>(10)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>where c is the average service rate on the system server and N S is the number of time slots T per complete cycle P, given by P = ⌈t/(N S T)⌉; the operator ⌈·⌉ returns the smallest integer greater than or equal to its argument. The use of the service curve allows network performance and behavior parameters to be obtained in an analytical way, as well as estimates of the delay and backlog bounds.</ns0:p></ns0:div>
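The delay bound of equation (9) can be evaluated numerically once an arrival envelope and the service curve of equation (10) are available, as in the sketch below. The reading of the cycle index as the number of complete cycles (a floor) and all numeric values are assumptions made for the example; this is not the paper's simulator.

```python
import numpy as np

def ofdm_service_curve(t, c, T, n_s, N_s):
    """Sketch of the per-user service curve of equation (10).
    c: average service rate, T: slot duration, n_s: the user's slot index in a cycle,
    N_s: slots per cycle. The number of complete cycles is taken as floor(t / (N_s*T));
    this interpretation of the cycle index is an assumption of the sketch."""
    P = np.floor(t / (N_s * T))
    partial = np.minimum(np.maximum(t - P * N_s * T - (n_s - 1) * T, 0.0) / T, 1.0)
    return c * T * P + c * T * partial

def delay_bound(arrival_envelope, service_curve, t, d_grid):
    """Equation (9): the smallest d >= 0 such that A*(t - d) <= S(t) for every t on the grid."""
    for d in d_grid:
        shifted = np.interp(t - d, t, arrival_envelope, left=0.0)   # A*(t - d), zero before t = 0
        if np.all(shifted <= service_curve):
            return d
    return None  # no feasible bound found on this search grid

# Toy usage with illustrative numbers (not the paper's configuration).
T, N_s, n_s, c = 1e-3, 10, 3, 5e6                    # 1 ms slots, 10-slot cycle, 5 Mb/s rate
t = np.linspace(0.0, 0.5, 2001)                      # half-second horizon
envelope = 2e5 * t + 1e3 * np.sqrt(t) + 500.0        # stand-in MFBAP-style envelope (bits)
S = ofdm_service_curve(t, c, T, n_s, N_s)
print("estimated delay bound (s):", delay_bound(envelope, S, t, np.arange(0.0, 0.2, 1e-3)))
```

The grid search mirrors the horizontal-distance interpretation of the bound: d is the horizontal shift needed so that the arrival envelope stays below the service curve everywhere.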
<ns0:div><ns0:head n='4.3'>Delay Minimization Conflict Graph Algorithm</ns0:head><ns0:p>In this section, we propose a resource allocation algorithm for the uplink of a communication system with D2D users that considers the minimization of delay, called the DMCG (delay minimization conflict graph) algorithm. It is proposed in this paper to use equation (<ns0:ref type='formula' target='#formula_9'>9</ns0:ref>) to estimate the delay due to its precision, as verified in previous works such as <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref> and (Gonzaga Ferreira and Teles Vieira, 2020), which makes it possible to make early decisions on wireless network resource scheduling.</ns0:p><ns0:p>The DMCG algorithm starts by randomly allocating idle RBs, i.e., one DUE pair is randomly chosen for each idle RB. Then, the algorithm decides how to reuse the RBs of all CUEs. A group is associated with each unmarked CUE m, and each unallocated DUE pair n is added to the group that optimizes utility function (9), forming the sets denoted by Γ m . The set of DUE pairs that maximizes the total utility is contained in the largest group m ′ with unallocated DUE pairs and is obtained by taking the maximum weight independent set of the conflict graph corresponding to group m ′ ; these DUE pairs are allowed to reuse the RBs of CUE m ′ . Candidates are removed one by one until the SINR requirements are met. Then, finishing an iteration, the set of DUE pairs that will reuse the RBs of CUE m ′ , denoted as ∆ m ′ , ends up being the remaining candidates, and group m ′ is marked as done. Another iteration is performed if there are still unmarked groups. Thus, the DMCG algorithm works in an iterative way, forming conflict graphs G m ′ and seeking to meet the SINR requirements at each iteration.</ns0:p><ns0:p>The set of vertices of the conflict graph G m ′ corresponds to the DUE pairs in group m ′ . A weight value is assigned to each vertex according to the utility of the corresponding DUE pair when it joins group m ′ . In the conflict graph, an edge is added between two vertices whenever their mutual interference does not meet the tolerable threshold, meaning that the same RB cannot be reused by those two DUE pairs; this is why such vertices are connected by an edge.</ns0:p><ns0:p>The conflict graph G m ′ formed by the DUE pairs representing the vertices helps to determine the candidates. Thus, the objective is to remove the DUE pairs that cannot coexist due to large mutual interference and to keep the DUE pairs that maximize the throughput of the system. Candidates are chosen as the maximum weight independent set of the conflict graph G m ′ . Finally, the set ∆ 1 , ∆ 2 , ..., ∆ M of DUE pairs n that reuse RBs allocated to CUE m is returned as the algorithm result.</ns0:p><ns0:p>Algorithms 1 and 2 show the pseudo-code for the proposed multi-sharing resource allocation algorithm, an efficient solution to the maximization problem described by equation (<ns0:ref type='formula' target='#formula_3'>3</ns0:ref>) and subject to constraints (4), (<ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and (<ns0:ref type='formula' target='#formula_6'>6</ns0:ref>).</ns0:p><ns0:p>The operation of Algorithm 1 is similar to that of the GTM+ algorithm, but with the difference of using the estimated delay function described by equation (<ns0:ref type='formula' target='#formula_9'>9</ns0:ref>) instead of a utility function based on throughput. The proposed algorithm seeks to find a solution that minimizes the delay bound estimated for each user while simultaneously meeting the constraints imposed by mutual interference.</ns0:p></ns0:div>
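The conflict-graph construction and independent-set selection just described can be sketched as follows. The greedy routine is only a simple stand-in for the Basagni (2001) heuristic actually used by GTM+/DMCG, and the pairwise interference test, the thresholds and the use of inverse delay bounds as vertex weights are illustrative assumptions rather than the paper's exact rules.

```python
import itertools

def build_conflict_graph(candidates, cross_gain, power, threshold):
    """Add an edge between two DUE pairs when the interference either one causes on the
    other exceeds its tolerable threshold, so they must not reuse the same RBs."""
    edges = set()
    for a, b in itertools.combinations(candidates, 2):
        if (power[a] * cross_gain[(a, b)] > threshold[b] or
                power[b] * cross_gain[(b, a)] > threshold[a]):
            edges.add((a, b))
    return edges

def greedy_mwis(weights, edges):
    """Greedy stand-in for the maximal weight independent set step: repeatedly pick the
    heaviest remaining vertex and discard its neighbours."""
    neighbours = {v: set() for v in weights}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    chosen, remaining = [], set(weights)
    while remaining:
        v = max(remaining, key=lambda u: weights[u])
        chosen.append(v)
        remaining -= {v} | neighbours[v]
    return chosen

# Toy usage: three DUE pairs competing to reuse one CUE's RBs.
cand = ["D1", "D2", "D3"]
p = {d: 0.01 for d in cand}
g = {("D1", "D2"): 1e-9, ("D2", "D1"): 1e-9, ("D1", "D3"): 1e-13,
     ("D3", "D1"): 1e-13, ("D2", "D3"): 1e-13, ("D3", "D2"): 1e-13}
thr = {d: 1e-12 for d in cand}
w = {"D1": 1 / 0.004, "D2": 1 / 0.002, "D3": 1 / 0.003}   # weight = 1 / estimated delay bound
print(greedy_mwis(w, build_conflict_graph(cand, g, p, thr)))
```

In this toy case D1 and D2 conflict, so the greedy step keeps the pair with the smaller estimated delay (D2) together with the non-conflicting D3, which mirrors the intent of selecting co-channel DUE pairs that minimize the system delay.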
<ns0:div><ns0:head n='5'>SIMULATIONS AND RESULTS</ns0:head><ns0:p>This section presents the simulation parameters of the wireless network and the channel modeling used in the simulations, as well as the obtained results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Channel Modeling for mmWaves</ns0:head><ns0:p>Recent studies developed by 3GPP regarding high frequency bands between 0.5 and 100 GHz defined two channel models for this scenario: clustered delay line (CDL) and tapped delay line (TDL). These channel models are applicable for system-level simulations subject to the following conditions (3GPP, 2017):</ns0:p><ns0:p>• For system-level simulations, the supported scenarios are microcell and macrocell urban, indoor office and macrocell rural;</ns0:p><ns0:p>• Bandwidth is supported up to 10 % of the center frequency, but not greater than 2 GHz.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>System Parameters and Results</ns0:head><ns0:p>The simulations were conducted using MATLAB software version R2018a and a microcomputer with the following configuration: Intel Core i7-4785T CPU 2.20 GHz, 8 GB RAM, SSHD SATA III and Windows 10 64 bits. The simulation functions and routines were implemented instead of using available network simulation tools in order to have more control on the simulation scenario configuration and CDL channel modeling parameters than some commercial softwares.</ns0:p><ns0:p>It was compared the simulation results of the proposed DMCG algorithm with those of the greedy throughput maximization plus (GTM+) presented in <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>, a genetic algorithm (GA) based approach, which aims to find a solution for the maximization problem described by equation (3) subject to constraints (4), ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>), and a random reuse scheme. The GA-based algorithm was developed using 30 individuals and 100 iterations.</ns0:p><ns0:p>All CUEs and DUE pairs are randomly distributed in a single cell with the BS at the center and are set to have the same parameters such as SINR threshold, transmission power and noise spectral density, varying only the location in the cell. Each figure reports simulation results averaged over 1000 transmission time intervals (TTIs) in order to reflect the average system performance. The system simulation parameters are given in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. Most parameter values were set according to references <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> and <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In this work, it is simulated an uplink based on CP-OFDM (cyclic prefix -orthogonal frequency- The total throughput values of the system calculated according to equation (3) and the sum of all devices throughput are shown in Figures <ns0:ref type='figure'>3 and 4</ns0:ref>. It can be seen that the proposed algorithm has the highest values in the two simulated scenarios, both with Rayleigh and CDL-A modeling. The GTM+ and the DMCG algorithms presented higher throughput values compared to the GA-based allocation, showing that the conflict graph strategy is more efficient than solving the problem represented by equations (3) (4) ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>) by directly applying optimization algorithms. The proposed DMCG outperforms the GTM+ because it is considered the conflict graph strategy in conjunction to delay minimization. In this way, in order to reduce system delay, the proposed DMCG algorithm must increase system throughput similar as done by the GTM+ or find a solution that optimizes the allocation process yielding the system delay minimization. The performance of the proposed algorithm improves as the number of CUEs in the network increases, while the performance of the GA-based algorithm tends to become worse.</ns0:p><ns0:p>It can be observed a similar behavior regarding the D2D devices throughput, as shown in Figures <ns0:ref type='figure'>5 and 6</ns0:ref>. 
The proposed algorithm improves considerably in terms of throughput compared to the GTM+ algorithm.</ns0:p><ns0:p>Fairness is an important measure that determines how fair the distribution of resources among system users is, calculated as described in <ns0:ref type='bibr' target='#b7'>(Jain et al., 1998)</ns0:ref>. It can be seen in Figures <ns0:ref type='figure'>7 and 8</ns0:ref> by the GTM+ algorithm. This result proves that the DMCG algorithm is efficient in its strategy of using the estimated delay as a utility function in the allocation system, resulting in a considerable drop in the delay value in both simulated scenarios. The delay values of the GA-based algorithm and the random algorithm vary widely in the simulated scenarios. In the tests, the presence of these peaks of values was verified due to the characteristic of these algorithms of not distributing resources in a fair way, i.e., the characteristic of penalizing some users in terms of delay in detriment of total throughput of the system.</ns0:p><ns0:p>The performance improvement of the proposed algorithm in relation to the other studied algorithms is mainly due to its characteristic of reducing the average delay of the system through the verification of the utility function defined by the estimated delay parameter. The reduction of the average delay of the system has a direct consequence in the improvement of the system throughput and in the fairness parameter.</ns0:p><ns0:p>Regarding the processing time, shown in Figures <ns0:ref type='figure' target='#fig_6'>11 and 12</ns0:ref>, it can be noticed that the algorithm based on GA has the highest values. GA-based heuristics is the one that demands the most processing among the compared algorithms, due to the large number of variables involved in the system. The processing time tends to increase as the number of CUEs in the network increases. The proposed DMCG algorithm had a considerable performance improvement in processing time when compared to the GTM+ algorithm, although both have computational complexity O(n 4 ). The simulation results confirm that the proposed algorithm based on delay minimization outperforms the GTM+ algorithm for all considered performance parameters.</ns0:p></ns0:div>
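The fairness results discussed above rely on the index of (Jain et al., 1998), which can be computed directly from the per-user throughputs; the small sketch below uses made-up throughput vectors purely to illustrate the behaviour of the metric.

```python
def jain_fairness(throughputs):
    """Jain's fairness index (sum x_i)^2 / (n * sum x_i^2); 1.0 means all users are served equally."""
    n, total = len(throughputs), sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs)) if n and total else 0.0

# Made-up per-user throughputs (Mb/s) under two hypothetical allocations.
print(jain_fairness([5.0, 5.2, 4.8, 5.1]))   # close to 1: evenly served users
print(jain_fairness([12.0, 1.0, 1.2, 0.8]))  # well below 1: one user dominates the resources
```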
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>This paper presented a multi-sharing resource allocation algorithm for device-to-device (D2D) communication networks in a scenario with millimeter waves above 6 GHz. The proposed DMCG algorithm considers the minimization of a delay function estimated using concepts of network calculus such as the traffic envelope process and the service curve.</ns0:p><ns0:p>These results show that the use of estimated delay information enhances the multi-sharing resource allocation performance based on conflict graphs and the maximal weight independent set, with improvements in all requirements in relation to the GTM+ algorithm and with the advantage of being able to anticipate the allocation of resources in a D2D communication scenario with mmWaves.</ns0:p><ns0:p>As a proposal for future work, we intend to extend the communication scenario by considering optimized resource allocation for CUE devices and the application of a variation of the proposed approach involving downlink data transmission. The use of the proposed algorithm in downlink transmission could considerably improve spectral efficiency by reusing idle resources of network users.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Before sharing the RBs to DUEs during transmission, it is pre-allocated a set of uplink RBs for each CUE. The allocated bandwidth denoted by W m for each CUE m is proportional to the number of allocated RBs. Each CUE can share its RBs with several pairs of DUEs, making it possible a large number 2/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:1:1:CHECK 27 Jan 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. System model of D2D communications reusing the uplink resources of CUES.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>resource allocation scheme comprises of four phases, i.e., cell splitting, clustering, queuing model and channel allocation simultaneously with power allocation. It is used EKF together with Deep Q-Network to incorporate weight uncertainty of the Q-function as well as the state uncertainty during transition, helping the learner in achieving an optimal policy. The authors show the advantage of the resource sharing scheme over other existing schemes through numerical simulations. The author propose in (G. and Rao T, 2019) to use reinforcement-learning based latency controlled D2D connectivity (RL-LCDC) algorithm and its Q-Learning approach in an indoor D2D communication network for 5G connectivity with minimum latency. The algorithm discovers the neighbors, decides the D2D link and adaptively controls the communication range for maximum network connectivity. Resultspresented in<ns0:ref type='bibr' target='#b5'>(G. and Rao T, 2019)</ns0:ref> show that RL-LCDC optimizes the connectivity with lower delay and better energy efficiency when compared with other conventional schemes. Although both approaches (G.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54775:1:1:CHECK 27 Jan 2021) Manuscript to be reviewed Computer Science Algorithm 2: Proposed Algorithm: Delay Optimization Data: DUE n and set C; Result: Optimum group m * ;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Representation of clusters for CDL model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>that the proposed algorithm presents the highest fairness values in all simulated scenarios, showing that it presents a fair distribution of resources among the users of the system. The algorithm based on GA presents the lowest 11/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:1:1:CHECK 27 Jan 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .Figure 4 .Figure 5 .Figure 6 .</ns0:head><ns0:label>3456</ns0:label><ns0:figDesc>Figure 3. CDL-A channel model: Total throughput for different number of CUEs</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .Figure 8 .Figure 9 .Figure 10 .Figure 11 .Figure 12 .</ns0:head><ns0:label>789101112</ns0:label><ns0:figDesc>Figure 7. CDL-A channel model: Fairness index for different number of CUEs</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Notation table</ns0:figDesc><ns0:table><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>Number of CUEs.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>Number of DUEs.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K</ns0:cell><ns0:cell cols='2'>Number of idle resource blocks.</ns0:cell></ns0:row><ns0:row><ns0:cell>C 1 ,C 2 , ...,C M</ns0:cell><ns0:cell>CUEs denotation.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>D 1 , D 2 , ..., D N DUEs denotation.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>W m</ns0:cell><ns0:cell cols='2'>Allocation bandwidth for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>Θ n</ns0:cell><ns0:cell cols='2'>Set of CUEs that share their RBs with DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>∆ m</ns0:cell><ns0:cell cols='2'>Set of DUE pairs that reuse the RBs allocated to CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>P m</ns0:cell><ns0:cell cols='2'>Transmit power for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>G mB</ns0:cell><ns0:cell cols='2'>Channel gain between CUE m and the BS.</ns0:cell></ns0:row><ns0:row><ns0:cell>σ 2 m</ns0:cell><ns0:cell cols='2'>Noise power for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>P n</ns0:cell><ns0:cell cols='2'>Transmit power for DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>G nB</ns0:cell><ns0:cell cols='2'>Channel gain between DUE pair n transmitter and the BS.</ns0:cell></ns0:row><ns0:row><ns0:cell>γ m</ns0:cell><ns0:cell cols='2'>SINR threshold required by CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>G nn</ns0:cell><ns0:cell cols='2'>Channel gain between the DUE pair n two ends.</ns0:cell></ns0:row><ns0:row><ns0:cell>σ 2 n</ns0:cell><ns0:cell cols='2'>Noise power for DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>G mn</ns0:cell><ns0:cell cols='2'>Channel gain from CUE m to DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>G n ′ n</ns0:cell><ns0:cell>Channel gain from D n ′</ns0:cell><ns0:cell>,T x to D n,Rx .</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell cols='2'>Total system throughput.</ns0:cell></ns0:row><ns0:row><ns0:cell>Â(t)</ns0:cell><ns0:cell cols='2'>Maximum value of a flow A(t) in the time slot [s, s + t].</ns0:cell></ns0:row><ns0:row><ns0:cell>ÂMFBAP (t)</ns0:cell><ns0:cell cols='2'>Multifractal bounded arrival process.</ns0:cell></ns0:row><ns0:row><ns0:cell>H(t)</ns0:cell><ns0:cell>Hölder exponent.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>t</ns0:cell><ns0:cell>Instant of time.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>ā</ns0:cell><ns0:cell cols='2'>Mean of the incoming traffic.</ns0:cell></ns0:row><ns0:row><ns0:cell>σ</ns0:cell><ns0:cell cols='2'>Standard deviation of the incoming traffic.</ns0:cell></ns0:row><ns0:row><ns0:cell>k</ns0:cell><ns0:cell cols='2'>Constant related to the probability of violation of the envelope process.</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>Buffer size.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>d</ns0:cell><ns0:cell>Estimated delay.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell cols='2'>Generalized service curve of an OFDM system.</ns0:cell></ns0:row><ns0:row><ns0:cell>c</ns0:cell><ns0:cell cols='2'>Average service rate on the system server.</ns0:cell></ns0:row><ns0:row><ns0:cell>N S</ns0:cell><ns0:cell cols='2'>Number of time slots T per complete cycle P given by P = t N S T .</ns0:cell></ns0:row><ns0:row><ns0:cell>Γ 
m</ns0:cell><ns0:cell cols='2'>Set of unmarked pair m.</ns0:cell></ns0:row><ns0:row><ns0:cell>∆ m ′</ns0:cell><ns0:cell cols='2'>Set of DUE pairs that will reuse RBs of CUE m ′ .</ns0:cell></ns0:row><ns0:row><ns0:cell>G m ′</ns0:cell><ns0:cell cols='2'>Conflict graph formed by DUE pairs representing the vertices.</ns0:cell></ns0:row><ns0:row><ns0:cell>U</ns0:cell><ns0:cell cols='2'>Set of unmarked groups.</ns0:cell></ns0:row><ns0:row><ns0:cell>I m ′</ns0:cell><ns0:cell cols='2'>Maximum tolerable interference for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>C m ′</ns0:cell><ns0:cell cols='2'>Set of DUEs in ∆ m ′ sorted by their interference.</ns0:cell></ns0:row></ns0:table><ns0:note>3/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:1:1:CHECK 27 Jan 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>2 n is the noise power for DUE pair n, P m is the transmit power for CUE m, G mn is the channel gain from CUE m to DUE pair n, G n ′ n is the channel gain from D n ′ ,T x to D n,Rx and γ n is the SINR threshold for DUE pair n.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>It is assumed in this work that the BS knows the channel properties such as transmit power, channel</ns0:cell></ns0:row><ns0:row><ns0:cell>gain and noise power of the communication link for CUEs m and DUE pairs n, i.e., the BS knows the</ns0:cell></ns0:row><ns0:row><ns0:cell>channel state information (CSI) and the SINR threshold for each user.</ns0:cell></ns0:row></ns0:table><ns0:note>4/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:1:1:CHECK 27 Jan 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>CUEs, N DUE pairs, K idle RBs and M − K real CUEs; Result: ∆ 1 , ∆ 2 , ..., ∆ M (set of the DUE pairs that reuse RBs allocated to CUE m); 1 For each idle RB, randomly pick K DUE pairs D δ 1 , D δ 2 , ...D δ K ; ∈ {1, 2, ..., N} − {δ 1 , δ 2 , ..., δ K } do</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Algorithm 1: Delay Minimization Conflict Graph Algorithm</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Data: M / * U is the set of unmarked groups</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>2 U ← {1, 2, ..., M};</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>/ * Γ is the set of DUES that joins group m</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>3 Γ 1 , Γ 2 , ..., Γ M ← / 0;</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>/ * Initialize</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>4 for n / * Call Algorithm 2</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>7 end</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>/ * Main body</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>8 while U = / 0 do</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>5 m * ← OptimizeDelay(n,U); / * D n joins group m * * / 6 Γ m * ← Γ m * ∪ {n}; 9 Form the conflict graph G m * for the largest group Γ m ′ in U;</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Simulation Parameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Multipath models</ns0:cell><ns0:cell>Rayleigh and CDL-A</ns0:cell></ns0:row><ns0:row><ns0:cell>CUE transmission power</ns0:cell><ns0:cell>23 dBm</ns0:cell></ns0:row><ns0:row><ns0:cell>DUE transmission power</ns0:cell><ns0:cell>10 dBm</ns0:cell></ns0:row><ns0:row><ns0:cell>SINR requirement of each CUE</ns0:cell><ns0:cell>7 dB</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>SINR requirement of each DUE pair 4.7 dB</ns0:cell></ns0:row><ns0:row><ns0:cell>Radius of BS coverage</ns0:cell><ns0:cell>500 m</ns0:cell></ns0:row><ns0:row><ns0:cell>Distance between each DUE pair</ns0:cell><ns0:cell>15 m</ns0:cell></ns0:row><ns0:row><ns0:cell>White noise power density</ns0:cell><ns0:cell>-174 dBm/Hz</ns0:cell></ns0:row><ns0:row><ns0:cell>Path loss model for CUE and DUE</ns0:cell><ns0:cell>128.1 + 37.6log10(d[km])</ns0:cell></ns0:row><ns0:row><ns0:cell>Path loss model for DUE pairs</ns0:cell><ns0:cell>148 + 40log10(d[km])</ns0:cell></ns0:row><ns0:row><ns0:cell>System bandwidth</ns0:cell><ns0:cell>20 MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of RBs</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Carrier frequency</ns0:cell><ns0:cell>26 GHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of real CUEs</ns0:cell><ns0:cell>40</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of CUEs</ns0:cell><ns0:cell>50 until 100</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of DUE pairs</ns0:cell><ns0:cell>160</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of TTIs</ns0:cell><ns0:cell>1000</ns0:cell></ns0:row></ns0:table></ns0:figure>
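For reference, the path loss models listed in Table 2 translate into linear channel gains as in the short sketch below. The distances and the Rayleigh small-scale draw are illustrative; the CDL-A small-scale model used in the paper's MATLAB simulations is more involved and is not reproduced here.

```python
import math, random

def path_loss_db(d_km, cellular_link=True):
    """Path loss models from Table 2 (d in km):
    128.1 + 37.6*log10(d) for CUE/DUE-to-BS links, 148 + 40*log10(d) for DUE pairs."""
    return 128.1 + 37.6 * math.log10(d_km) if cellular_link else 148.0 + 40.0 * math.log10(d_km)

def linear_gain(pl_db, fading_power=1.0):
    """Linear channel gain: small-scale fading power times 10^(-PL/10)."""
    return fading_power * 10.0 ** (-pl_db / 10.0)

random.seed(1)
h2_cue = random.expovariate(1.0)   # |h|^2 of a unit-power Rayleigh channel (CUE-to-BS link)
h2_due = random.expovariate(1.0)   # |h|^2 of a unit-power Rayleigh channel (DUE pair link)
g_cue_bs = linear_gain(path_loss_db(0.25, True), h2_cue)      # CUE assumed 250 m from the BS
g_due_pair = linear_gain(path_loss_db(0.015, False), h2_due)  # DUE pair 15 m apart, as in Table 2
print(f"CUE->BS gain ~ {g_cue_bs:.3e}, DUE pair gain ~ {g_due_pair:.3e}")
```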
</ns0:body>
" | "Title: Delay Minimization Based Uplink Resource Allocation for Device-to-Device
Communications Considering mmWave Propagation.
Corresponding Author: Marcus Vinícius Gonzaga Ferreira.
E-mail address: marcusferreira@inf.ufg.br / marcusviniciusbr@gmail.com.
Telephone: +55 62 98164-7676.
Corresponding Author's Institution: Federal University of Goiás.
Order of Authors: Marcus Vinícius Gonzaga Ferreira; Flávio Henrique Teles Vieira.
Dear editor and reviewers,
We are pleased to submit an original research paper entitled “Delay Minimization Based Uplink Resource Allocation for Device-to-Device Communications Considering mmWave Propagation” for consideration for publication
in the PeerJ Computer Science Journal.
We list below the changes made to the paper addressed to the editor and
reviewers:
Editor:
1. We have changed the text of the Abstract taking into account the
reviewers' comments, making the paper clearer and more informative for
any reader. We emphasized the importance and motivation of the study
and made evident the points covered in the work. We inserted the
following texts into the Abstract:
“This paper addresses the resource allocation problem in multi-sharing
uplink for device-to-device (D2D) communication, one aspect of 5G
communication networks. The main advantage and motivation in relation
to the use of D2D communication is the significant improvement in the
spectral efficiency of the system when exploiting the proximity of
communication pairs and reusing idle resources of the network, mainly
in the uplink mode, where there are more idle available resources.”
“Simulation results show that the proposed allocation algorithm
outperforms other algorithms in the literature, being highly efficient to
5G systems.”
Reviewer 1 (C Lin):
1. We made significant changes in the Introduction section of the paper,
adding more information about the motivation, justification and
contribution of our work. In the Related Works section, we added more
state-of-the-art works in the area of device-to-device communication,
comparing their methodologies and the results with those of our work.
We added the following texts to the Introduction section:
“This paper proposes a resource allocation algorithm for multi-sharing
communication which considers the estimation of delay information of
the uplink in order to optimize overall system performance considering
system delay minimization. In this context of multi-sharing
communication, the delay is considered as an essential QoS parameter,
especially for real-time applications with variable transmission rates and
specified bandwidth requirements, such as videoconferencing services”.
“Simulation results show that the proposed algorithm provides
considerable gains in terms of throughput, delay and fairness to the
performance of the considered 5G scenario, presenting lower
computational complexity compared to some optimization heuristics.”
2. We added the following texts in the result section explaining why our
algorithm performs better than the others:
“The GTM+ and the DMCG algorithms presented higher throughput
values compared to the GA-based allocation, showing that the Conflict
Graph strategy is more efficient than solving the problem represented by
equations (3) (4) (5) and (6) by directly applying optimization algorithms. The proposed DMCG outperforms the GTM+ because it is considered the Conflict Graph strategy in conjunction to delay minimization. In this way, in order to reduce system delay, the proposed DMCG algorithm must increase system throughput similar as done by the GTM+ or find a solution that optimizes the allocation process yielding the system delay minimization.”
“The performance improvement of the proposed algorithm in relation to
the other studied algorithms is mainly due to its characteristic of
reducing the average delay of the system through the verification of the
utility function defined by the estimated delay parameter. The reduction
of the average delay of the system has a direct consequence in the
improvement of the system throughput and in the fairness parameter”
3. We also changed the text of the Conclusion section, presenting our
intentions for improvement and future work. The following text was
added:
“As a proposal for future work, we intend to extend the communication
scenario considering optimized resource allocation for CUEs devices and
the application of a variation of the proposed approach involving
downlink data transmission. The use of the proposed algorithm in the
downlink transmission could considerably improve spectral efficiency by
reusing idle resources of network users.”
Reviewer 2 (Anonymous):
1. We rewrote the Abstract to make it clearer and more concise, as
suggested. We inserted the following texts into the Abstract:
“This paper addresses the resource allocation problem in multi-sharing
uplink for device-to-device (D2D) communication, one aspect of 5G
communication networks. The main advantage and motivation in relation
to the use of D2D communication is the significant improvement in the
spectral efficiency of the system when exploiting the proximity of
communication pairs and reusing idle resources of the network, mainly
in the uplink mode, where there are more idle available resources.”
“Simulation results show that the proposed allocation algorithm
outperforms other algorithms in the literature, being highly efficient to
5G systems.”
2. We added the following information about the multifractal envelope
process and service curve in the text, explaining the motivation for their
use and to clarify the concepts behind these methods:
“In general, fractals are described in the literature according to a set of
behaviors and characteristics, such as self-similarity, phenomena with
patterns that are repeated at different scales and with irregular structures.
Multifractals are characterized by a set of fractal dimensions and are
used to treat phenomena that occur in multiple scales and dimensions
(Feldmann et al., 1998).”
“Network traffic traces captured at small scales in general tend to be
multifractal. This means that they present highly dependent structure
between samples with burst incidences at various scales. These
characteristics can degrade network performance in relation to traffic
flows considered Gaussian and short-dependent (Feldmann et al., 1998).
In this work, we propose to use a multifractal envelope process once it
was shown that it can better describe real traffic envelope processes at
the time scale considered in this paper than monofractal based or short-range based envelope processes (Santos Jr and Vieira, 2015).”
“The service curve concept has been explored as estimation tool
involving various technologies and scenarios, mainly in the area of
deterministic and statistical network calculus (Santos Jr and Vieira,
2015). The advantage of the network calculus theory is due to the very
intuitive convolution formulas that can be used to determine traffic
output processes of a communication system from its arrival envelope
process and service curve (Le Boudec and Thiran, 2004).”
3. We corrected the citation style as suggested by the reviewer;
4. We improved the section of Related Works adding more relevant and
recent works on the theme, comparing them with our work in terms of
methodology and obtained results. The following texts were added to the
Related Works section:
“Contrary to our proposal, the GTM+ algorithm does not consider the
system delay as an optimization metric.”
“In (Zhang et al., 2016), it is proposed to use a distance based power
control scheme for D2D communication in order to achieve expected
performance gain without causing performance degradation to the
primary cellular users due to system interference. The authors apply the
Poisson point process (PPP) model, a stochastic geometry model, to get
tractable analysis results. Numeric results presented in simulations show
that the proposed scheme is benefit for both CUEs and DUEs. The initial
results demonstrate the advantages of using the power control scheme,
although comparisons are not made with state-of-the-art schemes in
terms of throughput and delay for resource allocation in D2D
communication.”
“In (Song et al., 2019) it is proposed a joint uplink and downlink (JUAD)
resource allocation scheme which maximizes system capacity and
guarantees the SINR constraint for CUEs and DUEs. The authors
formulate the optimization problem as a mixed integer nonlinear
programming problem (MINLP) and divide it into two sub-problems, the
power allocation and channel assignment. At first, the optimal
transmission power is obtained through the convex objective function.
Then, it is developed the Hungarian algorithm to achieve joint uplink and
downlink channel assignment, improving system capacity performance
and increasing spectrum efficiency. Authors show through simulations
that the performance of the algorithm is better than that of schemes for
independent allocation. The JUAD algorithm has among its advantages
the fact that it works with allocation in the uplink and downlink.”
“In (Li et al., 2019), the authors propose to use a resource allocation
scheme that integrates a probabilistic approach to a quasi-convex
optimization algorithm based on channel
probability statistical
characteristics for D2D communication mode selection and resource
optimization. Authors propose also a sub-optimal allocation algorithm
when number of users is too large, in order to save costs and improve
efficiency. The presented results show that the algorithm optimizes total
throughput of the system and reduces communication interference
between users. However, in the same way as JUAD and the other
mentioned algorithms, the algorithm proposed in (Li et al., 2019) does
not consider the user’s system delay in the optimization process, an
important parameter in any resource allocation system, especially if we
consider that D2D communication takes advantage of the proximity
between users to increase spectral efficiency. That is, by decreasing
communication delays of the links, spectral efficiency can be increased.”
5. We added a Notation table (Table 1) as suggested by the reviewer:
6. We designed new Figures 1 and 2 to make them clearer for the reader
and improve the quality of our work:
7. We used MATLAB to have more control on the simulation scenario
configuration and CDL channel modeling parameters than with some
commercial software. The following information was added to the
System Parameters and Results section:
“The simulation functions and routines were implemented instead of
using available network simulation tools in order to have more control on
the simulation scenario configuration and CDL channel modeling
parameters than some commercial softwares.”
8. We rearranged the figures and increased the size to improve
visualization;
9. We added justifications regarding the performance improvement of the
proposed algorithm when compared to the other simulated algorithms.
The following texts were added to the System Parameters and Results
section:
“The GTM+ and the DMCG algorithms presented higher throughput
values compared to the GA-based allocation, showing that the Conflict
Graph strategy is more efficient than solving the problem represented by
equations (3) (4) (5) and (6) by directly applying optimization
algorithms. The proposed DMCG outperforms the GTM+ because it is
considered the Conflict Graph strategy in conjunction to delay
minimization. In this way, in order to reduce system delay, the proposed
DMCG algorithm must increase system throughput similar as done by
the GTM+ or find a solution that optimizes the allocation process
yielding the system delay minimization.”
“The performance improvement of the proposed algorithm in relation to
the other studied algorithms is mainly due to its characteristic of
reducing the average delay of the system through the verification of the
utility function defined by the estimated delay parameter. The reduction
of the average delay of the system has a direct consequence in the
improvement of the system throughput and in the fairness parameter”
We are grateful for the suggestions and guidelines that we consider valuable for
improving the understanding and interpretation of the text for the reader and the quality
of the paper in general.
Sincerely,
Authors: Marcus Vinícius Gonzaga Ferreira and Flávio Henrique Teles Vieira.
" | Here is a paper. Please give your review comments after reading it. |
42 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>resources to D2D and cellular user equipments (CUE) users in the uplink of a 5G based network which considers the estimation of delay bound value. The proposed algorithm considers minimization of total delay for users in the uplink and solves the problem by forming conflict graph and by finding the maximal weight independent set. For the user delay estimation, it is proposed an approach considering multifractal traffic envelope process and service curve for the uplink. The performance of the algorithm is evaluated through computer simulations comparing to those of other algorithms in the literature in terms of throughput, delay, fairness and computational complexity in a scenario with channel modeling that describes the propagation of millimeter waves at frequencies above 6 GHz. Simulation results show that the proposed allocation algorithm outperforms other algorithms in the literature, being highly efficient to 5G systems.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Wireless communication applications have become very popular and several technologies were developed to improve the quality of service (QoS) during last decades. The increasing demand for wireless traffic through various applications such as augmented reality, machine-to-machine (M2M) communication, internet of things (IoT), among others, has driven telecommunications operators to increase the bandwidth of their systems and provide service with lower latency and higher throughput. However, there are several limitations of the system that hinder the increase in bandwidth, making it necessary to think about more advanced cellular communication systems with better spectral efficiency to support the growing number of devices in the network <ns0:ref type='bibr' target='#b12'>(Mishra et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Among the strands of 5G communication networks, one that stands out is device-to-device (D2D) communication. The D2D communication was introduced in LTE-A (long term evolution advanced).</ns0:p><ns0:p>However, this technology was in the background for a long time, not being used much by telecommunications operators. In this scenario, the D2D devices communicate with each other with low interference from the base station (BS), depriving the BS of traffic overload <ns0:ref type='bibr' target='#b12'>(Mishra et al., 2016)</ns0:ref>. Most works related to D2D communication deal with sharing resources in the uplink instead of the downlink, due to the interest in taking advantage of the asymmetric characteristic of data traffic in Internet networks. More specifically, Internet traffic is much higher on the downlink, thus it seems reasonable to suppose that there PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science is a greater availability of idle resources to be explored in the uplink than in the downlink <ns0:ref type='bibr' target='#b14'>(Pan et al., 2018)</ns0:ref>.</ns0:p><ns0:p>For resource sharing on the uplink, <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> has developed an heuristic that guarantees a maximum number of device pairs communicating with each other with minimal interference, denoted as greedy resource allocation (GRA) algorithm. The authors show that the problem is essentially an allocation problem and proposed an optimal algorithm based on the Hungarian method. However, the algorithm proposed by <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> was not developed for 5G networks, where can exist a large number of devices in the network. Moreover, the authors considered a single-sharing scenario where each allocated resource block (RB) can be reused by only one pair of devices. In 5G networks, it is desirable that any RB allocated to a device can be reused by several devices, which characterizes multi-sharing communication. <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> then proposed a method based on the algorithm described by <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> to solve the multi-sharing resources allocation problem in order to increase the throughput of the system and to guarantee the signal-to-interference-plus-noise ratio (SINR) requirements, denoted as greedy throughput maximization plus (GTM+). The authors formulated the multi-sharing allocation problem and proved that it is a non-deterministic polynomial-time hard problem. 
More specifically, <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> proposed an algorithm with a fast and efficient solution that explores conflict graphs and maximal weight independent, showing that it outperforms other existing schemes.</ns0:p><ns0:p>This paper proposes a resource allocation algorithm for multi-sharing communication which considers the estimation of delay information of the uplink in order to optimize overall system performance considering system delay minimization. In this context of multi-sharing communication, the delay is considered as an essential QoS parameter, especially for real-time applications with variable transmission rates and specified bandwidth requirements, such as videoconferencing services. To this end, first it is proposed an approach to estimate delay bound using envelope process for traffic flows and a service curve for the uplink transmission. The proposed algorithm uses concepts such as conflict graph and maximal weight independent set. However, different from the algorithms of the previous mentioned works, we propose an algorithm to solve the multi-sharing resource allocation problem that considers the minimization of the estimated delay information of the uplink.</ns0:p><ns0:p>The implemented scenario simulation considers the propagation of millimeter waves (mmWaves) above 6 GHz, an important characteristic of 5G networks which leads to improvements in throughput and latency for network users. Simulation results show that the proposed algorithm provides considerable gains in terms of throughput, delay and fairness to the performance of the considered 5G scenario, presenting lower computational complexity compared to some optimization heuristics.</ns0:p><ns0:p>The remainder of the paper is organized as follows: Section 2 describes the system model and formulates the multi-sharing resource allocation problem. Section 3 presents related works. Section 4 presents the algorithm proposed in this paper. Section 5 presents the performance evaluation of the algorithms. Section 6 presents the final considerations. This system is composed of M CUEs, including real and virtual CUEs, N DUEs, K idle resource blocks (RBs) scheduled for transmission during a transmission time interval (TTI) and M − K real CUEs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>SYSTEM MODEL AND PROBLEM FORMULATION</ns0:head><ns0:p>The RBs correspond to the minimum allocation unit in a wireless communication system, and each idle RB, not used by any real CUEs, is regarded as a virtual CUE with zero transmit power and no signal-to-interference-plus-noise ratio (SINR) requirement. The CUEs are denoted by C 1 ,C 2 , ...,C M and the DUEs by D 1 , D 2 , ..., D N . The DUE pair transmitter is denoted by D n,T x and the receiver by D n,Rx . In order to facilitate the comprehension of the proposed approach and of the considered system model, Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> presents the variables and parameters considered in this paper. Manuscript to be reviewed Computer Science Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Base Station</ns0:note><ns0:formula xml:id='formula_0'>P m G mB σ 2 m + ∑ n∈∆ m P n G nB ≥ γ m ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where P m is the transmit power for CUE m, G mB is the channel gain between CUE m and the BS, σ 2 m is the noise power for CUE m, P n is the transmit power for DUE pair n, G nB is the channel gain between DUE pair n transmitter and the BS and γ m is the SINR threshold required by CUE m.</ns0:p><ns0:p>There are also SINR requirements for the DUE pairs. Certain RBs can be reused by a DUE pair n only if the received SINR is higher than:</ns0:p><ns0:formula xml:id='formula_1'>P n G nn σ 2 n + P m G mn + ∑ n ′ ∈∆ m −{n} P n ′ G n ′ n ≥ γ n , ∀m ∈ Θ n ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where P n is the transmit power for DUE pair n, G nn is the channel gain between the DUE pair n two ends, Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>σ</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The multi-sharing resource allocation problem objective is determine which DUEs reuse RBs from CUEs such that the total system throughput F is maximized. The total system throughput F is defined as the sum of all CUEs' and DUEs' Shannon capacities and is formulated as follows:</ns0:p><ns0:formula xml:id='formula_3'>F = max ∆ 1 ,∆ 2 ,...,∆ M M ∑ m=1 W m log2 1 + P m G mB σ 2 m + ∑ n∈∆ m P n G nB + ∑ n∈∆ m W m log2 1 + P n G nn σ 2 n + P m G mn + ∑ n ′ ∈∆ m −{n} P n ′ G n ′ n ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>subject to:</ns0:p><ns0:formula xml:id='formula_4'>∑ n∈∆ m P n G nB ≤ P m G mB γ m − σ 2 m , ∀m ∈ {1, 2, ..., M − K} ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>P m G mn + ∑ n ′ ∈∆ m −{n} P n ′ G n ′ n ≤ P n G nn γ n − σ 2 n , ∀n ∈ {1, 2, ..., N} , ∀m ∈ Θ n ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>n ∈ ∆ m , ∀n ∈ {1, 2, ..., N} , ∀m ∈ Θ n ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where constraint (4) represents the maximum tolerable interference on m's transmission and constraint ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>)</ns0:p><ns0:p>represent the maximum tolerable interference on n's transmission.</ns0:p><ns0:p>The multi-sharing resource allocation problem described by equation (3) under constraints ( <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>), ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>) is non-deterministic polynomial-time hard. That is, it is NP-hard as proved in <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> and <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>, which justifies the need to find an efficient and fast algorithm to solve the problem, such as the one presented in this paper.</ns0:p></ns0:div>
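To make the objective of equation (3) and the interference constraints (4) and (5) concrete, the following Python sketch evaluates the total throughput F and checks feasibility for a candidate assignment of DUE pairs to CUEs. It is only an illustration of the formulation, not the paper's solver; the function and variable names (system_throughput, G_cd, G_xd, etc.) are ours, powers and gains are assumed to be in linear scale, and the assignment is passed as a dictionary mapping each CUE m to its set of reusing DUE pairs.

import numpy as np

def system_throughput(W, P_cue, G_cB, noise_c, gamma_c,
                      P_due, G_dd, G_dB, G_cd, G_xd, noise_d, gamma_d,
                      assignment):
    # assignment[m] is the set Delta_m of DUE pairs reusing the RBs of CUE m.
    # G_cd[m][n]: gain from CUE m to the receiver of DUE pair n.
    # G_xd[k][n]: gain from the transmitter of DUE pair k to the receiver of pair n.
    F = 0.0
    feasible = True
    for m, dues in assignment.items():
        dues = list(dues)
        # Aggregate interference of the reusing DUE pairs at the base station.
        i_bs = sum(P_due[n] * G_dB[n] for n in dues)
        # Constraint (4): CUE m must still reach its SINR threshold gamma_c[m].
        if i_bs > P_cue[m] * G_cB[m] / gamma_c[m] - noise_c[m]:
            feasible = False
        F += W[m] * np.log2(1.0 + P_cue[m] * G_cB[m] / (noise_c[m] + i_bs))
        for n in dues:
            # Interference seen by DUE pair n: its CUE plus the other co-channel pairs.
            i_d = P_cue[m] * G_cd[m][n] + sum(P_due[k] * G_xd[k][n] for k in dues if k != n)
            # Constraint (5): DUE pair n must reach its own threshold gamma_d[n].
            if i_d > P_due[n] * G_dd[n] / gamma_d[n] - noise_d[n]:
                feasible = False
            F += W[m] * np.log2(1.0 + P_due[n] * G_dd[n] / (noise_d[n] + i_d))
    return F, feasible

Exhaustively searching over all assignments with such a function quickly becomes intractable, which is consistent with the NP-hardness argument above and motivates the heuristic of Section 4.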
<ns0:div><ns0:head n='3'>RELATED WORKS</ns0:head><ns0:p>In this section, we briefly comment about some works related to D2D communications and multi-sharing resource allocation. The greedy throughput maximization plus algorithm (GTM+) is an iterative algorithm proposed by <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> which attempt to find a solution to the multi-sharing resources allocation problem. This algorithm is based on the maximization of a utility function given in terms of the system throughput considering that RBs are reused <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>. The authors in <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref> use the heuristic algorithm proposed by <ns0:ref type='bibr' target='#b1'>(Basagni, 2001)</ns0:ref> to obtain a maximal weight independent set, with time complexity O(n 3 ). The worst-case complexity of GTM+ is O(n 4 ), because in each iteration, at least one DUE pair is granted to reuse the RBs. Contrary to our proposal, the GTM+ algorithm does not consider the system delay as an optimization metric.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b19'>(Zhang et al., 2016)</ns0:ref>, it is proposed to use a distance based power control scheme for D2D communication in order to achieve expected performance gain without causing performance degradation to the primary cellular users due to system interference. The authors applies the Poisson point process (PPP) model, a stochastic geometry model, to get tractable analysis results. Numeric results presented in simulations show that the proposed scheme is benefit for both CUEs and DUEs. The initial results demonstrate the advantages of using the power control scheme, although comparisons are not made with state-of-the-art schemes in terms of throughput and delay for resource allocation in D2D communication.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b12'>(Mishra et al., 2016)</ns0:ref>, the authors propose a resource allocation scheme for D2D communication in the uplink which consist of two phases. In the first phase, if multihop communication (two-hops) is required, a relay is selected from the available relays between cell edge device and BS. The relay selection scheme selects the parameters such as battery power and reliability to minimize packet loss.</ns0:p><ns0:p>In the second phase, an efficient resource allocation scheme is proposed that reduces the upload time and optimizes the number of resource blocks. In this work, we also address resource allocation scheme for D2D communication in the uplink but the throughput maximization occurs via a delay minimization based algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:p>In <ns0:ref type='bibr' target='#b14'>(Pan et al., 2018)</ns0:ref>, the authors study power control and resource allocation in D2D communication underlaying a non-orthogonal multiple access (NOMA) cellular network. The objective of that work is to maximize the sum throughput of all D2D pairs while meeting the minimum rate constraints of users and using successive interference cancellation techniques. The optimal power control conditions for cellular users in each subchannel are derived first, and then a dual-based iterative algorithm is proposed to solve the resource allocation problem. The results show that the proposed scheme outperforms the conventional scheme for networks with high data requirements. Different from <ns0:ref type='bibr' target='#b14'>(Pan et al., 2018)</ns0:ref>, we consider in this work the uplink of a CP-OFDM (cyclic prefix - orthogonal frequency-division multiple access) based communication system. However, we also aim to enhance network QoS parameters such as throughput and delay through the application of our resource allocation approach.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b8'>(Khuntia and Hazra, 2019)</ns0:ref>, the authors propose a Deep Q-learning scheme with an extended Kalman filter (EKF) to solve the channel and power allocation problem for D2D devices when prior traffic information is not known to the BS. They explore an optimal policy for resource and power allocation with the aim of maximizing the total system throughput. The resource allocation scheme comprises four phases, i.e., cell splitting, clustering, queuing model and channel allocation jointly with power allocation. EKF is used together with a Deep Q-Network to incorporate the weight uncertainty of the Q-function as well as the state uncertainty during transitions, helping the learner achieve an optimal policy. The authors show the advantage of their resource sharing scheme over other existing schemes through numerical simulations. In <ns0:ref type='bibr' target='#b5'>(G. and Rao T, 2019)</ns0:ref>, the authors propose the reinforcement-learning based latency controlled D2D connectivity (RL-LCDC) algorithm and its Q-learning approach in an indoor D2D communication network for 5G connectivity with minimum latency. The algorithm discovers the neighbors, decides the D2D link and adaptively controls the communication range for maximum network connectivity. Results presented in <ns0:ref type='bibr' target='#b5'>(G. and Rao T, 2019)</ns0:ref> show that RL-LCDC optimizes connectivity with lower delay and better energy efficiency compared with other conventional schemes. Although both approaches, <ns0:ref type='bibr' target='#b5'>(G. and Rao T, 2019)</ns0:ref> and <ns0:ref type='bibr' target='#b8'>(Khuntia and Hazra, 2019)</ns0:ref>, produce interesting results, we alternatively propose in this paper an algorithm that is not based on reinforcement learning and that can solve the problem with lower computational complexity.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b16'>(Song et al., 2019)</ns0:ref>, a joint uplink and downlink (JUAD) resource allocation scheme is proposed which maximizes system capacity and guarantees the SINR constraints of CUEs and DUEs. The authors formulate the optimization problem as a mixed integer nonlinear programming (MINLP) problem and divide it into two sub-problems, power allocation and channel assignment. First, the optimal transmission power is obtained through the convex objective function. Then, the Hungarian algorithm is developed to achieve joint uplink and downlink channel assignment, improving system capacity and increasing spectrum efficiency. The authors show through simulations that the performance of the algorithm is better than that of schemes with independent allocation. Among its advantages, the JUAD algorithm works with allocation in both the uplink and the downlink.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b11'>(Li et al., 2019)</ns0:ref>, the authors propose a resource allocation scheme that integrates a probabilistic approach into a quasi-convex optimization algorithm based on channel probability statistical characteristics for D2D communication mode selection and resource optimization. The authors also propose a sub-optimal allocation algorithm for when the number of users is too large, in order to save costs and improve efficiency. The presented results show that the algorithm optimizes the total system throughput and reduces communication interference between users.</ns0:p></ns0:div>
However, in the same way as JUAD and the other mentioned algorithms, the algorithm proposed in <ns0:ref type='bibr' target='#b11'>(Li et al., 2019)</ns0:ref> does not consider the user's system delay in the optimization process, an important parameter in any resource allocation system, especially if we consider that D2D communication takes advantage of the proximity between users to increase spectral efficiency. That is, by decreasing communication delays of the links, spectral efficiency can be increased.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DELAY MINIMIZATION BASED D2D RESOURCE ALLOCATION ALGO-</ns0:head></ns0:div>
<ns0:div><ns0:head>RITHM</ns0:head><ns0:p>In this section, a multi-sharing resource allocation algorithm for D2D communication is proposed which considers the estimated delay information as a utility function to be optimized. To estimate the delay, an approach based on deterministic network calculus concepts is also proposed.</ns0:p><ns0:p>Deterministic network calculus can be used to estimate resources in order to provide quality of service Network calculus can also be seen as the systems theory that applies to computer networks, but the main difference is to consider another algebra (Gonzaga Ferreira and Teles Vieira, 2020).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>MFBAP envelope process</ns0:head><ns0:p>In general, fractals are described in the literature according to a set of behaviors and characteristics, such as self-similarity, phenomena with patterns that are repeated at different scales and with irregular structures. Multifractals are characterized by a set of fractal dimensions and are used to treat phenomena that occur in multiple scales and dimensions <ns0:ref type='bibr' target='#b4'>(Feldmann et al., 1998)</ns0:ref>.</ns0:p><ns0:p>Network traffic traces captured at small scales in general tend to be multifractal. This means that they present highly dependent structure between samples with burst incidences at various scales. These characteristics can degrade network performance in relation to traffic flows considered Gaussian and short-dependent <ns0:ref type='bibr' target='#b4'>(Feldmann et al., 1998)</ns0:ref>. In this work, we propose to use a multifractal envelope process once it was shown that it can better describe real traffic envelope processes at the time scale considered in this paper than monofractal based or short-range based envelope processes <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>.</ns0:p><ns0:p>The envelope process for incoming packet traffic is an upper bound for the actual accumulated packet traffic process. For a deterministic envelope process, the function Â(t) corresponds to the maximum value of a flow A(t) in the time slot [s, s + t], and is defined by equation as follows (Le <ns0:ref type='bibr' target='#b9'>Boudec and Thiran, 2004)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_7'>Â(t) = sup s≥0 A [s, s + t] ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where sup is an operator that returns the maximum value of A[s, s +t] without establishing an upper bound value s ≥ 0 in this case.</ns0:p><ns0:p>The multifractal bounded arrival process (MFBAP) is a deterministic alternative to obtain the envelope process that limits the volume of traffic in a given time interval, calculated as follows <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_8'>ÂMFBAP (t) = āt + kσt H(t) + B,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where H(t) is the Hölder exponent <ns0:ref type='bibr'>(P et al., 2003)</ns0:ref>, that represents the degree of the singularity of the function, t is the instant of time, ā and σ are respectively the mean and standard deviation of incoming traffic, k is the constant related to the probability of violation (for ε = 10 −6 ) of the envelope process and B is the buffer size.</ns0:p></ns0:div>
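As a rough illustration of equation (8), the sketch below estimates the mean and standard deviation from a traffic trace and evaluates the MFBAP envelope. The constant Hölder exponent and the value of k used here are placeholders (the paper derives H(t) from the trace and k from the violation probability ε = 10^-6), so this is not the calibration procedure used in the simulations.

import numpy as np

def mfbap_envelope(trace, holder, k, buffer_size=0.0):
    # Eq. (8): A_MFBAP(t) = a_bar * t + k * sigma * t**H(t) + B
    a_bar = np.mean(trace)            # mean arrival volume per slot
    sigma = np.std(trace)             # standard deviation of the arrivals
    t = np.arange(1, len(trace) + 1, dtype=float)
    H = np.array([holder(ti) for ti in t])
    return a_bar * t + k * sigma * t ** H + buffer_size

# Synthetic stand-in for a TCP/IP trace aggregated at 1 ms intervals.
rng = np.random.default_rng(0)
trace = rng.lognormal(mean=3.0, sigma=0.8, size=1000)
env = mfbap_envelope(trace, holder=lambda t: 0.7, k=5.0)  # constant H(t), illustrative k
cumulative = np.cumsum(trace)
print('envelope bounds the flow on %.1f%% of slots' % (100 * np.mean(env >= cumulative)))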
<ns0:div><ns0:head n='4.2'>Delay Bound Estimation</ns0:head><ns0:p>The service curve concept has been explored as estimation tool involving various technologies and scenarios, mainly in the area of deterministic and statistical network calculus <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>.</ns0:p><ns0:p>The advantage of the network calculus theory is due to the very intuitive convolution formulas that can be used to determine traffic output processes of a communication system from its arrival envelope process and service curve (Le <ns0:ref type='bibr' target='#b9'>Boudec and Thiran, 2004)</ns0:ref>.</ns0:p><ns0:p>The upper bound on delay, denoted by d, is given by <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_9'>d = inf {d ≥ 0|∀t ≥ 0 : A * (t − d) ≤ S(t)} , (<ns0:label>9</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>)</ns0:formula><ns0:p>where inf is an operator that returns, in this case, the lowest value of d ≥ 0 obeying A * (t − d) ≤ S(t). A * is the MFBAP envelope process, calculated according to equation ( <ns0:ref type='formula' target='#formula_8'>8</ns0:ref>) and S is the generalized service curve of an OFDM (orthogonal frequency-division multiple access) system for any user served for the same time interval T , and can be denoted as <ns0:ref type='bibr' target='#b3'>(Costa, 2013)</ns0:ref>: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_11'>S n S (t) = cT P + cT min max [t − PN S T − (n S − 1)T ; 0] T ; 1 ,<ns0:label>(10)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where c is the average service rate on the system server and N S is the number of time slots T per complete cycle P given by P = t N S T . The operator represents the smallest integer closest to t N S T . The use of service curve allows to obtain network performance and behavior parameters in an analytical way, as well as estimates of the delay and backlog bounds.</ns0:p></ns0:div>
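The delay bound of equation (9) is the smallest horizontal shift that keeps the arrival envelope below the service curve. The sketch below evaluates equation (10) and searches for that shift numerically over a finite horizon; the floor-based cycle count and the brute-force search are our simplifications of the analytical evaluation described in the text, and the numbers in the toy example are arbitrary.

import numpy as np

def ofdm_service_curve(t, c, T, n_s, N_s):
    # Eq. (10): S(t) = c*T*P + c*T*min(max(t - P*N_s*T - (n_s - 1)*T, 0)/T, 1),
    # with P taken here as the number of complete cycles, floor(t / (N_s * T)).
    P = np.floor(t / (N_s * T))
    rem = t - P * N_s * T - (n_s - 1) * T
    return c * T * P + c * T * min(max(rem, 0.0) / T, 1.0)

def delay_bound(envelope, service, horizon, dt):
    # Eq. (9): smallest d >= 0 with A*(t - d) <= S(t) for every t in the horizon,
    # taking A*(tau) = 0 for tau < 0 (no arrivals before time zero).
    t = np.arange(0.0, horizon, dt)
    A = np.array([envelope(ti) for ti in t])
    S = np.array([service(ti) for ti in t])
    for steps in range(len(t)):
        shifted = np.concatenate([np.zeros(steps), A[:len(t) - steps]])
        if np.all(shifted <= S):
            return steps * dt
    return np.inf

# Toy example: linear envelope (rate plus burst) against a single periodically served user.
d = delay_bound(envelope=lambda t: 100.0 * t + 500.0,
                service=lambda t: ofdm_service_curve(t, c=200.0, T=0.5, n_s=1, N_s=2),
                horizon=50.0, dt=0.5)
print('estimated delay bound: %.1f ms' % d)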
<ns0:div><ns0:head n='4.3'>Delay Minimization Conflict Graph Algorithm</ns0:head><ns0:p>In this section, we propose a resource allocation algorithm for the uplink of communication system with D2D users that considers the minimization of delay, called DMCG (delay minimization conflict graph) algorithm. It is proposed in this paper to use equation ( <ns0:ref type='formula' target='#formula_9'>9</ns0:ref>) to estimate the delay due to its precision as verified in previously works such as <ns0:ref type='bibr' target='#b15'>(Santos Jr and Vieira, 2015)</ns0:ref> and (Gonzaga Ferreira and Teles Vieira, 2020), which turns it possible to make early decisions on wireless networks resource scheduling.</ns0:p><ns0:p>The The set of vertices of the conflict graph G m ′ corresponds to the DUE pairs in group m ′ . A weight value is assigned to each vertex according to the utility of the corresponding DUE pair when it joins group m ′ . In the conflict graph, an edge is added for every two vertices if the mutual interference does not meet the threshold. The same RB could not be reused by two DUE pairs. This explains why the edge connects two vertices.</ns0:p><ns0:p>The conflict graph G m ′ formed by DUE pairs representing the vertices helps to determine the candidates. Thus, the objective is to remove the DUE pairs that cannot coexist due to the large mutual interference and to keep the DUE pairs that maximize the throughput of the system. Candidates are chosen as the maximum weight independent set of the conflict graph G m ′ . Finally, the set ∆ 1 , ∆ 2 , ..., ∆ M of DUE pairs n that reuse RBs allocated to CUE m is returned as the algorithm result.</ns0:p><ns0:p>Algorithms 1 and 2 show the pseudo-code for the proposed multi-sharing resource allocation algorithm, an efficient solution to the maximization problem described by equation ( <ns0:ref type='formula' target='#formula_3'>3</ns0:ref>) and subject to constraints (4), ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>).</ns0:p><ns0:p>The operation of Algorithm 1 is similar to that of the GTM+ algorithm, but with the difference of using the estimated delay function described by equation ( <ns0:ref type='formula' target='#formula_9'>9</ns0:ref>) instead of a utility function based on throughput. The proposed algorithm seeks to find a solution that minimizes the delay bound estimated to each user and simultaneously meeting the constraints imposed by mutual interference.</ns0:p><ns0:p>The proposed multi-sharing allocation method for uplink transmission described in Algorithm 1 can not be directly applied for a downlink transmission scenario. For this, it would be necessary to reformulate the problem described by equation ( <ns0:ref type='formula' target='#formula_3'>3</ns0:ref>) and subject to constraints (4), ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>). This issue will be addressed in future works. Notice also that the DMCG algorithm deals with the reuse of idle resource blocks of CUEs by DUEs, that is, its focus is on determining pairs of DUEs in the network. The joint use of the method proposed in this paper with other techniques available in the literature for resource allocation between devices and base station would solve the allocation problem in a generalized allocation scenario and will be investigated later.</ns0:p></ns0:div>
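The core combinatorial step of DMCG, as in GTM+, is extracting a maximal weight independent set from the conflict graph G_m'. The fragment below shows one simple greedy heuristic for that step on a small example; it is not the Basagni (2001) procedure cited above, and in DMCG the weights would come from the estimated-delay utility of equation (9) rather than the arbitrary scores used here.

def greedy_mwis(weights, conflict):
    # weights[i]    : utility of DUE pair i (any non-negative score).
    # conflict[i][j]: True if pairs i and j interfere too much to share the same RBs.
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    chosen = []
    for i in order:
        # Keep pair i only if it does not conflict with any already chosen pair.
        if all(not conflict[i][j] for j in chosen):
            chosen.append(i)
    return chosen

# Toy example: 4 candidate DUE pairs, where pairs (0,1) and (2,3) conflict.
w = [5.0, 4.0, 3.0, 2.5]
c = [[False, True, False, False],
     [True, False, False, False],
     [False, False, False, True],
     [False, False, True, False]]
print(greedy_mwis(w, c))   # -> [0, 2]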
<ns0:div><ns0:head n='5'>SIMULATIONS AND RESULTS</ns0:head><ns0:p>This section presents the simulation parameters of the wireless network and the channel modeling used in the simulations, as well as the obtained results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Channel Modeling for mmWaves</ns0:head><ns0:p>Recent studies developed by 3GPP regarding high frequency bands between 0.5 and 100 GHz defined two channel models for this scenario: clustered delay line (CDL) and tapped delay line (TDL). These channel </ns0:p><ns0:formula xml:id='formula_12'>∆ m ′ ← maximum weight independent set of G m ′ ; for n ′ ∈ ∆ m ′ do / * Check if DUE pair n ′ meet the SINR requirement * / if P m ′ G m ′ n ′ + ∑ n∈∆ m ′ −{n ′ } P n G nn ′ ≥ I n ′ then Remove n ′ from ∆ m ′ ; end end</ns0:formula><ns0:p>In descending order, sort DUEs in ∆ m ′ by their interference on C m ′ ; / * Remove one DUE from ∆ m ′ until the maximum tolerable interference models are applicable for system-level simulations subject to the following conditions (3GPP, 2017):</ns0:p><ns0:formula xml:id='formula_13'>I m ′ constraint is met * / while ∑ n∈∆ m ′ P n G nB ≥ I m ′ do Remove the first element from ∆ m ′ ; end for n ∈ Γ m ′ − ∆ m ′ do / * Call Algorithm 2 * / m * ← OptimizeDelay (n,U − {m ′ }); / * D n joins group m * * / Γ m * ← Γ m * ∪ {n}; end / * Group m ′ is marked * / U ← U − {m ′ };</ns0:formula><ns0:p>• For system-level simulations, the supported scenarios are microcell and macrocell urban, indoor office and macrocell rural;</ns0:p><ns0:p>• Bandwidth is supported up to 10 % of the center frequency, but not greater than 2 GHz.</ns0:p><ns0:p>The CDL model is a type of channel modeling where the received signal is composed of several separate clusters in delay, as shown in Figure <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>. Each cluster contains a number of multipath components with the same delay value, but with different arrival and departure angles.</ns0:p><ns0:p>CDL models are defined for the frequency range between 0.5 GHz and 100 GHz with a maximum bandwidth of 2 GHz. CDL models can be implemented through the generation of coefficients or the generation of TDL model using spatial filtering.</ns0:p><ns0:p>Three CDL models, called CDL-A, CDL-B and CDL-C, are defined to represent three different channel profiles for non-line of sight (NLOS) environments, while CDL-D and CDL-E models are built for line of sight (LOS) environments (3GPP, 2017).</ns0:p><ns0:p>It is considered in this work the multipath model clustered delay line A (CDL-A) (3GPP, 2018b) (3GPP, 2018a), suitable for the mmWaves scenario, and the Rayleigh fading in order to verify the channel modeling impacts via simulations. The carrier frequency of 26 GHz was chosen based on recent studies by Brazilian National Telecommunications Agency (ANATEL) with this frequency range, which should also be adopted in Europe <ns0:ref type='bibr' target='#b18'>(Tecnoblog, 2019)</ns0:ref>.</ns0:p></ns0:div>
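For reference, the following sketch shows how linear channel gains could be drawn for the Rayleigh scenario by combining the path-loss models of Table 2 with an exponentially distributed fading power; it is only an assumption-laden illustration, and reproducing the CDL-A profile would require the full 3GPP cluster parameters of (3GPP, 2017), which are not included here.

import numpy as np

rng = np.random.default_rng(42)

def channel_gain(d_km, d2d=False, rayleigh=True):
    # Path loss from Table 2: 128.1 + 37.6*log10(d[km]) dB for links to the BS,
    # 148 + 40*log10(d[km]) dB for DUE-to-DUE links.
    pl_db = 148.0 + 40.0 * np.log10(d_km) if d2d else 128.1 + 37.6 * np.log10(d_km)
    gain = 10.0 ** (-pl_db / 10.0)
    if rayleigh:
        gain *= rng.exponential(1.0)   # |h|^2 of a unit-power Rayleigh channel
    return gain

# Example: gain of a 15 m D2D link and of a 250 m CUE-to-BS link.
print(channel_gain(0.015, d2d=True), channel_gain(0.25, d2d=False))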
<ns0:div><ns0:head n='5.2'>System Parameters and Results</ns0:head><ns0:p>The simulations were conducted using MATLAB software version R2018a and a microcomputer with the following configuration: Intel Core i7-4785T CPU 2.20 GHz, 8 GB RAM, SSHD SATA III and Windows 10 64 bits. The simulation functions and routines were implemented instead of using available network simulation tools in order to have more control on the simulation scenario configuration and CDL channel modeling parameters than some commercial softwares.</ns0:p><ns0:p>It was compared the simulation results of the proposed DMCG algorithm with those of the greedy throughput maximization plus (GTM+) presented in <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>, a genetic algorithm (GA) based approach, which aims to find a solution for the maximization problem described by equation (3) subject to constraints (4), ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>), and a random reuse scheme. The GA-based algorithm was developed using 30 individuals and 100 iterations. transmission time intervals (TTIs) in order to reflect the average system performance. The system simulation parameters are given in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. Most parameter values were set according to references <ns0:ref type='bibr' target='#b17'>(Sun et al., 2013)</ns0:ref> and <ns0:ref type='bibr' target='#b2'>(Ciou et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In this work, it is simulated an uplink based on CP-OFDM (cyclic prefix -orthogonal frequencydivision multiple access) whose configuration consists of subcarrier spacing of 15 KHz and normal cyclic prefix (CP), as described in <ns0:ref type='bibr'>(3GPP, 2018b)</ns0:ref>. Each radio frame occupies 10 ms, divided in 10 subframes of 1 ms, and each subframe is divided in two time slots of 0.5 ms with 7 symbols for each time slot when using normal CP. In the frequency domain the resources are aggregated in 12 subcarriers with 15 KHz bandwidth, totalizing 180 KHz bandwidth, defined as a resource block (RB), the basic unity of resource allocation.</ns0:p><ns0:p>Five traces of real transmission control protocol / Internet protocol (TCP/IP) network traffic were used to represent users' data traffic during the simulation of the algorithms, which were aggregated in the time domain at 1 ms intervals and assigned to users randomly. These series represent TCP/IP traffic between the (University of Waikato, 2020) and external networks and were collected between 20/05/2011 and 29/10/2011.</ns0:p><ns0:p>The total throughput values of the system calculated according to equation (3) and the sum of all devices throughput are shown in Figures <ns0:ref type='figure'>3 and 4</ns0:ref>. It can be seen that the proposed algorithm has the highest values in the two simulated scenarios, both with Rayleigh and CDL-A modeling. The GTM+ and the DMCG algorithms presented higher throughput values compared to the GA-based allocation, showing that the conflict graph strategy is more efficient than solving the problem represented by equations (3) (4) ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_6'>6</ns0:ref>) by directly applying optimization algorithms. The proposed DMCG outperforms the GTM+ because it is considered the conflict graph strategy in conjunction to delay minimization. 
Thus, in order to reduce the system delay, the proposed DMCG algorithm must either increase the system throughput, as GTM+ does, or find a solution that optimizes the allocation process so that the system delay is minimized. The performance of the proposed algorithm improves as the number of CUEs in the network increases, while the performance of the GA-based algorithm tends to become worse.</ns0:p></ns0:div>
<ns0:div><ns0:head>11/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science Fairness is an important measure that determines how fair the distribution of resources among system users is, calculated as described in <ns0:ref type='bibr' target='#b7'>(Jain et al., 1998)</ns0:ref>. It can be seen in Figures <ns0:ref type='figure'>7 and 8</ns0:ref> that the proposed algorithm presents the highest fairness values in all simulated scenarios, showing that it presents a fair distribution of resources among the users of the system. The algorithm based on GA presents the lowest fairness values, a fact expected once the algorithm considers only the system total throughput and the interference constraints, not taking into account a fair distribution of resources.</ns0:p><ns0:p>Figures 9 and 10 present the average delay values calculated for the compared algorithms. The values presented by the proposed DMCG algorithm are the lowest values, even lower than the values presented by the GTM+ algorithm. This result proves that the DMCG algorithm is efficient in its strategy of using the estimated delay as a utility function in the allocation system, resulting in a considerable drop in the delay value in both simulated scenarios. The delay values of the GA-based algorithm and the random algorithm vary widely in the simulated scenarios. In the tests, the presence of these peaks of values was verified due to the characteristic of these algorithms of not distributing resources in a fair way, i.e., the characteristic of penalizing some users in terms of delay in detriment of total throughput of the system.</ns0:p><ns0:p>The performance improvement of the proposed algorithm in relation to the other studied algorithms is mainly due to its characteristic of reducing the average delay of the system through the verification of the utility function defined by the estimated delay parameter. The reduction of the average delay of the system has a direct consequence in the improvement of the system throughput and in the fairness parameter.</ns0:p><ns0:p>Regarding the processing time, shown in Figures <ns0:ref type='figure' target='#fig_10'>11 and 12</ns0:ref>, it can be noticed that the algorithm based on GA has the highest values. GA-based heuristics is the one that demands the most processing among the compared algorithms, due to the large number of variables involved in the system. </ns0:p></ns0:div>
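The fairness values reported in Figures 7 and 8 follow the index of (Jain et al., 1998), which for user throughputs x_1, ..., x_n is (sum x_i)^2 / (n * sum x_i^2). A minimal implementation is shown below; the sample values are illustrative only.

import numpy as np

def jain_fairness(throughputs):
    # Returns 1.0 when every user gets the same throughput and approaches 1/n
    # when a single user captures all resources.
    x = np.asarray(throughputs, dtype=float)
    return x.sum() ** 2 / (len(x) * np.square(x).sum())

print(jain_fairness([10, 10, 10, 10]))   # 1.0 (perfectly fair)
print(jain_fairness([40, 0, 0, 0]))      # 0.25 (one user takes everything)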
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>This paper presented a multi-sharing resource allocation algorithm for device-to-device (D2D) communication networks in a scenario with millimeter waves above 6 GHz. The proposed DMCG algorithm considers the minimization of a delay function estimated using concepts of network calculus such as traffic envelope process and service curve to decide on the allocation of idle resources in the network to D2D devices.</ns0:p><ns0:p>The results presented for two channel modeling scenarios (Rayleigh and CDL-A) show that the DMCG algorithm provides the highest throughput values of the system when compared to the algorithms based on GA, GTM+ algorithm and the random algorithm. The proposed algorithm also yields higher values of fairness and lower values of delay, as well as processing time shorter than the processing time presented by the GTM+ and GA-based algorithms.</ns0:p><ns0:p>These results show that the use of estimated delay information proved to enhance the multi-sharing resource allocation performance that is based on conflict graphs and maximal weight independent set, with improvement in all requirements in relation to the GTM+ algorithm and with the advantage of being able to anticipate the allocation of resources in a D2D communication scenario with mmWaves.</ns0:p><ns0:p>As a proposal for future work, we intend to extend the communication scenario considering optimized resource allocation for CUEs devices and the application of a variation of the proposed approach involving downlink data transmission. The use of the proposed algorithm in the downlink transmission could considerably improve spectral efficiency by reusing idle resources of network users.</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 represents the system model of D2D communications which reuse the uplink resources of equipments in wireless networks. In these systems, there are two types of equipments: pairs of D2D user equipments (DUEs) and cellular user equipments (CUEs). The first one reuse radio resource allocated to the CUEs and communicate directly without communication load on the base station (BS).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Before sharing the RBs to DUEs during transmission, it is pre-allocated a set of uplink RBs for each CUE. The allocated bandwidth denoted by W m for each CUE m is proportional to the number of allocated RBs. Each CUE can share its RBs with several pairs of DUEs, making it possible a large number 2/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. System model of D2D communications reusing the uplink resources of CUES.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>QoS) in networks and has provided powerful tools for estimating backlog and delay in a network with 6/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021) Manuscript to be reviewed Computer Science guaranteed service for individual traffic flows. Using the notion of envelope process, arrival curves and service curves, several studies have shown that the backlog and delay bound can be concisely expressed by the Min-Plus algebra (Le Boudec and Thiran, 2004).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021) Manuscript to be reviewed Computer Science Algorithm 2: Proposed Algorithm: Delay Optimization Data: DUE n and set C; Result: Optimum group m * ;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Representation of clusters for CDL model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. CDL-A channel model: Total throughput for different number of CUEs</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .Figure 6 .Figure 7 .Figure 8 .Figure 9 .Figure 10 .Figure 11 .</ns0:head><ns0:label>567891011</ns0:label><ns0:figDesc>Figure 5. CDL-A channel model: Total D2D throughput for different number of CUEs</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Rayleigh channel model: Processing time for different number of CUEs</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Notation table</ns0:figDesc><ns0:table><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>Number of CUEs.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>Number of DUEs.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>K</ns0:cell><ns0:cell cols='2'>Number of idle resource blocks.</ns0:cell></ns0:row><ns0:row><ns0:cell>C 1 ,C 2 , ...,C M</ns0:cell><ns0:cell>CUEs denotation.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>D 1 , D 2 , ..., D N DUEs denotation.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>W m</ns0:cell><ns0:cell cols='2'>Allocation bandwidth for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>Θ n</ns0:cell><ns0:cell cols='2'>Set of CUEs that share their RBs with DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>∆ m</ns0:cell><ns0:cell cols='2'>Set of DUE pairs that reuse the RBs allocated to CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>P m</ns0:cell><ns0:cell cols='2'>Transmit power for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>G mB</ns0:cell><ns0:cell cols='2'>Channel gain between CUE m and the BS.</ns0:cell></ns0:row><ns0:row><ns0:cell>σ 2 m</ns0:cell><ns0:cell cols='2'>Noise power for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>P n</ns0:cell><ns0:cell cols='2'>Transmit power for DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>G nB</ns0:cell><ns0:cell cols='2'>Channel gain between DUE pair n transmitter and the BS.</ns0:cell></ns0:row><ns0:row><ns0:cell>γ m</ns0:cell><ns0:cell cols='2'>SINR threshold required by CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>G nn</ns0:cell><ns0:cell cols='2'>Channel gain between the DUE pair n two ends.</ns0:cell></ns0:row><ns0:row><ns0:cell>σ 2 n</ns0:cell><ns0:cell cols='2'>Noise power for DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>G mn</ns0:cell><ns0:cell cols='2'>Channel gain from CUE m to DUE pair n.</ns0:cell></ns0:row><ns0:row><ns0:cell>G n ′ n</ns0:cell><ns0:cell>Channel gain from D n ′</ns0:cell><ns0:cell>,T x to D n,Rx .</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell cols='2'>Total system throughput.</ns0:cell></ns0:row><ns0:row><ns0:cell>Â(t)</ns0:cell><ns0:cell cols='2'>Maximum value of a flow A(t) in the time slot [s, s + t].</ns0:cell></ns0:row><ns0:row><ns0:cell>ÂMFBAP (t)</ns0:cell><ns0:cell cols='2'>Multifractal bounded arrival process.</ns0:cell></ns0:row><ns0:row><ns0:cell>H(t)</ns0:cell><ns0:cell>Hölder exponent.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>t</ns0:cell><ns0:cell>Instant of time.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>ā</ns0:cell><ns0:cell cols='2'>Mean of the incoming traffic.</ns0:cell></ns0:row><ns0:row><ns0:cell>σ</ns0:cell><ns0:cell cols='2'>Standard deviation of the incoming traffic.</ns0:cell></ns0:row><ns0:row><ns0:cell>k</ns0:cell><ns0:cell cols='2'>Constant related to the probability of violation of the envelope process.</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>Buffer size.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>d</ns0:cell><ns0:cell>Estimated delay.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell cols='2'>Generalized service curve of an OFDM system.</ns0:cell></ns0:row><ns0:row><ns0:cell>c</ns0:cell><ns0:cell cols='2'>Average service rate on the system server.</ns0:cell></ns0:row><ns0:row><ns0:cell>N S</ns0:cell><ns0:cell cols='2'>Number of time slots T per complete cycle P given by P = t N S T .</ns0:cell></ns0:row><ns0:row><ns0:cell>Γ 
m</ns0:cell><ns0:cell cols='2'>Set of unmarked pair m.</ns0:cell></ns0:row><ns0:row><ns0:cell>∆ m ′</ns0:cell><ns0:cell cols='2'>Set of DUE pairs that will reuse RBs of CUE m ′ .</ns0:cell></ns0:row><ns0:row><ns0:cell>G m ′</ns0:cell><ns0:cell cols='2'>Conflict graph formed by DUE pairs representing the vertices.</ns0:cell></ns0:row><ns0:row><ns0:cell>U</ns0:cell><ns0:cell cols='2'>Set of unmarked groups.</ns0:cell></ns0:row><ns0:row><ns0:cell>I m ′</ns0:cell><ns0:cell cols='2'>Maximum tolerable interference for CUE m.</ns0:cell></ns0:row><ns0:row><ns0:cell>C m ′</ns0:cell><ns0:cell cols='2'>Set of DUEs in ∆ m ′ sorted by their interference.</ns0:cell></ns0:row></ns0:table><ns0:note>3/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>2 n is the noise power for DUE pair n, P m is the transmit power for CUE m, G mn is the channel gain from CUE m to DUE pair n, G n ′ n is the channel gain from D n ′ ,T x to D n,Rx and γ n is the SINR threshold for DUE pair n.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>It is assumed in this work that the BS knows the channel properties such as transmit power, channel</ns0:cell></ns0:row><ns0:row><ns0:cell>gain and noise power of the communication link for CUEs m and DUE pairs n, i.e., the BS knows the</ns0:cell></ns0:row><ns0:row><ns0:cell>channel state information (CSI) and the SINR threshold for each user.</ns0:cell></ns0:row></ns0:table><ns0:note>4/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>DMCG algorithm is started by randomly allocating idle RBs, i.e., one DUE pair is randomly chosen for each idle RB. Then, the algorithm decides how to reuse RBs of all CUEs. A group of unmarked pair m is formed by adding each unallocated DUE pair n to it in order to optimize utility function (9), forming a set denoted by Γ m . A set of DUE pairs that maximize the total utility is contained into the largest group m ′ with unallocated DUE pairs, taking the maximum weight independent set of the conflict graph corresponding to group m ′ . It is allowed that DUE pairs reuse RBs of CUE m ′ . Candidates are removed one by one until the SINR requirements are met. Then, finishing an iteration, the set of DUE pairs that will reuse RBs of CUE m ′ , denoted as ∆ m ′ , ends up being the remaining candidates and group m ′ is marked done. Another iteration is effectuated if there are unmarked groups. Thus, the DMCG algorithm works in an iterative way by forming conflict graphs G m ′ and seeking to meet the SINR requirements at each iteration</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>CUEs, N DUE pairs, K idle RBs and M − K real CUEs; Result: ∆ 1 , ∆ 2 , ..., ∆ M (set of the DUE pairs that reuse RBs allocated to CUE m); 1 For each idle RB, randomly pick K DUE pairs D δ 1 , D δ 2 , ...D δ K ; ∈ {1, 2, ..., N} − {δ 1 , δ 2 , ..., δ K } do</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Algorithm 1: Delay Minimization Conflict Graph Algorithm</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Data: M / * U is the set of unmarked groups</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>2 U ← {1, 2, ..., M};</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>/ * Γ is the set of DUES that joins group m</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>3 Γ 1 , Γ 2 , ..., Γ M ← / 0;</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>/ * Initialize</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>4 for n / * Call Algorithm 2</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>7 end</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>/ * Main body</ns0:cell><ns0:cell>* /</ns0:cell></ns0:row><ns0:row><ns0:cell>8 while U = / 0 do</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>8/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54775:2:0:NEW 22 Feb 2021) Manuscript to be reviewed 5 m * ← OptimizeDelay(n,U); / * D n joins group m * * / 6 Γ m * ← Γ m * ∪ {n}; 9 Form the conflict graph G m * for the largest group Γ m ′ in U;</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Simulation Parameters</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Multipath models</ns0:cell><ns0:cell>Rayleigh and CDL-A</ns0:cell></ns0:row><ns0:row><ns0:cell>CUE transmission power</ns0:cell><ns0:cell>23 dBm</ns0:cell></ns0:row><ns0:row><ns0:cell>DUE transmission power</ns0:cell><ns0:cell>10 dBm</ns0:cell></ns0:row><ns0:row><ns0:cell>SINR requirement of each CUE</ns0:cell><ns0:cell>7 dB</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>SINR requirement of each DUE pair 4.7 dB</ns0:cell></ns0:row><ns0:row><ns0:cell>Radius of BS coverage</ns0:cell><ns0:cell>500 m</ns0:cell></ns0:row><ns0:row><ns0:cell>Distance between each DUE pair</ns0:cell><ns0:cell>15 m</ns0:cell></ns0:row><ns0:row><ns0:cell>White noise power density</ns0:cell><ns0:cell>-174 dBm/Hz</ns0:cell></ns0:row><ns0:row><ns0:cell>Path loss model for CUE and DUE</ns0:cell><ns0:cell>128.1 + 37.6log10(d[km])</ns0:cell></ns0:row><ns0:row><ns0:cell>Path loss model for DUE pairs</ns0:cell><ns0:cell>148 + 40log10(d[km])</ns0:cell></ns0:row><ns0:row><ns0:cell>System bandwidth</ns0:cell><ns0:cell>20 MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of RBs</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Carrier frequency</ns0:cell><ns0:cell>26 GHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of real CUEs</ns0:cell><ns0:cell>40</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of CUEs</ns0:cell><ns0:cell>50 until 100</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of DUE pairs</ns0:cell><ns0:cell>160</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of TTIs</ns0:cell><ns0:cell>1000</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Title: Delay Minimization Based Uplink Resource Allocation for Device-to-Device
Communications Considering mmWave Propagation.
Corresponding Author: Marcus Vinícius Gonzaga Ferreira.
E-mail address: marcusferreira@inf.ufg.br / marcusviniciusbr@gmail.com.
Telephone: +55 62 98164-7676.
Corresponding Author's Institution: Federal University of Goiás.
Order of Authors: Marcus Vinícius Gonzaga Ferreira; Flávio Henrique Teles Vieira.
Dear editor and reviewers,
We are pleased to submit an original research paper entitled “Delay Minimization Based Uplink Resource Allocation for Device-to-Device Communications Considering mmWave Propagation” for consideration for publication in the PeerJ Computer Science Journal.
We list below the change made to the paper addressed to reviewer 1 (C Lin):
• We added the following text to the Delay Minimization Conflict Graph Algorithm section describing the inadequacies of the proposed method and possible solutions: “The proposed multi-sharing allocation method for uplink transmission described in Algorithm 1 can not be directly applied for a downlink transmission scenario. For this, it would be necessary to reformulate the problem described by equation (3) and subject to constraints (4), (5) and (6). This issue will be addressed in future works. Notice also that the DMCG algorithm deals with the reuse of idle resource blocks of CUEs by DUEs, that is, its focus is on determining pairs of DUEs in the network. The joint use of the method proposed in this paper with other techniques available in the literature for resource allocation between devices and base station would solve the allocation problem in a generalized allocation scenario and will be investigated later.”
We are grateful for the suggestions and guidelines that we consider valuable for
improving the understanding and interpretation of the text for the reader and the quality
of the paper.
Sincerely,
Authors: Marcus Vinícius Gonzaga Ferreira and Flávio Henrique Teles Vieira.
" | Here is a paper. Please give your review comments after reading it. |
43 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Data acquisition problem in large-scale distributed Wireless Sensor Networks (WSNs) is one of the main issues that hinder the evolution of Internet of Things (IoT) technology.</ns0:p><ns0:p>Recently, combination of Compressive Sensing (CS) and routing protocols has attracted much attention. An open question in this approach is how to integrate these techniques effectively for specific tasks. In this paper, we introduce an effective Deterministic Clustering based CS scheme (DCCS) for Fog-supported heterogeneous WSNs to handle the data acquisition problem. DCCS employs the concept of Fog Computing, reduces total overhead and computational cost needed to self-organize sensor network by using a simple approach, and then uses CS at each sensor node to minimize the overall energy expenditure and prolong the IoT network lifetime. Additionally, the proposed scheme includes an effective algorithm for CS reconstruction called Random Selection Matching Pursuit (RSMP) to enhance the recovery process at the Base Station (BS) side with a complete scenario using CS. RSMP adds random selection process during the forward step to give opportunity for more columns to be selected as an estimated solution in each iteration. The results of simulation prove that the proposed technique succeeds to minimize the overall network power expenditure, prolong the network lifetime and provide better performance in CS data reconstruction.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Internet of Things (IoT) can be seen as the primary bridge that connects physical and digital world. IoT has become one of the significant and attractive field of research through which researchers monitor everyday usages via the Internet <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Integrating computational abilities in various kinds of things and living organisms can create big leap in many sectors such as health, military, home, entertainment etc. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. IoT consists of combinations of different technologies such as cloud computing, WSNs, big data and data information. Nowadays, WSNs are widely used in various fields. WSNs can be considered as part of IoT, due to the huge number of connected sensor nodes it contains. The main task of IoT components (such as sensors, phones and RFID tags) is to sense, collect and store data, and then transmit the collected data to the BS <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. However, limitation on power, computation, storage and battery resources of IoT devices hinder the development of IoT applications. To overcome these limitations most of the IoT applications depend on Cloud to deploy their computations. However, Cloud solutions are unable to meet the issues such as location awareness, mobility support, geo-distribution and latency for its end users concurrently. Fog computing can overcome these limitations of Cloud computing <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> and bring services on the edge of the network and nearer to end users. Network devices with additional computational power and storage can be utilized as Fog servers to keep data and computation close to end users. Examples of such devices include Gateways, Wireless Sensors, and Routers <ns0:ref type='bibr' target='#b1'>[2,</ns0:ref><ns0:ref type='bibr'>4]</ns0:ref>. Fog Nodes (FNs) act as middleware between cloud and the end users and offer resources to the underlying sensors. Data acquisition problem in large distributed sensor networks is one of the main challenges that hinder the further development of the IoT technology <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>.</ns0:p><ns0:p>Therefore, it is highly required to find effective techniques that solve this problem to prolong the network lifetime <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. Various techniques have been proposed, such as routing protocols or data compression scheme <ns0:ref type='bibr' target='#b7'>[7,</ns0:ref><ns0:ref type='bibr' target='#b8'>8]</ns0:ref>. Data compression methods are used to reduce the overall data transmitted through the wireless channels, thereby reduce the energy consumed by nodes during communication.</ns0:p><ns0:p>In the perspective of data compression, Compressive Sensing (CS) has been regarded as a suitable technique for signal sampling and compression <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref><ns0:ref type='bibr' target='#b10'>[10]</ns0:ref><ns0:ref type='bibr' target='#b11'>[11]</ns0:ref><ns0:ref type='bibr' target='#b2'>3]</ns0:ref>. In the context of routing algorithms, it is considered as the most important communication paradigm that can optimize energy consumption in WSNs. Designing suitable routing protocols for WSNs is a challenging issue <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref><ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. 
Hierarchical cluster-based routing is regarded as the most efficient protocol in terms of energy efficiency and scalability of WSNs <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. In hierarchical protocols, sensor nodes are grouped to form clusters <ns0:ref type='bibr'>[4]</ns0:ref>. For each cluster, one node which acts as aggregation point is called cluster head (CH) and the rest of the nodes are treated as cluster members (CMs). Each CH receives data from its CMs and the aggregated data is sent towards the BS. Finally, the BS receives these aggregated data from each CH. In this case, the total amount of transmitted data can be significantly reduced. The recent researches reveal that the integration between routing protocols and CS method can help to solve data acquisition problem <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. However, the problem of finding an efficient way to integrate routing protocols and compressive data aggregation to decrease the data traffic is NP-complete <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Through this paper, we introduce an effective Deterministic Clustering using CS protocol (DCCS) for heterogeneous WSNs to handle the data acquisition problem. DCCS reduces the total overhead and computational cost needed to self-organize sensor network, uses CS at each sensor node to reduce the energy consumption as a whole, and increases the IoT network lifetime. In addition, we employ Fog computing infrastructure between IoT devices and the cloud for efficient saving of IoT resources. Secondly, we propose an efficient reconstruction algorithm called Random Selection Matching Pursuit (RSMP) to enhance the reconstruction operation at the BS side with a complete scenario using CS. RSMP adds random selection process during the forward step to give opportunity for more columns to be selected as an estimated solution in each iteration. The proposed scheme is validated by simulations in terms of power consumption and lifetime of the IoT network.</ns0:p><ns0:p>The paper is further structured as follows: Section 2 provides the literature review and in Section 3, we present a background study on Compressive Sensing. The newly proposed algorithms are explained in Section 4. The simulation and validation experiments are included in Section 5 and finally Section 6 concludes our research paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>Many routing algorithms such as [4, <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref><ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> did not take into consideration the data compression, and therefore cannot efficiently deal with the huge data traffic of WSNs. It is effective to apply compression before transmitting data to reduce total power consumption by sensor nodes. The use of CS technique can significantly reduce the total amount of data gathered and improve WSNs performance <ns0:ref type='bibr' target='#b2'>[3,</ns0:ref><ns0:ref type='bibr' target='#b8'>8,</ns0:ref><ns0:ref type='bibr' target='#b11'>11,</ns0:ref><ns0:ref type='bibr' target='#b16'>[16]</ns0:ref><ns0:ref type='bibr' target='#b17'>[17]</ns0:ref><ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>.</ns0:p><ns0:p>Compressive Data Gathering (CDG) <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref> is the primary work that used CS in WSNs. The method provides a join of CS and routing scheme for reducing the overall network energy expenditure. However, the authors of CDG did not provide analysis of their algorithm. The work in <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> aimed to minimize the energy expenditure by uniting compression techniques and routing schemes. The CS scheme proposed in <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> focused on solving the data collection problem in large-scale WSNs. In <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>, the authors provided a fusion of CS and tree routing methods to minimize the total forwarding energy utilization. However, it leads to an increase in the power consumed by leaf and intermediate nodes. In order to solve the tree routing issue, the authors of <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> introduced a CS strategy in a hybrid way in which only the parent nodes perform CS task. Even if this approach is convenient for small networks, cluster-based strategies turn out to be more efficient for large networks.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>, the authors proposed a CS hybrid method integrated with clustering and studied the connection between the cluster size and the transmissions count in hybrid CS strategy. In <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>, a multi chain based distributed routing protocol with CS (DMC-CS) is proposed. Each chain leader takes the responsibility of collecting the CS compressed samples from its CMs and then sends them to the BS. Even though this algorithm succeeds to improve the WSN lifetime, it is expensive as the BS needs to know the distances among all sensors.</ns0:p><ns0:p>EDACP-CS protocol of <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> offers an effective technique with CS for data acquisition in heterogeneous WSN using multi-hop scheme. Manuscript to be reviewed Computer Science method in which the CH selection depends on distance to BS and residual energy of nodes. However, this algorithm suffers from the computation cost overheads associated to CH selection. The work proposed in <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> was the initial work which studied CS with IoT from the perspective of data-compressed sampling. 
The main problem of that research is that it applies CS without considering how to organize the nodes in order to transmit or receive data to and from the BS.</ns0:p><ns0:p>In the context of CS reconstruction problem, Orthogonal Matching Pursuit (OMP) <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> determines the greatest magnitude values in index during each iteration, where, r represents the residual r T  of y. Then, the least squares (LS) problem is solved. The works in <ns0:ref type='bibr' target='#b23'>[23,</ns0:ref><ns0:ref type='bibr' target='#b24'>24]</ns0:ref> proposed algorithms based on OMP where Stagewise OMP (StOMP) proposed in <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> is an enhancement of OMP.</ns0:p><ns0:p>StOMP selects more than one column to enhance the forward step of OMP; then utilizes these columns to solve the LS problem. While in <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>, OMP is enhanced by grouping the inner-products having identical magnitudes into sets; then the set with the largest energy is determined. The algorithms <ns0:ref type='bibr' target='#b22'>[22,</ns0:ref><ns0:ref type='bibr' target='#b23'>23,</ns0:ref><ns0:ref type='bibr' target='#b24'>24]</ns0:ref> do not have a backward step as they fall under the category of irreversible greedy algorithms. The advantage of the backward step is to recover from wrong selection that might have occurred during the forward step. On the other hand, reversible greedy algorithms e.g., IHT <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>, CoSaMP <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>, SP <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> and FBP <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref> employ backward step to eliminate wrong selection added during the forward step.</ns0:p><ns0:p>As analyzed above, the related algorithms suffer from non-stability because they used probabilitybased models in each round to form the clusters. Besides, no method proposed an efficient mechanism to regularly check the suitability of selected measurement matrix in each round to decide whether to change it or not. Beside, in terms of data reconstruction algorithms mentioned above we noticed that none of them give the chance for all CS matrix columns to be tested as correct solution which make most of them not able to recover the original data successfully. This raises the motivation for this paper.</ns0:p><ns0:p>In this paper, we introduce an efficient CS scheme to improve the performance of WSNs, enhance the IoT network lifetime and improve the reconstruction process within a complete scenario. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science construction steps. In inter-cluster process, DCCS applies the same scenario used in the intra-cluster process to organize the Fog nodes into a chain and Fog leader is selected to communicate with the cloud.</ns0:p><ns0:p>4. To enhance CS data gathering and reconstruction process, DCCS allows the cloud to dynamically change the measurement matrix depending on network status.</ns0:p><ns0:p>5. Finally, to improve the reconstruction process in the cloud side, RSMP algorithm adds a random selection process to the forward step, which give a chance for all columns to be selected as estimated solution in each round.</ns0:p></ns0:div>
<ns0:div><ns0:head>Compressive Sensing Background</ns0:head><ns0:p>The CS method allows sampling and compression to be executed in one step, which differentiates it from conventional compression techniques where sampling and compression are performed in separate steps <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>. In addition, the CS reconstruction strategy does not require any prior knowledge to successfully recover the actual data from the compressed samples <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>.</ns0:p><ns0:p>The general CS framework can be expressed as follows:</ns0:p><ns0:formula xml:id='formula_0'>y = Φx,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where y ∈ R^M is the compressed samples vector with M << N, Φ is an M×N CS matrix (in most CS methods a random matrix, such as a Bernoulli or Gaussian distribution matrix), and x ∈ R^N is the signal vector. In this system, ||x||_0 = S < M < N <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref><ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>.</ns0:p></ns0:div>
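To make the sampling model in Eq. (1) concrete, the following is a minimal NumPy sketch (Python is used here purely for illustration; the paper itself reports MATLAB simulations). The dimensions N, M and S and all variable names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N, M, S = 256, 128, 10          # signal length, measurements, sparsity (illustrative)

# S-sparse signal x in R^N: S non-zero entries at random positions
x = np.zeros(N)
support = rng.choice(N, size=S, replace=False)
x[support] = rng.normal(size=S)

# Gaussian measurement matrix Phi (M x N), a common choice in CS
Phi = rng.normal(size=(M, N)) / np.sqrt(M)

# Compressed samples y = Phi x (Eq. 1): sampling and compression in one step
y = Phi @ x
print(y.shape)                  # (M,) -- only M << N values need to be transmitted
```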
<ns0:div><ns0:head n='3.1'>CS Signal Recovery</ns0:head><ns0:p>Consider the CS scenario, which requires reconstructing a large, sparse signal from a few available measurement coefficients. One of the simplest ways to reconstruct the signal from its measurements in Eq. (<ns0:ref type='formula' target='#formula_0'>1</ns0:ref>) is to solve the L0-minimization problem, where the L0 norm counts the non-zero entries, so that the signal reconstruction problem becomes:</ns0:p><ns0:formula xml:id='formula_1'>x̂ = arg min_x ||x||_0 such that y = Φx<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Even though this works well theoretically, the problem is computationally NP-hard: it is intractable to solve the problem defined by Eq. (<ns0:ref type='formula' target='#formula_1'>2</ns0:ref>) for an arbitrary vector or matrix. However, the CS framework provides efficient alternative solutions to Eq. (<ns0:ref type='formula' target='#formula_1'>2</ns0:ref>), such as Basis Pursuit (BP) <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref> or Greedy Pursuit (GP). Examples of Greedy Pursuit include Orthogonal Matching Pursuit (OMP) <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>, Stagewise Orthogonal Matching Pursuit (StOMP) <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> and ROMP <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>.</ns0:p></ns0:div>
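The greedy recovery idea referred to above can be sketched as a textbook-style OMP loop. This is an illustrative implementation only, not the authors' code; the function name and the tolerance are our own choices.

```python
import numpy as np

def omp(Phi, y, S, tol=1e-9):
    """Greedy l0 recovery: pick the column most correlated with the residual,
    then re-fit all selected columns by least squares (no backward step)."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(S):
        # forward step: column of Phi most correlated with the residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares estimate restricted to the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        residual = y - Phi @ x_hat
        if np.linalg.norm(residual) < tol:
            break
    return x_hat
```

With the y and Phi from the previous sketch, `omp(Phi, y, S)` recovers x exactly in the noiseless case as long as S is small enough relative to M.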
<ns0:div><ns0:head>Proposed Scheme</ns0:head><ns0:p>Recently, IoT technologies have attracted many researchers in the area of wireless networks. However, due to the energy constraints of sensors, formulating effective data aggregation strategies and managing huge amounts of information are regarded as the major challenges faced by IoT technologies. To address these problems, we introduce a new method using CS integrated with an efficient routing scheme. The proposed work consists of two algorithms: 1) Deterministic Clustering using CS Protocol (DCCS): DCCS converts the sensor network into several clusters, and a CH is selected in each cluster according to the residual energy of the nodes within the cluster. DCCS then organizes each cluster into a chain to start CS-based data gathering. Moreover, it allows the cloud to dynamically change the measurement matrix if it is no longer suitable for the network.</ns0:p></ns0:div>
<ns0:div><ns0:head>2)</ns0:head><ns0:p>Random Selection Matching Pursuit (RSMP): RSMP is proposed for data reconstruction. It adds a random selection step during column selection to increase the chance of finding the correct columns in each round and thus improve the reconstruction performance. In the next subsections, we describe the network assumptions and the two algorithms in detail.</ns0:p><ns0:p>Network Assumptions. In this work, our proposed scheme is designed according to the following assumptions:</ns0:p><ns0:p>1. The network comprises a group of nodes with the same transmission range.</ns0:p><ns0:p>2. Each node belongs to one of two classes: normal nodes or advanced nodes.</ns0:p><ns0:p>3. Fog Nodes (FNs) have a higher energy level than normal and advanced nodes.</ns0:p><ns0:p>4. Finally, we use the same energy parameters as used in [4]. To send a message of l bits over a distance d, the power consumed by the radio is:</ns0:p><ns0:formula xml:id='formula_2'>E_Tx(l, d) = l E_elec + l ε_fs d^2, if d < d_0; E_Tx(l, d) = l E_elec + l ε_mp d^4, if d ≥ d_0<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>To receive this message, the energy expended by the radio is:</ns0:p><ns0:p>E_Rx(l) = l E_elec (4)</ns0:p><ns0:p>where the radio electronics dissipate E_elec = 50 nJ/bit, the free-space amplifier energy is ε_fs = 10 pJ/bit/m^2, the multipath amplifier energy is ε_mp = 0.0013 pJ/bit/m^4, and d_0 = sqrt(ε_fs / ε_mp). The initial energy level of super nodes is 2 J, of advanced nodes is 1.25 J, and of normal nodes is 0.5 J.</ns0:p></ns0:div>
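A small sketch of the first-order radio energy model in Eqs. (3) and (4), using the parameter values listed above. The constant names are ours, and the crossover distance d0 = sqrt(eps_fs / eps_mp) is the standard relation assumed here.

```python
import math

E_ELEC = 50e-9        # 50 nJ/bit, electronics energy
EPS_FS = 10e-12       # 10 pJ/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12   # 0.0013 pJ/bit/m^4, multipath amplifier
D0 = math.sqrt(EPS_FS / EPS_MP)   # crossover distance, roughly 87.7 m

def tx_energy(l_bits, d):
    """Energy to transmit l bits over distance d (Eq. 3)."""
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def rx_energy(l_bits):
    """Energy to receive l bits (Eq. 4)."""
    return l_bits * E_ELEC
```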
<ns0:div><ns0:head n='4.2'>DCCS Algorithm</ns0:head><ns0:p>The DCCS algorithm considers a heterogeneous WSN in which each distributed sensor node belongs to one of three classes: normal nodes, advanced nodes (with relatively high energy compared to the normal ones), and Fog Nodes (FNs), which have a higher energy level than normal and advanced nodes and are positioned at predetermined locations in the WSN. DCCS balances the total energy expenditure among nodes in every round, which extends the network lifespan. The working of the DCCS algorithm is presented in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The DCCS algorithm comprises two phases: (1) the Setup phase and (2) the Data compression phase.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.1.'>Setup Phase</ns0:head><ns0:p>DCCS executes this phase only once in the first round. The basic aim of this phase is to collect all sensor data X with non-CS compression in the Fog nodes with possible minimum energy consumption of sensors. To achieve that, this phase consists of four steps: CH Selection, Clusters Construction, Fog Node Selection and Learning. We describe these steps in detail as follows: A)</ns0:p><ns0:p>Step 1: CH Selection: This step adopts the same idea as proposed in <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref> where, the CH selection process depends only on the RE of the nodes (residual energy). In the DCCS algorithm, the Fog leader node selects a fixed number of nodes (NCH) to be CHs depending on RE of every node such that the priority is for nodes with highest RE (fog leader node selection process will be described later in section 4.1.2. A). The NCH value is predetermined by the cloud and can be estimated by the algorithm in <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>. After the first round, each cluster is responsible for selecting a new CH dynamically. This scenario reduces the cost for searching and selecting new CH. The selected CHs transmit their own information to all other (non-CH) nodes. The non-CH nodes will select the nearby CH to start with the Clusters Construction step.</ns0:p></ns0:div>
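A minimal sketch of the residual-energy-based CH selection described in this step: the fog leader simply picks the N_CH nodes with the highest residual energy. Names and energy values here are illustrative, not taken from the paper.

```python
def select_cluster_heads(residual_energy, n_ch):
    """Pick the n_ch nodes with the highest residual energy as cluster heads."""
    ranked = sorted(residual_energy, key=residual_energy.get, reverse=True)
    return ranked[:n_ch]

# illustrative use: node id -> remaining energy in joules
chs = select_cluster_heads({1: 0.48, 2: 1.10, 3: 0.95, 4: 1.21}, n_ch=2)   # -> [4, 2]
```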
<ns0:div><ns0:head>Fig 2. Clusters Construction Algorithm B)</ns0:head><ns0:p>Step 2: Clusters Construction: Once the selected CHs advertise themselves as CHs, each non-CH node constructs a cluster by selecting the closest cluster head Ci, where i = 1, 2, …, NCH, and sending a join-request message (JRM) to it. This JRM contains: (1) the node identification (Node-ID), (2) the selected CH identification (CH-id), (3) the node residual energy (Node-RE), and (4) the node location (Node-Loc). DCCS thus converts the WSN into NCH clusters, where each cluster has a CH and a set of cluster member (CM) nodes. In order to decrease the power the whole network uses for transmitting data per round in each cluster, the DCCS algorithm organizes the member nodes within each cluster into a chain. For all clusters, each CH Ci, i = 1, 2, …, NCH, applies the following Initialization and Update steps to construct the ChainList of its nodes. The procedure for this step is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Initialization</ns0:head><ns0:p>Step: In each cluster, Ci uses its member nodes information to create the ChainListi, where ChainListi = [c0, c1, …, clast-1, clast] by adding the nearest member node c0 to it. It then updates the ChainListi with the nearest unselected member node (c1) to node c0.</ns0:p></ns0:div>
<ns0:div><ns0:head>Update</ns0:head><ns0:p>Step: After that, Ci holds the nearest unselected neighbour node cj to node c1 while it decides whether or not cj will be placed inside ChainListi, by comparing the distance between c1 and cj with the distances between cj and the consecutive nodes already in ChainListi. If the distance between c1 and cj is less than D, where D is the distance between cj and any pair of consecutive nodes, then Ci adds cj to the end of ChainListi. Otherwise, cj is inserted between the pair of consecutive nodes that has the least distance to cj; e.g., if cr and ck are consecutive nodes in ChainListi and dis(cj, clast) > dis(cj, cr) and dis(cj, clast) > dis(cj, ck), then node cj is inserted between cr and ck. Otherwise, node cj is appended to the end of ChainListi after node clast, where clast is the last node of ChainListi and dis(cj, ck) is the distance between node cj and node ck. Ci repeats this Update step until all its members are included in ChainListi. If a member node in ChainListi dies, Ci reconstructs the chain to bypass the dead node.</ns0:p></ns0:div>
<ns0:div><ns0:p>By applying the previous steps, each node sends and receives over the minimum possible distance. Thus, DCCS saves the power of each node.</ns0:p></ns0:div>
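The Initialization and Update steps above leave some room for interpretation; the sketch below shows one possible reading, in which each remaining member is placed either at the end of the chain or between the consecutive pair that adds the least extra distance. Function and variable names are illustrative, and this is not the authors' reference code.

```python
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def build_chain(ch_pos, member_pos):
    """One reading of the chain construction: start from the member closest to
    the CH, then place each remaining member at the chain end or between the
    consecutive pair that introduces the smallest extra distance."""
    members = list(member_pos)
    first = min(members, key=lambda m: dist(ch_pos, member_pos[m]))
    chain, remaining = [first], set(members) - {first}
    while remaining:
        # next candidate: unselected member nearest to the last added node
        cand = min(remaining, key=lambda m: dist(member_pos[chain[-1]], member_pos[m]))
        remaining.remove(cand)
        # cost of appending after the last node
        best_cost, best_pos = dist(member_pos[chain[-1]], member_pos[cand]), len(chain)
        # extra distance introduced by inserting between consecutive nodes
        for i in range(len(chain) - 1):
            a, b = member_pos[chain[i]], member_pos[chain[i + 1]]
            extra = dist(a, member_pos[cand]) + dist(member_pos[cand], b) - dist(a, b)
            if extra < best_cost:
                best_cost, best_pos = extra, i + 1
        chain.insert(best_pos, cand)
    return chain
```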
<ns0:div><ns0:head>C)</ns0:head><ns0:p>Step 3: Fog Node Selection: each CH selects the nearest fog node for transmitting its data. Fog nodes receive the data transmitted by the CHs, perform aggregation operation and deliver the data to the cloud using a routing technique. D)</ns0:p><ns0:p>Step 4: Learning process: Measurement matrix selection is considered as one of the most important processes in the CS method due to its impact on nodes data where this matrix is used by sensor nodes to compress their data and is used by the cloud to reconstruct the sensor data. Incorrect selection may lead to large data loss; hence, selection of proper measurement matrix is crucial.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 4. Flow chart of DCCS Data Compression Phase</ns0:head><ns0:p>Each fog node generates this matrix using a random seed ξ, and then broadcasts ξ to the whole WSN. For seed selection process, DCCS applies the following scenario: DCCS starts seed estimation process by learning step. During the intra-cluster process, the cluster head starts to collect data X by a non-CS from its chain members and then fuse these data. Then, using intercluster communication process, data is combined by Fog nodes towards Fog leader node and send to cloud by Fog leader node. Then, the cloud starts to find the best ξ that gives minimum error. The cloud uses this minimum error as threshold β. Finally, the cloud sends ξ to the entire network to use during Data compression phase .</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2.'>Data Compression Phase</ns0:head><ns0:p>DCCS repeats this phase starting from the second round. This phase consists of four steps: CS based data gathering within intra-cluster (cluster member to CH) and inter-cluster (Fog node to cloud), Reconstruction, Dynamic Re-Generation of Random Seed and CH rotation. At the end of this phase, DCCS reuses Algorithm 1 to create the cluster with the new inputs (new CHs will be the output from this phase). The details of these steps are illustrated below: A) CS based data gathering As described in the previous steps, there are NCH clusters with CH Ci and chain member nodes organized in ChainListi such that each ChainListi = [c0, c1, …, clast-1, clast]. Intra-cluster (from cluster member to CH): DCCS starts CS gathering in each intra-cluster as follows: the last node clast in the ChainListi uses the global seed ξ received from the BS to generate αclast. The clast node computes its compress vector (measurement) ylast = αc_last dc_last , where dc_last is the reading of sensor clast, and then transmits the measurement yc_last to its previous neighbor node clast-1 in the ChainListi. After that, node clast-1 uses the same global seed ξ to generate αc_(last-1) and compute its measurement yc_(last-1) = αc_(last-1) dc_(last-1) and then delivers the summation vector yc_last + yc_(last-1) to the previous node clast-2. Once clast-2 receives yc_last + yc_(last-1), it computes its value yclast-2, adds it to yc_last + yc_(last-1) and then transmits the summation value to previous node in ChainListi and so on till the CH Ci. Now each CH Ci has already received the compressed vector yi = [yc_0, yc_1, …, yc_last] from their corresponding cluster members. Then, each CH sends the compressed samples to the nearest Fog Node. Inter-cluster (from Fog nodes to cloud): Through inter-cluster communication, DCCS applies the same scenario used in Algorithm 1 to organize the Fog nodes in chain and consider them as CMs of a cluster with the cloud as CH. The communication among fog nodes is restricted to adjacent FNs in each round, and the fog leader node will be selected to deliver the gathered data to the cloud. As the scenario is same as in Algorithm 1, Fog nodes (FNs) are organized into a chain to deliver the information among FNs and to deliver the aggregated data to the cloud. The formed chain allows the nodes to interact and exchange information with their chain neighbors. CHs deliver their collected data to FNs, and the data will be fused at the FNs. Finally, fog leader node will deliver the fused data to the cloud. The selection of fog leader node depends on their energy and the distance to the cloud. Formation of a chain among FNs and choosing one among them as fog leader to send the data to the cloud could save more energy in the network than sending the data independently by each FN to the cloud. The communication process for both inter-cluster and intra-cluster are shown in Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref></ns0:p></ns0:div>
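A sketch of the intra-cluster CS gathering described above: every node derives its own measurement column from the shared seed ξ, multiplies it by its reading, and adds the result to the partial sum travelling along the chain, so the CH ends up with y = Φd for its cluster. The seeding scheme shown (seeding a generator with [ξ, node id]) is our assumption for reproducibility, not a detail specified in the paper.

```python
import numpy as np

def node_column(seed, node_id, M):
    """Column of the measurement matrix for one node, reproducible from the
    shared random seed xi, so sensors and cloud generate the same matrix."""
    rng = np.random.default_rng([seed, node_id])
    return rng.normal(size=M) / np.sqrt(M)

def intra_cluster_gather(seed, chain_readings, M):
    """Walk the chain from the last node towards the CH, each node adding its
    own alpha_i * d_i to the running partial sum received from its neighbour."""
    partial = np.zeros(M)
    for node_id, reading in reversed(chain_readings):   # clast, ..., c1, c0
        partial += node_column(seed, node_id, M) * reading
    return partial   # what the CH forwards to its fog node

# illustrative use: (node id, sensed value) pairs ordered c0 ... clast
readings = [(0, 21.5), (1, 21.7), (2, 22.0)]
y_cluster = intra_cluster_gather(seed=7, chain_readings=readings, M=16)
```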
<ns0:div><ns0:head>B) Reconstruction Step</ns0:head><ns0:p>When the cloud receives the compressed vector y = [y1, y2, y3, …, yi], i = 1, 2, …, NCH, transmitted by the FN leader, the cloud generates the CS matrix from the predefined random seed ξ. After that, the cloud reconstructs the original data x0 of every cluster. In order to improve this step, Random Selection Matching Pursuit (RSMP) is proposed in this paper. The working of RSMP is described in Section 4.3.</ns0:p></ns0:div>
<ns0:div><ns0:head>C) Dynamic Re-Generation of Random Seed</ns0:head><ns0:p>The main idea of this step is that DCCS can dynamically change the CS matrix depending on the network status and the number of nodes that are still alive, instead of using the same CS matrix generated during the Setup phase in all rounds. The problem of using a fixed CS matrix is that, in each round, every sensor node transmits and receives a fixed-size vector regardless of the number of alive nodes (whereas the vector size should vary with the number of alive nodes); this increases the average power consumption and also degrades the reconstruction process. To overcome this problem, DCCS dynamically changes the CS matrix whenever the network status changes, i.e., the CS matrix size is reduced in accordance with the number of alive nodes. In this way, DCCS successfully decreases the overall power consumption. The cloud obtains the count of dead nodes in every cluster from the fog nodes through the CHs, where each CH can simply use a HELLO message to identify the number of dead nodes in its cluster in each round. The working of this step can be summarized as follows: the cloud compares the latest reconstructed data x' with the original data x and decides whether to re-generate the seed depending on the error value ε = x - x'. If the error exceeds a predefined threshold β, which indicates a change in the network status, the cloud regenerates a new ξ; otherwise there is no need to change the last seed.</ns0:p></ns0:div>
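A hedged sketch of the cloud-side decision in this step: compare the reconstruction error against the threshold β and issue a new seed only when it is exceeded. Using the Euclidean norm of ε = x − x′ as the scalar error, and drawing the new seed from a generator, are our assumptions.

```python
import numpy as np

def maybe_regenerate_seed(x_ref, x_rec, beta, current_seed, rng):
    """Return (seed, changed): a new xi is broadcast only if the reconstruction
    error exceeds beta, which signals a change in network status."""
    error = np.linalg.norm(np.asarray(x_ref) - np.asarray(x_rec))
    if error > beta:
        return int(rng.integers(0, 2**31 - 1)), True
    return current_seed, False
```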
<ns0:div><ns0:head>D) Cluster Head Rotation</ns0:head><ns0:p>Each CH checks the piggybacked CM residual-energy information to decide whether to continue as CH or to hand the CH role to another node in its cluster with higher RE, which is then assigned as the new CH. This step prevents the WSN from dying early by balancing the energy consumption. The whole process of the data compression phase is shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
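A one-line sketch of the rotation rule described above, under the assumption that the CH keeps its role only while no member has higher residual energy.

```python
def rotate_cluster_head(ch_id, residual_energy):
    """Hand the CH role to the most energetic node if it beats the current CH."""
    best = max(residual_energy, key=residual_energy.get)
    return ch_id if residual_energy[ch_id] >= residual_energy[best] else best
```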
<ns0:div><ns0:head n='4.3'>RSMP Algorithm</ns0:head><ns0:p>In this section, we propose a new reconstruction technique called Random Selection Matching Pursuit (RSMP). RSMP can be utilized by the cloud to recover the sensor readings. It is a reversible greedy algorithm in the sense that its construction is reversible: the support set can be pruned (backward step) to eliminate the unreliable elements chosen previously (forward step). Before presenting the RSMP algorithm, we define some operations used in the algorithm: resid(y, x) ≜ y - Φx (5); supp(x; k) ≜ {the set of indices corresponding to the k largest amplitude components of x} (6); rand(x; k) ≜ {the set of indices corresponding to k randomly chosen components of x} (7). During the forward step, most CS reconstruction greedy algorithms use the Matched Filter (MF) operation Φ'y to calculate the correlation between the columns of Φ and the sampled measurement vector y. Then, Eq. (6) is used to select the set of indices corresponding to the n largest amplitude components of Φ'y. The size of n may vary for each algorithm, for example n = 1, S, and 2S in Orthogonal Matching Pursuit (OMP) <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>, Subspace Pursuit (SP) <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref> and CoSaMP <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref>, respectively. However, as a result of measurement noise, the MF does not usually return the indices of all correct columns; the correct indices may not be selected because they give small correlation according to Eq. (6). To solve this drawback, RSMP adds a random technique to the selection process in the forward step to increase the probability of finding the correct column indices in each iteration. Fig. <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> provides the working of the RSMP algorithm, which includes four steps: Initialization, Forward, Backward and Update. A) Initialization: the algorithm initializes the approximation E0 = 0, the residual r0 = y, and the estimated set T = φ. B) Forward: most MP algorithms use only the n largest in-amplitude components from the MF as a first estimate of the set T; relying only on the high-correlation columns has a negative effect on the reconstruction performance, especially when the sparsity level increases. Due to the measurement noise, the correct columns do not usually give high correlation during the MF process. RSMP uses a simple way to improve this step. Instead of choosing only the indices corresponding to the largest amplitude components of the set F, in each iteration RSMP selects S + q columns, where q is the random selection size. RSMP first selects the largest S components in F (H = supp(F, S)) to create set H, and then uses Eq. (7) to select q random components from F (R = rand(F, q)) to create set R, in order to cover the challenging case in which the correct columns do not give high correlation. In this way, the probability of finding the correct columns is increased. RSMP sets q = M/2 - S according to the fact that the CS signal recovery problem can be solved if S ≤ M/2, where S is the sparsity level <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>. Finally, RSMP uses the union U = H ∪ R of sets H and R to expand the estimated set T and start the next step. C) Backward: this step can be called the correction step because through it the RSMP algorithm eliminates column indices that were incorrectly selected in the previous step, i.e., the technique updates the approximation set E = W_S by keeping only the S column indices with the largest values in set W. D) Update: The samples are updated using Eq.</ns0:p></ns0:div>
(5) as rK = resid(y, EK). Two situations terminate the algorithm: 1) the algorithm stops when the residual norm ||rK||_2 falls below the termination parameter β, whose selection is based on the noise level; 2) the algorithm stops if the number of iterations exceeds Kmax, the maximum iteration count. At the end, EK holds the corresponding nonzero values of the recovered signal.</ns0:p></ns0:div>
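The following is a compact sketch of the RSMP iteration described above: forward selection of the S largest correlations plus q = M/2 − S random columns, least squares on the merged support, backward pruning to S entries, and a residual update. It is our reading of the algorithm, not the authors' reference implementation; re-fitting the coefficients after pruning is an implementation choice made here for clarity.

```python
import numpy as np

def rsmp(Phi, y, S, beta=1e-6, k_max=50, rng=None):
    """Sketch of RSMP: matched filter + random selection (forward), least
    squares on the merged support, prune to the S largest (backward)."""
    rng = rng or np.random.default_rng()
    M, N = Phi.shape
    q = max(M // 2 - S, 0)
    support = set()
    x_hat = np.zeros(N)
    residual = y.copy()
    for _ in range(k_max):
        # forward: S largest correlations plus q randomly chosen column indices
        F = Phi.T @ residual
        picked = set(np.argsort(np.abs(F))[-S:].tolist())
        picked |= set(rng.choice(N, size=q, replace=False).tolist())
        cand = sorted(support | picked)
        # least squares on the expanded support
        w, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        # backward: keep the S largest coefficients, then re-fit on them
        support = {cand[i] for i in np.argsort(np.abs(w))[-S:]}
        cols = sorted(support)
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[cols] = coef
        residual = y - Phi @ x_hat
        if np.linalg.norm(residual) < beta:   # terminate on residual or k_max
            break
    return x_hat
```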
<ns0:div><ns0:head>Experiments</ns0:head><ns0:p>This section includes the results of simulation for analyzing the performance of our work. We divide this section into three parts: in the first part, DCCS technique is assessed with reference to (i) network lifetime (first node die) and (ii) average energy consumption. In the second part, we analyze the RSMP reconstruction technique in comparison to Orthogonal Matching Pursuit (OMP), COSAMP, Forward-Backward Pursuit (FBP) <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>, Subspace Pursuit (SP) <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>, and E-OMP algorithms <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref>. Finally, the dynamic re-generation of random seed step is evaluated in terms of average power consumption and reconstruction error in the third part.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Evaluation of DCCS Algorithm</ns0:head><ns0:p>In this section, we describe the details of the simulations performed in the MATLAB environment. The network region has a size of 100 m × 100 m, and the BS is located at the region center. The node count is varied from 50 to 200 nodes in increments of 50. This section is subdivided into two cases: a homogeneous network and a heterogeneous network.</ns0:p><ns0:p>Performance Metrics: We use the following performance metrics to analyze and compare the performance of the proposed algorithm with the baseline algorithms:</ns0:p><ns0:p>(1) Average Energy Consumption: the total energy expended by all nodes during their operations (sending, forwarding and receiving) divided by the total number of nodes. The average energy consumed in each round can be estimated as E_avg(r) = (1/N) Σ_i E_i(r),</ns0:p></ns0:div>
<ns0:div><ns0:p>where N denotes the node count, E_i(r) is the energy consumed by node i in round r, and r refers to the round.</ns0:p><ns0:p>(2) Network lifetime: We measure the lifetime of the network according to the first node death.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.1.'>Case 1: Homogeneous Network</ns0:head><ns0:p>In this case, the performance of the DCCS algorithm is evaluated in comparison with DMC-CS <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> and EDACP-CS <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. We use the same energy parameters as in [4]; to send a message of l bits over a distance d, the power consumed by the radio is given by Eqs. (<ns0:ref type='formula'>3</ns0:ref>) and (4) in Section 4.1. Fig. <ns0:ref type='figure'>6</ns0:ref> shows the lifetime for DCCS, EDACP-CS and DMC-CS. In EDACP-CS and DMC-CS, the death of the first node occurs earlier than in DCCS, and Fig. <ns0:ref type='figure'>6</ns0:ref> also shows the effectiveness of the DCCS algorithm in enhancing the network lifetime compared to the EDACP-CS and DMC-CS algorithms. The reason is that DCCS uses a fixed number of CHs (NCH) per round, which achieves better stability in energy utilization among the nodes compared to the other algorithms. Additionally, in DCCS the BS selects the CHs only in the first round and the CHs then change dynamically, which considerably decreases the computation overhead associated with the CH search compared with the others. DCCS also reduces the number of transmitted CS measurement samples in each cluster dynamically, depending on the network status, rather than using a fixed number of CS measurement samples in each round as in the other algorithms. Fig. <ns0:ref type='figure'>6</ns0:ref>. Network lifetime in DCCS, DMC-CS and EDACP-CS. Fig. <ns0:ref type='figure'>7</ns0:ref>. Count of alive nodes as a function of the number of rounds. Fig. <ns0:ref type='figure'>7</ns0:ref> depicts the lifetime and the count of alive nodes in the network per round for DCCS, EDACP-CS and DMC-CS. It clearly shows that the first and last node deaths in DCCS happen several rounds later than those of EDACP-CS and DMC-CS, which means that DCCS minimizes the energy utilization among all sensors. This is because DCCS reduces the power consumption of each node by organizing the nodes of each cluster into a chain such that each node sends to and receives from only the nearest node, which is not considered by the EDACP-CS algorithm. During the chain construction, DCCS rearranges all nodes in the chain when it adds a new node to the chain list, taking into consideration the distances between that node and the others in the chain, rather than simply adding the closest node as the last node of the chain as DMC-CS does. From Fig. <ns0:ref type='figure'>8</ns0:ref>, it is evident that DCCS succeeds in decreasing the average energy consumption compared to the EDACP-CS and DMC-CS algorithms. The main reason for this is the dynamic re-generation of the CS matrix in DCCS, which is not considered in the other algorithms. Fig. <ns0:ref type='figure'>8</ns0:ref>. Average energy consumption in DCCS, EDACP-CS and DMC-CS. 5.1.2. Case 2: Heterogeneous Network. Here, we evaluate the proposed algorithm performance in a heterogeneous network scenario. In this case, we assume that the total network energy is 102 J, where the nodes are divided into advanced, intermediate and normal nodes according to their residual energy. DCCS performance is evaluated in comparison with ETSSEP <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref>, SEECP <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> and SILEACH <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> based on the CS method.
It's clear that DCCS still provides good performance with reference to network lifetime in terms of first node dies enhancement in comparison with ETSSEP, SEECP, and SILEACH algorithms as shown in Fig. <ns0:ref type='figure' target='#fig_6'>9</ns0:ref>. That is because the dynamic CS matrix regeneration process in DCCS gives it the ability to utilize CS matrix in an effective way to minimize the total transmitted data which leads to reduce the transmission energy expenditure. Whereas, the other algorithms use the same CS matrix in each iteration which may become inappropriate for the network after a number of iterations. The same effect can be noticed in Fig. <ns0:ref type='figure' target='#fig_7'>10</ns0:ref> where DCCS performs better than the other algorithms with reference to network lifetime in half-node death case. From Fig. <ns0:ref type='figure' target='#fig_8'>11</ns0:ref>, we can conclude that DCCS succeeds to minimize the total energy expenditure in comparison with the others. That is because DCCS divides the network into various clusters and inside every cluster, it uses the proposed chain construction algorithm to arrange the cluster members into a chain. In addition, DCCS uses the same proposed chain construction algorithm to organize the Fog nodes transmission to the Cloud. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.'>Evaluation of RSMP Algorithm</ns0:head><ns0:p>Here, we evaluate the performance of the RSMP reconstruction algorithm in comparison with OMP, COSAMP, SP, FBP and E-OMP. Firstly, we use the proposed algorithm to recover the signals captured from 54 sensors placed at the Intel Berkeley Research Lab. The entire experiment is repeated 500 times on randomly generated S-sparse samples. Secondly, the RSMP algorithm is applied to reconstruct computer-generated signals whose nonzero coefficients are drawn from Uniform and Gaussian distributions. Finally, RSMP performance is measured on noisy signal observations. We adopted the MATLAB environment for performing the simulations. The signal reconstruction performance is analyzed using a Gaussian matrix Φ of size M × N, where N = 256 and M = 128. Performance Metrics: The signal reconstruction performance of the RSMP algorithm is compared with the other reconstruction algorithms with reference to the Average Normalized Mean Squared Error (ANMSE), which is the average of the ratio ||x - x̂||_2^2 / ||x||_2^2, where x and x̂ represent the original and reconstructed signals, respectively.</ns0:p></ns0:div>
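A small helper showing how ANMSE can be computed over repeated trials. The exact normalization assumed here (average of ‖x̂ − x‖² / ‖x‖²) is our reading of the metric, since the symbols were lost in the manuscript text.

```python
import numpy as np

def anmse(originals, reconstructions):
    """Average Normalized Mean Squared Error over a set of test signals."""
    ratios = [np.linalg.norm(xh - x) ** 2 / np.linalg.norm(x) ** 2
              for x, xh in zip(originals, reconstructions)]
    return float(np.mean(ratios))
```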
<ns0:div><ns0:head n='5.3.'>Evaluation of Dynamic Re-generation of Random Seed Step</ns0:head><ns0:p>In this part, network area is assumed to be 100m × 100m, having the sensor nodes count ranging from 50 to 200 nodes with an incremental factor of 50 and the BS is placed at (x=50, y=50). Performance Metrics: We call DCCS algorithm as DCCS-dynamic if it uses the proposed dynamic re-generation of random seed and DCCS-static otherwise. This section compares the performance of DCCS-dynamic and DCCS-static, with reference to the following: Average Energy Consumption and Average Normalized Mean Squared Reconstruction Error (ANMSE). During the reconstruction process, COSAMP <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref> algorithm is used to recover the data in each round. Fig. <ns0:ref type='figure' target='#fig_16'>20</ns0:ref> shows the performance of DCCS algorithm in both dynamic (DCCS-dynamic) and static (DCCS-static) mode in terms of number of alive nodes per round. According to DCCS-dynamic scenario, the number of measurement samples transmitted in intra or inter-cluster communication is decreased while the count of dead nodes are increased. Moreover, DCCS-static uses a fixed CS matrix whatever the count of alive nodes per round. On the other hand, DCCS-dynamic uses the threshold β value with reference to the best reconstruction error and then compares the reconstruction error in each round with β. If the error is larger than β, the old matrix is considered as not suitable and therefore regenerates another one. </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The main objective of IoT components is to collect accurate information about any event. However, some challenges hinder this objective, such as sensor battery constraints and the handling of large amounts of acquired data. To solve these problems, this research introduced a new CS scheme for IoT and explained how this scheme can be utilized to compress and reduce the overall data traffic through the network. The proposed work consists of two algorithms. The first, the DCCS algorithm, converts the network into several clusters and organizes each cluster into a chain to start the CS data gathering. The second, the RSMP algorithm, is used on the cloud side to reconstruct the original data successfully. In each round, the cloud checks the suitability of the measurement matrix for the network and decides whether or not to change it. The proposed work achieved our objectives of enhancing the IoT network lifetime and improving the reconstruction performance. Simulation results show that the proposed algorithm is an effective data acquisition tool for decreasing the energy consumption of the network.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Fig. 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1. DCCS Algorithm 4.1Network Assumptions In this work, our proposed scheme is designed according to the following assumptions:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Fig. 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig. 3. Inter and Intra cluster communication process</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Fig. 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig. 5. RSMP AlgorithmThe proposed algorithm includes four steps: Initialization, Forward, Backward and Update as detailed below: A) Initialization: The proposed algorithm initializes all parameters as follows: initial approximation E0 = 0, residual r0 = y, and estimated set T = φ. B) Forward: The main contribution of RSMP algorithm is in this step. Most of the MP algorithms use the n largest in-amplitude components from the MF, |n| depends on the algorithm, as a first estimation of the estimated set T. However, they depend only on the high correlation columns in MF equation without taking consideration the others which have negative effect on the</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Average energy consumption per round: E_avg(r) = (1/N) Σ_i E_i(r), where E_i(r) is the energy consumed by node i in round r.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Fig.8. Average energy consumption in DCCS, EDACP-CS and DMC-CS. 5.1.2. Case 2: Heterogeneous Network Here, we focus to evaluate the proposed algorithm performance in a heterogeneous network scenario. In this case, we make an assumption that the total network energy is 102 J, where the nodes are divided into advanced, intermediate and normal nodes according to their residual energy. DCCS performance is evaluated in comparison with ETSSEP<ns0:ref type='bibr' target='#b35'>[35]</ns0:ref>, SEECP<ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> and SILEACH</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Fig. 9. Network lifetime (First node dies) in DCCS, ETSSEP-CS, SEECP-CS and SILEACH-CS.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Fig. 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Fig. 10. Network lifetime (half of nodes die) in DCCS, ETSSEP-CS, SEECP-CS and SILEACH-CS.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Fig. 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Fig. 11. Residual Energy in DCCS, ETSSEP-CS, SEECP-CS and SILEACH-CS.5.2. Evaluation of RSMP AlgorithmHere, we evaluate RSMP reconstruction algorithm performance in comparison with OMP, COSAMP, SP, FBP and E-OMP. Firstly, we make use of the proposed algorithm to recover the signals captured from 54 sensors placed at Intel Berkeley Research Lab. The entire experiment process is iterated for 500 times on randomly generated S sparse samples. Secondly, RSMP algorithm is applied to reconstruct computer-generated signals in which its nonzero coefficients are drawn from Uniform and Gaussian distributions. Finally, RSMP performance is measured over signal noise observations. We have adopted MATLAB environment for performing the simulations. The signal reconstruction performance is analyzed using Gaussian matrix Φ of size M × N, where N = 256 and M = 128. Performance Metrics: RSMP algorithm performance in signal reconstruction is compared with other reconstruction algorithms with reference to Average Normalized Mean Squared Error</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig. 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Fig. 12. Intel temperature and humidity: A) Original temperature Signal, B) Reconstructed temperature signal, C) Original humidity Signal and D) Reconstructed humidity signal</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Fig. 12 -Fig. 13 .</ns0:head><ns0:label>1213</ns0:label><ns0:figDesc>Fig. 12-A, 12-B: shows the effectiveness of RSMP algorithm in terms of reconstructing the temperature signals. RSMP achieves similar performance in reconstructing the humidity signals as shown in Fig. 12-C, 12-D. In Fig. 13, we illustrate the distribution of relative reconstruction error for different reconstruction algorithms. It is evident that RSMP algorithm exceeds the performance of other greedy algorithms, i.e., the COSAMP, OMP, EOMP, FBP and SP respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Fig. 14 .Fig. 15 .</ns0:head><ns0:label>1415</ns0:label><ns0:figDesc>Fig. 14. Reconstruction results over sparsity levels (Uniform Distribution) Fig. 15. Reconstruction results over sparsity level (Gaussian distribution) Reconstruction performance over different measurement vector lengths: This part of simulation aims to test RSMP reconstruction performance when different measurement vector lengths-M are used with two different CS matrices: Gaussian and Bernoulli distribution matrices as shown in Figs. 16 and 17 respectively. To achieve this aim, sparse signals taken from Uniform distribution having length N = 120 is utilized and M values from 10 to 60 with step size of 1. From those figures, we can understand that RSMP algorithm still provides the least ANMSE values when compared to other algorithms. Reconstruction over noisy signal: In this part, we add some noise equal to 10 -4 to the original Uniform as well as in Gaussian distribution signal where N = 256 and M = 128. The CS matrix Φ is drawn from the Gaussian distribution. The sparsity S levels are from 10 to 60 with step size 1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Fig. 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Fig. 16. Reconstruction results over Gaussian matrix for different lengths of M.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Fig. 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Fig. 17. Reconstruction results over Bernoulli matrix for different lengths of M.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Fig. 18 .Fig. 19 .</ns0:head><ns0:label>1819</ns0:label><ns0:figDesc>Fig. 18. Reconstruction results for noisy Uniform sparse signals Fig. 19. Reconstruction results for noisy Gaussian sparse signals. Figs. 18 and 19 depict the reconstruction errors for the noisy Uniform and Gaussian sparse signals.We can see that RSMP algorithm produces less error than COSAMP, OMP, E-OMP, FBP and SP. In summary, RSMP algorithm improves the reconstruction process and gives better performance than COSAMP, OMP, E-OMP, FBP and SP algorithms. This is because in each iteration RSMP gives the chance to the columns which do not give the largest values in MF process to be chosen.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Fig. 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Fig. 20. Number of Alive nodes in DCCS-dynamic and DCCS-static</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,204.37,525.00,317.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,204.37,525.00,352.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,204.37,525.00,307.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,252.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,204.37,525.00,307.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,204.37,525.00,308.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,204.37,525.00,308.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,204.37,525.00,252.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,250.12,525.00,364.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,204.37,525.00,316.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,204.37,525.00,405.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,204.37,525.00,422.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,204.37,525.00,416.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,204.37,525.00,361.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,204.37,525.00,405.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,204.37,525.00,368.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,204.37,525.00,345.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b14'>14</ns0:ref> where the sparse signal's non-zeros values are taken from uniform distribution, RSMP algorithm has lower ANMSE comparing to COSAMP, FBP, OMP, E-OMP and SP. Moreover, ANMSE for RSMP algorithm appear to rise only when S >49 while it increases when S > 42, S ≥ 34, S ≥ 44, S ≥ 38 and S ≥ 41 for COSAMP, OMP, E-OMP, FBP and SP algorithms respectively as shown in Fig.14. Fig.15shows the results of ANMSE when the non-zero entries of sparse signal are taken from Gaussian distribution. From Fig.15, it is evident that RSMP algorithm still gives least ANMSE result when compared to COSAMP, OMP, EOMP, FBP and SP, as S > 59, S ≥ 46, S > 34, S > 49, S > 47 and S > 45, respectively.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "
Reviewers' Comments Response
January 20, 2023
Manuscript number: 53682
Paper title: “Deterministic clustering based compressive sensing scheme for fog-supported heterogeneous wireless sensor networks”
Authors: Walid Osamy, Ahmed Aziz and Ahmed M. Khedr
We would like to thank the editor and anonymous reviewers for their excellent suggestions and comments. Without their feedback, this manuscript would not have been in the current form. Based on the valuable comments of reviewers, we have thoroughly revised the manuscript and accommodated all possible changes. We hope that the reviewers will find that the current revision is up to their required standards. Below we detail our changes.
Reviewers:
1. Remove 'the' from 'Recently, the [...]' Remove 'of researchers' from 'Recently, [...] has attracted much attention of researchers.
Response:
Fixed, please see the abstract.
2. From -Line 69: authors are advised to give only a brief description of the compressive sensing the reconstruction problem
Response:
Fixed, please see the abstract.
3. A section on compressive sensing and data reconstruction should be added. Moreover, authors are advised to revise the related mathematical expressions
Response
We added Compressive Sensing Background, please see section 3
4. Line 37: Replace 'can be regarded' by 'can be seen as'
Remove 'between' from '[...] primary bridge that connects between physical [...]'
-Line 38: Add 's' to 'field'
-Line 39: replace 'open to control all' with 'monitor'
-Line 39-40: Explain 'IoT elements include objects different from technological environment'
-Line 45-46: Revise the sentence: 'Since IoT contains huge number of connected sensor nodes [...]' to WSNs can be considered as part of IoT, due to the huge number of connected sensor nodes it contains.
Response
Fixed, please see section 2
5. -Line 59: Remove 'On the other side' -Line 62: Add 'Therefore, ' before 'it is highly required [...]' -Line 65: Remove 'The' from 'The data compression methods' -Line 66: Remove 'which' from '[...], which thereby reduce the energy' Replace 'communicated' with 'transmitted' -Line 67-68: Remove 'which is considered as the major power consumer of IoT elements' -Line 91-92: Revise the sentence ' In this case, the total data amount which is transmitted to the BS [...]' to 'In this case, the total amount of transmitted data can be significantly reduced.'
Response
Fixed.
6. Authors should add related works on data reconstruction methods
Response
Added, please see the related work section.
7. Line 159-160: Revise the sentence 'The work in [20] integrated between CS and clustering [...]' to 'The work in [20] combined CS with clustering [...]' -Line 177-178: What do you mean by 'non-stability'? -Line 190-191: Explain 'To overcome IoT device limitations [...] and hence conserves the communication energy.
Response
Fixed, check the Literature review section.
8. -Authors should add a section on network assumptions where characteristics of the network and its nodes are given
Response
Added, please see section 4.1.
9. -If authors select CH only based on residual energy, therefore what's the role of an advanced node?
Response
In the first round the priority will be given to the advanced nodes to be CHs. Starting from the second round, CH will be updated dynamically and the priority will be given to the node with the highest residual energy to be the new CH. Moreover, selection based on residual energy yields a better lifetime. This will provide a more ideal solution for energy consumption in WSNs and guarantee that every node will have the chance to be elected as long as its residual energy is higher than its neighbors.
10. -How do authors ensure that clusters are fairly distributed over the environment?
Response
In the proposed work the fog leader node selects a fixed number of nodes (NCH) to be CHs depending on their residual energy. In order to achieve even distribution of clusters over the area, the fog node establishes a grid-based virtual structure and then selects CHs according to the virtual grid structure. Moreover, in the proposed work after the first round, each cluster is responsible for selecting a new CH from its neighbors dynamically which preserve even distribution.
11. -How do authors deal with outliers?
Response
This work does not focus on Outlier detection but it addresses data gathering and reconstruction problems. However, we can adapt any outlier detection schemes like [R1,R2,R3] before data compression process to address this issue.
[R1] Asmaa Fawzy, Hoda M.O. Mokhtar, Osman Hegazy, “Outliers detection and classification in wireless sensor networks”, Egyptian Informatics Journal, Volume 14, Issue 2, 2013/
[R2] M. Bahrepour, N. Meratnia, M. Poel, Z. Taghikhaki, P.J.M. Havinga, "Distributed event detection in wireless sensor networks for disaster management", Intell Networking Collaborative Syst (2010), pp. 507-512
[R3] M.S. Mohamed, T. Kavitha, “Outlier detection using support vector machine in wireless sensor network real time data”, Int J Soft Comput Eng, 1 (2) (2011)
12. -Line 243: Describes how each cluster select the CH dynamically
Response
Each existing cluster head will collect its cluster members’ information which includes residual energy. Then, it will compare its residual energy with the residual energy of its members. If its energy is less than one of its members it decides to promote that one with highest energy to be the next cluster head.
13. -Line 270: Review the for loop
Response
Fixed
14. Line 271: Define clearly 'D' beforehand. You cannot compute a distance between a point and a distance.
Response
Fixed, please see section 4.2.1
15. What's the difference between Fog, Gateway and Sink nodes?
Response
• This paper employs the fog framework. In Fog framework, the Fog layer works as bridge to cover the communication gap between WSNs and Cloud to reduce latency. This framework is considered in previous works such as [r1, r2, r3]. All queries from the Cloud are disseminated to the sensor network, and all responses from the wireless sensor network are aggregated to the Cloud.
Fog nodes usually consists of some powerful nodes in network with higher ability of storage and processing than normal sensors, such as, mobile collectors and mobile/static sinks [r1]. Fog nodes in fog computing architecture function as middleware, operating between end users and cloud computing. In the context of WSNs, they provide resources to the underlying sensor nodes. The integration of fog computing into WSNs can overcome numerous problems [r3] (e.g. Geographical distribution: fog nodes which are distributed into a network, play an active role in the delivery of high-quality streaming services, through the processing and analysis of sensed data in real time.)
• A wireless sensor gateway is placed on the route to connect a wireless sensor network to the fixed Internet, and it must provide a flexible delivery of query and response between two networks.
• A sink node or a base station is basically a designated device similar to the normal sensor nodes but more powerful. One of the primary tasks of the sink node is to bridge a WSN with the remote users [r4]. One can retrieve required information from the network by injecting queries and gathering results from the sink.
[r1] Zeng, J., Wang, T., Lai, Y., Liang, J., & Chen, H. (2016). Data Delivery from WSNs to Cloud Based on a Fog Structure. 2016 International Conference on Advanced Cloud and Big Data (CBD). doi:10.1109/cbd.2016.028
[r2] Borujeni, Elham Mirzavand, Dadmehr Rahbari, and Mohsen Nickray. 'Fog-based energy-efficient routing protocol for wireless sensor networks.' The Journal of Supercomputing 74, no. 12 (2018): 6831-6858.
[r3] Abidoye, Ademola, and Boniface Kabaso. 'Energy-efficient hierarchical routing in wireless sensor networks based on Fog Computing.' (2020).
[r4] Tamandani, Yahya Kord, Mohammad Ubaidullah Bokhari, and Mohammad Zarif Kord. 'Computing geometric median to locate the sink node with the aim of extending the lifetime of wireless sensor networks.' Egyptian Informatics Journal 18, no. 1 (2017): 21-27.
16. -Line 314-315: Explain 'intracluster process'?
Response
Fixed, we have explained Intra and Inter Communication Process, please see section 4.2.2
17. -Line 363-364: Why do you need to improve this step? What are the disadvantages of this step?
Response
In this step, the cloud generates the CS matrix to reconstruct the original data, and would otherwise have to rely on a traditional CS reconstruction algorithm, which cannot achieve high recovery performance. Therefore, in this paper we also improve the reconstruction step by proposing the RSMP algorithm, which outperforms traditional CS reconstruction algorithms in terms of reducing the reconstruction error, as shown in the results section.
18. -What are the advantages of RSMP over other reconstruction algorithms?
Response
During the forward step, most of the CS reconstruction greedy algorithms used Matched Filter Detection (MF) operation Φ'y to calculate the correlation between matrix Φ columns and the Sampled Measurement Vector y, then, Eq. (4) is used to select the set of indices that corresponds to the n largest amplitude components of Φ'y. The size of n may vary for each algorithm, for example: n = 1, S, and 2S in Orthogonal Measurement Sampling (OMP) [35], Subspace Pursuit (SP) [33] and COSAMP [17] algorithms respectively. However, as a result of measurement noises, the MF does not usually give the indices of all correct columns. I.e., the correct indices may not be selected because they give small correlation according to Eq. (4). To solve this drawback, RSMP proposes a random technique to the selection process in the forward step to increase the probability of finding the correct column indices in each iteration.
19. -Authors should consider evaluating the energy consumption during the cluster formation phase.
Response
We did not ignore the energy consumption of the cluster formation phase; it is part of the total energy of the proposed scheme (the total number of messages in this phase is O(n), where the CHs advertise themselves as CHs and the non-CH nodes join the closest cluster head Ci by sending a join-request message).
20. -Why didn't you compare DCCS with the same protocols in both homogeneous and heterogeneous
Response
We cannot do this because it will not be fair to compare between a designed algorithm for homogenous networks with one designed for heterogeneous networks in a heterogeneous environment.
I.e., in case of homogenous environment, we can compare between designed algorithms for homogenous or heterogeneous networks. However, in case of heterogeneous environment, we can compare only between designed algorithms for heterogeneous networks.
21. -Line 511: What do E_elec, E_fs and mp mean? -Line 530: Fig.4- What do the values of y-axis mean? -Line 548: Fig.6- What's the label of the x-axis? The unit of y-axis should be mentioned. -Fig.7 & 8- Add the label of x-axis -Fig.10 & 11- Plot the original and reconstructed signals in the same figure
Response
Fixed, please see the updated Figures.
22. Can we apply the proposed models to a network in dimension greater than 2 (for instance: under-water?
Response
We believe that the proposed scheme can be adapted for implementation in different IoT applications, but we consider this particular application of the proposed scheme as part of our future work.
" | Here is a paper. Please give your review comments after reading it. |
44 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Data acquisition problem in large-scale distributed Wireless Sensor Networks (WSNs) is one of the main issues that hinder the evolution of Internet of Things (IoT) technology.</ns0:p><ns0:p>Recently, combination of Compressive Sensing (CS) and routing protocols has attracted much attention. An open question in this approach is how to integrate these techniques effectively for specific tasks. In this paper, we introduce an effective Deterministic Clustering based CS scheme (DCCS) for Fog-supported heterogeneous WSNs to handle the data acquisition problem. DCCS employs the concept of Fog Computing, reduces total overhead and computational cost needed to self-organize sensor network by using a simple approach, and then uses CS at each sensor node to minimize the overall energy expenditure and prolong the IoT network lifetime. Additionally, the proposed scheme includes an effective algorithm for CS reconstruction called Random Selection Matching Pursuit (RSMP) to enhance the recovery process at the Base Station (BS) side with a complete scenario using CS. RSMP adds random selection process during the forward step to give opportunity for more columns to be selected as an estimated solution in each iteration. The results of simulation prove that the proposed technique succeeds to minimize the overall network power expenditure, prolong the network lifetime and provide better performance in CS data reconstruction.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Internet of Things (IoT) can be seen as the primary bridge that connects physical and digital world. IoT has become one of the significant and attractive field of research through which researchers monitor everyday usages via the Internet <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Integrating computational abilities in various kinds of things and living organisms can create big leap in many sectors such as health, military, home, entertainment etc. <ns0:ref type='bibr' target='#b2'>[2]</ns0:ref>. IoT consists of combinations of different technologies such as cloud computing, WSNs, big data and data information. Nowadays, WSNs are widely used in various fields. WSNs can be considered as part of IoT, due to the huge number of connected sensor nodes it contains. The main task of IoT components (such as sensors, phones and RFID tags) is to sense, collect and store data, and then transmit the collected data to the BS <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. However, limitation on power, computation, storage and battery resources of IoT devices hinder the development of IoT applications. To overcome these limitations most of the IoT applications depend on Cloud to deploy their computations. However, Cloud solutions are unable to meet the issues such as location awareness, mobility support, geo-distribution and latency for its end users concurrently. Fog computing can overcome these limitations of Cloud computing <ns0:ref type='bibr' target='#b2'>[2]</ns0:ref> and bring services on the edge of the network and nearer to end users. Network devices with additional computational power and storage can be utilized as Fog servers to keep data and computation close to end users. Examples of such devices include Gateways, Wireless Sensors, and Routers <ns0:ref type='bibr' target='#b2'>[2,</ns0:ref><ns0:ref type='bibr' target='#b4'>4]</ns0:ref>. Fog Nodes (FNs) act as middleware between cloud and the end users and offer resources to the underlying sensors. Data acquisition problem in large distributed sensor networks is one of the main challenges that hinder the further development of the IoT technology <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>.</ns0:p><ns0:p>Therefore, it is highly required to find effective techniques that solve this problem to prolong the network lifetime <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. Various techniques have been proposed, such as routing protocols or data compression scheme <ns0:ref type='bibr' target='#b7'>[7,</ns0:ref><ns0:ref type='bibr' target='#b8'>8]</ns0:ref>. Data compression methods are used to reduce the overall data transmitted through the wireless channels, thereby reduce the energy consumed by nodes during communication.</ns0:p><ns0:p>In the perspective of data compression, Compressive Sensing (CS) has been regarded as a suitable technique for signal sampling and compression <ns0:ref type='bibr' target='#b3'>[3,</ns0:ref><ns0:ref type='bibr' target='#b9'>[9]</ns0:ref><ns0:ref type='bibr' target='#b10'>[10]</ns0:ref><ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>. In the context of routing algorithms, it is considered as the most important communication paradigm that can optimize energy consumption in WSNs. Designing suitable routing protocols for WSNs is a challenging issue <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref><ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. 
Hierarchical cluster-based routing is regarded as the most efficient protocol in terms of energy efficiency and scalability of WSNs <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. In hierarchical protocols, sensor nodes are grouped to form clusters <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. For each cluster, one node which acts as aggregation point is called cluster head (CH) and the rest of the nodes are treated as cluster members (CMs). Each CH receives data from its CMs and the aggregated data is sent towards the BS. Finally, the BS receives these aggregated data from each CH. In this case, the total amount of transmitted data can be significantly reduced. The recent researches reveal that the integration between routing protocols and CS method can help to solve data acquisition problem <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. However, the problem of finding an efficient way to integrate routing protocols and compressive data aggregation to decrease the data traffic is NP-complete <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. Through this paper, we introduce an effective Deterministic Clustering using CS protocol (DCCS) for heterogeneous WSNs to handle the data acquisition problem. DCCS reduces the total overhead and computational cost needed to self-organize sensor network, uses CS at each sensor node to reduce the energy consumption as a whole, and increases the IoT network lifetime. In addition, we employ Fog computing infrastructure between IoT devices and the cloud for efficient saving of IoT resources. Secondly, we propose an efficient reconstruction algorithm called Random Selection Matching Pursuit (RSMP) to enhance the reconstruction operation at the BS side with a complete scenario using CS. RSMP adds random selection process during the forward step to give opportunity for more columns to be selected as an estimated solution in each iteration. The proposed scheme is validated by simulations in terms of power consumption and lifetime of the IoT network.</ns0:p><ns0:p>The paper is further structured as follows: Section 2 provides the literature review and in Section 3, we present a background study on Compressive Sensing. The newly proposed algorithms are explained in Section 4. The simulation and validation experiments are included in Section 5 and finally Section 6 concludes our research paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>Many routing algorithms such as <ns0:ref type='bibr' target='#b4'>[4,</ns0:ref><ns0:ref type='bibr' target='#b14'>[14]</ns0:ref><ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> did not take into consideration the data compression, and therefore cannot efficiently deal with the huge data traffic of WSNs. It is effective to apply compression before transmitting data to reduce total power consumption by sensor nodes. The use of CS technique can significantly reduce the total amount of data gathered and improve WSNs performance <ns0:ref type='bibr' target='#b3'>[3,</ns0:ref><ns0:ref type='bibr' target='#b8'>8,</ns0:ref><ns0:ref type='bibr' target='#b11'>11,</ns0:ref><ns0:ref type='bibr' target='#b16'>[16]</ns0:ref><ns0:ref type='bibr' target='#b17'>[17]</ns0:ref><ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>.</ns0:p><ns0:p>Compressive Data Gathering (CDG) <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref> is the primary work that used CS in WSNs. The method provides a join of CS and routing scheme for reducing the overall network energy expenditure. However, the authors of CDG did not provide analysis of their algorithm. The work in <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> aimed to minimize the energy expenditure by uniting compression techniques and routing schemes. The CS scheme proposed in <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> focused on solving the data collection problem in large-scale WSNs. In <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>, the authors provided a fusion of CS and tree routing methods to minimize the total forwarding energy utilization. However, it leads to an increase in the power consumed by leaf and intermediate nodes. In order to solve the tree routing issue, the authors of <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> introduced a CS strategy in a hybrid way in which only the parent nodes perform CS task. Even if this approach is convenient for small networks, cluster-based strategies turn out to be more efficient for large networks.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>, the authors proposed a CS hybrid method integrated with clustering and studied the connection between the cluster size and the transmissions count in hybrid CS strategy. In <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>, a multi chain based distributed routing protocol with CS (DMC-CS) is proposed. Each chain leader takes the responsibility of collecting the CS compressed samples from its CMs and then sends them to the BS. Even though this algorithm succeeds to improve the WSN lifetime, it is expensive as the BS needs to know the distances among all sensors.</ns0:p><ns0:p>EDACP-CS protocol of <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> offers an effective technique with CS for data acquisition in heterogeneous WSN using multi-hop scheme. It integrates a cluster-based technique with CS method in which the CH selection depends on distance to BS and residual energy of nodes.</ns0:p><ns0:p>However, this algorithm suffers from the computation cost overheads associated to CH selection. The work proposed in <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> was the initial work which studied CS with IoT from the perspective of data-compressed sampling. 
The main problem of that research is that it applies CS without considering how to organize the nodes in order to transmit or receive data to and from the BS.</ns0:p><ns0:p>In the context of the CS reconstruction problem, Orthogonal Matching Pursuit (OMP) <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> determines the index of the greatest-magnitude value of Φ'r during each iteration, where r represents the residual of y. Then, the least squares (LS) problem is solved. The works in <ns0:ref type='bibr' target='#b23'>[23,</ns0:ref><ns0:ref type='bibr' target='#b24'>24]</ns0:ref> proposed algorithms based on OMP, where Stagewise OMP (StOMP) proposed in <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> is an enhancement of OMP.</ns0:p><ns0:p>StOMP selects more than one column to enhance the forward step of OMP and then utilizes these columns to solve the LS problem. In <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>, OMP is enhanced by grouping the inner products having identical magnitudes into sets; the set with the largest energy is then determined. The algorithms <ns0:ref type='bibr' target='#b22'>[22,</ns0:ref><ns0:ref type='bibr' target='#b23'>23,</ns0:ref><ns0:ref type='bibr' target='#b24'>24]</ns0:ref> do not have a backward step as they fall under the category of irreversible greedy algorithms. The advantage of the backward step is to recover from wrong selections that might have occurred during the forward step. On the other hand, reversible greedy algorithms, e.g., IHT <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>, CoSaMP <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>, SP <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> and FBP <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>, employ a backward step to eliminate wrong selections added during the forward step.</ns0:p><ns0:p>As analyzed above, the related algorithms suffer from instability because they use probability-based models in each round to form the clusters. Besides, no method proposes an efficient mechanism to regularly check the suitability of the selected measurement matrix in each round to decide whether to change it or not. Moreover, in terms of the data reconstruction algorithms mentioned above, we noticed that none of them gives all CS matrix columns the chance to be tested as the correct solution, which makes most of them unable to recover the original data successfully. This raises the motivation for this paper.</ns0:p><ns0:p>In this paper, we introduce an efficient CS scheme to improve the performance of WSNs, enhance the IoT network lifetime and improve the reconstruction process within a complete scenario. The proposed work consists of two algorithms:</ns0:p><ns0:p>(1) Deterministic Clustering using CS protocol (DCCS) and (2) Random Selection Matching Pursuit (RSMP) algorithm. The highlights of our research contribution are listed below:</ns0:p><ns0:p>1. To overcome the stability problem and to optimize the energy consumption, the DCCS algorithm distributes the network into a fixed count of clusters per round. Each cluster has a CH selected depending on the nodes' residual energy.</ns0:p><ns0:p>2. To overcome IoT device limitations, DCCS employs the Fog computing architecture, which is very near to the WSN nodes and hence conserves the communication energy.</ns0:p><ns0:p>3. In order to decrease the overall energy expenditure, DCCS divides the communication process into: (i) intra-cluster and (ii) inter-cluster communication. In the intra-cluster process, DCCS organizes the nodes in each cluster into a chain using the proposed chain construction steps. In the inter-cluster process, DCCS applies the same scenario used in the intra-cluster process to organize the Fog nodes into a chain, and a Fog leader is selected to communicate with the cloud.</ns0:p><ns0:p>4.
To enhance CS data gathering and reconstruction process, DCCS allows the cloud to dynamically change the measurement matrix depending on network status.</ns0:p><ns0:p>5. Finally, to improve the reconstruction process in the cloud side, RSMP algorithm adds a random selection process to the forward step, which give a chance for all columns to be selected as estimated solution in each round.</ns0:p></ns0:div>
<ns0:div><ns0:head>Compressive Sensing Background</ns0:head><ns0:p>The CS method allows sampling and compression to be executed in one step, and this differentiates it from conventional compression techniques where sampling and compression are performed in separate steps <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>. In addition, the CS reconstruction strategy does not require any prior knowledge to successfully recover the actual data from the compressed samples <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>.</ns0:p><ns0:p>The general CS framework equation can be expressed as follows:</ns0:p><ns0:formula xml:id='formula_0'>y = Φx,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where y ∈ R^m is the compressed samples vector with m << n, Φ is an m×n CS matrix, which in most CS methods is a random matrix such as a Bernoulli or Gaussian distribution matrix, and x ∈ R^n is the signal vector. In this system, ||x||_0 = s < m < n <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref><ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>.</ns0:p></ns0:div>
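To make Eq. (1) concrete, the short Python sketch below compresses a synthetic s-sparse signal with a Gaussian measurement matrix; the dimensions n, m and the sparsity s are illustrative assumptions rather than values fixed by the scheme.

```python
import numpy as np

# Illustrative sketch of the CS model y = Phi @ x (Eq. 1); dimensions are assumptions.
rng = np.random.default_rng(0)
n, m, s = 256, 128, 10                       # signal length, measurements (m << n), sparsity

x = np.zeros(n)                              # s-sparse signal: only s non-zero entries
support = rng.choice(n, size=s, replace=False)
x[support] = rng.normal(size=s)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = Phi @ x                                  # compressed samples vector, y in R^m

print(y.shape)                               # (128,) -- m values transmitted instead of n
```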
<ns0:div><ns0:head n='3.1'>CS Signal Recovery</ns0:head><ns0:p>Consider the CS scenario which requires reconstructing a large, sparse signal from a few available measurement coefficients. One of the simplest ways to reconstruct the signal from its measurements in Eq. (<ns0:ref type='formula' target='#formula_0'>1</ns0:ref>) is to solve the L_0 minimization problem, where the L_0 norm counts the non-zero entries, so the signal reconstruction problem becomes:</ns0:p><ns0:formula xml:id='formula_1'>x̂ = arg min_x ||x||_0 such that y = Φx<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Even though this works well theoretically, the problem is computationally NP-hard: it is computationally hard to find a solution to the problem defined by Eq. (2) for an arbitrary vector or matrix. However, the CS framework provides efficient alternate solutions to Eq. (<ns0:ref type='formula' target='#formula_1'>2</ns0:ref>) by using Basis Pursuit (BP) <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref> or Greedy Pursuit (GP). Examples of Greedy Pursuit include Orthogonal Matching Pursuit (OMP) <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>, Stagewise Orthogonal Matching Pursuit (StOMP) <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> and ROMP <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>.</ns0:p></ns0:div>
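For orientation, a minimal greedy-pursuit sketch in the spirit of OMP is given below; it is not the implementation used in this paper, and the fixed-sparsity stopping rule is an assumption chosen for brevity.

```python
import numpy as np

def omp_sketch(Phi, y, s):
    """Greedy recovery sketch in the spirit of OMP: pick the column most correlated
    with the residual, then re-fit by least squares over the accumulated support."""
    n = Phi.shape[1]
    support = []
    r = y.copy()
    coef = np.array([])
    for _ in range(s):
        corr = Phi.T @ r                              # matched-filter correlations
        support.append(int(np.argmax(np.abs(corr))))  # index of largest magnitude
        A = Phi[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # solve the LS problem
        r = y - A @ coef                              # update the residual
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat
```

With the Phi and y from the previous sketch, omp_sketch(Phi, y, s) returns an estimate of x.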
<ns0:div><ns0:head>Proposed Scheme</ns0:head><ns0:p>Recently, IoT technologies have attracted many researchers in the area of wireless networks. However, due to the energy constraints of sensors, formulating effective data aggregation strategies and managing huge amount of information are regarded as the major challenges faced by IoT technologies. To address these problems, we introduce a new method using CS integrated with an efficient routing scheme. The proposed work consists of two algorithms: 1)</ns0:p><ns0:p>Deterministic Clustering using CS Protocol (DCCS): During DCCS, the sensor network is converted into various clusters. A CH from each cluster is selected according to residual energy of nodes within the cluster. We assume that DCCS organizes each cluster into chain to start CS based data gathering. Moreover, it allows cloud to dynamically change the measurement matrix if it is not suitable for the network.</ns0:p></ns0:div>
<ns0:div><ns0:head>2)</ns0:head><ns0:p>Random Selection Matching Pursuit (RSMP): RSMP is proposed for data reconstruction. It adds a random selection during the column selection to increase the chance of finding the correct columns in each round and improve the reconstruction performance. In the next subsections, we describe the network assumptions and the two algorithms in detail.</ns0:p><ns0:p>Network Assumptions: In this work, our proposed scheme is designed according to the following assumptions:</ns0:p><ns0:p>1. The network comprises a group of nodes with the same transmission range.</ns0:p><ns0:p>2. Each node belongs to one of two classes: normal nodes or advanced nodes.</ns0:p><ns0:p>3. Fog Nodes (FNs) have a higher energy level than normal and advanced nodes.</ns0:p><ns0:p>4. Finally, we use the same energy parameters as used in <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. To send a message of size l bits to a distance d, the power consumed by the radio is:</ns0:p><ns0:formula xml:id='formula_2'>E_Tx(l, d) = l·E_elec + l·ε_fs·d², if d < d_0; E_Tx(l, d) = l·E_elec + l·ε_mp·d⁴, if d ≥ d_0<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>To receive this message, the energy expended by the radio is E_Rx(l) = l·E_elec (4), where the radio electronics dissipate E_elec = 50 nJ/bit, the free-space amplifier energy is ε_fs = 10 pJ/bit/m², and ε_fs/ε_mp denote the free-space and multi-path amplifier energies, respectively. The initial energy associated to super nodes is 2 J, for advanced nodes it is 1.25 J and for normal nodes it is 0.5 J.</ns0:p></ns0:div>
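The radio model of Eqs. (3) and (4) can be coded directly, as in the sketch below; E_elec and ε_fs follow the values quoted above, while the ε_mp value and the crossover distance d_0 = sqrt(ε_fs/ε_mp) are common assumptions not stated explicitly in this section.

```python
import math

E_ELEC = 50e-9       # J/bit, radio electronics (value quoted in this section)
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier (value quoted in this section)
EPS_MP = 0.0013e-12  # J/bit/m^4, multi-path amplifier (assumed; not given in the text)
D0 = math.sqrt(EPS_FS / EPS_MP)   # assumed crossover distance between the two amplifier models

def e_tx(l_bits, d):
    """Energy to transmit l bits over distance d, following Eq. (3)."""
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def e_rx(l_bits):
    """Energy to receive l bits, following Eq. (4)."""
    return l_bits * E_ELEC
```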
<ns0:div><ns0:head n='4.2'>DCCS Algorithm</ns0:head><ns0:p>In DCCS algorithm, heterogeneous WSN is considered in which each of the distributed sensor nodes belong to any of the three classes: normal nodes, advanced nodes (possess relatively high energy when compared to the normal ones) and Fog Nodes (FN) with a higher level of energy than normal and advanced nodes and they are positioned at predetermined locations in the WSN. DCCS succeeds to achieve a balance in total energy expenditure between nodes in every round which can lead to extension of network lifespan. The working of DCCS algorithm is presented in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. DCCS algorithm comprises of two important phases: (1) Setup phase and (2) Data compression phase.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.1.'>Setup Phase</ns0:head><ns0:p>DCCS executes this phase only once in the first round. The basic aim of this phase is to collect all sensor data X with non-CS compression in the Fog nodes with possible minimum energy consumption of sensors. To achieve that, this phase consists of four steps: CH Selection, Clusters Construction, Fog Node Selection and Learning. We describe these steps in detail as follows:</ns0:p></ns0:div>
<ns0:div><ns0:head>A) Step 1: CH Selection</ns0:head><ns0:p>This step adopts the same idea as proposed in <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>, where the CH selection process depends only on the residual energy (RE) of the nodes. In the DCCS algorithm, the Fog leader node selects a fixed number of nodes (n_ch) to be CHs depending on the RE of every node, such that priority is given to the nodes with the highest RE (the fog leader node selection process will be described later in Section 4.1.2.A). The value n_ch is predetermined by the cloud and can be estimated by the algorithm in <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>. After the first round, each cluster is responsible for selecting a new CH dynamically. This scenario reduces the cost of searching for and selecting a new CH. The selected CHs transmit their own information to all other (non-CH) nodes. The non-CH nodes then select the nearby CH to start the Clusters Construction step.</ns0:p></ns0:div>
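A minimal sketch of this residual-energy-based CH selection and the subsequent join step is shown below; the array layout of node positions and residual energies is a hypothetical interface, not something prescribed by DCCS.

```python
import numpy as np

def select_cluster_heads(positions, residual_energy, n_ch):
    """Sketch: pick the n_ch nodes with the highest residual energy as CHs and
    assign every other node to its nearest CH."""
    ch_idx = np.argsort(residual_energy)[::-1][:n_ch]      # highest-RE nodes first
    members = {int(c): [] for c in ch_idx}
    for i in range(len(positions)):
        if i in ch_idx:
            continue
        d = np.linalg.norm(positions[ch_idx] - positions[i], axis=1)
        members[int(ch_idx[np.argmin(d)])].append(i)       # join the nearest CH
    return members
```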
<ns0:div><ns0:head>B) Step 2: Clusters Construction</ns0:head><ns0:p>Once the selected CHs advertise themselves as CHs, the non-CH nodes start to construct clusters by selecting the closest cluster head Ci, where i = 1, 2, …, n_ch, and sending a join-request message (JRM) to it. This JRM contains: (1) the node identification (Node-ID), the selected CH identification (CH-id), the node residual energy (Node-RE), and the node location (Node-Loc). DCCS converts the WSN into n_ch clusters, where each cluster has a CH and a set of cluster member (CM) nodes. In order to decrease the whole-network power utilization for transmitting data per round in each cluster, the DCCS algorithm organizes the member nodes within each cluster into a chain. For all clusters, each CH Ci, i = 1, 2, …, n_ch, applies the following Initialization and Update steps to construct the ChainList(s) for its nodes. The procedure for this step is shown in Fig. <ns0:ref type='figure'>2</ns0:ref> (Clusters Construction Algorithm), where n_CM^i represents the total number of members of cluster Ci and dis_j^min = min_{j < k ≤ n_CM^i} dis(c_j, c_k) denotes the smallest distance between c_j and the other consecutive nodes c_k.</ns0:p></ns0:div>
<ns0:div><ns0:head>Initialization</ns0:head><ns0:p>Step: In each cluster, Ci uses its member nodes information to create the ChainListi, where ChainListi = [c0, c1, …, clast-1, clast] by adding the nearest member node c0 to it. It then updates the ChainListi with the nearest unselected member node (c1) to node c0.</ns0:p></ns0:div>
<ns0:div><ns0:head>Update</ns0:head><ns0:p>Step: After that, Ci holds the nearest unselected neighbor node cj to node c1 in waiting, to decide whether it will be placed in ChainListi or not, by comparing the distance between c1, cj and any consecutive nodes in ChainListi. If the distance between c1 and cj is less than D, where D is the distance between cj and any consecutive node, then Ci adds cj to the end of ChainListi. Otherwise, cj will be added between the consecutive nodes that have the least distance to cj; e.g., if cr and ck are consecutive nodes in ChainListi and if dis(cj, clast) > dis(cj, cr) and dis(cj, clast) > dis(cj, ck), then node cj will be inserted between cr and ck. Otherwise, node cj will be inserted at the end of ChainListi after node clast, where clast is the last node of ChainListi and dis(cj, ck) is the distance between node cj and node ck. Ci repeats the previous Update Step to include all its members in ChainListi. If a member node dies in a ChainListi, then Ci will reconstruct the chain to bypass the dead node.</ns0:p></ns0:div>
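The Initialization and Update steps can be approximated by the greedy insertion sketch below; it follows the description above only loosely (nearest member first, then append or insert next to the closest consecutive pair), so it should be read as an illustration rather than as the exact procedure of Fig. 2.

```python
import numpy as np

def build_chain(ch_pos, member_pos):
    """Greedy chain-construction sketch: start from the member closest to the CH,
    then repeatedly place the nearest unselected member either at the end of the
    chain or between the consecutive pair it is closest to."""
    unused = list(range(len(member_pos)))
    first = min(unused, key=lambda j: np.linalg.norm(member_pos[j] - ch_pos))
    chain = [first]
    unused.remove(first)
    while unused:
        last = chain[-1]
        j = min(unused, key=lambda k: np.linalg.norm(member_pos[k] - member_pos[last]))
        unused.remove(j)
        best_gap, best_pos = None, None
        for p in range(len(chain) - 1):
            gap = (np.linalg.norm(member_pos[j] - member_pos[chain[p]]) +
                   np.linalg.norm(member_pos[j] - member_pos[chain[p + 1]]))
            if best_gap is None or gap < best_gap:
                best_gap, best_pos = gap, p + 1
        end_cost = np.linalg.norm(member_pos[j] - member_pos[last])
        if best_gap is not None and best_gap < end_cost:
            chain.insert(best_pos, j)      # insert between the closest consecutive pair
        else:
            chain.append(j)                # otherwise append after the current last node
    return chain
```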
<ns0:div><ns0:p>By applying the previous steps, each node will send and receive over the minimum possible distance. Thus, DCCS can save the power of each node.</ns0:p></ns0:div>
<ns0:div><ns0:head>C)</ns0:head><ns0:p>Step 3: Fog Node Selection: each CH selects the nearest fog node for transmitting its data. Fog nodes receive the data transmitted by the CHs, perform aggregation operation and deliver the data to the cloud using a routing technique. D)</ns0:p><ns0:p>Step 4: Learning process: Measurement matrix selection is considered as one of the most important processes in the CS method due to its impact on nodes data where this matrix is used by sensor nodes to compress their data and is used by the cloud to reconstruct the sensor data. Incorrect selection may lead to large data loss; hence, selection of proper measurement matrix is crucial.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 4. Flow chart of DCCS Data Compression Phase</ns0:head><ns0:p>Each fog node generates this matrix using a random seed ξ, and then broadcasts ξ to the whole WSN. For seed selection process, DCCS applies the following scenario: DCCS starts seed estimation process by learning step. During the intra-cluster process, the cluster head starts to collect data X by a non-CS from its chain members and then fuse these data. Then, using intercluster communication process, data is combined by Fog nodes towards Fog leader node and send to cloud by Fog leader node. Then, the cloud starts to find the best ξ that gives minimum error. The cloud uses this minimum error as threshold β. Finally, the cloud sends ξ to the entire network to use during Data compression phase.</ns0:p></ns0:div>
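One way to picture the learning step is as a small search over candidate seeds, as sketched below; the candidate list, the Gaussian matrix construction and the reconstruction routine passed in are assumptions used only to illustrate how ξ and the threshold β could be obtained.

```python
import numpy as np

def learn_seed(X, m, reconstruct, candidate_seeds):
    """Learning-step sketch: for each candidate seed, build the measurement matrix it
    generates, reconstruct the raw data X from its compressed samples, and keep the
    seed with the smallest error. That minimum error plays the role of the threshold beta."""
    n = X.shape[0]
    best_seed, best_err = None, np.inf
    for seed in candidate_seeds:
        Phi = np.random.default_rng(seed).normal(size=(m, n)) / np.sqrt(m)
        x_hat = reconstruct(Phi, Phi @ X)       # recover X from its compressed samples
        err = np.linalg.norm(X - x_hat)
        if err < best_err:
            best_seed, best_err = seed, err
    return best_seed, best_err                  # (xi, beta)
```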
<ns0:div><ns0:head n='4.2.2.'>Data Compression Phase</ns0:head><ns0:p>DCCS repeats this phase starting from the second round. This phase consists of four steps: CS based data gathering within intra-cluster (cluster member to CH) and inter-cluster (Fog node to cloud), Reconstruction, Dynamic Re-Generation of Random Seed and CH rotation. At the end of this phase, DCCS reuses Algorithm 1 to create the cluster with the new inputs (new CHs will be the output from this phase). The details of these steps are illustrated below: A) CS based data gathering As described in the previous steps, there are NCH clusters with CH Ci and chain member nodes organized in ChainListi such that each ChainListi = [c0, c1, …, clast-1, clast]. Intra-cluster (from cluster member to CH): DCCS starts CS gathering in each intra-cluster as follows: the last node clast in the ChainListi uses the global seed ξ received from the BS to generate αclast. The clast node computes its compress vector (measurement) ylast = αc_last dc_last , where dc_last is the reading of sensor clast, and then transmits the measurement yc_last to its previous neighbor node clast-1 in the ChainListi. After that, node clast-1 uses the same global seed ξ to generate αc_(last-1) and compute its measurement yc_(last-1) = αc_(last-1) dc_(last-1) and then delivers the summation vector yc_last + yc_(last-1) to the previous node clast-2. Once clast-2 receives yc_last + yc_(last-1), it computes its value yclast-2, adds it to yc_last + yc_(last-1) and then transmits the summation value to previous node in ChainListi and so on till the CH Ci. Now each CH Ci has already received the compressed vector yi = [yc_0, yc_1, …, yc_last] from their corresponding cluster members. Then, each CH sends the compressed samples to the nearest Fog Node. Inter-cluster (from Fog nodes to cloud): Through inter-cluster communication, DCCS applies the same scenario used in Algorithm 1 to organize the Fog nodes in chain and consider them as CMs of a cluster with the cloud as CH. The communication among fog nodes is restricted to adjacent FNs in each round, and the fog leader node will be selected to deliver the gathered data to the cloud. As the scenario is same as in Algorithm 1, Fog nodes (FNs) are organized into a chain to deliver the information among FNs and to deliver the aggregated data to the cloud. The formed chain allows the nodes to interact and exchange information with their chain neighbors. CHs deliver their collected data to FNs, and the data will be fused at the FNs. Finally, fog leader node will deliver the fused data to the cloud. The selection of fog leader node depends on their energy and the distance to the cloud. Formation of a chain among FNs and choosing one among them as fog leader to send the data to the cloud could save more energy in the network than sending the data independently by each FN to the cloud. The communication process for both inter-cluster and intra-cluster are shown in Fig. <ns0:ref type='figure' target='#fig_0'>3</ns0:ref></ns0:p></ns0:div>
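The intra-cluster gathering described above boils down to each chain node adding its weighted reading to the running sum it receives and forwarding the result; the sketch below illustrates this, where deriving the per-node weights from the shared seed with a simple Gaussian draw (and the node_ids parameter) is an assumption about the exact generator.

```python
import numpy as np

def chain_gather(readings, node_ids, seed, m):
    """Sketch of CS gathering along one chain: node j generates its own vector alpha_j
    from the shared seed, multiplies it by its reading d_j, adds the result to the
    partial sum received from its successor, and forwards it. The CH ends up with
    y = sum_j alpha_j * d_j, i.e. m values instead of one value per node."""
    partial = np.zeros(m)
    for j, d_j in zip(node_ids, readings):                    # walk the chain towards the CH
        alpha_j = np.random.default_rng((seed, j)).normal(size=m)
        partial = partial + alpha_j * d_j                     # what node j forwards
    return partial                                            # compressed vector at the CH
```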
<ns0:div><ns0:head>B) Reconstruction Step</ns0:head><ns0:p>When the cloud gets the compressed vectors y = [y1, y2, y3, …, yi], where i = [1, 2, …, n_ch], transmitted by the FN leader, the cloud generates the CS matrix depending on the predefined random seed ξ. After that, the cloud reconstructs the original data x0 of every cluster. In order to improve this step, in this paper, Random Selection Matching Pursuit (RSMP) is proposed. The working of RSMP will be described in Section 4.3.</ns0:p></ns0:div>
<ns0:div><ns0:head>C) Dynamic Re-Generation of Random Seed</ns0:head><ns0:p>The main idea of this step is that DCCS gives the ability to dynamically change the CS matrix depending on the network status and the number of nodes that are still alive, instead of using the same CS matrix generated during the Setup phase in all rounds. The problem of using a fixed CS matrix is that, in each round, every sensor node transmits and receives a fixed-size vector whatever the count of alive nodes in that round (which should instead vary according to the count of alive nodes); this leads to an increase in the average power consumption and also negatively affects the reconstruction process. To overcome this problem, DCCS dynamically changes the CS matrix whenever the network status changes, i.e., the CS matrix size is reduced in accordance with the number of alive nodes. In this situation, DCCS can successfully decrease the overall power consumption. The cloud can obtain the dead node count in every cluster from the fog nodes through the CHs, where each CH can simply use a HELLO message to identify the dead node count in its cluster in each round. The working procedure of this step can be summarized as follows: the cloud compares the latest reconstructed data x' with X and decides whether to re-generate depending on the error value ε = ‖x − x'‖, where ‖·‖ is the L_n-norm. If ε goes beyond a predefined threshold β, which means that there is a change in network status, the cloud regenerates a new ξ; otherwise there is no need to change the last seed.</ns0:p></ns0:div>
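The per-round decision itself is a one-line check, sketched below with an L2 norm assumed for ‖·‖.

```python
import numpy as np

def needs_new_seed(x_original, x_reconstructed, beta):
    """Regenerate the seed only when the reconstruction error exceeds beta,
    which signals a change in network status (L2 norm assumed here)."""
    return np.linalg.norm(x_original - x_reconstructed) > beta
```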
<ns0:div><ns0:head>D) Cluster Head Rotation</ns0:head><ns0:p>CHs check the piggybacked CM-REs information to make decision on whether to continue as CHs or give up their CH roles to any other node in their respective clusters based on RE and assign these nodes as the new CHs. This step prevents WSNs from dying earlier by balancing the energy consumption. The whole process of the data compression phase can be seen in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>RSMP Algorithm</ns0:head><ns0:p>In this section, we propose a new reconstruction technique called Random Selection Matching Pursuit (RSMP). RSMP can be utilized by the cloud to recover the sensor readings. It is a reversible greedy algorithm in the sense that it has a reversible construction: the support set can be pruned (backward step) to eliminate the unreliable elements chosen in the past (forward step). Before presenting the RSMP algorithm, we define some operations which we use in the algorithm, as provided below:</ns0:p><ns0:p>resid(y, x) ≜ y − Φx (5)</ns0:p><ns0:p>supp(x; k) ≜ {the set of indices that corresponds to the k largest amplitude components of x} (6)</ns0:p><ns0:p>rand(x; k) ≜ {the set of indices that corresponds to k randomly chosen components of x} (7)</ns0:p><ns0:p>During the forward step, most CS reconstruction greedy algorithms use the Matched Filter Detection (MF) operation Φ'y to calculate the correlation between the columns of the matrix Φ and the sampled measurement vector y. Then, Eq. (6) is used to select the set of indices that corresponds to the n largest amplitude components of Φ'y. The size of n may vary for each algorithm, for example n = 1, s, and 2s in the Orthogonal Matching Pursuit (OMP) <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>, Subspace Pursuit (SP) <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref> and COSAMP <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref> algorithms, respectively. However, as a result of measurement noise, the MF does not usually give the indices of all correct columns. Indeed, the correct indices may not be selected because they give small correlation according to Eq. (6). To solve this drawback, RSMP adds a random technique to the selection process in the forward step to increase the probability of finding the correct column indices in each iteration. Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref> provides the working of the RSMP algorithm.</ns0:p><ns0:p>The proposed algorithm includes four steps: Initialization, Forward, Backward and Update, as detailed below:</ns0:p><ns0:p>A) Initialization: The proposed algorithm initializes all parameters as follows: initial approximation E_0 = 0, residual r_0 = y, and estimated set T = φ.</ns0:p><ns0:p>B) Forward: The main contribution of the RSMP algorithm is in this step. Most MP algorithms use the n largest in-amplitude components from the MF, where n depends on the algorithm, as a first estimation of the estimated set T. However, they depend only on the high-correlation columns in the MF equation without taking into consideration the others, which has a negative effect on the reconstruction performance, especially when the sparsity level increases. Due to the measurement noise, the correct columns do not usually give high correlation during the MF process. The RSMP algorithm uses a simple way to improve this step. Instead of choosing the indices corresponding to the largest amplitude components in the set F only, in each iteration RSMP selects s + q columns, where q is the random selection size. RSMP firstly selects the largest s components in F (H = supp(F, s)) to create set H, and then uses Eq. (7) to select q random components from set F (R = rand(F, q)), creating set R to overcome the challenging case in which the correct columns do not give high correlation. Indeed, the probability of finding the correct columns in both cases is increased. RSMP sets q = m/2 − s according to the fact that the CS signal recovery problem can be solved if s ≤ m/2, where s is the sparsity level <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>. Finally, RSMP uses the union set U = H ∪ R between set H and set R to expand the estimated set T and start the next step.</ns0:p><ns0:p>C) Backward: We can call this step the correction step because, through it, the RSMP algorithm eliminates column indices which were wrongly selected in the last step, i.e., the technique updates the approximation E_k = W|_s by removing the column indices that have the least values in set W.</ns0:p><ns0:p>D) Update: The samples are updated using Eq. (5) as r_k = resid(y, E_k). There are two situations that terminate our algorithm: 1) the algorithm will stop when the residue norm ||r_k||_2 is lower than β, which is the termination parameter; the selection of β is based on the noise level; 2) if the number of iterations exceeds k_max, where k_max is the maximum count of iterations. At the end, E_k holds the corresponding non-zero values.</ns0:p></ns0:div>
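The following Python sketch puts the four steps together; it is an illustrative reading of RSMP rather than the authors' code, and details such as drawing the q random indices from all n columns and the choice of least-squares solver are assumptions.

```python
import numpy as np

def rsmp_sketch(Phi, y, s, beta, k_max=50, seed=None):
    """RSMP-style reconstruction sketch: the forward step unions the s largest
    matched-filter correlations with q = m//2 - s randomly chosen indices, the
    backward step prunes the least-squares solution back to its s largest entries,
    and the residual is updated until it is small enough or k_max is reached."""
    rng = np.random.default_rng(seed)
    m, n = Phi.shape
    q = max(m // 2 - s, 0)
    T = np.array([], dtype=int)                # estimated support set
    x_hat = np.zeros(n)
    r = y.copy()
    for _ in range(k_max):
        F = Phi.T @ r                                       # matched-filter correlations
        H = np.argsort(np.abs(F))[::-1][:s]                 # s largest components
        R = rng.choice(n, size=q, replace=False)            # q random components
        T = np.union1d(T, np.union1d(H, R)).astype(int)     # expand the support
        W, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)   # LS over the expanded support
        keep = np.argsort(np.abs(W))[::-1][:s]              # backward: keep the s largest
        T = T[keep]
        x_hat = np.zeros(n)
        x_hat[T] = W[keep]
        r = y - Phi @ x_hat                                 # update residual, resid(y, E_k)
        if np.linalg.norm(r) < beta:                        # termination on small residual
            break
    return x_hat
```

The random set R is what distinguishes this sketch from a plain forward–backward pursuit: columns with small correlations still get occasional chances to enter the support.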
<ns0:div><ns0:head>Experiments</ns0:head><ns0:p>This section includes the results of simulation for analyzing the performance of our work. We divide this section into three parts: in the first part, DCCS technique is evaluated with reference to (i) network lifetime (first node die) and (ii) average energy consumption. In the second part, we analyze the RSMP reconstruction technique in comparison to Orthogonal Matching Pursuit (OMP), COSAMP, Forward-Backward Pursuit (FBP) <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>, Subspace Pursuit (SP) <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>, and E-OMP algorithms <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref>. Finally, the dynamic re-generation of random seed step is evaluated in terms of average power consumption and reconstruction error in the third part.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Evaluation of DCCS Algorithm</ns0:head><ns0:p>In this section, we describe the details of the simulations performed in the MATLAB environment. The network region has a size of 100 m × 100 m, and the BS is located at the region center. The node count is varied from 50 to 200 nodes with an incremental factor of 50. This section is subdivided based on two cases: homogeneous network and heterogeneous network.</ns0:p><ns0:p>Performance Metrics: We use the following performance metrics to analyze and compare the proposed algorithm performance with baseline algorithms:</ns0:p><ns0:p>(1) Average Energy Consumption: It is given by the total energy expended by the entire set of nodes divided by the total number of nodes during their operations like sending, forwarding, and receiving. The average energy consumed for each round can be estimated as:</ns0:p><ns0:formula xml:id='formula_5'>E_average = (∑_{i=1}^{N} E_i(r)) / r<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where N denotes the node count, E_i(r) is the energy consumed by node i, and r refers to the round.</ns0:p><ns0:p>(2) Network lifetime: We measure the lifetime of the network according to the first node death.</ns0:p></ns0:div>
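Both metrics can be computed from a simple per-round energy log, as in the sketch below; the log layout (one row per round, one column per node) is an assumed bookkeeping convention.

```python
import numpy as np

def average_energy_per_round(energy_log):
    """energy_log[r, i] = energy consumed by node i during round r (assumed layout).
    Returns the mean total energy spent per round, in the spirit of Eq. (8)."""
    return energy_log.sum(axis=1).mean()

def first_node_death(initial_energy, energy_log):
    """Round index at which the first node exhausts its initial energy, i.e. the
    network-lifetime metric; returns None if no node dies in the logged rounds."""
    remaining = initial_energy - np.cumsum(energy_log, axis=0)
    dead_rounds = np.where((remaining <= 0).any(axis=1))[0]
    return int(dead_rounds[0]) if dead_rounds.size else None
```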
<ns0:div><ns0:head n='5.1.1.'>Case 1: Homogeneous Network</ns0:head><ns0:p>In this case, DCCS algorithm performance is evaluated in comparison with DMC-CS <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> and EDACP-CS <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. We use the same energy parameters as used in <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. To send a message of size lbits to a distance d, the power consumed by the radio, we use Eqs. ( <ns0:ref type='formula'>3</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>) in section 4.1. Fig. <ns0:ref type='figure'>6</ns0:ref> shows the lifetime for DCCS, EDACP-CS, and DMC-CS. In EDACP-CS and DMC-CS, the death of the first node is earlier than in DCCS, and also Fig. <ns0:ref type='figure'>6</ns0:ref> shows the potency of the DCCS algorithm in enhancing the lifetime of the network than compared to EDACP-CS and DMC-CS algorithms. The reason is that the DCCS uses a fixed count of CHs (NCH) per round, which leads to achieve better stability in energy utilization among the nodes, when compared to other algorithms. Additionally, in DCCS, the BS takes the role to select the CHs only in the first round and then the CHs change dynamically, which considerably decreases the overhead cost of computation associated with CH search when compared with others. DCCS reduces the transmitted CS measurement samples in each cluster which dynamically depends on the network status rather than using a fixed number of CS measurement samples in each round as in other algorithms. Fig. <ns0:ref type='figure'>6</ns0:ref>. Network lifetime in DCCS, DMC-CS and EDACP-CS Fig. <ns0:ref type='figure'>7</ns0:ref>. Count of alive nodes as a function of number of rounds Fig. <ns0:ref type='figure'>7</ns0:ref> depicts the lifetime and the count of alive nodes in the network per round for DCCS, EDACP-CS and DMC-CS. It clearly shows that the first and last node death in DCCS happen several rounds later than those of EDACP-CS and DMC-CS, which means that DCCS minimizes the energy utilization among all sensors. This is because DCCS reduces the power consumption of each node by organizing the nodes of each cluster in a chain such that each node sends and receives only from the nearest node, which is not considered by EDACP-CS algorithm. During the chain construction, DCCS rearranges all nodes in the chain when it adds a new node to the chain list to take into consideration the distances between that node and the others in the chain, rather than simply adding the closest node as the last node of the chain like DMC-CS. From Fig. <ns0:ref type='figure' target='#fig_5'>8</ns0:ref>, it is evident that DCCS succeeds to decrease the average energy consumption when compared to EDACP-CS and DMC-CS algorithms. The main reason for this is due to the dynamic re-generation of CS matrix in DCCS, which is not considered in the other algorithms. Manuscript to be reviewed Computer Science 5.1.2. Case 2: Heterogeneous Network Here, we focus to evaluate the proposed algorithm performance in a heterogeneous network scenario. In this case, we make an assumption that the total network energy is 102 J, where the nodes are divided into advanced, intermediate and normal nodes according to their residual energy. DCCS performance is evaluated in comparison with ETSSEP <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref>, SEECP <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> and SILEACH <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> based on CS method. 
It's clear that DCCS still provides good performance with reference to network lifetime in terms of first node dies enhancement in comparison with ETSSEP, SEECP, and SILEACH algorithms as shown in Fig. <ns0:ref type='figure' target='#fig_6'>9</ns0:ref>. That is because the dynamic CS matrix regeneration process in DCCS gives it the ability to utilize CS matrix in an effective way to minimize the total transmitted data which leads to reduce the transmission energy expenditure. Whereas, the other algorithms use the same CS matrix in each iteration which may become inappropriate for the network after a number of iterations. The same effect can be noticed in Fig. <ns0:ref type='figure' target='#fig_1'>10</ns0:ref> where DCCS performs better than the other algorithms with reference to network lifetime in half-node death case. From Fig. <ns0:ref type='figure' target='#fig_1'>11</ns0:ref>, we can conclude that DCCS succeeds to minimize the total energy expenditure in comparison with the others. That is because DCCS divides the network into various clusters and inside every cluster, it uses the proposed chain construction algorithm to arrange the cluster members into a chain. In addition, DCCS uses the same proposed chain construction algorithm to organize the Fog nodes transmission to the Cloud. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.'>Evaluation of RSMP Algorithm</ns0:head><ns0:p>Here, we evaluate the RSMP reconstruction algorithm performance in comparison with OMP, COSAMP, SP, FBP and E-OMP. Firstly, we make use of the proposed algorithm to recover the signals captured from 54 sensors placed at the Intel Berkeley Research Lab. The entire experiment process is iterated 500 times on randomly generated s-sparse samples. Secondly, the RSMP algorithm is applied to reconstruct computer-generated signals whose non-zero coefficients are drawn from Uniform and Gaussian distributions. Finally, RSMP performance is measured over noisy signal observations. We have adopted the MATLAB environment for performing the simulations. The signal reconstruction performance is analyzed using a Gaussian matrix Φ of size m × n, where n = 256 and m = 128.</ns0:p><ns0:p>Performance Metrics: The RSMP algorithm performance in signal reconstruction is compared with other reconstruction algorithms with reference to the Average Normalized Mean Squared Error (ANMSE), which is the average ratio ‖x − x̃‖_2 / ‖x‖_2, where x and x̃ represent the original and reconstructed readings, respectively. We can see that the RSMP algorithm produces less error than COSAMP, OMP, E-OMP, FBP and SP. In summary, the RSMP algorithm improves the reconstruction process and gives better performance than the COSAMP, OMP, E-OMP, FBP and SP algorithms. This is because, in each iteration, RSMP gives the columns which do not give the largest values in the MF process a chance to be chosen.</ns0:p></ns0:div>
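ANMSE as defined above can be computed with the short helper below, averaging the normalized error ratio over the repeated trials.

```python
import numpy as np

def anmse(originals, reconstructions):
    """Average Normalized Mean Squared Error over a set of trials:
    mean of ||x - x_tilde||_2 / ||x||_2, the ratio defined above."""
    ratios = [np.linalg.norm(x - xt) / np.linalg.norm(x)
              for x, xt in zip(originals, reconstructions)]
    return float(np.mean(ratios))
```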
<ns0:div><ns0:head n='5.3.'>Evaluation of Dynamic Re-generation of Random Seed Step</ns0:head><ns0:p>In this part, network area is assumed to be 100m × 100m, having the sensor nodes count ranging from 50 to 200 nodes with an incremental factor of 50 and the BS is placed at (x=50, y=50). Performance Metrics: We call DCCS algorithm as DCCS-dynamic if it uses the proposed dynamic re-generation of random seed and DCCS-static otherwise. This section compares the performance of DCCS-dynamic and DCCS-static, with reference to the following: Average Energy Consumption and Average Normalized Mean Squared Reconstruction Error (ANMSE). During the reconstruction process, COSAMP <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref> algorithm is used to recover the data in each round. Fig. <ns0:ref type='figure' target='#fig_13'>20</ns0:ref> shows the performance of DCCS algorithm in both dynamic (DCCS-dynamic) and static (DCCS-static) mode in terms of number of alive nodes per round. According to DCCS-dynamic scenario, the number of measurement samples transmitted in intra or inter-cluster communication decreases while the count of dead nodes are increased. Moreover, DCCS-static uses a fixed CS matrix whatever the count of alive nodes per round. On the other hand, DCCS-dynamic uses the threshold β value with reference to the best reconstruction error and then compares the reconstruction error in each round with β. If the error is larger than β, the old matrix is considered as not suitable and therefore regenerates another one. </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The main objective of IoT components is to collect accurate information about any event. However, there are some challenges that hinder the way to attaining this objective, such as sensor battery constraints and dealing with large amounts of acquired data. To solve these problems, this research introduced a new CS scheme for IoT and explained how this scheme can be utilized to compress and reduce the overall data traffic through the network. The proposed work consists of two algorithms. The first one, the DCCS algorithm, converts the network into several clusters and organizes each cluster into a chain to start the CS data gathering. The second algorithm, RSMP, is used on the cloud side in order to reconstruct the original data successfully. In each round, the cloud checks the suitability of the measurement matrix to the network to decide whether to change it or not. The proposed work achieved our objectives of enhancing the IoT network lifetime and improving the reconstruction performance. Simulation results proved that our proposed algorithm is an effective data acquisition tool for decreasing the energy consumption in networks.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Fig. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1. DCCS Algorithm 4.1Network Assumptions In this work, our proposed scheme is designed according to the following assumptions:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Fig. 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig. 3. Inter and Intra cluster communication process</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Fig. 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig. 5. RSMP Algorithm</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Fig. 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Fig. 8. Average energy consumption in DCCS, EDACP-CS and DMC-CS.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Fig. 9. Network lifetime (First node dies) in DCCS, ETSSEP-CS, SEECP-CS and SILEACH-CS.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Fig. 10 .Fig. 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Fig. 10. Network lifetime (half of nodes die) in DCCS, ETSSEP-CS, SEECP-CS and SILEACH-CS. Fig. 11. Residual Energy in DCCS, ETSSEP-CS, SEECP-CS and SILEACH-CS.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Fig. 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Fig. 12. Intel temperature and humidity: A) Original temperature Signal, B) Reconstructed temperature signal, C) Original humidity Signal and D) Reconstructed humidity signal</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig. 14 .Fig. 15 .</ns0:head><ns0:label>1415</ns0:label><ns0:figDesc>Fig. 14. Reconstruction results over sparsity levels (Uniform Distribution) Fig. 15. Reconstruction results over sparsity level (Gaussian distribution) Reconstruction performance over different measurement vector lengths: This part of simulation aims to test RSMP reconstruction performance when different measurement vector lengths-m are used with two different CS matrices: Gaussian and Bernoulli distribution matrices as shown in Figs. 16 and 17 respectively. To achieve this aim, sparse signals taken from Uniform distribution having length n = 120 is utilized and m values from 10 to 60 with step size of 1. From those figures, we can understand that RSMP algorithm still provides the least ANMSE values when compared to other algorithms. Reconstruction over noisy signal: In this part, we add some noise equal to 10 -4 to the original Uniform as well as in Gaussian distribution signal where n = 256 and m = 128. The CS matrix Φ is drawn from the Gaussian distribution. The sparsity S levels are from 10 to 60 with step size 1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Fig. 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Fig. 16. Reconstruction results over Gaussian matrix for different lengths of m.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Fig. 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Fig. 17. Reconstruction results over Bernoulli matrix for different lengths of m.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Fig. 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Fig. 18. Reconstruction results for noisy Uniform sparse signals Fig. 19. Reconstruction results for noisy Gaussian sparse signals.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Fig. 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Fig. 20. Number of Alive nodes in DCCS-dynamic and DCCS-static</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,204.37,525.00,332.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,204.37,525.00,352.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,204.37,525.00,435.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,204.37,525.00,307.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,252.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,204.37,525.00,307.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,204.37,525.00,308.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,204.37,525.00,308.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,204.37,525.00,252.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,250.12,525.00,364.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,204.37,525.00,316.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,204.37,525.00,405.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,204.37,525.00,422.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,204.37,525.00,416.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,204.37,525.00,361.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,204.37,525.00,405.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,204.37,525.00,368.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,204.37,525.00,345.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>In this section, we use RSMP algorithm for reconstructing the signals obtained from Intel Berkeley Research lab.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>‖𝑥 -𝑥 ~‖2 ‖𝑥‖ 2</ns0:cell><ns0:cell>𝑥</ns0:cell><ns0:cell>𝑥 ~</ns0:cell></ns0:row><ns0:row><ns0:cell>reconstructed readings respectively.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>Experiments over real datasets:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b13'>13</ns0:ref>, we illustrate the distribution of relative reconstruction error for different reconstruction algorithms. It is evident that RSMP algorithm exceeds the performance of other greedy algorithms, i.e., the COSAMP, OMP, EOMP, FBP and SP respectively.Fig. 13. Reconstruction Performance of six different algorithms for temperature signals.Different coefficient distributions: In this part of simulation, Uniform and Gaussian distributions are utilized to draw the non-zero values of the sparse signal and the sparse level S ranges from 5 to 60. In Fig.14where the sparse signal's non-zeros values are taken from uniform distribution, RSMP algorithm has lower ANMSE comparing to COSAMP, FBP, OMP, E-OMP and SP. Moreover, ANMSE for RSMP algorithm appear to rise only when s >49 while it increases when s > 42, s ≥ 34, s ≥ 44, s ≥ 38 and s ≥ 41 for COSAMP, OMP, E-OMP, FBP and SP algorithms respectively as shown in Fig.14. Fig.15shows the results of ANMSE when the non-zero entries of sparse signal are taken from Gaussian distribution. From Fig.15, it is evident that RSMP algorithm still gives least ANMSE result when compared to COSAMP, OMP, EOMP, FBP and SP, as s > 59, s ≥ 46, s > 34, s > 49, s > 47 and s > 45, respectively.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Prof. Chan Hwang See,
First of all, we would like to take this opportunity to thank you for your kind support on the paper “cs-53682” entitled “Deterministic clustering based compressive sensing scheme for fog-supported heterogeneous wireless sensor networks”. We are also grateful to the editor and the anonymous reviewers for their valuable comments on the paper.
As per your suggestions, we have carefully taken care of all comments. For more details, please refer to Response to Reviewers and the revised version of the paper.
Should you have any further comments or suggestions, please do not hesitate to let us know.
Thank you very much again for all the valuable comments and kind support.
Yours sincerely,
The authors
COMMENTS FOR THE AUTHOR:
Reviewer #:
General comments
1. The authors answered most of the comments I raised. The paper has been restructured. Nevertheless, authors are strongly recommended to proofread this document; the writings may not encourage more readers,
Response: Thank you for the comments. The paper has been modified according to the reviewer’s comments.
2. Notations: To avoid confusion, authors should use unique types of characters to denote scalars, matrices and sets. For instance:
. Small letters for scalars: m; n; nCH; _ _ _
. Capital letters for matrices: M; X; Y; _ _ _
Special like H;W; _ _ _ for sets
Response: Thank you for the comments. We have added accordingly.
Comments on Proposed scheme
1. Correction: From line 356 to 357
Response: Thank you for the comments. We have added accordingly.
2. Correction: On line 411, correct the equation E = WS ; E and S are scalar, but W is a set
Response: Thank you for the comments. We have added accordingly
3. Comments: To avoid confusion, authors are recommended to use the operator _ to denote the multiplication by a scalar.
Response: Thank you for the comments. We have added accordingly
4. Correction: On line 437, the symbol E in equation (8) is confused with the one on line 411.
Response: Thank you for the comments. We have added accordingly
5. Correction: Correct the Update step of Algorithm 1 as proposed below
Response: Thank you for the comments. We have added accordingly, see the updated algorithm
" | Here is a paper. Please give your review comments after reading it. |
45 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Using prototype methods to reduce the size of training datasets can drastically reduce the computational cost of classification with instance-based learning algorithms like the k-Nearest Neighbour classifier. The number and distribution of prototypes required for the classifier to match its original performance is intimately related to the geometry of the training data. As a result, it is often difficult to find the optimal prototypes for a given dataset, and heuristic algorithms are used instead. However, we consider a particularly challenging setting where commonly used heuristic algorithms fail to find suitable prototypes and show that the optimal number of prototypes can instead be found analytically. We also propose an algorithm for finding nearly-optimal prototypes in this setting, and use it to empirically validate the theoretical results. Finally, we show that a parametric prototype generation method that normally cannot solve this pathological setting can actually find optimal prototypes when combined with the results of our theoretical analysis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>BACKGROUND</ns0:head><ns0:p>The k-Nearest Neighbour (kNN) classifier is a simple but powerful classification algorithm. There are numerous variants and extensions of kNN <ns0:ref type='bibr' target='#b2'>(Dudani, 1976;</ns0:ref><ns0:ref type='bibr' target='#b19'>Yigit, 2015;</ns0:ref><ns0:ref type='bibr' target='#b14'>Sun et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Kanjanatarakul et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Gweon et al., 2019)</ns0:ref>, but the simplest version is the 1NN classifier which assigns a target point to a class based only on the class of its nearest labeled neighbor. Unfortunately, the family of kNN classifiers can be computationally expensive when working with large datasets, as the nearest neighbors must be located for every point that needs to be classified. This has led to the development of prototype selection methods and generation methods which aim to produce a small set of prototypes that represent the training data <ns0:ref type='bibr' target='#b0'>(Bezdek and Kuncheva, 2001;</ns0:ref><ns0:ref type='bibr' target='#b17'>Triguero et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bien and Tibshirani, 2011;</ns0:ref><ns0:ref type='bibr' target='#b3'>Garcia et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b8'>Kusner et al., 2014)</ns0:ref>. Using prototype methods speeds up the kNN classification step considerably as new points can be classified by finding their nearest neighbors among the small number of prototypes. Prototype selection methods select a subset of real points to use as prototypes while prototype generation methods are not similarly restricted and instead create synthetic points (that are not necessarily found in the original data) to act as prototypes. Generating synthetic prototypes allows for more efficient representations so our analysis focuses specifically on the generation of optimal or near-optimal prototypes. The number of prototypes required to represent the training data can be several orders of magnitude smaller than the number of samples in the original training data. <ns0:ref type='bibr' target='#b13'>Sucholutsky and Schonlau (2020)</ns0:ref> showed that by assigning label distributions to each prototype, the number of prototypes may even be reduced to be less than the number of classes in the data. This result was demonstrated on a synthetic dataset consisting of N concentric circles where the points on each circle belong to a different class. The authors found that commonly used prototype generation methods failed to find prototypes that would adequately represent this dataset, suggesting that the dataset exhibits pathological geometries.</ns0:p><ns0:p>Further analysis revealed that the soft-label kNN variant required only a fixed number of prototypes to separate any number of these circular classes, while the number of prototypes required by 1NN was shown to have an upper bound of about tπ for the t th circle as can be seen in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. However, this upper bound did not account for the possibility of rotating prototypes on adjacent circles as a method of reducing the number of required prototypes. We explore this direction to analytically find tighter bounds and an approximate solution for the minimal number of prototypes required for a 1-Nearest Neighbor classifier to perfectly separate each class after being fitted on the prototypes. 
In particular, we show that this problem actually consists of two sub-problems, or cases, only one of which is closely approximated by the previously proposed upper bound. We also propose an algorithm for finding nearly-optimal prototypes and use it to empirically confirm our theoretical results.</ns0:p></ns0:div>
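For readers unfamiliar with the setting, here is a minimal sketch (Python with scikit-learn; the prototype coordinates are illustrative and not taken from the paper) of classifying with a 1NN fitted only on a handful of prototypes instead of the full training set:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Four hand-picked prototypes standing in for two classes of many training points.
prototypes = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 0, 1, 1])

# A 1NN classifier fitted on the prototypes labels new points by their nearest prototype.
clf = KNeighborsClassifier(n_neighbors=1).fit(prototypes, labels)
print(clf.predict([[0.1, 0.4], [0.9, 0.7]]))  # -> [0 1]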
<ns0:div><ns0:head n='2'>THEORY</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Preliminaries</ns0:head><ns0:p>We first proceed to formalize the problem of having a 1-NN classifier separate the classes after being fitted on a minimal number of prototypes. Consistent with <ns0:ref type='bibr' target='#b13'>Sucholutsky and Schonlau (2020)</ns0:ref>, we define the t th circle as having radius tc for t = 0, 1, . . .. Because each class is fully separated from non-adjacent classes by its adjacent classes, it is sufficient to consider arbitrary pairs of adjacent classes when trying to find the optimal prototypes. For the rest of this section, we consider arbitrarily selected circles t and t + 1 with the following radii.</ns0:p><ns0:formula xml:id='formula_0'>r 1 = tc, r 2 = (t + 1)c, t ∈ N 0 , c ∈ R >0 ,</ns0:formula><ns0:p>Because of the symmetry of each circle, we require that the prototypes assigned to each circle be spaced evenly around it. We assume that circle t and t + 1 are assigned m and n prototypes respectively. We define θ * as the angle by which the prototypes on circle t + 1 are shifted relative to the prototypes on circle t. We record the locations of these prototypes in Cartesian coordinates.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54902:1:2:NEW 18 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science</ns0:p><ns0:formula xml:id='formula_1'>a i = (r 1 cos( 2πi m ), r 1 sin( 2πi m )), i = 1, ..., m b j = (r 2 cos( 2π j n + θ * ), r 2 sin( 2πi n + θ * )), i = 1, ..., n</ns0:formula><ns0:p>We can then find the arc-midpoints of these prototypes as follows.</ns0:p><ns0:formula xml:id='formula_2'>a * i = (r 1 cos( 2πi + π m ), r 1 sin( 2πi + π m )), i = 1, ..., m b * j = (r 2 cos( 2π j + π n + θ * ), r 2 sin( 2πi + π n + θ * )), i = 1, ..., n</ns0:formula><ns0:p>Letting d(x, y) be the Euclidean distance between points x and y, we find the distances between prototypes on the same circle.</ns0:p><ns0:formula xml:id='formula_3'>d a (m) = d(a i , a * i ) = 2t 2 c 2 − 2t 2 c 2 cos( π m ) d b (n) = d(b i , b * i ) = 2(t + 1) 2 c 2 − 2(t + 1) 2 c 2 cos( π n )</ns0:formula><ns0:p>We also find the shortest distance between prototypes of circle t and arc-midpoints of circle t + 1 and vice-versa.</ns0:p><ns0:formula xml:id='formula_4'>d * 1 (m, n, θ * ) = min i, j {d(a i , b * j ) i=1,...,m j=1,...,n } = min i, j { t 2 c 2 + (t + 1) 2 c 2 − 2t(t + 1)c 2 cos( 2πi m − 2π j + π n − θ * ) i=1,...,m j=1,...,n } d * 2 (m, n, θ * ) = min i, j {d(a * i , b j ) i=1,...,m j=1,...,n } = min i, j { t 2 c 2 + (t + 1) 2 c 2 − 2t(t + 1)c 2 cos( 2πi + π m − 2π j n − θ * ) i=1,...,m j=1,...,n }</ns0:formula><ns0:p>The necessary and sufficient condition for the 1-NN classifier to achieve perfect separation is that the distance between prototypes and arc-midpoints assigned to the same circle, be less than the minimal distance between any arc-midpoint of that circle and any prototype of an adjacent circle. This must hold for every circle. Given these conditions and some fixed number of prototypes assigned to the t th circle, we wish to minimize n by optimizing over θ * .</ns0:p><ns0:p>Given m,t min</ns0:p><ns0:formula xml:id='formula_5'>θ * n s.t. d * 1 (m, n, θ * ) > d b (n) d * 2 (m, n, θ * ) > d a (m)</ns0:formula><ns0:p>Inspecting the inequalities, we see that they can be reduced to the following system which we note is now independent of the constant c . Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_6'>− 2t + 1 2(t + 1) > t cos( 2πi m − 2π j + π n − θ * ) − (t + 1) cos( π n ) (1) 2t + 1 2t > (t + 1) cos( 2πi + π m − 2π j n − θ * ) − t cos( π m )<ns0:label>(2</ns0:label></ns0:formula></ns0:div>
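The quantities defined above are straightforward to evaluate numerically. The following sketch (Python/NumPy; the helper name is ours, not the authors') checks whether a candidate (m, n, θ*) satisfies the two separation inequalities for circles t and t + 1:

import numpy as np

def separates(t, c, m, n, theta_star):
    """True if d*_1(m, n, theta*) > d_b(n) and d*_2(m, n, theta*) > d_a(m)."""
    d_a = np.sqrt(2 * (t * c) ** 2 * (1 - np.cos(np.pi / m)))
    d_b = np.sqrt(2 * ((t + 1) * c) ** 2 * (1 - np.cos(np.pi / n)))
    i = np.arange(1, m + 1)[:, None]        # prototype indices on circle t
    j = np.arange(1, n + 1)[None, :]        # prototype indices on circle t+1
    base = (t * c) ** 2 + ((t + 1) * c) ** 2
    d1 = np.sqrt(base - 2 * t * (t + 1) * c ** 2 *
                 np.cos(2 * np.pi * i / m - (2 * np.pi * j + np.pi) / n - theta_star)).min()
    d2 = np.sqrt(base - 2 * t * (t + 1) * c ** 2 *
                 np.cos((2 * np.pi * i + np.pi) / m - 2 * np.pi * j / n - theta_star)).min()
    return d1 > d_b and d2 > d_a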
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>It is clear that n ≥ m, but we separate this system into two cases, n = m and n > m, as the resulting sub-problems will have very different assumptions and solutions. The simpler case is where every circle is assigned the same number of prototypes; however, the total number of circles must be finite and known in advance. In the second case where larger circles are assigned more prototypes, we assume that the number of circles is countable but not known in advance. We also note that for t = 0, a circle with radius 0, exactly one prototype is required. Given this starting point, it can be trivially shown that for t = 1, a minimum of four prototypes are required to satisfy the conditions above (three if the strict inequalities are relaxed to allow equality). However for larger values of t, careful analysis is required to determine the minimal number of required prototypes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Upper bounds</ns0:head><ns0:p>We first show how our setup can be used to derive the upper bound that was found by <ns0:ref type='bibr' target='#b13'>Sucholutsky and Schonlau (2020)</ns0:ref>.</ns0:p><ns0:p>Theorem Proof. Given the setup above, we first consider the worst case scenario where a θ * is selected such that</ns0:p><ns0:formula xml:id='formula_7'>cos( 2πi m − 2π j+π n − θ * ) = cos( 2πi+π m − 2π j n − θ * ) = cos(0) = 1.</ns0:formula><ns0:p>We can then solve Inequality 1 for n and Inequality 2 for m.</ns0:p><ns0:formula xml:id='formula_8'>− 2t + 1 2(t + 1) > t cos(0) − (t + 1) cos( π n ) cos( π n ) > 2(t + 1) 2 − 1 2(t + 1) 2 n > π arccos( 2(t+1) 2 −1 2(t+1) 2 ) ≈ (t + 1)π 2t + 1 2t > (t + 1) cos(0) − t cos( π m ) cos( π m ) > 2t 2 − 1 2t 2 m > π arccos( 2t 2 −1 2t 2 ) ≈ tπ</ns0:formula><ns0:p>This is exactly the previously discovered upper bound.</ns0:p><ns0:p>However, note that we assumed that there exists such a θ * , but this may not always be the case for n > m. If we instead use the same number of prototypes for each circle (i.e. m = n), then we can always set</ns0:p><ns0:formula xml:id='formula_9'>θ * = π n .</ns0:formula><ns0:p>This results in a configuration where every circle is assigned</ns0:p><ns0:formula xml:id='formula_10'>n = ⌈ π arccos( 2(t+1) 2 −1 2(t+1) 2 )</ns0:formula><ns0:p>⌉ ≈ ⌈(t + 1)π⌉ prototypes. While the minimum number of prototypes required on the t th circle remains the same, the total minimum number of prototypes required to separate N circles is higher as each smaller circle is assigned the same number of prototypes as the largest one.</ns0:p><ns0:p>Corollary 2 (Upper Bound -Same Number of Prototypes on Each Circle) The minimum number of prototypes required to perfectly separate N concentric circles is bounded from above by approximately N 2 π, if the number of circles is finite and known in advance (each circle can have the same number of assigned prototypes).</ns0:p></ns0:div>
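A quick numerical check (our own snippet, not part of the paper) of how closely the exact bound tracks its (t + 1)π approximation:

import numpy as np

for t in range(1, 8):
    exact = np.pi / np.arccos((2 * (t + 1) ** 2 - 1) / (2 * (t + 1) ** 2))
    print(f"t={t}: exact bound {exact:.3f}  vs  (t+1)*pi = {(t + 1) * np.pi:.3f}")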
<ns0:div><ns0:head n='2.3'>Lower bounds</ns0:head><ns0:p>An advantage of our formulation of the problem is that it also enables us to search for lower bounds by modifying the θ * parameter. We can investigate the scenario where a θ * is selected that simultaneously</ns0:p><ns0:formula xml:id='formula_11'>maximizes d * 1 (m, n, θ * ) and d * 2 (m, n, θ * ).</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Theorem 3 (Lower Bound) The minimum number of prototypes required to perfectly separate N concentric circles is bounded from below by approximately ∑ N t=1 t 1 2 π, if the number of circles is not known in advance (each circle must have a different number of assigned prototypes).</ns0:p><ns0:p>Proof. If m = n, the best case would be a θ * such that cos</ns0:p><ns0:formula xml:id='formula_12'>( 2πi m − 2π j+π n − θ * ) = cos( 2πi+π m − 2π j n − θ * ) = cos( π n ).</ns0:formula><ns0:p>Solving the inequalities leads to the following values for m and n.</ns0:p><ns0:p>n > π arccos( 2t+1 2(t+1) )</ns0:p><ns0:formula xml:id='formula_13'>≈ (t + 1) 1 2 π m > π arccos( 2t 2 −t−1 2t 2 ) ≈ t (t + 1) 1 2 π</ns0:formula><ns0:p>We note again that such a θ * may not always exist.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Exact and approximate solutions</ns0:head><ns0:p>In the case where m = n, we can always choose a θ * such that cos When m > n, we have that cos</ns0:p><ns0:formula xml:id='formula_14'>( 2πi m − 2π j+π n − θ * ) = cos( π n ).</ns0:formula><ns0:formula xml:id='formula_15'>( 2πi m − 2π j+π n − θ * ) > cos( π n ) as 2πi m − 2π j n = 2πc 1 gcd(m,n) mn , c 1 ∈ N 0 . Let q := 2π gcd(m,n) mn , then | 2πi m − 2π j+π n − θ * | ≤ q 2 and | 2πi+π m − 2π j n − θ * | ≤ q 2 . Thus cos( 2πi m − 2π j+π n − θ * ) ≥ cos( q 2 ), and cos( 2πi+π m − 2π j n − θ * ) ≥ cos( q 2 ).</ns0:formula><ns0:p>Using the series expansion at q = 0 we can find that cos( q 2 ) = 1 − q 2 8 + q 4 384 − q 6 46080 + O(q 8 ). Proof. For a first order approximation, we consider cos( q 2 ) = 1 − q 2 8 + O(q 4 ) and cos</ns0:p><ns0:formula xml:id='formula_16'>( π n ) = 1 − π 2 2n 2 + O( 1 n 4 ).</ns0:formula><ns0:p>Inequality 1 then becomes the following.</ns0:p><ns0:formula xml:id='formula_17'>− 2t + 1 2(t + 1) > t(1 − q 2 8 + O(q 4 )) − (t + 1)(1 − π 2 2n 2 + O( 1 n 4 )) = −1 − π 2 2n 2 (t gcd(m, n) 2 m 2 − t − 1) + O( 1 n 4 ) n 2 > −π 2 (t + 1)(t gcd(m, n) 2 m 2 − t − 1) + O( 1 n 2 )</ns0:formula><ns0:p>However, we know from our previous upper bound that m</ns0:p><ns0:formula xml:id='formula_18'>+ 1 ≤ n ≤ m + 4. Thus 4 (n−4) 2 > gcd(m,n) 2 m 2 > 1 (n−1) 2 which means that gcd(m,n) 2 m 2 = O( 1 n 2 ). n 2 > −π 2 (t + 1)(t gcd(m, n) 2 m 2 − t − 1) + O( 1 n 2 ) = π 2 (t + 1) 2 + O( 1 n 2 )</ns0:formula><ns0:p>Therefore we have that n + O( 1 n ) > (t + 1)π as desired.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. First order (before and after discretizing by rounding to nearest integer) and second order approximations for the minimal number of prototypes that must be assigned to circle t. The approximations are applied to continuous values of t to show the convergence behavior.</ns0:p><ns0:p>We plot the second order approximation alongside the first order approximation from Theorem 5 in Figure <ns0:ref type='figure'>2</ns0:ref>, without rounding to show that the two quickly converge even at small values of t. Thus we can be confident that approximately tπ prototypes are required for the t th circle since this approximation quickly approaches the true minimal number of required prototypes as t increases. Since we can only assign a positive integer number of prototypes to each circle, we assign ⌈tπ⌉ prototypes to the t th circle; this is also shown in Figure <ns0:ref type='figure'>2</ns0:ref>. Applying this to the initial condition that the 0 th circle is assigned exactly one prototype results in the following sequence of the minimal number of prototypes that must be assigned to each circle. We note that the sequence generated by the second order approximation would be almost identical, but with a 3 replacing the 4. <ns0:ref type='bibr'>1,</ns0:ref><ns0:ref type='bibr'>4,</ns0:ref><ns0:ref type='bibr'>7,</ns0:ref><ns0:ref type='bibr'>10,</ns0:ref><ns0:ref type='bibr'>13,</ns0:ref><ns0:ref type='bibr'>16,</ns0:ref><ns0:ref type='bibr'>19,</ns0:ref><ns0:ref type='bibr'>22,</ns0:ref><ns0:ref type='bibr'>26,</ns0:ref><ns0:ref type='bibr'>29,</ns0:ref><ns0:ref type='bibr'>32,</ns0:ref><ns0:ref type='bibr'>35,</ns0:ref><ns0:ref type='bibr'>38,</ns0:ref><ns0:ref type='bibr'>41</ns0:ref> </ns0:p></ns0:div>
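The per-circle counts quoted above can be reproduced directly, assuming one prototype for the degenerate circle t = 0 and ⌈tπ⌉ thereafter:

import math

counts = [1] + [math.ceil(t * math.pi) for t in range(1, 14)]
print(counts)       # [1, 4, 7, 10, 13, 16, 19, 22, 26, 29, 32, 35, 38, 41]
print(sum(counts))  # cumulative number of prototypes for the first 14 circles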
<ns0:div><ns0:head n='3'>COMPUTATIONAL RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Algorithm</ns0:head><ns0:p>While Theorem 5 gives us the number of prototypes required for each circle, it does not give us the exact locations of these prototypes. Finding the locations would require us to know θ , the optimal rotation of circle n + 1 relative to circle n. Unfortunately, the equations involving θ depend on greatest common denominator terms. Since this makes it difficult to find explicit analytical solutions, we instead turn to computational methods to find near-optimal prototypes. The theoretical results above enable us to develop computational methods to empirically find the minimum number of required prototypes. Based on the equations derived in the previous section, we propose an iterative, non-parametric algorithm, Algorithm 1, that proceeds from the innermost circle to the outermost one finding a near-optimal number of required prototypes, and their positions, in a greedy manner.</ns0:p><ns0:p>The core of the algorithm consists of three loops: outer, middle, inner. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science and n. To avoid any potential floating-point precision errors, we use a wider than necessary interval and smaller than necessary update size for θ . At the end of each iteration of the outer loop, the n and θ that were found are recorded. We note that the rotation angle θ is relative to the rotation of the previous circle.</ns0:p><ns0:p>In other words, the absolute rotation for a given circle can be found by adding its relative rotation to the relative rotations of all the preceding circles.</ns0:p><ns0:p>Our code for this algorithm can be found at the publicly available GitHub repository associated with this paper. As shown above, the choice of c > 0, the constant length by which the radius of each consecutive circle increases, does not affect the number of required prototypes. Nonetheless, we still include c as a parameter in our algorithm to verify correctness. Running the algorithm for some large T , with any choice of c, results in the following sequence.</ns0:p><ns0:p>1 <ns0:ref type='bibr'>, 3, 6, 12, 13, 16, 19, 22, 26, 29, 32, 35, 38, 41 .</ns0:ref> . . This sequence appears to converge very quickly to the one predicted by our theorem. Curiously, the small differences between the first few steps of the two sequences cancel out and the cumulative number of required prototypes is identical when there are four or more circles. While requiring the algorithm to find numerical solutions to these equations is perhaps not computationally efficient, it does guarantee near-optimal performance, with the only sub-optimal portion occurring at the start of the sequence where the algorithm outputs 1, 3, 6, 12, 13 rather than the optimal 1, 3, 7, 10, 13 due to its greedy nature.</ns0:p><ns0:p>We visualize two sub-optimal prototype arrangements, and the near-optimal arrangement found by our algorithm, in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. The patterns seen in these visualizations are largely dependent on the greatest common divisors of the number of prototypes on adjacent circles, as well as the relative rotations of the prototypes on adjacent circles. The particularly symmetrical patterns in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> are a result of the outer three circles having 3, 6, and 12 prototypes respectively, doubling each time. 
We show another example of the decision boundaries exhibited by 1NN when fitted on near-optimal prototypes in Figure <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
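A condensed Python sketch of this greedy search is given below. It mirrors the structure of Algorithm 1, including its θ grid, but it is our own reconstruction rather than the code in the authors' repository, so small indexing conventions may differ:

import numpy as np

def d1(t, c, m, n, theta):
    """Smallest distance from a prototype on circle t to an arc-midpoint on circle t+1."""
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    ang = 2 * np.pi * i / m - 2 * np.pi * j / n - np.pi / n - theta
    return np.sqrt((t * c) ** 2 + ((t + 1) * c) ** 2
                   - 2 * t * (t + 1) * c ** 2 * np.cos(ang)).min()

def d2(t, c, m, n, theta):
    """Smallest distance from an arc-midpoint on circle t to a prototype on circle t+1."""
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    ang = 2 * np.pi * i / m - 2 * np.pi * j / n + np.pi / m - theta
    return np.sqrt((t * c) ** 2 + ((t + 1) * c) ** 2
                   - 2 * t * (t + 1) * c ** 2 * np.cos(ang)).min()

def find_pugs(T, c=0.5):
    """Greedy search for a near-optimal prototype count (and relative rotation) per circle."""
    N, R = [1], [0.0]
    for t in range(1, T):
        m, n = N[-1], N[-1] + 1
        while True:
            d_a = np.sqrt(2 * (t * c) ** 2 * (1 - np.cos(np.pi / m)))
            d_b = np.sqrt(2 * ((t + 1) * c) ** 2 * (1 - np.cos(np.pi / n)))
            theta_found = None
            for k in range(4 * m * n + 1):
                theta = k * np.pi / (16 * m * n)
                if d1(t, c, m, n, theta) > d_b and d2(t, c, m, n, theta) > d_a:
                    theta_found = theta
                    break
            if theta_found is not None:
                N.append(n)
                R.append(theta_found)
                break
            n += 1
    return N, R

print(find_pugs(8))  # counts per circle and rotations relative to the previous circle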
<ns0:div><ns0:head n='3.2'>Heuristic Prototype Methods</ns0:head><ns0:p>We compare the performance of our proposed algorithm against a variety of existing prototype selection and generation methods. Specifically, we compare against every under-sampling method implemented by <ns0:ref type='bibr' target='#b10'>Lemaître et al. (2017)</ns0:ref> in the 'imbalanced-learn' Python package. We describe the prototype methods below and summarize their key properties in Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>7/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54902:1:2:NEW 18 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed • TomekLinks: Rebalances classes by removing any Tomek links <ns0:ref type='bibr' target='#b16'>(Tomek, 1976b)</ns0:ref>.</ns0:p><ns0:p>• RandomUndersampler: Rebalances classes by randomly selecting prototypes from each class.</ns0:p><ns0:p>• OneSidedSelection: Rebalances classes by isolating each class and resampling the negative examples (composed of the remaining classes) <ns0:ref type='bibr' target='#b7'>(Kubat, 1997)</ns0:ref>.</ns0:p><ns0:p>• NeighbourhoodCleaningRule: Improves on OneSidedSelection in settings where particularly small classes are present. As a result, it focuses more on improving data quality than reducing the size of the dataset <ns0:ref type='bibr' target='#b9'>(Laurikkala, 2001)</ns0:ref>. selects a fixed number of the nearest points from other classes <ns0:ref type='bibr' target='#b11'>(Mani and Zhang, 2003)</ns0:ref>.</ns0:p><ns0:p>• InstanceHardnessThreshold: Rebalances classes by fitting a classifier to the data and removing points to which the classifier assigns lower probabilities <ns0:ref type='bibr' target='#b12'>(Smith et al., 2014)</ns0:ref>.</ns0:p><ns0:p>• EditedNearestNeighbours:Resamples classes by removing points found near class boundaries defined by a fitted classifier <ns0:ref type='bibr' target='#b18'>(Wilson, 1972)</ns0:ref>.</ns0:p><ns0:p>• RepeatedEditedNearestNeighbours: Resamples classes by repeatedly applying EditedNearestNeighbours and refitting the classifier. <ns0:ref type='bibr' target='#b15'>(Tomek, 1976a)</ns0:ref>.</ns0:p><ns0:p>• AllKNN: Resamples classes similarly to RepeatedEditedNearestNeighbours but increases the parameter k of the classifier each time <ns0:ref type='bibr' target='#b15'>(Tomek, 1976a)</ns0:ref>.</ns0:p><ns0:p>• CondensedNearestNeighbours: Rebalances classes by repeatedly fitting a 1NN on the set of candidate prototypes and then adding all misclassified points to that set <ns0:ref type='bibr' target='#b5'>(Hart, 1968</ns0:ref>).</ns0:p><ns0:p>• ClusterCentroids: Rebalances classes by using kMeans to replace clusters with their centroids.</ns0:p><ns0:p>For each experiment, the dataset consists of 800 points divided as evenly as possible between the circles. We note that most methods are not able to reduce the number of prototypes much lower than the number of training points. This is in part due to the automatic class re-balancing that some of these methods attempt to do. Since all classes already have roughly the same number of points, and since none are misclassified when all 800 training points are used, several of the methods determine that little-to-no re-sampling is necessary. As a result, these methods provide at most a small reduction in the number of prototypes. We visualize some of the methods performing automatic undersampling in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>, where two common failure modes can be seen: the methods either fail to reduce the number of prototypes but achieve good separation of classes, or reduce the number of prototypes but fail to separate the classes.</ns0:p></ns0:div>
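As a concrete starting point for reproducing this comparison, the sketch below (Python; imbalanced-learn and scikit-learn assumed installed) runs a few of the listed under-sampling methods on a toy concentric-rings dataset and scores a 1NN fitted on the resulting prototypes. The data generator is ours and omits the degenerate single-point class at the origin:

import numpy as np
from imblearn.under_sampling import RandomUnderSampler, TomekLinks, ClusterCentroids
from sklearn.neighbors import KNeighborsClassifier

# Toy version of the concentric-circles data: 4 ring-shaped classes, 200 points each.
rng = np.random.default_rng(0)
c, n_classes, per_class = 0.5, 4, 200
y = np.repeat(np.arange(n_classes), per_class)
angles = rng.uniform(0.0, 2.0 * np.pi, y.size)
radii = (y + 1) * c
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

for sampler in (RandomUnderSampler(random_state=0), TomekLinks(), ClusterCentroids(random_state=0)):
    X_p, y_p = sampler.fit_resample(X, y)
    acc = KNeighborsClassifier(n_neighbors=1).fit(X_p, y_p).score(X, y)
    print(f"{type(sampler).__name__:>22}: {len(X_p):4d} prototypes, accuracy {acc:.3f}")

Because the toy classes are already balanced, the default automatic strategies should leave the prototype count close to the original 800 points, which is exactly the failure mode discussed above.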
<ns0:div><ns0:head>8/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54902:1:2:NEW 18 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref>. The ClusterCentroids prototype generation method finds similar prototypes to our proposed algorithm when parametrized with the near-optimal number of prototypes per class.</ns0:p><ns0:p>However, the user can also override the automatic re-balancing for a few of the methods, those which include the 'Manual' option in Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref>, by passing the number of desired prototypes per class as a hyperparameter. We pass the optimal number of prototypes per class suggested by our earlier theoretical analysis, and the near-optimal number suggested by our algorithm, to these methods and document the results in Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. Curiously, none of the prototype selection methods achieve perfect separation when restricted to this nearly optimal number of prototypes, even though the nearly-optimal prototypes found by our algorithm have extremely close-by neighbors among the training points. In other words, it is not theoretically impossible for the prototype selection methods to select prototypes close to where the optimal prototypes would be, and yet they do not. Meanwhile, the ClusterCentroids prototype generation method finds similar prototypes to the ones proposed by our algorithm as seen in Figure <ns0:ref type='figure'>4</ns0:ref>. </ns0:p></ns0:div>
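A sketch of this manual parametrization (assuming, as in the toy data above, that class label t lies on the circle of radius (t + 1)c, so class t receives roughly ⌈(t + 1)π⌉ prototypes; X and y are the training data from the previous sketch):

import math
from imblearn.under_sampling import ClusterCentroids
from sklearn.neighbors import KNeighborsClassifier

# Near-optimal prototype budget per class, following the ceil(t*pi) rule.
n_classes = 4
strategy = {t: math.ceil((t + 1) * math.pi) for t in range(n_classes)}

cc = ClusterCentroids(sampling_strategy=strategy, random_state=0)
X_proto, y_proto = cc.fit_resample(X, y)
clf = KNeighborsClassifier(n_neighbors=1).fit(X_proto, y_proto)
print(len(X_proto), "prototypes, accuracy", round(clf.score(X, y), 3))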
<ns0:div><ns0:head n='3.3'>Additional Experiments</ns0:head><ns0:p>By using the results of our theoretical analysis to parametrize ClusterCentroids, we enable it to find efficient sets of high-quality prototypes. We combine our proposed algorithm with ClusterCentroids to produce a method that combines the benefits of both: a non-parametric algorithm that finds near-optimal prototypes in our pathological case but is robust to noise. We conduct additional experiments to show that this resulting method is indeed robust to noise. Each experiment still uses a dataset of 800 points that are spread over N concentric, circular classes with radius growth parameter c = 0.5; however, we now introduce Gaussian noise to each class. The level of noise is controlled by the parameter σ, the standard deviation of the Gaussian distribution underlying the positioning of points within a class. The ratio of σ to c dictates how much overlap occurs between classes. For example, when σ = 0.25 = c/2, only around 68% of points belonging to a particular class will be contained within the band of thickness c = 0.5 associated with that class. We use four levels of noise (σ = 0.05, 0.1, 0.2, 0.4) and five different numbers of classes (4, 6, 8, 10, 12), for a total of 20 generated datasets to which we apply the near-optimally parametrized ClusterCentroids algorithm and measure classification accuracy.</ns0:p><ns0:p>The results are detailed in Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref> and visualized in Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. As expected, increasing noise causes a decrease in classification accuracy. However, the decrease in accuracy is roughly equal to the percentage of points found outside of their class's band, as such points are indistinguishable from the points of the class whose band they fall in. This suggests that the 1NN classifier fitted on prototypes designed by the near-optimally parametrized ClusterCentroids algorithm approaches the Bayes error rate. We also note that as the number of classes increases, and hence the number of points per class decreases, the accuracy of the classifier stays stable or even increases at high levels of noise. The near-optimally parametrized ClusterCentroids algorithm is clearly robust to increases in the number of classes. It is also partially robust to noise, even though noise violates the underlying assumptions on which the nearly-optimal parametrization is based.</ns0:p></ns0:div>
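A sketch of the noisy data generation described above (our own reconstruction from the description; the paper's generator may differ in details such as how the Gaussian jitter is applied):

import numpy as np

def noisy_circles(n_classes, sigma, n_points=800, c=0.5, seed=0):
    """Points split as evenly as possible over n_classes concentric circles,
    with radial Gaussian jitter of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    y = np.arange(n_points) % n_classes
    angles = rng.uniform(0.0, 2.0 * np.pi, n_points)
    radii = y * c + rng.normal(0.0, sigma, n_points)
    X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    return X, y

X_noisy, y_noisy = noisy_circles(n_classes=6, sigma=0.1)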
<ns0:div><ns0:head n='4'>CONCLUSION</ns0:head><ns0:p>The kNN classifier is a powerful classification algorithm, but can be computationally expensive. While numerous prototype methods have been proposed to alleviate this problem, their performance is often strongly determined by the underlying geometry of the data. Certain pathological geometries can result in especially poor performance of these heuristic algorithms. We analyzed one such extreme setting and demonstrated that analytical methods can be used to find the minimal number of optimal prototypes required for fitting a 1NN classifier. We also found that in such pathological cases, theoretical analysis may not be able to provide the exact locations of the prototypes, but it can be used to derive systems of equations that when solved with numerical methods, produce optimal or near-optimal prototypes.</ns0:p><ns0:p>To demonstrate this approach, we proposed an algorithm for finding nearly-optimal prototypes in the particular pathological setting of concentric circular classes, and used it to validate our theoretical results. The algorithm outperformed all prototype selection methods that it was tested against. A prototype generation method was able to find the optimal prototypes, but only when parametrized using either the theoretical results or the outputs of our proposed algorithm. We further showed that this combination of our proposed algorithm with an existing prototype generation method exhibited the desirable features of both: it is non-parametric and is guaranteed to find near-optimal prototypes even in the examined pathological case, but it is general enough that it is robust to violations of the underlying assumptions of our theoretical analysis, such as the addition of Gaussian noise to the data.</ns0:p><ns0:p>We believe that identifying and studying further pathological geometries in kNN and other machine learning models is an important direction for understanding their failure modes and jointly improving training algorithms and prototype methods. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 1NN decision boundaries when fitted on ⌈tπ⌉ prototypes per class. Each shaded circle represents a different class and the outlined points represent the assigned prototypes. The colored regions correspond to the decision boundaries created by the 1NN classifier. The axes form a Cartesian plane whose origin coincides with the smallest class.</ns0:figDesc><ns0:graphic coords='3,162.41,63.77,372.19,279.17' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. 1NN decision boundaries when fitted on two sub-optimal prototype arrangements as well as near-optimal prototypes found using the FindPUGS algorithm. Each shaded circle represents a different class and the outlined points represent the assigned prototypes. The colored regions correspond to the decision boundaries created by the 1NN classifier. The axes form a Cartesian plane whose origin coincides with the smallest class. Left and Center: Prototypes on adjacent circles are not optimally rotated resulting in imperfect class separation in certain regions. Right: Prototypes are optimally rotated resulting in perfect class separation.</ns0:figDesc><ns0:graphic coords='8,141.73,63.77,413.54,118.07' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>•</ns0:head><ns0:label /><ns0:figDesc>NearMiss: All three versions rebalance classes by resampling the negative examples for a particular class. V1 selects the points from other classes which have the shortest distance to the nearest three points from the target class. V2 selects the points from other classes which have the shortest distance to the furthest three points from the target class. For every point in the target class, V3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Examples of failure modes on four and six-class concentric circles data using prototype methods where number of prototypes per class was found automatically (semi-automatically for the InstanceHardnessThreshold method).</ns0:figDesc><ns0:graphic coords='10,141.73,342.07,413.51,186.52' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Examples of failure modes on four and six-class concentric circles data using prototype methods for which the number of prototypes per class was set manually.</ns0:figDesc><ns0:graphic coords='11,141.73,63.77,413.52,185.97' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. ClusterCentroids parametrized with near-optimal number of prototypes applied to various levels of noise. From top to bottom, the rows correspond to 4, 6, 8, 10, and 12 classes. From left to right, columns correspond to σ = 0.05, 0.1, 0.2, 0.4.</ns0:figDesc><ns0:graphic coords='12,141.73,75.62,413.52,468.39' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Algorithm 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>FindPUGS Algorithm: Finding (nearly-optimal) Prototypes Using Greedy Search Result: Two ordered lists, N and R, of the minimum number of prototypes required for each circle and their rotations relative to the previous circle. T ← the number of circles; c ← the length by which radii should grow; Algorithm FindPUGS(T, c) N ← [1]; R ← [0]; for t = 1, 2, . . . , T − 1 do m ← N[−1]; n ← m + 1; p ← 0; while True do d a ← 2t 2 c 2 − 2t 2 c 2 cos(</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='7,183.09,63.77,330.84,220.61' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>. . .</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Corollary 6 (Approximate Solution -Different Number of Prototypes on Each Circle) The minimum</ns0:cell></ns0:row><ns0:row><ns0:cell>number of prototypes required to perfectly separate N concentric circles is approximately ∑ N t=1 ⌈tπ⌉ ≈</ns0:cell></ns0:row><ns0:row><ns0:cell>N+N(N+1)π</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell></ns0:row></ns0:table><ns0:note>, if the number of circles is not known in advance (each circle must have a different number of assigned prototypes).</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A list of prototype selection and generation methods. The last column describes how the number of prototypes is chosen for each class.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Accuracy of ClusterCentroids parametrized with near-optimal number of prototypes.</ns0:figDesc><ns0:table /><ns0:note>11/13PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54902:1:2:NEW 18 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>π m ); d b ← 2(t + 1) 2 c 2 − 2(t + 1) 2 c 2 cos( π n ); for i = 0, 1, . . . , 4mn do θ ← iπ m * n * 16 ; if d1(t, c, m, n, θ ) > d b and d2(t, c, m, n, θ ) > d a then Procedure d1(t, c, m, n, θ ) dists ← [] ; for i = 0, . . . , m − 1 do for j = 0, . . . , n − 1 do dist ← t 2 c 2 + (t + 1) 2 c 2 − 2t(t + 1)c 2 cos( 2iπ</ns0:figDesc><ns0:table><ns0:row><ns0:cell>p ← n ;</ns0:cell></ns0:row><ns0:row><ns0:cell>break;</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>if p > 0 then</ns0:cell></ns0:row><ns0:row><ns0:cell>N.append(p);</ns0:cell></ns0:row><ns0:row><ns0:cell>R.append(θ );</ns0:cell></ns0:row><ns0:row><ns0:cell>break;</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>n ← n + 1;</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>return N, R;</ns0:cell></ns0:row><ns0:row><ns0:cell>m − 2 jπ n − π n − θ );</ns0:cell></ns0:row><ns0:row><ns0:cell>dists.append(dist);</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>return min(dists);</ns0:cell></ns0:row><ns0:row><ns0:cell>Procedure d2(t, c, m, n, θ )</ns0:cell></ns0:row><ns0:row><ns0:cell>dists ← [] ;</ns0:cell></ns0:row><ns0:row><ns0:cell>for i = 0, . . . , m − 1 do</ns0:cell></ns0:row><ns0:row><ns0:cell>for j = 0, . . . , n − 1 do</ns0:cell></ns0:row><ns0:row><ns0:cell>dist ← t 2 c 2 + (t + 1) 2 c 2 − 2t(t + 1)c 2 cos( 2iπ m − 2 jπ n + π m − θ );</ns0:cell></ns0:row><ns0:row><ns0:cell>dists.append(dist);</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>return min(dists);</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to Reviewers: Optimal 1-NN
Prototypes for Pathological Geometries
January 30, 2021
We thank the reviewers and editor for their valuable feedback and time.
We have significantly updated our manuscript to reflect all of the suggestions
that were made. We detail below which changes correspond to each reviewer’s
specific suggestions.
1 Reviewer 1
• As shown in Figure 3, the optimal solution (data points and decision boundaries) has an interesting structure. Could you give
an intuitive explanation why the (colored) areas determinate by
decision boundaries have such shape? Why they are not symmetric like the left and middle sub-figures? Can we adjust some
hyper-parameters and get results with different visualization appearances?
We have added a paragraph to the end of Section 3.1 (Algorithm) that
discusses the resulting shapes of the decision boundaries. To summarize,
the patterns are largely dependent on the greatest common divisors of the
number of prototypes on adjacent circles, as well as the relative rotations
of the prototypes on adjacent circles. The particularly symmetrical patterns in Figure 3 are a result of the outer three circles having 3, 6, and 12
prototypes respectively (doubling each time) so this is fairly coincidental.
Figure 6b has an example of the decision boundaries for a 6 circle system
and they are far less symmetrical as the doubling no longer occurs.
• Could you introduce some classic solutions for this problem and
discuss why their heuristics failed?
We have greatly extended Section 3 (Computational Results). The newly
added Section 3.2 (Heuristic Prototype Methods), Table 1, and Figures 4,
5, and 6 explain the various heuristic prototype methods and detail their
results on our pathological dataset.
• It would be better to provide more and diverse results to prove
the effectiveness of the proposed algorithm.
We have greatly extended Section 3 (Computational Results). The newly
added Section 3.3 (Additional Experiments), Table 2, and Figure 7 focus
on a combined method that pairs our algorithm with the ClusterCentroids
prototype generation algorithm. We also now conduct a number of new
experiments with this combined method that include adding different levels of noise and varying the number of classes.
• The equations between Line 61 and 62 exceeded the page length.
We have changed the formatting of these two equations so that they now
properly fit the page.
• Figure 2 shows a linear relationship between the circle t and the
number of prototypes for 1st order and 2nd order settings. I
think the line should be discrete. Right?
We have clarified in the manuscript that approximations in that figure are
before rounding to nearest integer to help better visualize the convergence
behavior of the two approximations. We have also added a discretized (by
rounding to nearest integer) version of the first-order approximation to
this figure.
2 Reviewer 2
• Some notations are not clearly defined, for example the horizontal/vertical axes of Figure 1, Figure 3
We have added the following clarification to these figures: “The axes form
a Cartesian plane whose origin coincides with the smallest class.”
• The computational results are not sufficient. The optimal prototypes for a given dataset, as well as the comparison between
different heuristic algorithms, are not reported.
We have greatly extended Section 3 (Computational Results). The newly
added Section 3.2 (Heuristic Prototype Methods), Table 1, and Figures 4,
5, and 6 explain the various heuristic prototype methods and detail their
results on our pathological dataset. The newly added Section 3.3 (Additional Experiments), Table 2, and Figure 7 focus on a combined method
that pairs our algorithm with the ClusterCentroids prototype generation
algorithm. We also now conduct a number of new experiments with this
combined method that include adding different levels of noise and varying
the number of classes.
• The implications of the tighter bounds are not very clear. For
example how these tighter bounds can be used to find the prototypes more efficiently should be reported.
We have greatly extended Section 3.1 (Algorithm) to clarify the implications of the tighter bounds. In particular, we focus on explaining how the
theoretical results are directly used at the core of our proposed algorithm.
Additionally, in the newly added Section 3.3 (Additional Experiments),
we now detail how our theoretical results can be used to improve the
performance of an existing heuristic prototype generation algorithm like
ClusterCentroids by enabling near-optimal parametrization of it.
• The comparison between the proposed algorithms and different
heuristic methods for finding nearly-optimal prototypes should
be reported as well.
As mentioned above, we have greatly extended Section 3 (Computational
Results). The newly added Section 3.2 (Heuristic Prototype Methods),
Table 1, and Figures 4, 5, and 6 explain the various heuristic prototype
methods and detail their results on our pathological dataset, along with
a comparison against our proposed algorithm. The newly added Section
3.3 (Additional Experiments), Table 2, and Figure 7 focus on a second
proposed method that pairs our algorithm with the ClusterCentroids prototype generation algorithm. We conduct a number of new experiments
with this combined method that include finding prototypes in settings
with different levels of noise and varying numbers of classes.
3 Reviewer 3
• I find that it is somewhat lacking in terms of experimental design
in a fashion that is directly relevant to prototypes in machine
learning in a way that it claims. The very specific problem of
concentric circles was already introduced in a previous paper by
the same authors, and the main contribution is to do some exact
computations for this one very specific and artificial example. It
seems to assume that all data would sit exactly on one of the
concentric circles and with diameters precisely multiples, given
by the parameter ‘t’ of a fixed size. This seems very unrealistic;
It is more likely there would be concentric rings, with different
widths, and probably some distortions. It is not at all clear what
lessons might be drawn from the exact solutions, and be likely
to have any relevance to real cases. Although the general idea of
concentric circles is something that could reasonably appear it
seems very unlikely that it would be in such a regular structure.
Hence, the computations are not matching the experimental design that would be needed. Also, if it were so regular than a
reasonable learning system - maybe with the aid of kernels would be likely to pick this up.
We have greatly extended Section 3 (Computational Results). We now
conduct a number of new experiments that include adding different levels
of noise to the data and greatly varying the number of classes, as well
as the density of the training data. The newly added Section 3.3 (Additional Experiments), Table 2, and Figure 7 focus on these new experiments
along with proposing a combined method that pairs our algorithm with
the ClusterCentroids prototype generation algorithm. The key idea of this
section is that the theoretical results can be used to provide a near-optimal
parametrization to the ClusterCentroids prototype generation algorithm,
which greatly improves its performance in the pathological case that we
explore.
• Also, my understanding is that the idea of prototypes is that
they select a smaller set from a finite sample of values. Whereas
in this case the selection seems to be from all of the points
on each circle - essentially an infinite size initial sample. I
would have expected to see methods to select from a finite (randomised) selection of points on each circle - then the ‘exact’
solutions given could not be applied directly.
We have provided the following clarification in the Background section
about the difference between prototype selection and prototype generation: “Prototype selection methods select a subset of real points to use as
prototypes while prototype generation methods are not similarly restricted
and instead create synthetic points (that are not necessarily found in the
original data) to act as prototypes. Generating synthetic prototypes allows for more efficient representations so our analysis focuses specifically
on the generation of optimal or near-optimal prototypes.” We have also
now added Section 3.2 (Heuristic Prototype Methods), Table 1, and Figures 4, 5, and 6 to summarize a variety of both prototype selection and
prototype generation methods and detail their results on our pathological
dataset.
• It also does not match the claim in the abstract about heuristic
algorithms failing, as there is no inclusion of experiements to
show this.
We have greatly extended Section 3 (Computational Results). The newly
added Section 3.2 (Heuristic Prototype Methods), Table 1, and Figures 4,
5, and 6 discuss the various heuristic prototype methods and detail their
results on our pathological dataset.
• Theorem 5: should “each circle must have a different number”
be “may have”? Seems it should be relaxing an equality, not
adding an all different constraint.
We have re-phrased the wording of all the theorems that contained similar
constraints to clarify this.
" | Here is a paper. Please give your review comments after reading it. |
46 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: Bi-gram distance (BI-DIST) is a recent approach to measure the distance between two strings, a task that plays an important role in a wide range of applications in various areas. The importance of BI-DIST is due to its representational and computational efficiency, which has led to extensive research to further enhance it. However, developing an algorithm that can measure the distance between strings accurately and efficiently remains a major challenge. Consequently, this research aims to design an algorithm that can match names accurately. BI-DIST is considered the best orthographic measure for name identification; nevertheless, it lacks a distance scale between the name bigrams. Methods: In this research, the Soft Bigram Distance (Soft-Bidist) measure is proposed. It is an extension of BI-DIST that softens the scale of comparison among the name bigrams to improve name matching. Different datasets are used to demonstrate the efficiency of the proposed method. Results: The results show that Soft-Bidist outperforms the compared algorithms on different name matching datasets.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Currently, Name Matching is one of the hottest topics in the emerging data science area, where, the BI-DIST is a recent and significant approach for name matching by measuring the distance between two strings, which play an important role in a wide range of applications in different fields. Consequently, this led us to develop a strong and effective method for this purpose. Although,., developing highly accurate name matching algorithms is still a challenging issue in the research community <ns0:ref type='bibr' target='#b23'>(Navarro 2001</ns0:ref>) <ns0:ref type='bibr' target='#b15'>(Hall and Dowling 1980)</ns0:ref>. By deeply reviewing the previous studies, it found that several studies have been conducted to develop name-matching algorithms, which are used to cope with many important topics. The classification of these algorithms is implemented into two categories: approximate string matching (inexact) algorithms (Al-Ssulami 2015; <ns0:ref type='bibr' target='#b15'>Hall and Dowling 1980;</ns0:ref><ns0:ref type='bibr' target='#b23'>Navarro 2001</ns0:ref>) and exact string-matching algorithms (Al-Ssulami 2015; <ns0:ref type='bibr' target='#b7'>Charras and Lecroq 2004;</ns0:ref><ns0:ref type='bibr' target='#b10'>Peter Christen 2006)</ns0:ref>.</ns0:p><ns0:p>Name identification and matching are increasingly used in several applications such as Customer Relation Management (CRM), Health Care (HC), Customer Data Integration (CDI), Anti-Money Laundering (AML), Criminal Investigation (CI) and Genealogy Services (GS) <ns0:ref type='bibr' target='#b21'>(Lisbach et al. 2013)</ns0:ref>. Besides, it is used also in other applications in the airports, Plagiarism Checking software, etc. If the matching is carried out considering only the exact similarity in such applications, it would be difficult and might be impossible to deal with the case of name variations, which is an unavoidable situation when dealing with real-world data sets <ns0:ref type='bibr' target='#b13'>(Delgado et al. 2016)</ns0:ref>. That is, the exact matching approach is not suitable for large-scale applications and complex information systems, since it cannot retrieve names that have more than one acceptable spelling <ns0:ref type='bibr' target='#b10'>(Peter Christen 2006)</ns0:ref>.</ns0:p><ns0:p>To have a highly effective name matching methods, the approximate string-matching approach should be adopted rather than exact matching. Therefore, this paper aims to develop an algorithm for name matching, that consider an approximate string-matching algorithm to allow dealing with possible technical or computational errors. Such matching algorithms have been used in several applications such as Spelling correction <ns0:ref type='bibr' target='#b24'>(Park et al. 2020)</ns0:ref>, Linking database <ns0:ref type='bibr' target='#b17'>(Hand and Christen 2018)</ns0:ref>, Text retrieval <ns0:ref type='bibr' target='#b0'>(Abdulhayoglu, Thijs, and Jeuris 2016)</ns0:ref>, Handwriting recognition <ns0:ref type='bibr' target='#b8'>(Chowdhury, Bhattacharya, and Parui 2013)</ns0:ref>, Computational biology 'DNA' <ns0:ref type='bibr' target='#b6'>(Berger, Waterman, and Yu 2020)</ns0:ref>, and Name recognition <ns0:ref type='bibr' target='#b13'>(Delgado et al. 2016</ns0:ref>)… etc. Consequently, in this work, a new softened distance measure is proposed, based on the BI-DIST distance to increase the efficiency and accuracy of the name-matching method. 
This is achieved by identifying different cases that form bigram scales, grounded on statistical analysis to soften the distance scale. Accordingly, it is hypothesized that an evolutionary method can be adapted to adjust the weights of the distance scale between n-grams.</ns0:p></ns0:div>
<ns0:div><ns0:head>Background and related work</ns0:head><ns0:p>Many research works mainly concentrate on name matching methods improvement and algorithm complexity. In addition to the complex process of matching names as aforementioned, misspelling and different spelling of words are detected. The effective way is to apply an approximate stringmatching technique to prevent the recurring of different spelling inputs and misspelling <ns0:ref type='bibr' target='#b19'>(Lertnattee and Paluekpet 2019)</ns0:ref>. Given two names X and Y represented as strings of n and m characters, respectively, the Edit Distance, aka Levenshtein Distance (LD), indicates the least possible cost of editing processes (insertion, deletion, and substitution) to convert X to Y <ns0:ref type='bibr' target='#b20'>(Levenshtein 1966)</ns0:ref>. For example, if X = 'Zantac‖' and Y = 'Xanax‖', the edit distance is 3 as the minimum transformation implies two substitution operations ('Z' → 'X' and 'c' → 'x') and one deletion operation (letter 't'). Which is calculated using the recurrence formula in Eq. ( <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>), The Levenshtein distance between two strings s, t is given mathematically by where.</ns0:p><ns0:formula xml:id='formula_0'>Lev s,t (|s|,|t|) Lev s,t ( i,j ) = { Max ( i,j ) 𝑖𝑓 ( Min ( i,j ) = 0 ) Min { Lev s,t ( i ,j -1 ) + 1 Lev s,t ( i -1 , j ) + 1 Lev s,t ( i -1,j -1 ) + 1 (si ≠ tj) otherwise (1)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In equation ( <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>),1 is the indicator function equal to 0 if s i== t j and 1 otherwise. By we denote the |𝑠| length of the string s.</ns0:p><ns0:p>is the distance between string prefixes -the first i characters Lev s,t (𝑖,𝑗) of s and the first j characters of t. The first part of this formula denotes the number of insertion or deletion steps to transform prefix into an empty string or vice versa. The second block is a recursive expression with the first line represents deletion and the second one represents insertion. The last line is responsible for substitutions. More details are available at 1 .</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b12'>(Damerau 1964</ns0:ref>), Damerau-Levenshtein Distance (DLD) is presented which is akin to the LD algorithm. The chief modification is that DLD lets one more edit, particularly where the two adjacent characters can be transposed. The DLD algorithm describes the distance between two strings s and t by the following recursive relation as shown in Eq. (2):</ns0:p><ns0:formula xml:id='formula_1'>DLev s,t ( i,j ) = Min { 0 𝑖𝑓 𝑖 = 𝑗 = 0 DLev s,t ( i -1 ,j ) + 1 𝑖𝑓 𝑖 > 0 DLev s,t ( i , j -1 ) + 1 𝑖𝑓 𝑗 > 0 (2) DLev s,t ( i -1,j -1 ) + 1 (si ≠ tj) 𝑖𝑓 𝑖,𝑗 > 0 DLev s,t ( i -2,j -2 ) + 1 𝑖𝑓 𝑖,𝑗 > 1 and 𝑠[𝑖] = 𝑡[𝑗 -1] and 𝑠[𝑖 -1] = 𝑡[𝑗]</ns0:formula><ns0:p>Where is the indicator function equal to 0 when and equal to 1 otherwise.</ns0:p><ns0:formula xml:id='formula_2'>1 (si ≠ tj) si = tj</ns0:formula><ns0:p>In (Rees 2014), a customized approach called a Modified Damerau-Levenshtein Distance algorithm (MDLD) was proposed. MDLD was adjusted and tested against two input strings that support block transpositions of numerous characters. The MDLD algorithm's time complex O(n 3 ), is presented algorithm (MDLD) in its Oracle PL/SQL form. 
More details are available at 2</ns0:p><ns0:p>The N-gram Distance (N-DIST) that was proposed by Kondrak <ns0:ref type='bibr' target='#b18'>(Kondrak 2005</ns0:ref>) in his research works by the fusion of features carried out by grams of size and non-crossing-links constraints, and the first letter is repeated initially. On the other hand, it is found that BI-DIST is a case of N-DIST <ns0:ref type='bibr' target='#b18'>(Kondrak 2005)</ns0:ref>. In <ns0:ref type='bibr' target='#b0'>(Abdulhayoglu et al. 2016</ns0:ref>) each matrix element is 𝑁𝐷𝐼𝑆𝑇 s,t (i,j) calculated according to Eq. ( <ns0:ref type='formula'>3</ns0:ref>), where the cost in Eq. ( <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>) is the total number of distinct letters in the same positions in the character n-grams s i ;t j , and n is the size of the character n-gram, as shown in Eqs. ( <ns0:ref type='formula'>3</ns0:ref>)-(4):</ns0:p><ns0:formula xml:id='formula_3'>𝑁𝐷𝐼𝑆𝑇 s,t (i,j) = { Max(i,j) (i = 0 or j = 0) Min { 𝑁𝐷𝐼𝑆𝑇 s,t (i -1 ,j) + 1 𝑁𝐷𝐼𝑆𝑇 s,t (i , j -1) + 1</ns0:formula><ns0:p>𝑁𝐷𝐼𝑆𝑇 s,t (i -1,j -1) + 𝑑 𝑛 (𝑇 𝑛 𝑖,𝑗 ).</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_4'>d n (T n i,j ) = 1 n ∑ n u = 1 d 1 (x i + u , y j + u ),<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Kondrak <ns0:ref type='bibr' target='#b18'>(Kondrak 2005)</ns0:ref> proposed the measures N-gram Distance and Similarity (N-DIST and N-SIM) respectively, where the recall metric is used to assess the results of twelve measures with the U.S. Pharmacopeia (USP) look-alike/sound-alike (LASA)list of 360 unique drug names. In this study, Kondrak concluded that combining BI-DIST and BI-SIM achieves the best results. The Food and Drug Administration (FDA) uses it to create automated warning systems to identify potential LASA errors in prescription electronic systems and phonetic orthographic computer analysis (POCA) software. Moreover, <ns0:ref type='bibr' target='#b22'>(Millán-Hernández et al. 2019)</ns0:ref> proposed a Soften Bigram Similarity measure (Soft-Bisim). This work concentrated on improving an algorithm to Identify Confusable Drug Names, based on Bi-gram algorithms and the blend of the longest common subsequences. Furthermore, the research work achieved (S Al-Hagree et al. 2019) proposed an enhanced N-DIST method that concentrated on improving an algorithm for Name Matching. However, the previous studies differ from the contribution in this paper, because the proposed algorithm in this paper combines a Bi-gram technique with a distance technique (Salah Al-Hagree et al. 2019).</ns0:p></ns0:div>
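As a concrete reference for Eq. (1), a standard dynamic-programming implementation of the Levenshtein distance is sketched below (plain Python, not taken from the cited works); the DLD, MDLD and N-DIST measures discussed above modify the same table recurrence:

def levenshtein(s, t):
    """Edit distance between strings s and t via the standard DP table of Eq. (1)."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                        # deletions down to the empty string
    for j in range(n + 1):
        d[0][j] = j                        # insertions from the empty string
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[m][n]

print(levenshtein("Zantac", "Xanax"))  # 3, matching the worked example above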
<ns0:div><ns0:head>The Proposed Method</ns0:head><ns0:p>In this section, a Soft-Bidist is presented. The Soft-Bidist measure is an extension of BI-DIST, it softening the scale of comparison among the name Bigrams for improving the name detection. This section organizes as follows. The first subsection is to describe the involved cases of bigrams in the scale of the Soft-Bidist distance. Then, the Minimum, Maximum, and Average functions, which are used as means to identify the weights in the distance scale by statistical means, are mentioned. It is thus assumed that an evolutionary approach identifies the best levels in the distance scale compared to the original distance scale that Kondrak proposed in BI-DIST (cf. Eqs. ( <ns0:ref type='formula'>3</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>)). In other words, we consider this problem as an evolutionary approach for optimizing the internal parameters of the distance scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Definition of Soft-Bidist Distance</ns0:head><ns0:p>Let X and Y be given names represented as sequences of sizes n and m, respectively, Soft-Bidist is defined as follows:</ns0:p><ns0:formula xml:id='formula_5'>𝐵𝐼𝐷𝐼𝑆𝑇 s,t (i,j) = { Max(i,j) (i = 0 or j = 0) Min { 𝐵𝐼𝐷𝐼𝑆𝑇 s,t (i -1 ,j) + 𝐼𝐷 𝑛 (𝑇 𝑛 𝑖,𝑗 ). 𝐵𝐼𝐷𝐼𝑆𝑇 s,t (i , j -1) + 𝐼𝐷 𝑛 (𝑇 𝑛 𝑖,𝑗 ).</ns0:formula><ns0:p>( <ns0:ref type='formula'>5</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_6'>𝐵𝐼𝐷𝐼𝑆𝑇 s,t (i -1,j -1) + 𝑑 𝑛 (𝑇 𝑛 𝑖,𝑗 ).</ns0:formula><ns0:p>The distance scale for Soft-Bidist is shown as follows:</ns0:p><ns0:formula xml:id='formula_7'>d n (T n i,j ) = { wt 1 , if(S i -1 = T j -1 )and (S i = T j ) Case 1 wt 2 , if(S i -1 ≠ T j -1 )and (S i ≠ T j ) Case 2 and(S i -1 ≠ T j )and (S i ≠ T j -1 ) wt 3 , if(S i -1 = T j )and (S i = T j -1 ) Case 3 wt 4 ,if(S i -1 ≠ T j -1 )and (S i = T j ) Case 4 (6)</ns0:formula><ns0:p>wt 5 , if(S i -1 = T j -1 )and (S i ≠ T j )</ns0:p><ns0:p>Case 5 wt 6 , if(S i -1 = T j )and (S i ≠ T j -1 )</ns0:p><ns0:p>Case 6 wt 7 , if(S i -1 ≠ T j )and (S i = T j -1 )</ns0:p><ns0:p>Case 7</ns0:p><ns0:p>ID n (T n i,j ) = { wt 8 , if(S i -1 = T j ) and (S i ≠ T j -1 )</ns0:p><ns0:p>Case 8 (7) wt 9 , if(S i -1 ≠ T j )and (S i = T j -1 )</ns0:p><ns0:p>Case 9</ns0:p><ns0:p>To increase the accuracy of identifying the names, there is a need to find the set of weights WT = {wt1; wt2; ...; wt9} of the distance scale of Soft-Bidist. For this, a randomized value is used <ns0:ref type='bibr' target='#b20'>(Levenshtein 1966)</ns0:ref> </ns0:p></ns0:div>
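To make Eqs. (5)-(7) concrete, the sketch below (our own illustrative Python, not the authors' implementation) evaluates the softened substitution cost for one cell of the DP table given the case weights wt1-wt7; because the listed cases overlap, the sketch checks them in the order they are listed, which is an assumption on our part. The insertion/deletion weights wt8 and wt9 of Eq. (7) would be handled analogously.

def soft_bigram_cost(s, t, i, j, wt):
    """Softened cost d_n for the bigrams (s[i-1], s[i]) and (t[j-1], t[j]), cf. Eq. (6).
    Indices are 1-based as in the paper; s and t are assumed to be padded so that
    position 0 holds the repeated first letter. wt maps case numbers 1..7 to weights."""
    eq_prev = s[i - 1] == t[j - 1]
    eq_curr = s[i] == t[j]
    eq_cross1 = s[i - 1] == t[j]
    eq_cross2 = s[i] == t[j - 1]
    if eq_prev and eq_curr:
        return wt[1]   # case 1: both bigram letters match
    if not (eq_prev or eq_curr or eq_cross1 or eq_cross2):
        return wt[2]   # case 2: nothing matches
    if eq_cross1 and eq_cross2:
        return wt[3]   # case 3: letters match crosswise (transposition)
    if not eq_prev and eq_curr:
        return wt[4]   # case 4
    if eq_prev and not eq_curr:
        return wt[5]   # case 5
    if eq_cross1 and not eq_cross2:
        return wt[6]   # case 6
    return wt[7]       # case 7

# Example: the N-DIST-like weighting mentioned in the text (0, 1, 1, 0.5, 0.5, 1, 1).
wt = {1: 0, 2: 1, 3: 1, 4: 0.5, 5: 0.5, 6: 1, 7: 1}
print(soft_bigram_cost("ssmith", "ssmyth", 4, 4, wt))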
<ns0:div><ns0:head>Definition of Soft-Bidist Distance</ns0:head><ns0:p>The cases are weighted as symbols wt1, wt2, wt3, wt4, wt5, wt6, wt7, wt8, and wt9. These weights are depend on Table <ns0:ref type='table' target='#tab_2'>1 and Table 2</ns0:ref>, which are used to adapt to the operational environment and get highly accurate results in various situations. Therefore, Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> contains several different weights. After changing the default values of [ 0, 1, 1, 0, 1, 1, 1, 1 and 1] with wt1, wt2, wt3, wt4, wt5, wt6, wt7, wt8, and wt9 for all cases respectively, the new weights achieve results similar to that obtained by LD algorithm. Again, other default values have been examined [0, 1, 0, 0, 1, 1, 1, 1 and 1] with wt1, wt2, wt3, wt4, wt5, wt6, wt7, wt8 and wt9 for all cases respectively, the new PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science weights achieve results similar to that obtained by the DLD algorithm. Finally, other default values of [ 0, 1, 1, 0.5, 0.5, 1, 1, 1 and 1] for wt1, wt2, wt3, wt4, wt5, wt6, wt7, wt8, and wt9 for all cases respectively, the new weights achieves results similar to that obtained by the N-DIST algorithm. Based on the previous weight values, new weights were added to Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. The same data preparation used in (S Al-Hagree et al. 2019) is used in this research to be compared with the Soft-Bidist algorithm.</ns0:p><ns0:p>After defining the default values of [ 0, 1, 1, 0.2, 0.2, 1, 1, 1 and 1] for wt1, wt2, wt3, wt4, wt5, wt6, wt7, wt8, and wt9 for all cases respectively, the proposed algorithm appears to have achieved high accurate results. Generally, it is not easy to provide accurate weights between pair source and target. In order to gain proper values for weights wt1, wt2, wt3, wt4, wt5, wt6, wt7, wt8, and wt9, the experiments with different weights for Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> and Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> of dataset 1 (S Al-Hagree et al. <ns0:ref type='formula'>2019</ns0:ref>) should be repeated. The results are presented in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. The experiments are repeated on dataset 2 (Ahmed and Nürnberger 2009)(Al-Sanabani and Al-Hagree 2015) for the Soft-Bidist algorithms. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> shows the result of this experiment. It can be noticed that the Soft-Bidist Algorithm functions better than the DLD, LD and N-DIST algorithms particularly being compared with names transposition such as the names that are shown in rows 3 and 4. Unlike DLD, LD, and N-DIST, the Soft-Bidist algorithm is sensitive to replacement as shown in rows 6 and 7. The Soft-Bidist Algorithm computes recurring letters, detection of errors, and deletion in a more proficient manner than DLD, LD, and N-DIST as they appear in rows 5, 8, 9, 10, 11, 12, 13, and 14. The Soft-Bidist algorithm exhibits a number of advantages over the DLD, LD, and N-DIST algorithms as aforementioned. Therefore, the Soft-Bidist algorithm functions well and gives a better accuracy compared with the DLD, LD, and N-DIST algorithms for all pairs in dataset 2 as appears in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. Furthermore, more experiments are implemented with various datasets to prove the evidence of the ability of the Soft-Bidist algorithm. 
Ten datasets are chosen and implemented on the DLD, LD, N-DIST, MDLD, and Soft-Bidist algorithms as appears in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. That demonstrates the evidence and ability of the Soft-Bidist algorithm in name matching.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>, the Soft-Bidist algorithm gets 93% and 90% while DLD, LD, N-DIST, and MDLD algorithms get 88%, 88%, 86%, and 89%, respectively. Therefore, the Soft-Bidist algorithm gives more accurate results than the DLD, LD, N-DIST, and MDLD algorithms for all datasets, because LD, DLD, N-DIST, and MDLD algorithms have not considered the transposition operations of Latin-based language especially the English language. </ns0:p></ns0:div>
<ns0:div><ns0:head>Comparative Study for Soft-Bidist Algorithm and compared algorithms</ns0:head><ns0:p>The dataset used for comparison in this section has been extracted manually from the book of <ns0:ref type='bibr' target='#b11'>(Christen 2012)</ns0:ref>. To clarify the way that string comparison functions approximate various similarity estimations when used for similar strings. Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> gives sample results when given names and surnames are compared for the Soft-Bidist algorithm and compared algorithms as well. The highest similarity is shown in bold, while the lowest is shown in italics. The similarity values in Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> are calculated based on chosen name pairs. Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> reflects how different string comparison functions produce various similarity approximates for the same name pairs. According to the given results, there are significant differences in the similarities approximated on the same pair. These functions have various characteristics concerning the average and the spread of the value of similarity. Methods as Winkler, Jaro, the compression-based comparison operates, and Soft-Bidist Algorithm gives the highest mean of similarity values. Whereas, the edit distance (ED), the longest common substring (LCS) comparison, and the q-gram ('n-gram') based functions <ns0:ref type='bibr' target='#b26'>(Ukkonen 1992</ns0:ref>) result in a much lower mean in the similarity values as can be seen in Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>The Estimated Measure</ns0:head><ns0:p>The estimated measure is using the f-measure which is also called f-score. The name matching quality has proven to be effective (P Christen 2006) <ns0:ref type='bibr'>(Christen, 2006;</ns0:ref><ns0:ref type='bibr'>Olson et al.,2008;</ns0:ref><ns0:ref type='bibr'>Kolomvatsos et al.,2013)</ns0:ref>, which is based on precision and recall. These metrics are used for classification tasks. They compare the predicted class of an item with the actual class, as shown in Table <ns0:ref type='table'>7</ns0:ref>. Based on Table <ns0:ref type='table'>7 and following</ns0:ref> Moreover, the F-measure is defined as the weighted combination of precision and recall. The Fmeasure is defined by:</ns0:p><ns0:formula xml:id='formula_8'>F -Measure = 2.Precision. Recall Precision + Recall (10)</ns0:formula><ns0:p>Table <ns0:ref type='table'>7</ns0:ref>: Correspondence between the predicted and the actual classes.</ns0:p><ns0:p>Since f-measure is an accuracy measure between 0 and 1, the higher the values, the better and more accurate are the results. The experiments can be seen in Table <ns0:ref type='table' target='#tab_7'>8</ns0:ref>, the mean of f-measures achieved by the proposed Soft-Bidist algorithm on all instances for the used dataset and the threshold is 0.94, which outperforms the other algorithms. Best results shown boldface and worst results underlined. The thresholds are 0.90, 0.85, 0.80, 0.75, 0.70 and 0.65 of all datasets tested (three English datasets, one Portuguese dataset, three species datasets, three genera datasets, and one Arabic dataset). Manuscript to be reviewed Computer Science match of 0.65, 0.70, 0.75, 0.80, 0.85and 0.90 for LD, DLD, N-DIST, MDLD and Soft-Bidist, respectively <ns0:ref type='bibr' target='#b0'>(Abdulhayoglu et al., 2016.)</ns0:ref>. Table <ns0:ref type='table' target='#tab_17'>9</ns0:ref> shows F-measure vs. Threshold curves for dataset 5 (Portuguese 120 pairs).</ns0:p><ns0:p>Table 9: F1-scores of different algorithms, thresholds and similarity calculation.</ns0:p><ns0:p>Repeating the previous experiment has been carried based on all Datasets Table <ns0:ref type='table' target='#tab_20'>10</ns0:ref>.the proposed algorithms Soft-Bidist (0,1,0,0.2,0.2,1,1,1 and 1) and Soft-Bidist (0,1,0,0.2,0.2,1,1,0.5 and 0.5) gives more accurate results than the algorithms LD,DLD,N-DIS and MDLD for all datasets as shown in Table <ns0:ref type='table' target='#tab_20'>10</ns0:ref>. The mean of f-measures on all datasets as can be seen in Table <ns0:ref type='table' target='#tab_20'>10</ns0:ref>, which equals 0.97, says that accuracy is almost high and reasonable to trust the results. Best results have shown boldface and worst results underlined. The thresholds are 0.90, 0.85, 0.80, 0.75, 0.70 and 0.65 of all datasets tested (three English datasets, one Portuguese dataset, three species datasets, three genera datasets, and one Arabic dataset).</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 10: The results of F-measure mean values</ns0:head></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In this research, Soft-Bidist is proposed where it used a new methodology for improving namematching accuracy. Soft-Bidist algorithm handles the transposition, deletion, substitution, and insertion operations in a new way. These operations are dealt with differently, considering its different states of the name matching to enhance the matching performance. Furthermore, different weights were assigned for each operation, which in turn enhanced the whole matching process. In comparison with other algorithms from the literature, the results of the experiments prove that the Soft-Bidist outperformed compared algorithms significantly. For future studies, it is suggested to explore the evolutionary algorithms to get the most proper weights for the soft calculation case, Genetic Algorithm (GA) for instance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Funding</ns0:head><ns0:p>Deanship of Scientific Research, Qassim University funding the publication of this project. Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>: The various weights for Soft-Bidist that yelled similar results to other algorithms from literature</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed Weights</ns0:head><ns0:p>Results similar to wt 1 wt 2 wt 3 wt 4 wt 5 wt 6 wt 7 wt 8 wt 9 (LD). 0, 1,</ns0:p><ns0:formula xml:id='formula_9'>N-DIST is n=2 'BI' 0, 1, 1, 0.5, 0.5, 1, 1, 1, 1, 1 Table 2:<ns0:label>1, 0, 1, 1, 1, 1, 1, (DLD). 0, 1, 0, 0, 1, 1, 1, 1, 1,</ns0:label></ns0:formula><ns0:p>The randomize of weights for Soft-Bidist algorithm 2 No Weights for Soft-Bidist wt 1 , wt 2 , wt 3 , wt 4 , wt 5 , wt 6 , wt 7 , wt 8 wt 9 Minimum. 0 1 0 0, 0.5, 1, 1, 1 1 Average. 0 1 0.7, 0.2, 0.8, 1, 1, 1 1 Maximum (Cases 8, 9 is 0.5). 0 1 0, 0, 0.5, 1, 1, 0.5 0.5 Average (Cases 8, 9 is 0.5). 0 1 0.7, 0.2, 0.8, 1, 1, 0.5 0.5</ns0:p><ns0:formula xml:id='formula_10'>0 1 1, 0, 0.5, 1, 1, 1 1 0, 1, 0, 0, 0.5, 1, 1, 1 1 0, 1, 0, 0.2, 0.2, 1, 1, 1 1 0, 1, 0, 0.1, 0.1, 1, 1, 1 1 0, 1, 0, 0, 0.2, 1, 1, 1 1 0,</ns0:formula><ns0:p>1, 1, 0, 0.5, 1, 1, 0.5 0.5 0, 1, 0.5, 0, 0.5, 1, 1, 0.5 0.5 0, 1, 0, 0.1, 0.1, 1, 1, 0.5 0.5 0, 1, 0.5, 0, 0.1, 1, 1, 0.5 0.5 0, 1, 0, 0, 0.2, 1, 1, 0.5 0.5 Randomize weights 0,</ns0:p><ns0:p>.5 The applied in this paper. 0, 1, 0, 0.2, 0.2, 1, 1, 0.5 0.5 1 Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>: The results with different weights for Soft-Bidist No wt 1 , wt 2 , wt 3 , wt 4 , wt 5 , wt 6 , wt 7 , wt 8 and wt 9 Average (percentage similarity) 1 0,1,1,0,1,1,1,1 and 1 0.83 2 0,1,0,0,1,1,1,1 and 1 0.87 3 0,1,1,0.5,0.5,1,1,1 and 1 0.82 4 0,1,0,0.5,0.5,1,1,1 and 1 0.89 5 0,1,1,0,0.5,1,1,1 and 1 0.87 6 0,1,0,0,0.5,1,1,1 and 1 0.91 7 0,1,0,0.2,0.2,1,1,1 and 1 0.91 8 0,1,0,0.1,0.1,1,1,1 and 1 0.93 9 0,1,0,0,0.2,1,1,1 and 1 0.93 0,1,1,0,0.5,1,1,0.5 and 0.5 0.89 0,1,0.5,0,0.5,1,1,0.5 and 0.5 0.91 0,1,0,0,0.5,1,1,0.5 and 0.5 0.93 0,1,0.5,0,0.5,1,1,0.5 and 0.5 0.88 0,1,0,0.2,0.2,1,10.5 and 0.5 0.94 0,1,0,0.1,0.1,1,1,0.5 and 0.5 0.96 0,1,0.5,0,0.,1,1,0.5 and 0.5 0.95 0,1,0,0,0.2,1,1,0.5 and 0.5 0.96 0,1,0,0,0.1,1,1,0.5 and 0.5 0.97 2 <ns0:ref type='table' target='#tab_11'>(0,1,0,0. 5,0.5,1,1 ,1 and 1)</ns0:ref> Soft-Bidist (0,1,0,0.2, 0.2,1,1,0. 5 and 0.5)</ns0:p><ns0:formula xml:id='formula_12'>1</ns0:formula><ns0:p>No. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Source</ns0:head><ns0:p>Computer Science The mean similarity of LD, DLD, N-DIST, MDLD and Soft-Bidist algorithms with different dataset.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>DLD LD</ns0:head><ns0:p>N-DIST MDLD Soft-Bidist (0,1,0,0.5,0.5, 1,1,1 and 1)</ns0:p></ns0:div>
<ns0:div><ns0:head>Soft-Bidist</ns0:head><ns0:p>(0,1,0,0.2,0.2,1 ,1,0.5 and 0.5) Manuscript to be reviewed</ns0:p><ns0:p>Computer Science syllable alignment distance 0.66 MDLD 0.72 0,1,1,0,1,1,1,1 and 1(Sof-Bidist) 0.70 0,1,0,0,1,1,1,1 and 1 (Sof-Bidist) 0.72 0,1,1,0.5,0.5,1,1,1 and 1 (Sof-Bidist) 0.68 0,1,0,0.5,0.5,1,1,1 and 1 (Sof-Bidist) 0.77 0,1,0,0.2,0.2,1,1,1 and 1 (Sof-Bidist) 0.78 0,1,0,0.1,0.1, 0.5, 0.5, 0.5 and 0.5 (Sof-Bidist) 0.83 0,1,0,0,0.1, 0.5, 0.5, 0.5 and 0.5 (Sof-Bidist) 0.85 0,1,0,0,0.1, 0.2, 0.2, 0.2 and 0.2 (Sof-Bidist) 0.88 3 1 <ns0:ref type='table' target='#tab_11'>(0,1,0,0.5,0.5, 1,1,1 and 1)</ns0:ref> Soft-Bidist (0,1,0,0.2,0. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>(Kolomvatsos et al., 2013), precision and recall are defined as:</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b2'>(S Al-Hagree et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Salah Al-Hagree et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Earley 1969;</ns0:ref><ns0:ref type='bibr' target='#b18'>Kondrak 2005;</ns0:ref><ns0:ref type='bibr' target='#b22'>Millán-Hernández et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Rees 2014</ns0:ref>).</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The various weights for Soft-Bidist that yelled similar results to other algorithms from the literature</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The Randomize of weights for Soft-Bidist algorithm</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The Experimental Results</ns0:cell></ns0:row></ns0:table><ns0:note>This section presents the experimental results that are carried out in this research. The objective of these experiments is to assess the Soft-Bidist algorithm compared with other algorithms from the literature.. Due to the absence of standard datasets for name matching, different multilingual datasets (English, Arabic, Portuguese) is used in the experiments carried out in this research. These datasets are presented by (S Al-Hagree et al. 2019),<ns0:ref type='bibr' target='#b1'>(Ahmed and Nürnberger 2009)</ns0:ref>,<ns0:ref type='bibr' target='#b25'>(Rees 2014</ns0:ref>) and (Al-Sanabani and Al-Hagree 2015). Different spelling errors and typographical are included in these datasets. In our previous work, a modified algorithm was applied to drug names in English documents, but for current work, the Soft-Bidist is applied to the different datasets deals with personal names in Arabic, English and Portuguese. To our knowledge, there have been no previous reports of good performance on this combination of datasets with different languages.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The results with different weights for Soft-Bidist</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison between proposed algorithms.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The mean similarity of LD, DLD, N-DIST, MDLD and Soft-Bidist algorithms with a different dataset.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>The average similarities for proposed weights and compared methods presented at<ns0:ref type='bibr' target='#b11'>(Christen 2012)</ns0:ref> </ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The results of average f-measure values Table9presents the F1-scores for different scenarios. For the dataset 5 (Portuguese 120 pairs), using different Edit Distance. The best results were retrieved with the threshold values for a correct</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison between proposed algorithms and compared algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Proposed Algorithms</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The mean similarity of LD, DLD, N-DIST, MDLD and Soft-Bidist algorithms with different dataset.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 6 (on next page)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Similarities of calculated and string pairs</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>The average similarities for proposed weights and compared methods presented at</ns0:figDesc><ns0:table><ns0:row><ns0:cell>No. Algorithms</ns0:cell></ns0:row></ns0:table><ns0:note>2<ns0:ref type='bibr' target='#b11'>(Christen 2012)</ns0:ref> </ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The results of average f-measure values</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>Compared Algorithm</ns0:cell><ns0:cell>Proposed Algorithm</ns0:cell></ns0:row><ns0:row><ns0:cell>LD</ns0:cell><ns0:cell>DLD</ns0:cell><ns0:cell>DIST N-</ns0:cell><ns0:cell>MDLD</ns0:cell><ns0:cell>Soft-Bidist</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 9 (on next page)</ns0:head><ns0:label>9</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>F1-scores of different algorithms, thresholds and similarity calculation.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_19'><ns0:head>Table 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>F1-scores of different algorithms, thresholds and similarity calculation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Algorithms</ns0:cell><ns0:cell>Thresholds</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>75</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>85</ns0:cell><ns0:cell>90</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>1 LD</ns0:cell><ns0:cell /><ns0:cell>0.987</ns0:cell><ns0:cell cols='2'>0.961 0.938</ns0:cell><ns0:cell>0.889</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.273</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2 DLD</ns0:cell><ns0:cell /><ns0:cell>0.987</ns0:cell><ns0:cell cols='2'>0.961 0.938</ns0:cell><ns0:cell>0.894</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.273</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>3 N-DIST</ns0:cell><ns0:cell /><ns0:cell>0.966</ns0:cell><ns0:cell cols='2'>0.952 0.924</ns0:cell><ns0:cell>0.863</ns0:cell><ns0:cell>0.710</ns0:cell><ns0:cell>0.222</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>4 MDLD</ns0:cell><ns0:cell /><ns0:cell>0.987</ns0:cell><ns0:cell cols='2'>0.961 0.938</ns0:cell><ns0:cell>0.894</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.273</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='2'>0,1,0,0.2,0.2,1,1,1 and 1 (Soft-Bidist)</ns0:cell><ns0:cell>0.987</ns0:cell><ns0:cell cols='2'>0.970 0.966</ns0:cell><ns0:cell>0.938</ns0:cell><ns0:cell>0.909</ns0:cell><ns0:cell>0.794</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_20'><ns0:head>Table 10 (on next page)</ns0:head><ns0:label>10</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_21'><ns0:head>Table 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>F-measure mean values (best results shown boldface and worst results underlined) with threshold 0.90, 0.85, 0.80, , 0.75, 0.70 and 0.65 , of all datasets tested (3 for English , 1 for Portuguese,3 for species, 3 for genera ,1 for Arabic ).</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021) Manuscript to be reviewed Computer Science 1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_22'><ns0:head>Table 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>The results of F-measure mean values</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>Compared Algorithm</ns0:cell><ns0:cell cols='2'>Proposed Algorithm</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Soft-Bidist</ns0:cell><ns0:cell>Soft-Bidist</ns0:cell></ns0:row><ns0:row><ns0:cell>Datasets</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>LD</ns0:cell><ns0:cell cols='3'>DLD N-DIST MDLD</ns0:cell><ns0:cell>(0,1,0,0.2,0 .2,1,1,0.5 and 0.5)</ns0:cell><ns0:cell>(0,1,0,0.2,0.2, 1,1,0.5 and 0.5)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>Sim % Sim % Sim % Sim %</ns0:cell><ns0:cell>Sim %</ns0:cell><ns0:cell>Sim %</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 3 ( English 60 pairs)</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 4 ( English 4013 pairs)</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>(</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Portuguese</ns0:cell><ns0:cell /><ns0:cell>120</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.95</ns0:cell></ns0:row><ns0:row><ns0:cell>pairs)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 6 'CAAB' ( 641 pairs)</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 7 'Dalcin</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>name pairs' (171</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>pairs)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>'CAABWEB' (2047</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>pairs)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 9 'GRIN genera' (189 pairs)</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 10 'CAAB Genera' (115 
pairs)</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.88</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell /><ns0:cell>11</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>'CAABWEB Genera'</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>(853 pairs)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Dataset 1 'Arabic name (600 pairs)</ns0:cell><ns0:cell>0.66</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>F-MEASURE MEAN 0.87</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://www.baeldung.com/cs/levenshtein-distance-computation PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)Manuscript to be reviewed</ns0:note>
<ns0:note place='foot' n='2'>https://confluence.csiro.au/public/taxamatch/the-mdld-modified-damerau-levenshtein-distance-algorithm PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55437:1:1:NEW 10 Feb 2021)</ns0:note>
</ns0:body>
" | "Original Manuscript ID: CS-2020:11:55437:0:1: REVIEW
Original Article Title: “Soft Bigram Distance for Names Matching”
To: PeerJ Editors and Reviewers
Re: Response to reviewers
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the
reviewers’ comments.
I am uploading (a) our point-by-point response to the comments (below) (response to reviewers),
(b) an updated manuscript with yellow highlighting indicating changes, and (c) a clean updated
manuscript without highlights (PDF main document).
Best regards,
Dr. Mohammed Hadwan
Reviewer 1 (René Arnulfo García-Hernández)
Basic reporting
Reviewer#1, Concern # 1: The aligment of the formulas must be improved
Author response: we updated and aligned all of the formulas.
Author action: Done.
Validity of the findings
Reviewer#1, Concern #2 : In related work, drug names are used to explain some names-matching
measures. However, the Soft-Bisim measure is not used in the experimentation.
Author response: Soft-Bisim is proposed by (Millán-Hernández et al. 2019), their paper title is
'Soft Bigram Similarity to Identify Confusable Drug Names'. The drug names are added in the
related work to cover the literature related to the work in this paper
Author action: Done.
Reviewer#1, Concern # 3: The dataset is interesting because it has different languages and
domains, but the experimentation and conclusion do not provide more information about it.
Author response: More information about the dataset added into the manuscript
Author action: Comment considered and paper updated.
Reviewer 2 (Christopher Brew)
Basic reporting
Reviewer#2, Concern # 1: This paper is exceptionally difficult to understand.
Author response: paper updated and we try our best to make it clear as much as we could in order
to make the paper easy to understand.
Author action: Done.
Reviewer#2, Concern # 2: The recurrence relation at 78 differs, for no obvious reason, from the
standard one found in most literature, This should be:
Lev(0, 0) = 0.0
Lev(i, 0) = Lev(i-1,0) + w['del'] when i > 0
Lev(0, j). = Lev(0,j-1) + w['ins'] when j > 0
Lev(i, j) = MIN(Lev(i-1, j-1] + (w['sub'] if x != y else 0),
Lev(i-1, j) + w['del'],
Lev[i ,j-1[ + w['ins'] ) when i > 0 and j > 0
In other words, the base cases of the first cell, the rest of the first column and the first row need to
be clearly defined, and the main minimization needs to be defined with precision.
Author response: we have modified recurrence relation according to literature standard. Please
refer to the manuscript.
Author action: Done.
Reviewer#2, Concern # 3: The definition at 86 is similar, and the authors correctly state that the
only difference is the addition of an extra path to the minimization, to allow transposition to have a
different score from the combination of an insertion and a deletion.
Author response: we have updated the manuscript by double check and update all the equations in
the manuscript.
Author action: Done.
Reviewer#2, Concern # 4: The mentions of Crt (not in the formula, but introduced at 99) are
mysterious. Not sure what is meant here. The sentence starting at 90 is hard to understand: I can't
make out what the N is in the claim that something is O(N^3) and the reference to an Oracle
database is extremely unexpected.
Author response: we have removed the unclear parts in this section and updated this part
accordingly.
Author action: Done.
Reviewer#2, Concern # 5: I can't see the relationship between the formulae at 100 and 101 to the
much clearer algorithms and presentation in Kondrak 2005 (which I accessed via semantic scholar
at
@inproceedings{Kondrak2005NGramSA,
title={N-Gram Similarity and Distance},
author={Grzegorz Kondrak},
booktitle={SPIRE},
year={2005}
})
In particular T_{i,j}^n is undefined.
Author response: I have updated the reference for these formulas to be (Abdulhayoglu et al. 2016),
where these formulas are modified based on the work presented by knodark 2005.
Author action: Done.
Reviewer#2, Concern # 6: The paper needs to provide a more complete and self-contained
explanation of what the proposed algorithm is. Even after re-reading the literature references, I am
still not sure, and I think other readers will have the same problem.
Examining the code published on Github did not help me.
Author response: we have tried our best in this updated version to make the paper as clear as we
could. More explaination and analysis added to the manuscript.
Author action: Done.
Reviewer#2, Concern # 7: . If asked to guess, what I think is happening is that the authors are
adjusting some of the weights in the definition of BI-DIST and combining it with some distance
measure.
The language of the paper is mostly professional, but leaves many important details unstated.
The literature references are appropriate. The summaries of what is in the literature are mostly
accurate, but again frequently unclear.
Author response: we have tried our best to improve this version of the manuscript either in the
introduction, literature or other section to make it better and clear to the readers.
Author action: Done.
Reviewer#2, Concern # 8: The hypotheses of the paper are not clearly stated, because the
experiments are not clearly described in the paper. It is possible that literature references may
contain details of data preparation and experimental hypotheses, but this should not be necessary.
The paper should contain sufficient detail that the reader can understand what the hypotheses are
and against what data they are being tested. This material should be comprehensible even before the
reader goes to the related literature. As written, the paper is insufficiently self-contained
Author response: The current version of the manuscript is much better than the previous one. Extra
information and sections added to the body of the paper for example in Comparative Study for Soft-Bidist
Algorithm we have added more measurements at this section to make clear.
Author action: Done.
Reviewer#2, Concern # 9: The formal presentation of algorithms is confusing, with undefined
symbols and unfamiliar notation. The authors are urged to go back to the literature that they rely on,
then use notations and algorithm presentations that are more clearly similar to those in the prior
work.
Author response: The presentation of the paper have been improved and discussion section added
with extra information to clarify the unclear parts.
Author action: Done.
Reviewer#2, Concern # 10: Experiments are reported at 153 and following. It is not clear from the
paper which datasets are used and what metrics are being reported. This should be more selfcontained. The authors need to include brief descriptions of the datasets used, the metrics that are
used to evaluate performance and the way in which the reports of performance relate to identifiable
knowledge gaps. For example, such a description could say 'In previous work, a similar algorithm
was applied to drug names in English documents, but our new dataset deals with personal names in
Urdu. To our knowledge, there have been no previous reports of good performance on this
combination of language and task. Our system achieves a balanced F1 of 0.93,'
Author response: dataset information, metric used for evaluation are added to the paper. You can
refer to table 5, where the language, type of data and other related information are added. In
addition to a pragaraph added as follows:
'Due to the absence of standard datasets for name matching, different multilingual datasets
(English, Arabic, and Portuguese) is used in the experiments that carried out in this research. These
datasets are presented by (S Al-Hagree et al. 2019), (Ahmed and Nürnberger 2009), (Rees 2014) and
(Al-Sanabani and Al-Hagree 2015). For more details about used datasets, referee to table 5. Different
spilling errors and typographical are included in these datasets.
The same data preparation used in (S Al-Hagree et al. 2019) is used in this research to be compared
with the Soft-Bidist algorithm'
Author action: done.
Reviewer#2, Concern # 11: The investigation is clearly performed to a high ethical standard. The
technical quality of the investigation is unclear, because insufficient details are given on the
datasets, the partitioning of the data into training, development and test partitions, or of the precise
algorithms used.
Author response: we added whatever we could to make the paper as clear as possible. We believe
that missing things added in this version.
Author action: Done.
Reviewer#2, Concern # 12: The underlying data has not been presented in an orderly fashion.
Robustness, statistical soundness and appropriate controls are difficult to assess. The claims of
superiority in 153-218 are too vague, and need to be supported by clearly reported tests of
significant difference. The summary statement of improvement at 225-226 is not warranted by any
clear evidence in the paper.
Author response: Table 7,8,9 and 10 added to support our claim for the superiority of our
proposed method.
Author action: Done.
Reviewer#2, Concern # 13: Overall, this paper reports work that could be interesting, but the
presentation is so unclear that it needs a complete rewrite before it is ready for publication.
Author response: we have rewrite most of the part in the new version.
Author action: Done.
Reviewer#2, Concern # 14: I believe that the idea of tuning the weights in a suitable string
distance or string similarity function is fundamentally sound, but the suggestion at 63-65 that the
work already uses an evolutionary algorithm seems to conflict with the rest of the paper, where
what is described is an informal search for a good set of weights. It is fine to say (at 227) that a GA
might help, but on its own the comment at 63-65 seems either premature or insufficiently supported
by the remainder of the paper.
Author response: This section is modified and the mention for the evolutionary algorithm is
removed as we intend to investigate it for future work.
Author action: Done.
" | Here is a paper. Please give your review comments after reading it. |
47 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The evolution of electronic media is a mixed blessing. Owing to easy access, low cost, and fast reach, people increasingly seek out and consume news from online social networks. At the same time, the growing acceptance of social media as a news source facilitates the spread of fake news. This is a menacing problem that causes disputes and endangers societal stability and harmony. The spread of fake news has gained attention from researchers because of its vicious nature. The proliferation of misinformation in all media, from the internet to cable news, paid advertising, and local news outlets, has made it essential for people to identify misinformation and sort through the facts. Researchers are trying to analyze the credibility of information and curtail false information on such platforms. Credibility is the believability of the piece of information at hand. Analyzing the credibility of fake news is challenging due to the intent of its creation and the polychromatic nature of the news. In this work, we propose a model for detecting fake news. Our method investigates the content of the news at an early stage, i.e., when the news is published but has not yet been disseminated through social media. Our work interprets the content through automatic feature extraction and the relevance of the text pieces. In summary, we introduce stance as one of the features along with the content of the article and employ the pre-trained contextualized word embeddings of BERT to obtain state-of-the-art results for fake news detection. Experiments conducted on a real-world dataset indicate that our model outperforms previous work and enables fake news detection with an accuracy of 95.32%.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the information age, social networking sites have become a hotbed for spreading misinformation. Misinformation <ns0:ref type='bibr' target='#b48'>(Soll, 2016)</ns0:ref> as a phenomenon is as old as true or factual information. The scale and scope of misinformation, however, have assumed alarming levels because social media platforms and networks can spread misinformation rapidly. With the substantial increase in the spread of misinformation, adverse impacts on individuals and society at large have also become significant <ns0:ref type='bibr' target='#b15'>(Figueira & Oliveira, 2017)</ns0:ref>. In this work, we propose a framework for identifying misinformation by employing state-of-the-art artificial intelligence algorithms. The first step in the identification of misinformation is to understand what constitutes it. Fake news, misinformation, and disinformation are all forms of non-factual information that vary in the intent of the creator or spreader. The Ethical Journalism Network 1 (EJN) defines fake news as 'information deliberately fabricated and published to deceive and mislead others into believing falsehoods or doubting verifiable facts.' Misinformation, disinformation, and mal-information are specific subsets of information disorder. Disinformation is false and is deliberately designed to harm an individual, organization, social group, or country. Mal-information is reality-based and is used to harm a person, social group, organization, or country 2 (UNESCO). Misinformation is not created to cause harm; it is false information disseminated by people who believe it to be true. In comparison, disinformation is false information intentionally and knowingly propagated by individuals to gain political, financial, social, or psychological benefits. Disinformation via advertising can be used to discredit or falsely credit a product or a specific manufacturer for financial gain 3 (Banner flow). In the political domain, disinformation can manifest as false information used to discredit opposition parties or as false narratives created to aid one specific party or candidate <ns0:ref type='bibr' target='#b0'>(Allcott & Gentzkow, 2017)</ns0:ref>. Socially, one typical example is the spread of medical myths that are prevalent in specific communities and are passed on without much thought <ns0:ref type='bibr' target='#b54'>(Waszak, Kasprzycka-Waszak, & Kubanek, 2018)</ns0:ref>. Misinformation, or information disorder, usually takes the form of false or out-of-context information, photographs, or videos that are intentionally created and distributed. Sometimes they are taken out of context to mislead, deceive, confuse, or misinform the reader <ns0:ref type='bibr' target='#b37'>(Pérez-Rosas, Kleinberg, Lefevre, & Mihalcea, 2017)</ns0:ref>. Although some news is created for fun and circulated as part of a joke, it has seriously impacted society. Researchers <ns0:ref type='bibr' target='#b16'>(Friggeri, Adamic, Eckles, & Cheng, 2014)</ns0:ref> surveyed different aspects of false information and addressed the question of what can be termed false: the primary points considered are who spreads the false information, why readers believe it, and what impact this false news creates. The effects of dis- and misinformation on society can prove detrimental.
Misinformation has had a serious impact on various activities, such as affecting the stock market <ns0:ref type='bibr' target='#b4'>(Bollen, Mao, & Zeng, 2011)</ns0:ref>, hampering responses during natural disasters <ns0:ref type='bibr' target='#b20'>(Gupta, Lamba, Kumaraguru, & Joshi, 2013)</ns0:ref>, instigating terrorist activity <ns0:ref type='bibr' target='#b49'>(Starbird, Maddock, Orand, Achterman, & Mason, 2014)</ns0:ref>, kindling cyber-troop activity <ns0:ref type='bibr' target='#b5'>(Bradshaw & Howard, 2018)</ns0:ref>, hampering decision-making during elections (News18), and, during the Covid-19 pandemic, creating panic, bringing about economic crises (herald) and inciting religion-based attacks <ns0:ref type='bibr'>(Indianexpress)</ns0:ref>.</ns0:p><ns0:p>Looking at the huge outburst of fake news around the coronavirus, the World Health Organization (WHO) announced that the new coronavirus pandemic was accompanied by a 'misinformation infodemic'. Various aspects of misinformation and its identification using AI tools for COVID-19 data are reported in a recent article <ns0:ref type='bibr' target='#b25'>(Jyoti Choudrie, 2020)</ns0:ref>. Fact checkers report fake news ranging from the general elections and the Pulwama attack to the scrapping of Article 370 and the ongoing protests against the Citizenship Amendment Act, all of which triggered a wide distribution of misinformation across social media platforms (Economictimes). Misinformation affects communities and their susceptibility in various ways, for instance through mob lynching and communal poisoning.</ns0:p><ns0:p>The dependence of the masses on social media for news has grown rapidly. It is reported that, among English-speaking news seekers in India, 84 percent rely on online news whereas 16 percent rely on the outpaced print media (Reuters). Urban and semi-urban teenagers are the primary consumers of social media news <ns0:ref type='bibr' target='#b37'>(Pérez-Rosas et al., 2017)</ns0:ref>. Due to such tremendous use of online platforms, the spread of information disorder is overwhelming and immense, causing harm to society. In 2014, the World Economic Forum declared misinformation one of the ten global risks (W.E. Forum, 2014). Governments have undertaken anti-misinformation efforts aimed at curbing the spread of unlawful content and misinformation, spanning laws, media literacy campaigns, government task forces, bills, platform agreements, and arrests (Poynter).</ns0:p><ns0:p>Among the available social media platforms, Facebook and WhatsApp are particularly widely used for news updates. As reported by Reuters, 75% use Facebook, 82% use WhatsApp, 26% use Instagram, and 18% use Twitter. Hence it becomes the responsibility of these platforms to help mitigate the spread of misinformation. Facebook 4 states that it is working on this in different ways; for example, since most false news is financially motivated, undermining the economic incentives may prove useful. The International Fact-Checking Network and individual fact-checkers are working hard to investigate the facts behind pieces of information that are likely to be fake. Several experiments have been carried out to assess the effect of hoaxes, false reviews, and fake news. To create a misinformation detection system, we need to consider various aspects of the available knowledge and the categorization of different features. Several researchers have carried out and reported such studies.
We present the literature in parts that concentrate on social and cognitive dimensions, categorization strategies, and AI-based detection systems that use different suitable algorithms.</ns0:p><ns0:p>Mis- and disinformation can easily be disseminated, wittingly or unwittingly, through all types of media. The ease of access to such quick information, without any validation, places the responsibility on the reader to decide the correctness of the information at hand. Correctness, trustworthiness, or credibility is the quality of a piece of information that makes it worthy of being trusted and believed. In the context of news, it encompasses the broader aspects of trustworthiness, reliability, dependability, integrity, and reputation. When people are unable to debunk false information and act on it, they make poor decisions that impact their lives. It is essential to check the credibility of the source and the author, check one's own biases, and check the date and supporting sources to determine reliability via comparison with reliable sources 5 (EMIC).</ns0:p></ns0:div>
<ns0:div><ns0:p>While performing credibility analysis, we first need to examine how misinformation and disinformation are represented, spread, understood, and acted upon. The role and motivation of an individual behind resharing the original content is an important aspect when devising a policy to curtail the spread and when developing technical solutions to mitigate it. The most powerful information disorder content is that which harms people by influencing their emotions. Since social platforms are designed to express emotions through likes, comments, and shares, all efforts towards fact-checking and debunking false information are ineffective, since the emotional aspect of sharing information is impossible to control. Detection and mitigation of information disorder are challenging due to the psychological aspects of the motivation for dissemination and the proliferation of misinformation. Two primary channels are employed for spreading misinformation: the echo chamber <ns0:ref type='bibr' target='#b44'>(Shu, Bernard, & Liu, 2019)</ns0:ref>, a situation in which beliefs are reinforced or enhanced by contact and repetition within a closed structure, and the filter bubble <ns0:ref type='bibr' target='#b44'>(Shu et al., 2019)</ns0:ref>, a state of intellectual isolation that can result from custom searches when a website algorithm selectively estimates what information a user wants to see based on user information such as location, past click history, and search behaviour. The concept of the filter bubble is used to target a specific group of people with specific misinformation. As per <ns0:ref type='bibr' target='#b29'>(Kumar & Geethakumari, 2014)</ns0:ref>, cognitive psychology plays an important role in the spread of misinformation.</ns0:p><ns0:p>As stated earlier, political, financial, and social factors motivate the creation of fake news items. These diverse angles, namely the dynamic and ubiquitous nature of information, the difficulty of verification, and homophily, prove to be some of the primary challenges in establishing the credibility of information.</ns0:p></ns0:div>
<ns0:div><ns0:head>Previous Work</ns0:head><ns0:p>Misinformation detection has been studied in different ways, starting with how it is created, how it spreads, and how it eventually affects the community. <ns0:ref type='bibr' target='#b45'>(Shu, Sliva, Wang, Tang, & Liu, 2017)</ns0:ref> survey the literature from two distinct phases: characterization and detection. Characterization is concerned with understanding the basic concepts and principles of fake news in traditional and social media, whereas detection covers data mining with feature extraction and model construction. In their paper, <ns0:ref type='bibr' target='#b45'>(Shu et al., 2017)</ns0:ref> present the characteristics of fake news on traditional and social media, which include psychological and social foundations as well as fake accounts and echo chamber creation on social media. The authors also put forward detection approaches based on news content and social context. Various approaches towards identifying fake social media news are proposed, including data orientation, feature orientation, model orientation, and application orientation. Depending on these approaches, multiple systems have been developed that concentrate on temporal features, psychology, or, for a data-oriented approach, the data itself. The most explored approach is feature orientation, which considers the content or the social context of the news. Depending on the dataset, the model is selected to be supervised, unsupervised, or semi-supervised <ns0:ref type='bibr' target='#b45'>(Shu et al., 2017)</ns0:ref>. Feature selection is an important step when approaching fake news detection. Features are broadly categorized into content features and social context features by <ns0:ref type='bibr' target='#b6'>(Cao et al., 2018)</ns0:ref>. Content features include lexical, syntactic, and topic features, whereas social context features include user features, propagation features, and temporal features. There is a vast body of work on detecting misinformation with various approaches; traditionally, classification methods used on handcrafted features were Decision Trees and Bayesian Networks <ns0:ref type='bibr' target='#b7'>(Castillo, Mendoza, & Poblete, 2011)</ns0:ref>, Random Forests and SVMs (Kwon, Cha, Jung, Chen, & Wang, 2013), and Logistic Regression <ns0:ref type='bibr' target='#b7'>(Castillo et al., 2011)</ns0:ref>. Features such as the author, context, and writing style <ns0:ref type='bibr' target='#b38'>(Potthast, Kiesel, Reinartz, Bevendorff, & Stein, 2017)</ns0:ref> of the news can help in identifying fake news, although writing style alone is not a good option. Linguistic signs may be used to identify language characteristics such as n-grams, punctuation, psycholinguistic characteristics, readability, etc. Classification based on the credibility of the person who liked an item is an approach taken in some cases <ns0:ref type='bibr' target='#b45'>(Shu et al., 2017)</ns0:ref>. Conventional machine learning techniques have often resulted in a high-dimensional representation of linguistic information, leading to the curse of dimensionality, where enormous sparse matrices need to be handled. This issue can be addressed with word embeddings, which give us low-dimensional distributed representations. Misinformation, specifically a news item, may consist of words, sentences, paragraphs, and images. To apply any AI technique to text, we first need to convert the input data into a representation that can be understood by the model we are designing.
There are different state-of-the-art representation techniques: one-hot encoding; word embeddings such as Continuous Bag of Words and Skip-gram <ns0:ref type='bibr' target='#b32'>(Mikolov, Chen, Corrado, & Dean, 2013)</ns0:ref>, which compute continuous vector representations from very large collections of terms; GloVe, global word representation vectors <ns0:ref type='bibr' target='#b36'>(Pennington, Socher, & Manning, 2014)</ns0:ref>, which trains on global corpus statistics using only the non-zero elements of a word-word co-occurrence matrix rather than the entire sparse matrix or individual context windows in a large corpus; BERT <ns0:ref type='bibr' target='#b10'>(Devlin, Chang, Lee, & Toutanova, 2018)</ns0:ref>, a bi-directional pre-training setup using the transformer encoder; and the OpenAI GPT pre-training model, which internally uses the transformer decoder. Pre-trained embeddings can be adapted to build a neural network-based fake news detection model. Text data is sequential data with dependencies between earlier and later parts of a sentence. Recurrent Neural Networks have been widely used to solve NLP problems; traditionally, the encoder-decoder architecture built from recurrent networks was a good option, where an input sequence is fed into the encoder to obtain a hidden representation, which is further fed to the decoder to produce the output sequence. One step towards fake news detection was stance detection <ns0:ref type='bibr' target='#b9'>(Davis & Proctor, 2017)</ns0:ref>, which involves estimating the relative perspective (or stance) of two texts on a subject, argument, or problem. This can help in identifying the authenticity of a news article based on whether the headline agrees with, disagrees with, or is unrelated to the body of the article. Recurrent Neural Networks have been used to capture the variation of contextual information <ns0:ref type='bibr' target='#b45'>(Shu et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b31'>(Ma et al., 2016)</ns0:ref>, and the CSI model composed of three modules <ns0:ref type='bibr' target='#b42'>(Ruchansky, Seo, & Liu, 2017)</ns0:ref> implements a Recurrent Neural Network for capturing the temporal pattern of user activity, learning the source characteristic based on user behavior, and classifying the article. Researchers have also investigated rumors, news that is circulated without confirmation or certainty as to the facts <ns0:ref type='bibr' target='#b11'>(DiFonzo & Bordia, 2007)</ns0:ref>. Rumor detection systems <ns0:ref type='bibr' target='#b6'>(Cao et al., 2018)</ns0:ref> are in operation for Facebook (which notifies users with a warning alert), Twitter (where a credibility rating is provided and users are allowed to give feedback on it), and Weibo (where users report fake tweets and elite users scrutinize and judge them). News articles usually contain a large amount of text, which makes the input sequence long. In such cases, older information gets washed out and the focus is scattered over the sequence, owing to a lack of explicit word alignment during decoding. This raised the need to solve these issues, and the attention mechanism has addressed them well. Different flavors of attention mechanisms have emerged depending on the use case. The first and most basic version, basic attention, extracts important elements from a sequence. Multi-dimensional attention captures the various types of interactions between different terms. Hierarchical attention extracts globally and locally important information.
Self-attention <ns0:ref type='bibr' target='#b51'>(Vaswani et al., 2017)</ns0:ref> captures the deep contextual information within a sequence. Memory-based attention discovers latent dependencies. Task-specific attention captures the important information specified by the task.</ns0:p><ns0:p>Singhania et al. implemented 3HAN, a hierarchical attention model <ns0:ref type='bibr' target='#b46'>(Singhania, Fernandez, & Rao, 2017)</ns0:ref> that has three layers for words, sentences, and the headline, each using bi-directional GRUs for encoding and attention. Wang et al. implemented an attention-based LSTM <ns0:ref type='bibr' target='#b53'>(Wang, Huang, Zhu, & Zhao, 2016)</ns0:ref> for aspect-level sentiment analysis that finds the aspects in a sentence and their polarity. Goldberg applied a novel design for NLP tasks that incorporates an attention-like mechanism in a Convolutional Network <ns0:ref type='bibr' target='#b18'>(Goldberg, 2016)</ns0:ref>. As a further enhancement, the deep attention model with RNNs given by (Chen, Li, Yin, & Zhang, 2018) learns selective temporal hidden representations of news items that bring together distinct features with a specific focus and generate hidden representations.</ns0:p><ns0:p>Convolutional Neural Networks were originally used in computer vision tasks, but recently they have gained popularity in Natural Language Processing tasks as well. CAMI <ns0:ref type='bibr' target='#b55'>(Yu, Liu, Wu, Wang, & Tan, 2017)</ns0:ref> tries to detect misinformation early; it does so by dividing events into phases and representing them using a paragraph vector <ns0:ref type='bibr' target='#b30'>(Le & Mikolov, 2014)</ns0:ref>. Automatic identification of fake news based on geometric deep learning (Monti et al., 2019) generalizes classical CNNs to graphs. FNDNet <ns0:ref type='bibr' target='#b26'>(Kaliyar, Goswami, Narang, & Sinha, 2020)</ns0:ref> is a deep convolutional neural network. The DMFN model <ns0:ref type='bibr' target='#b35'>(Nguyen, Do, Calderbank, & Deligiannis, 2019)</ns0:ref> captures dependencies among random variables using a Markov random field. A pattern-driven approach <ns0:ref type='bibr' target='#b56'>(Zhou & Zafarani, 2019)</ns0:ref> captures the relations between news spreaders and among the spreaders of a given news item. A mutual evaluation model <ns0:ref type='bibr' target='#b24'>(Ishida & Kuraya, 2018)</ns0:ref> dynamically builds a relational network model to identify credibility, taking into consideration the consistency of the content; without a dynamic relation network, a content-dependent model would lead to different scores for the same article, since different individuals have different perspectives. Several researchers have thus proposed various approaches, classifiers, and representation techniques for the detection of fake news, as discussed in this section. The reported accuracy for these models ranges from 85 to 90%. However, there is scope for improving the accuracy of fake news detection models.</ns0:p><ns0:p>From the above work, it can be observed that a number of researchers have carried out detailed studies identifying various linguistic features and using different combinations of features in classification tasks. Deep learning automatically learns features from data instead of relying on handcrafted features, which makes the task easier and can handle a huge amount of data.
We have used a deep learning approach for feature extraction and added a new feature that helps improve the understanding of the text under consideration, namely the stance, which estimates the relative perspective of two pieces of text. The major contribution over previous works lies in the addition of stance as a feature along with the state-of-the-art BERT model. We have considered the cosine distance between the vectors of the news title and the news body as the similarity measure. In the traditional literature, stance is defined in several ways: for example, stance is detected towards a particular topic <ns0:ref type='bibr' target='#b50'>(Sun, Wang, Zhu, & Zhou, 2018)</ns0:ref>, as agreement or disagreement towards a particular claim <ns0:ref type='bibr' target='#b33'>(Mohtarami et al., 2018)</ns0:ref>, and even as the attitude expressed in a text towards a target <ns0:ref type='bibr' target='#b1'>(Augenstein, Rocktäschel, Vlachos, & Bontcheva, 2016)</ns0:ref>. All of these use predefined categories such as negative, positive, neutral, agree, disagree, unrelated, for, or against. Our intention here is to find the relation or similarity between two text pieces, namely the title and the body of the article. Hence, we do not compute a score towards a particular topic but a measure of similarity between the title and the body of the news. The reason for this choice is that, for unseen data, we do not already know which topic the article focuses on, and this makes our system more generalized. Such an approach can identify how close or far the title is from the text in the body of the article under consideration. Due to this additional feature, our training data is better equipped for more accurate predictions. Also, with the help of the state-of-the-art language model BERT, our model captures the semantics of the text efficiently with a multi-layer bidirectional transformer encoder, which helps learn deep bi-directional representations of the text (article); fine-tuning it on our training data allows us to classify an article as fake or real using the probability score our model assigns to it. In summary, the main contributions of this article are:</ns0:p><ns0:p>• Introducing stance as one of the features along with the content of the article to obtain state-of-the-art performance when detecting fake news using an AI model. • Developing a model that captures the semantics of information using the pre-trained contextualized word embeddings of BERT (a language modelling technique). • Experimentation and validation of the above approach on a benchmark dataset.</ns0:p><ns0:p>The remaining article is structured as follows: Section 2 outlines the methodology we have adopted to design our system, Section 3 describes the experimental setup and parameters we are using, Section 4 describes our findings and discussion of them, and Section 5 concludes our work.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>In this paper we aim to build a classifier that detects fake news with better accuracy than previously reported. We have experimented with multiple AI models and evaluated their accuracies. Our fake news detection model is based on content features that use pre-trained embeddings to better capture contextual characteristics. The complete process is explained in the subsections below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pre-processing</ns0:head><ns0:p>Raw text data is rarely clean. Because the data is extracted from different sources with different characteristics, text pre-processing is an important step to bring it onto common ground. Our pre-processing pipeline depends on the embeddings we use for the task: since most embeddings do not provide vectors for special characters and punctuation, we clean the data by removing such tokens. Our pre-processing step involves removing punctuation, special characters, and extra spaces, and lowercasing the text; a minimal sketch of this step is given after this section.</ns0:p><ns0:p>Architecture. Fig. <ns0:ref type='figure'>1</ns0:ref> depicts the complete pipeline of our model.</ns0:p><ns0:p>Dataset. Manual fact-checking is a tedious and lengthy task, and too few fact-checked articles exist to train deep learning models. Researchers have therefore collected news articles from websites they believe to be authentic for the real class and, similarly, articles for the fake class. We use the McIntire Fake and Real News Dataset 6 , a reasonably large and balanced dataset that contains both fake stories and true news. McIntire drew the fake class from part of Kaggle's fake news collection, while the real class comes from credible journalism organizations such as the New York Times, Wall Street Journal, Bloomberg, National Public Radio, and The Guardian. The dataset contains 6335 news articles, of which 3171 are genuine and 3164 are fake, so it is roughly 50-50 between real-labeled and fake-labeled articles (refer Fig. <ns0:ref type='figure'>2</ns0:ref>). In real-world applications the data may be skewed; however, for experimentation and dataset building we use a balanced dataset, which leads to a more robust and generalized model.</ns0:p></ns0:div>
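As a concrete illustration of the pre-processing step described above, the following minimal Python sketch removes punctuation and special characters, collapses extra spaces, and lowercases the text. The function name and the example string are illustrative assumptions, not the authors' exact code.

```python
# Minimal text-cleaning sketch for the pre-processing step described above.
import re
import string

def clean_text(text: str) -> str:
    text = text.lower()                                              # lowercase
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)   # drop punctuation
    text = re.sub(r"[^a-z0-9\s]", " ", text)                         # drop remaining special characters
    text = re.sub(r"\s+", " ", text).strip()                         # collapse extra whitespace
    return text

print(clean_text("BREAKING!!!  Fake   news?  -- Read more @ example.com"))
# -> "breaking fake news read more example com"
```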
<ns0:div><ns0:head>Data filtering and feature selection</ns0:head><ns0:p>Data filtering/cleaning is the task of preparing the data for encoding and training.</ns0:p><ns0:p>First, we remove extra spaces and punctuation from the text, as they do not play an important role in understanding the context. This module of our pipeline ends with feature selection for training: we perform stance calculation on the title and text to obtain a similarity score, which gives us an additional feature that makes our classifier perform better. We then encode the selected features.</ns0:p><ns0:p>Stance calculation. We calculate the stance by estimating the relative perspective of two pieces of text concerning a subject, argument, or issue. It is this score that is used as an additional feature to detect misinformation. Following the sensationalism detection work of <ns0:ref type='bibr' target='#b12'>(Dong, Yao, Wang, Benatallah, & Huang, 2019)</ns0:ref>, we assume that the similarity between the article body and the article headline can be correlated with the article's credibility. The similarity is captured by first embedding the article body and the article headline into the same space and then computing the cosine measure between them. To achieve this, we first tokenize each article's body and headline and then calculate the cosine measure for each body-headline pair. Cosine similarity is a good measure because it judges document similarity irrespective of document size: it captures the angle between the document vectors rather than their magnitude. Mathematically, it measures the cosine of the angle between two vectors projected in a multi-dimensional space:</ns0:p><ns0:formula xml:id='formula_0'>\cos\theta = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2} \; \sqrt{\sum_{i=1}^{n} b_i^2}}</ns0:formula><ns0:p>Here, a \cdot b = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \dots + a_n b_n is the dot product of the two vectors; a short code sketch follows this section.</ns0:p></ns0:div>
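The following short Python sketch illustrates the stance score defined by the cosine formula above, assuming the headline and body have already been embedded into the same vector space; the toy three-dimensional vectors are placeholders for real embeddings.

```python
# Illustrative sketch of the stance score: the cosine similarity between a
# vector for the headline and a vector for the article body.
import numpy as np

def stance_score(title_vec: np.ndarray, body_vec: np.ndarray) -> float:
    dot = float(np.dot(title_vec, body_vec))                          # a . b = sum(a_i * b_i)
    norm = float(np.linalg.norm(title_vec) * np.linalg.norm(body_vec))
    return dot / norm if norm else 0.0                                # cos(theta)

title_vec = np.array([0.2, 0.7, 0.1])   # placeholder headline embedding
body_vec = np.array([0.3, 0.6, 0.4])    # placeholder body embedding
print(round(stance_score(title_vec, body_vec), 3))
```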
<ns0:div><ns0:head>Feature Encoding</ns0:head><ns0:p>Any AI model requires its input data in numeric form. Several state-of-the-art representation techniques are available for such encoding tasks: one-hot encoding; word embeddings such as Continuous Bag of Words and Skip-gram; GloVe (Global Vectors for word representation); the OpenAI GPT pre-training model, which internally uses the transformer decoder; and BERT (Bidirectional Encoder Representations from Transformers), a bi-directional pre-training model that uses the transformer encoder. Our model uses Distil-BERT <ns0:ref type='bibr' target='#b43'>(Sanh, Debut, Chaumond, & Wolf, 2019)</ns0:ref>, a smaller version of the BERT language model with about half the total number of parameters of BERT base. In Distil-BERT, a triple loss combining language modeling, distillation, and cosine-distance losses is used to exploit the inductive biases gained during training by these types of models. We further fine-tune the model for our text classification task on our dataset; a short encoding sketch follows this section.</ns0:p></ns0:div>
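A minimal sketch of the encoding step with the Hugging Face transformers library is shown below, assuming the distilbert-base-uncased checkpoint and a maximum length of 512 as reported in the experimental setup; the variable names and the example text are illustrative assumptions.

```python
# Sketch of encoding articles with DistilBERT (Hugging Face transformers).
from transformers import DistilBertTokenizerFast, TFDistilBertModel

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
encoder = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

texts = ["Example headline. Example article body goes here."]
enc = tokenizer(texts, padding="max_length", truncation=True,
                max_length=512, return_tensors="tf")

# last_hidden_state has shape (batch, 512, 768); the vector at the [CLS]
# position can be fed to a downstream classifier.
outputs = encoder(enc["input_ids"], attention_mask=enc["attention_mask"])
cls_vectors = outputs.last_hidden_state[:, 0, :]
print(cls_vectors.shape)  # (1, 768)
```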
<ns0:div><ns0:head>Classification Models</ns0:head><ns0:p>We have evaluated several AI models, namely a simple Artificial Neural Network, a Recurrent Neural Network in the form of Long Short-Term Memory (LSTM), a Bidirectional LSTM, and a Convolutional Neural Network; a minimal sketch of the CNN variant is given after this section. 1. Long Short-Term Memory (LSTM): LSTM is chosen over a plain RNN because it overcomes the problem of long-term dependencies by keeping a memory at all times. The output of the embedding layer is fed into an LSTM layer of 64 units and then passed through dense layers, with sigmoid activation and binary cross-entropy used for training. 2. Bidirectional LSTM: The bidirectional LSTM is an extension of the traditional LSTM that trains two LSTMs in opposite directions so that the model captures all the necessary information. Like the LSTM model it takes its input from the embedding layer; our Bi-LSTM layer contains 64 units that feed the dense layers. For comparison purposes, the activation and loss functions are the same as in the previous model. 3. Convolutional Neural Network: Although convolutional neural networks were designed for computer vision tasks, they have recently given excellent results on NLP tasks as well. In computer vision the filters slide over patches of an image, whereas in NLP the filter slides a few words at a time over the sentence matrix; this makes CNNs work well for classification tasks. We therefore also implemented a CNN model that consists of a single Conv-1D layer with a kernel of size 5 followed by a max pooling layer, a flatten layer, and a fully connected layer with 64 nodes. In all of the above models we use a Dense layer, which operates linearly so that every input is related by some weight to each output. The loss function is the cross-entropy loss, which measures the performance of a classification model whose output is a probability value between 0 and 1. The cross-entropy loss increases as the predicted probability diverges from the actual label. In the case of binary classification, where the number of classes equals 2, the cross-entropy is:</ns0:p><ns0:formula xml:id='formula_1'>L = -\left( y \log(p) + (1 - y) \log(1 - p) \right)</ns0:formula><ns0:p>These models classify the news articles as fake or real using the sigmoid activation function.</ns0:p></ns0:div>
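As a sketch of the CNN variant described above (a single Conv-1D layer with kernel size 5, max pooling, a flatten layer, a 64-node dense layer, and a sigmoid output trained with binary cross-entropy), the following Keras code is illustrative only; the number of filters and the exact input shape are assumptions, since they are not stated in the text.

```python
# Minimal Keras sketch of the CNN classifier described above.
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, emb_dim = 512, 768  # assumed: sequence of 768-d DistilBERT vectors
model = models.Sequential([
    layers.Input(shape=(seq_len, emb_dim)),
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),  # filter count assumed
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # fake (1) vs. real (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```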
<ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>The framework is developed in Python and uses the Python libraries Keras, NLTK, Pandas, NumPy, and scikit-learn. The dataset was divided into training, validation, and testing sets with the train_test_split functionality of scikit-learn, using 70 percent of the data for training, 15 percent for validation, and 15 percent for testing; a sketch of this split is given after this section. Data pre-processing used NLTK for the removal of HTML tags, punctuation, multiple spaces, etc. The distilbert-base-uncased model from Hugging Face is used to obtain the embeddings, which are later fine-tuned on our dataset. We encode our data with a maximum length of 512, an embedding dimension of 768, a dropout of 0.1, and 6 layers to obtain the input ids and the mask ids. For classification we used LSTM, Bidirectional, Conv1D, and Dense layers from Keras; the number of units in each layer was chosen based on the experiments carried out. For the GloVe model, vector dimensions of 100 and 300 were tried, and dimension 100 gave better accuracy. The loss function and activation function were cross-entropy loss and sigmoid activation for all models.</ns0:p></ns0:div>
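A minimal sketch of the 70/15/15 split with scikit-learn's train_test_split is shown below; the toy DataFrame stands in for the loaded McIntire dataset, and its column names are assumptions.

```python
# Sketch of the 70% / 15% / 15% train / validation / test split.
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in for the McIntire dataset with assumed 'text' and 'label' columns.
df = pd.DataFrame({"text": ["real article"] * 70 + ["fake article"] * 70,
                   "label": [0] * 70 + [1] * 70})

X_train, X_tmp, y_train, y_tmp = train_test_split(
    df["text"], df["label"], test_size=0.30, random_state=42, stratify=df["label"])
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=42, stratify=y_tmp)

print(len(X_train), len(X_val), len(X_test))  # roughly 70% / 15% / 15%
```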
<ns0:div><ns0:head>Performance Metrics</ns0:head><ns0:p>We have used the confusion matrix, accuracy, precision, recall, F1, and ROC to evaluate our model's efficiency <ns0:ref type='bibr' target='#b21'>(Hossin & Sulaiman, 2015)</ns0:ref>; a short sketch of computing them follows this section.</ns0:p><ns0:p>1. Confusion Matrix: A confusion matrix summarizes the actual versus predicted classifications produced by a classifier. 2. Accuracy: Accuracy is the proportion of correct predictions among the total number of examined cases. 3. Precision: Precision is the proportion of predicted positives that are truly positive. 4. Recall: Recall is the proportion of real positives that are classified correctly. 5. F1 Score: The F1 score is the harmonic mean of precision and recall. 6. ROC: The ROC curve shows how well the positives are separated from the negative class. These metrics helped us analyze the results obtained from our model.</ns0:p></ns0:div>
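The metrics listed above can be computed with scikit-learn as in the following sketch, where y_true and y_prob are placeholders for the test labels and the model's sigmoid outputs.

```python
# Sketch of computing the reported metrics with scikit-learn.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = [0, 0, 1, 1, 1, 0]               # placeholder test labels
y_prob = [0.1, 0.4, 0.8, 0.7, 0.3, 0.2]   # placeholder sigmoid outputs
y_pred = [int(p >= 0.5) for p in y_prob]  # threshold at 0.5

print(confusion_matrix(y_true, y_pred))
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_prob))
```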
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>In this work, we have proposed an approach that upgrades the fake news identification system with the inclusion of an additional feature termed 'stance'. Stance helps us understand the relevance of the article title (i.e. the headline of the news) to the article body (i.e. the text of the news). We add this feature to the content features obtained from the pre-trained BERT model, which provides additional insight into the article. The AI models we have experimented with are ANN, LSTM, Bidirectional LSTM, and CNN. Results are obtained by training and testing these models with different vector representation techniques, and by adding an attention layer to some of them. The results are presented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, and Table <ns0:ref type='table'>2</ns0:ref> shows the classification results for our proposed model. The best results are obtained with a pre-trained language model. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the accuracies for the different settings: with GloVe embeddings we attain an accuracy of 92.1% for the LSTM model, and including an attention layer improves accuracy by about 1%. Our proposed model, which adapts BERT embeddings to capture the contextual information of the articles, has proved to perform better, with an accuracy of 95.32%. We obtained results using both the pre-trained BERT model and a fine-tuned BERT model, and the difference between them is negligible. One possible explanation is that BERT is trained on an English language corpus, and the dataset we used for experimentation is also in English with a similar structure and features. To show the effect of stance as a feature, we experimented with BERT encoding that builds a representation of the news title and news body along with the similarity between them, and demonstrated that it outperforms encoding the news title and news body alone; the results show a considerable increase in testing accuracy (refer Table <ns0:ref type='table'>3</ns0:ref>). We have dealt only with the content of the article because, when a news article is published and not yet widely circulated, metadata such as reposts, likes, and shares are not available, so content is the only parameter that can be considered for fake news detection. The plots below give a clear view of the results obtained from our model (refer Fig. <ns0:ref type='figure'>3</ns0:ref>). We have carried out a 5-fold cross-validation resampling procedure to evaluate our model and make the results comparable with other models on the same dataset <ns0:ref type='bibr'>(refer Table 4</ns0:ref>); a sketch of this protocol is given after this section.</ns0:p></ns0:div>
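A sketch of the 5-fold cross-validation protocol mentioned above is given below; the random feature matrix and the logistic-regression stand-in are placeholders for the encoded articles and the deep models evaluated in the paper.

```python
# Sketch of 5-fold cross-validation with a stratified split (keeps the ~50/50
# class balance of the dataset in every fold).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.random.rand(100, 10)   # stand-in for the encoded article features
y = np.array([0, 1] * 50)     # stand-in labels (balanced, like the dataset)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="accuracy")
print("per-fold accuracy:", scores.round(3), "mean:", scores.mean().round(3))
```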
<ns0:div><ns0:p>To validate our approach, we carried out a comparison with pre-existing approaches on the same benchmark dataset, G. McIntire's Fake and Real News Dataset. Many authors have addressed the fake news detection problem with hand-crafted feature engineering and machine learning models such as Naïve Bayes, Random Forest, Boosting, Support Vector Machine, Decision Tree, and Logistic Regression. <ns0:ref type='bibr' target='#b40'>(Reddy, Raj, Gala, & Basava, 2020)</ns0:ref> reported an accuracy of 95% with a gradient boosting algorithm on a combination of stylometric and CBOW (Word2Vec) features. <ns0:ref type='bibr' target='#b2'>(Bali, Fernandes, Choubey, & Goel, 2019)</ns0:ref> used sentiment polarity, readability, word count, word embeddings, and cosine similarity as features to discriminate fake news with machine learning models, reporting a highest accuracy of 87.3% with the XGBoost model. <ns0:ref type='bibr' target='#b14'>(Esmaeilzadeh, Peh, & Xu, 2019)</ns0:ref> used an LSTM encoder-decoder with attention over the text, headline, and a self-generated summary of the text to obtain features, and showed that the summary of the text gives a better accuracy of 93.1%. Embedding methods such as LSTM, depth LSTM, LIWC CNN, and N-gram CNN were used by <ns0:ref type='bibr' target='#b22'>(Huang & Chen, 2020</ns0:ref>), with weights optimized using the Self-Adaptive Harmony Search (SAHS) algorithm, raising an initial accuracy of 87.7% to 91%. <ns0:ref type='bibr' target='#b28'>(Khan, Khondaker, Islam, Iqbal, & Afroz, 2019)</ns0:ref> used word count, average word length, article length, count of numbers, count of adjectives, and count of exclamations along with a sentiment feature and n-gram GloVe embedding features, reporting an accuracy of 90% with a Naïve Bayes classifier and 95% with an LSTM deep neural network. <ns0:ref type='bibr' target='#b3'>(Bhutani, Rastogi, Sehgal, & Purwar, 2019)</ns0:ref> reported an accuracy of 84.3% with a Naïve Bayes model using TF-IDF scores, sentiments, and cosine similarity scores as features. <ns0:ref type='bibr' target='#b19'>(Gravanis, Vakali, Diamantaras, & Karadais, 2019)</ns0:ref> developed a content-based model and reported an accuracy of 89% with a Support Vector Machine. <ns0:ref type='bibr' target='#b17'>(George, Skariah, & Xavier, 2020)</ns0:ref> used a CNN for linguistic features and multi-headed self-attention for contextual features; accuracies of 84.3% and 83.9% were achieved with the Naïve Bayes classifier and Random Forest respectively, while deep learning models with LSTM and FastText embeddings reached 94.3%. With the increasing creation and availability of misinformation, automatic feature extraction with deep learning algorithms has also been explored. A comparison of other work with our model is shown in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>Information disorder and the spread of fake news are highly prevalent and challenging problems in this digital era. The harmful potential of fake news for social stability and the psychological health of the masses is undisputed and has been proven time and again. Identifying and detecting misinformation is the primary step in controlling its spread and combating its harmful effects on society. To this end, in this paper we have proposed a methodology to identify fake news based on stance in addition to other standard features. The content of an article is the basic informative piece and plays a significant role in assigning credibility to the article. We propose a model that uses language features based on the article's content to discriminate fake news from real news. Our model tries to detect fake articles at a stage when they have yet to propagate through the social network. To make the detection more precise we have added a stance feature to the dataset; this stance value helps us understand the content by finding the relevance between the article headline and the article body. Along with this additional feature, we learn the language representation with the help of the BERT technique. The transfer learning concept, which injects external knowledge, gives a better understanding of our content. We observe that our system improves on the results of the other models.</ns0:p></ns0:div>
<ns0:div><ns0:p>Previous work has used handcrafted linguistic features, stylometric features, TF-IDF scores, and n-gram features as well as automatic feature extraction from the content of the articles. Evaluation of our model is done on Kaggle's open-source dataset of news articles. The results presented demonstrate the performance of various deep learning architectures for misinformation detection, wherein the BERT embeddings based CNN architecture provides the highest performance of 95% accuracy with comparable precision and recall. Besides, we have also compared our approach with other pre-existing approaches and shown that our proposed model provides superior performance compared to the other methods.</ns0:p><ns0:p>On social media platforms, images, videos, and audio have become part of news articles, making them more informative and at times more sensational. Fake news articles are usually intertwined with factual information, which makes interpretation of the contextual information difficult. As an extension of this work, images, video, and audio could also be incorporated into content-based detection systems to bring them closer to perfect.</ns0:p></ns0:div>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,386.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,231.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Performance of different AI models</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Models</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell><ns0:cell>Precision (%)</ns0:cell><ns0:cell>Recall (%)</ns0:cell><ns0:cell>F1 (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Tokenizer</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>86.6</ns0:cell><ns0:cell>85.1</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>86.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>84.9</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>85.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>93.0</ns0:cell><ns0:cell>93.0</ns0:cell><ns0:cell>93.0</ns0:cell><ns0:cell>93.01</ns0:cell></ns0:row><ns0:row><ns0:cell>GloVe embeddings</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>92.1</ns0:cell><ns0:cell>91.7</ns0:cell><ns0:cell>92.7</ns0:cell><ns0:cell>92.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>93.9</ns0:cell><ns0:cell>92.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>91.6</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>90.9</ns0:cell></ns0:row><ns0:row><ns0:cell>GloVe embeddings and attention mechanism</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>92.1</ns0:cell><ns0:cell>91.7</ns0:cell><ns0:cell>92.7</ns0:cell><ns0:cell>92.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>93.9</ns0:cell><ns0:cell>92.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>91.6</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>90.9</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT embeddings</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>91.16</ns0:cell><ns0:cell>91.01</ns0:cell><ns0:cell>91.01</ns0:cell><ns0:cell>91.01</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>93.05</ns0:cell><ns0:cell>88.76</ns0:cell><ns0:cell>88.76</ns0:cell><ns0:cell>93.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>95.32</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>95.31</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance of various Fake News Identification models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell>NB</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell cols='3'>SVM MLP Boost</ns0:cell><ns0:cell>LSTM Bi-LSTM CNN</ns0:cell></ns0:row><ns0:row><ns0:cell>(Reddy et al., 2020)</ns0:cell><ns0:cell>86</ns0:cell><ns0:cell>82.5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>95</ns0:cell></ns0:row><ns0:row><ns0:cell>(Bali et al., 2019)</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>62</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>87.3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://ethicaljournalismnetwork.org/tag/fake-news/page/2 2 https://en.unesco.org/fightfakenews 3 https://www.bannerflow.com/blog/marketing-fake-news-dangerous-game/</ns0:note>
<ns0:note place='foot' n='4'>https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news 5 https://guides.emich.edu/newseval</ns0:note>
<ns0:note place='foot' n='6'>https://github.com/GeorgeMcIntire/fake_real_news_dataset</ns0:note>
</ns0:body>
" | "Response to Reviewers comments
The authors thank reviewers for their valuable comments and suggestions. We also appreciate the editor's and reviewer's efforts in reviewing our manuscript in a very short period. The responses to comments are as below. The same responses have been incorporated into the manuscript. The line numbers refer to the clean manuscript file.
Reviewer 1 (Anonymous)
Basic reporting
The authors propose investigate the performance of LSTM, bi-LSTM and CNN neural models with BERT embeddings and stance feature for credibility analysis on text resources.
The paper is well structured and easy to follow. The challenged problem and the motivation to study this problem are described clearly. However the introduction is too lengthy describing the social aspect of the problem. Previous solutions in the literature are described well, however the difference of the proposed method and contribution over the previous work are not given clearly.
The figures and the charts are structured appropriately.
Response: Incorporated at line no. 261 to 268 of the final manuscript.
Authors have carried out a detailed study in finding out various linguistic features and using different combinations of features in classification tasks. Since deep learning automatically learns features from data, instead of adopting handcrafted features, it makes the task easier and can handle a huge amount of data. We have used a deep learning approach for feature extraction and added a new feature that helps improve the understanding of the text under consideration. The major contribution over the previous works lies in the addition of stance as a feature along with the state of art BERT Model.
Experimental design
The authors focus on the use of BERT embeddings and the similarity between the title and the content of the news text for credibility analysis. In a vast number of recent studies, we see that BERT has been used for different NLP related problems. So using BERT in credibility analysis is inline with the recent efforts. However the contribution is limited since the BERT pre-trained model is directly used without any further training effort.
Response: Incorporated at line no. 437 to 441 of the final clean manuscript.
Upon receiving this suggestion, we experimented further and obtained the results using the pre-trained BERT model and fine-tuned BERT model. The results we obtained via both these models demonstrate a negligible difference. One of the possible explanations for this could be that the BERT is trained on the English language corpus. The dataset we have used for experimentation is also in English and has a similar structure and features.
The use of similarity between the title and the content of the news text is interesting. However, there are several drawbacks to the definition and the analysis of the feature. First of all, this new feature is considered to be about stance of the news, however the measured similarity may not be about the stance. Since the title and body texts are represented through the terms, the similarity may be related with the topic.
Response: Incorporated at line No. 270 to 280 of the final clean manuscript.
We have considered cosine distance between the vectors of the news title and the news body as the similarity measure. In traditional literature, the stance is defined in several ways. E.g. stance is detected towards a particular topic (Sun, Wang, Zhu, & Zhou, 2018), agreement or disagreement towards a particular claim (Mohtarami et al., 2018), and even attitude expressed in a text towards a target (Augenstein, Rocktäschel, Vlachos, & Bontcheva, 2016). All of these have predefined categories like a negative, positive, neutral, agree, disagree, unrelated, for or against. Our intention here is to find the relation/similarity between the two text pieces(namely the title and the body of the text). Hence, we do not find the score towards a particular topic but the measure of similarity between the title and the body of the news. The reason we have made such a choice is that for the unseen data we are not already aware of the topic it is focusing on. And will make our system more generalized. Such an approach can identify how close or farther the title is to the text in the body of an article under consideration.
As another drawback at this point, the details as to how the vector is constructed are not given. As another limitation, it is not clear why this feature is considered and how effective it is. There is not experiment in the study which is designed to measure the effect of this feature particularly on the credibility analysis. All the experiments involve the BERT embeddings and this feature together.
Response: Incorporated at line no. 437 to 441 of the final clean manuscript.
We have experimented with BERT encoding, which builds a representation of the news title and news body along with the similarity between them, and demonstrated that it outperforms encoding the news title and news body alone. This gives an additional weightage to the vectors constructed for the news. The below results show the effect of the additional feature we have used and it can be observed that there is a considerable increase in the testing accuracy.
Features
Models
Training
Accuracy(%)
Validation
accuracy(%)
Testing
accuracy(%)
Precision
(%)
Recall
(%)
F1
(%)
ROC
(%)
News Title,
News Body
ANN
89.2
88.0
88.33
86.57
86.57
88.61
88.41
LSTM
95.29
88.8
90.64
87.42
87.42
91.0
90.9
Bi-LSTM
97.99
89.79
92.21
93.6
93.6
92.0
92.26
CNN
99.1
93.68
93.90
91.3
91.3
94.0
94.0
News Title,
News Body, Similarity between them(Stance)
ANN
89.31
89.37
89.38
86.4
86.4
89.8
89.4
LSTM
94.36
89.05
91.06
87.8
87.8
91.44
91.37
Bi-LSTM
98.6
92.11
92.6
93.5
93.5
92.5
92.6
CNN
99.3
92.9
94.42
94.33
94.33
94.43
94.42
Table 3. Effectiveness of stance feature in the classification of news articles.
Another limitation of the study is that a single data set is considered. With a quick search, one can find various data sets on credibility analysis.
Response: The reason we opted for a single data set is to have an exact comparable platform for various methods and approaches. The results presented in the analysis section, use the same dataset for their implementation.
Validity of the findings
The results are not obtained under cross validation. Therefore they are open to bias in partitioning of the data set for training, test and validation.
The authors present a performance comparison against the results reported on the same data set, however it is not clear how comparable they are due to lack of cross validation and also since the data partitioning used in the results reported in other studies may be different.
Response:
The authors thank the reviewers for this suggestion. We have carried out the 3-fold and 5-fold cross-validation on the dataset and the following results are obtained. A cross-validation resampling procedure is used to evaluate our model and make the results comparable with the other models on the same dataset.
Below are the results obtained for 3-fold cross-validation.
Features
Models
Training
Accuracy(%)
Validation
accuracy(%)
Testing
accuracy(%)
Precision
(%)
Recall
(%)
F1
(%)
ROC
(%)
News Title,
News Body, Similarity between them (Stance)
ANN
91.76
91.23
91.66
92.0
92.0
91.63
91.66
LSTM
97.36
86.23
86.61
80.83
80.83
87.74
87.93
Bi-LSTM
99.15
93.17
92.88
89.4
89.4
93.16
93.23
CNN
99.92
94.16
95.16
93.0
93.0
95.29
95.25
5-fold cross-validation is also applied that gave us the below results,
Features
Models
Training
Accuracy(%)
Validation
accuracy(%)
Testing
accuracy(%)
Precision
(%)
Recall
(%)
F1
(%)
ROC
(%)
News Title,
News Body, Similarity between them (Stance)
ANN
91.31
88.72
90.73
87.35
87.35
91.14
91.17
LSTM
97.08
87.61
88.60
84.98
84.98
89.29
89.40
Bi-LSTM
99.23
92.72
93.24
92.03
92.03
93.34
93.44
CNN
99.92
95.25
95.85
94.81
94.81
95.89
95.90
Table 4: 5-fold cross-validation results for the proposed model
These result for 5-fold cross-validation and corresponding explanation is included in the manuscript in Table 4.
Comments for the Author
The authors challenge a contemporary problem and present the study clearly. The motivation and the social aspect of the problem are discussed a bit lengthy but It is an enjoyable manuscript to read.
Using BERT embedding is a nice approach following recent developments in the area. Using title/body text similarity is a good idea with potential. However the contribution of the feature to the credibility analysis is not analyzed well.
Response: The explanation towards this is added on lines 437 to 441 in the manuscripts.
Unfortunately, there are other limitations with the experimental setup. Since the experiments are not conducted with cross-validation it is not clear how comparable the results of different settings and the results reported on previous work.
Use of just a single data set is also a limitation to generalize the results.
Response: the explanation towards this is added on lines 454 to 457 of the final manuscript.
Reviewer 2 (Anonymous)
Basic reporting
no comment
Experimental design
no comment
The validity of the findings
no comment
Comments for the Author
The authors have proposed an approach to detect fake news using BERT embeddings and reported high accuracy scores. However, there are some issues in the paper that need to be addressed:
The major issue is the use of the terminology ‘stance detection’. What the authors have done is not stance detection; at best it could be seen as a naive approach for clickbait detection. According to [1], “Stance detection aims to assign a stance label (for or against) to a post toward a specific target.” For more related works refer to [2], [3].
Response:
The authors thank the reviewer for this comment. This comment has prompted us to dig deeper into the topic of the definition of stance. The stance is defined differently in different contexts as reported in the literature. Some define stance as attitude expressed in a text towards a target to be “positive”, “negative” or “neutral”(Augenstein et al., 2016). Some have defined it as a relative perspective of a piece of text to a claim, using labels such as agree, disagree, discuss, and un-related(Mohtarami et al., 2018). And also assigning a stance label (for or against) to a post toward a specific target(Sun et al., 2018). We have defined stance to find the relation/similarity between the two text pieces i.e the title and the body of the news. This is included in the manuscript line number 270 to 280.
- The title is slightly misleading. Credibility analysis and fake news detection are two related but different tasks. The title needs to be modified to make readers aware of what to expect in the paper.
- “Misinformation or Information Disorder in the form of fake news is usually in the form of completely false information, photographs or videos that are intentionally created and distributed to confuse or misinform…” - Firstly, this is not exactly true. Disinformation and fake news may not be completely false information. In fact, in most cases, these are statements, claims and narratives with some degree of truth, taken out of context to mislead and deceive readers. Images and videos may not be intentionally created for this purpose - again, genuine images may simply be taken out of context to deceive viewers.
Secondly, the structure of this sentence needs to be changed and made more understandable.
Response: Incorporated the changed sentence at line no. 78 to 82 of the final clean manuscript.
Misinformation or Information Disorder is usually in the form of false or out of context information, photographs, or videos that are either intentionally created and distributed. Sometimes, they are taken out of context to mislead, deceive, confuse or misinform the reader.
- The division of sections and subsections is very confusing and this needs to be fixed: The ‘Previous Work’’ section is presented as a subsection of the ‘Social and Cognitive Aspects’. I would suggest removing the ‘Social and Cognitive Aspects’ section, since there is a huge body of literature that needs to be explored and explained, if this section is to be included somewhere later on in the manuscript.
Response: Section merged in the introduction
- The architecture of the model needs to be described in words as well, alongside the diagram.
Response: Architecture details are explained in the subsections of Materials and Methods.
-Connections need to be made to the pre-existing works in the credibility literature.
- Change all occurrences of ‘dis-information’ to ‘disinformation’.
Response: Done
- URLs in footnotes 4 to 13 can be shortened, or moved to the references.
Response: We have moved the URLs to the reference section
- The manuscript needs to be thoroughly proofread to remove grammatical errors.
Response: Done
References:
[1] Sun, Q., Wang, Z., Zhu, Q., & Zhou, G. (2018, August). Stance detection with hierarchical attention network. In Proceedings of the 27th international conference on computational linguistics (pp. 2399-2409).
[2] Augenstein, I., Rocktäschel, T., Vlachos, A., & Bontcheva, K. (2016). Stance detection with bidirectional conditional encoding. arXiv preprint arXiv:1606.05464.
[3] Mohtarami, M., Baly, R., Glass, J., Nakov, P., Màrquez, L., & Moschitti, A. (2018). Automatic stance detection using end-to-end memory networks. arXiv preprint arXiv:1804.07581.
Reviewer 3 (Anonymous)
Basic reporting
no comment
Experimental design
(1) Please state whether the parameters of the BERT model used in training was fine-tuned or just fixed.
(2) Regarding the class label ratio of the used dataset, to better simulate the situations in real-world applications (fake news are few), it would be better to conduct experiments trying different settings of the class-label-ratio, such as 1:5 for moderately skewed label distribution and 1:10 for high skewed distribution. This can be done by experimenting with a single model (e.g., the best one).
(3) As the stance signal is crucial to the task, it would be informative to show the differences between the stance scores (cosine) of fake news and those of the true news in the dataset.
Response:
Upon receiving this suggestion, we experimented further and obtained the results using the pre-trained BERT model and fine-tuned BERT model. The results we obtained via both these models demonstrate a negligible difference. One of the possible explanations for this could be that the BERT is trained on the English language corpus. The dataset we have used for experimentation is also in English and has a similar structure and features.
We have experimented with BERT encoding, which builds a representation of the news title and news body along with the similarity between them, and demonstrated that it outperforms encoding the news title and news body alone. This gives an additional weightage to the vectors constructed for the news. The below results show the effect of the additional feature we have used and it can be observed that there is a considerable increase in the testing accuracy.
The explanation towards this is added on lines 437 to 441 of the final clean manuscript.
Features
Models
Training
Accuracy(%)
Validation
accuracy(%)
Testing
accuracy(%)
Precision
(%)
Recall
(%)
F1
(%)
ROC
(%)
News Title,
News Body
ANN
89.2
88.0
88.33
86.57
86.57
88.61
88.41
LSTM
95.29
88.8
90.64
87.42
87.42
91.0
90.9
Bi-LSTM
97.99
89.79
92.21
93.6
93.6
92.0
92.26
CNN
99.1
93.68
93.90
91.3
91.3
94.0
94.0
News Title,
News Body, Similarity between them(Stance)
ANN
89.31
89.37
89.38
86.4
86.4
89.8
89.4
LSTM
94.36
89.05
91.06
87.8
87.8
91.44
91.37
Bi-LSTM
98.6
92.11
92.6
93.5
93.5
92.5
92.6
CNN
99.3
92.9
94.42
94.33
94.33
94.43
94.42
Table 3. Effectiveness of stance feature in the classification of news articles.
Validity of the findings
(1) It would be worthing doing a statistical test on the results, in order to show that the differences between the results of the baseline methods with the proposed method are indeed statistically significant. To be able to do this, the authors may consider applying k-fold cross-validation to the dataset so that it is possible to obtain multiple results from each baseline and the proposed method for performing the statistical test.
(2) It would be also worthing to do an error analysis to discuss and compare the errors made by the baseline methods and the proposed method, e.g., showing test examples that are misclassified by the proposed method and speculate why the errors may occur.
Response:
The authors thank the reviewers for this suggestion. We have carried out the 3-fold and 5-fold cross-validation on the dataset and the following results are obtained. A cross-validation resampling procedure is used to evaluate our model and make the results comparable with the other models on the same dataset.
Below are the results obtained for 3-fold cross-validation.
Features
Models
Training
Accuracy(%)
Validation
accuracy(%)
Testing
accuracy(%)
Precision
(%)
Recall
(%)
F1
(%)
ROC
(%)
News Title,
News Body, Similarity between them (Stance)
ANN
91.76
91.23
91.66
92.0
92.0
91.63
91.66
LSTM
97.36
86.23
86.61
80.83
80.83
87.74
87.93
Bi-LSTM
99.15
93.17
92.88
89.4
89.4
93.16
93.23
CNN
99.92
94.16
95.16
93.0
93.0
95.29
95.25
5-fold cross-validation is also applied that gave us the below results,
Features
Models
Training
Accuracy(%)
Validation
accuracy(%)
Testing
accuracy(%)
Precision
(%)
Recall
(%)
F1
(%)
ROC
(%)
News Title,
News Body, Similarity between them (Stance)
ANN
91.31
88.72
90.73
87.35
87.35
91.14
91.17
LSTM
97.08
87.61
88.60
84.98
84.98
89.29
89.40
Bi-LSTM
99.23
92.72
93.24
92.03
92.03
93.34
93.44
CNN
99.92
95.25
95.85
94.81
94.81
95.89
95.90
Table 4: 5-fold cross-validation results for the proposed model
These result for 5-fold cross-validation and corresponding explanation is included in the manuscript in Table no 4.
Augenstein, I., Rocktäschel, T., Vlachos, A., & Bontcheva, K. (2016). Stance detection with bidirectional conditional encoding. arXiv preprint arXiv:1606.05464.
Mohtarami, M., Baly, R., Glass, J., Nakov, P., Màrquez, L., & Moschitti, A. (2018). Automatic stance detection using end-to-end memory networks. arXiv preprint arXiv:1804.07581.
Sun, Q., Wang, Z., Zhu, Q., & Zhou, G. (2018). Stance detection with hierarchical attention network. Paper presented at the Proceedings of the 27th international conference on computational linguistics.
" | Here is a paper. Please give your review comments after reading it. |
48 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The evolution of electronic media is a mixed blessing. Due to the easy access, low cost, and faster reach of the information, people search out and devour news from online social networks. In contrast, the increasing acceptance of social media reporting leads to the spread of fake news. This is a minacious problem that causes disputes and endangers the societal stability and harmony. Fake news spread has gained attention from researchers due to its vicious nature. proliferation of misinformation in all media, from the internet to cable news, paid advertising and local news outlets, has made it essential for people to identify the misinformation and sort through the facts. Researchers are trying to analyze the credibility of information and curtail false information on such platforms. Credibility is the believability of the piece of information at hand. Analyzing the credibility of fake news is challenging due to the intent of its creation and the polychromatic nature of the news. In this work, we propose a model for detecting fake news. Our method investigates the content of the news at the early stage i.e. when the news is published but is yet to be disseminated through social media. Our work interprets the content with automatic feature extraction and the relevance of the text pieces. In summary, we introduce stance as one of the features along with the content of the article and employ the pre-trained contextualized word embeddings BERT to obtain the state-of-art results for fake news detection. The experiment conducted on the real-world dataset indicates that our model outperforms the previous work and enables fake news detection with an accuracy of 95.32%.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the information age, social networking sites have become a hotbed for spreading misinformation. Misinformation <ns0:ref type='bibr' target='#b48'>(Soll, 2016)</ns0:ref> as a phenomenon is as old as true or factual ones. The scale and scope of misinformation, however, have assumed alarming levels as social media platforms and networks can spread misinformation rapidly. With the substantial increase in the spread of misinformation, adverse impacts on individuals and society at large have also become significant <ns0:ref type='bibr' target='#b15'>(Figueira & Oliveira, 2017)</ns0:ref>. In this work, we propose a framework for identifying the misinformation by employing state-of-the-art artificial intelligence algorithms. The first step in the identification of misinformation is to understand what constitutes the misinformation. Fake news, misinformation, disinformation all are various forms of non-factual information with variations in the intent of the creator/spreader. Ethical Journalism Network 1 (EJN) defines fake news as 'information deliberately fabricated and published to deceive and mislead others into believing falsehoods or doubting verifiable facts.' Misinformation, disinformation, and mal-information are specific subsets of information disorder. Disinformation is false and is deliberately designed to harm an individual, organization, social group, or country. Mal-information is reality-based, used to harm a person, social group, organization, or country 2 (UNESCO). Misinformation is not created to cause harm and is false information that is disseminated by people who believe that it is true. In comparison, disinformation is false information intentionally and knowingly propagated by individuals to gain political, financial, social, or psychological benefits. Disinformation via advertising can be used to discredit or falsely credit a product or a specific manufacturer for financial gain 3 (Banner flow). In the political domain, disinformation could manifest from using false information to discredit opposition parties or creating false narratives to aid one specific party or candidate <ns0:ref type='bibr' target='#b0'>(Allcott & Gentzkow, 2017)</ns0:ref>. Socially, one typical example could be the spread of certain medical myths that are prevalent in specific communities and spreading them without much thought <ns0:ref type='bibr' target='#b54'>(Waszak, Kasprzycka-Waszak, & Kubanek, 2018)</ns0:ref>. Misinformation or Information Disorder is usually in the form of false or out of context information, photographs, or videos that are either intentionally created and distributed. Sometimes, they are taken out of context to mislead, deceive, confuse or misinform the reader (Pérez-Rosas, <ns0:ref type='bibr' target='#b37'>Kleinberg, Lefevre, & Mihalcea, 2017)</ns0:ref>. Although there is news created for fun and circulated as a part of a joke they have seriously impacted society. Researchers <ns0:ref type='bibr' target='#b16'>(Friggeri, Adamic, Eckles, & Cheng, 2014)</ns0:ref> surveyed different aspects of false information and answered the question 'what can be termed as false?'. The primary points considered are who is spreading the false information, what are the reasons behind the reader's belief, and what is impact this false news creates. The effects of dis /misinformation on society can prove detrimental. 
Misinformation has caused a serious impact on various activities such as affecting the stock market <ns0:ref type='bibr' target='#b4'>(Bollen, Mao, & Zeng, 2011)</ns0:ref>, hampering the responses during natural disasters <ns0:ref type='bibr' target='#b20'>(Gupta, Lamba, Kumaraguru, & Joshi, 2013)</ns0:ref>, instigating terrorism activity <ns0:ref type='bibr' target='#b49'>(Starbird, Maddock, Orand, Achterman, & Mason, 2014)</ns0:ref>, kindling cyber-troop activity <ns0:ref type='bibr' target='#b5'>(Bradshaw & Howard, 2018)</ns0:ref>, hampering the decision-making ability during elections(News18) and creating panic, bringing about the economic crisis(herald) and inciting religion-based attacks <ns0:ref type='bibr'>(Indianexpress)</ns0:ref> during Covid-19 pandemic.</ns0:p><ns0:p>Looking at the huge outburst of fake news around the coronavirus, the World Health Organization(WHO)announced the new coronavirus pandemic was accompanied by a 'Misinformation Infodemic'. Various aspects of misinformation and its identification using AI tools for COVID 19 data is reported in a recent article <ns0:ref type='bibr' target='#b26'>(Jyoti Choudrie, 2020)</ns0:ref>. Fact Checkers report fake news from general elections and attacks at Pulwama to the scrapping of Article 370 and the ongoing protests against the Citizenship Amendment Act, which triggered a wide distribution of misinformation across social media platforms <ns0:ref type='bibr'>(Economictimes)</ns0:ref>. Misinformation affects communities and their susceptibility in various ways; for instance, mob lynching and communal poison.</ns0:p><ns0:p>The dependability of mass on social media news items has rapidly grown. It is reported that out of the English-speaking news seekers in India 84 percent rely on Online news whereas 16 percent on the outpaced print media <ns0:ref type='bibr'>(Reuters)</ns0:ref>. The urban, semi-urban teen-agers are the primary consumers of social media news <ns0:ref type='bibr' target='#b37'>(Pérez-Rosas et al., 2017)</ns0:ref>. Due to such tremendous use of online platforms, the spread of information disorder is overwhelming and immense, causing harm to society. In the year 2014, the World Economic Forum declared misinformation as one of the 10 global risks(W.E. <ns0:ref type='bibr'>Forum, 2014)</ns0:ref>. Governments have taken some anti-misinformation efforts aiming to curb the spread of unlawful content and misinformation spanning from the laws, Media literacy campaigns, government task force, bills, platform agreements, to arrests <ns0:ref type='bibr'>(Poynter)</ns0:ref>.</ns0:p><ns0:p>From the social media platforms available, Facebook and WhatsApp are particularly widely used for news updates. As reported by Reuters, 75% use Facebook, 82% use WhatsApp, 26% use Instagram, 18% use Twitter. Hence it becomes the responsibility of these platforms to help mitigate the spread of misinformation. Facebook 4 states that they are working in different ways -e.g. most false news is motivated due to financial aspects, hence undermining the economic incentives may prove useful. The International Fact-Checking Network and the fact-checkers are working hard to investigate the facts behind a piece of information likely to be fake. Several experiments were carried out to assess the effect of hoaxes, false reviews, and fake news. To create a misinformation detection system, we need to consider the various aspects of the knowledge and categorization of different features. Several researchers performed research and submitted it. 
We present the literature in parts that concentrate on social and cognitive dimensions, categorization strategies, and AI-based detection systems using different acceptable algorithms.</ns0:p><ns0:p>Mis-and disinformation can easily be disseminated -wittingly or unwittingly -through all types of media. The ease of access to such quick information without any validation, has put a responsibility on the reader to decide the correctness of the information at hand. Correctness, Trustworthiness, or Credibility is the quality of the information to be trusted and believed in. In the context of news, it encompasses the broader aspects of trustworthiness, reliability, dependability, integrity, and reputation. When people are unable to debunk the information and act accordingly, that makes poor decisions impacting their lives. It is essential to check the credibility of the source, author, check your biases, check the date and supporting sources to determine the reliability via comparison with reliable sources 5 (EMIC).</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>While performing the credibility analysis we need to first examine how misinformation and disinformation are being represented, spread understood, and acted upon. The role and motivation of an individual behind resharing the original content is an important aspect while devising a policy to curtail the spread and also for developing some technical solutions to mitigate it. The most powerful of the information disorder content is that which harms people by influencing their emotions. Since the social platforms are designed to express emotions through likes, comments, and shares, all the efforts towards fact-checking and debunking false information are ineffective since the emotional aspect of the sharing of information is impossible to control. Detection and mitigation of information disorder are challenging due to the psychological aspects of Motivation for Dissemination and the Proliferation of misinformation. The two primary channels for spreading the misinformation are employed namely, Echo Chamber <ns0:ref type='bibr' target='#b45'>(Shu, Bernard, & Liu, 2019)</ns0:ref> which is a situation where beliefs are reinforced or enhanced by contact and repetition within a closed structure and Filter Bubble <ns0:ref type='bibr' target='#b45'>(Shu et al., 2019)</ns0:ref> is the state of intellectual isolation that can result from custom searches when a website algorithm selectively estimates what information a user wants to see based on user information, such as location, past click history, and search. The concept of the filter bubble is used to target a specific group of people to spread the specific misinformation. As per <ns0:ref type='bibr' target='#b29'>(Kumar & Geethakumari, 2014)</ns0:ref> cognitive psychology plays an important role in the spread of misinformation.</ns0:p><ns0:p>As stated earlier, there are political, financial, and social aspects that play a role as a motivation behind the creation of fake news items. These diverse angles, namely, the dynamic and ubiquitous nature of information, difficulty in verification, and homophily prove to be some of the primary challenges in finding the credibility of the information.</ns0:p></ns0:div>
<ns0:div><ns0:head>Previous Work</ns0:head><ns0:p>Misinformation detection is studied in different ways, starting with how it is created, spread, and eventually affects the community. <ns0:ref type='bibr' target='#b46'>(Shu, Sliva, Wang, Tang, & Liu, 2017)</ns0:ref> surveys the literature from two distinct phases: characterization and detection. Characterization is concerned with understanding the basic concepts and principles of fake news in traditional and social media whereas data mining with feature extraction and model construction is included in detection. <ns0:ref type='bibr' target='#b46'>(Shu et al., 2017)</ns0:ref> in his paper presents the characteristics of Fake News on traditional and social media that include Psychological and social foundations as well as fake accounts and echo chamber creation on social media. The author also puts forward the detection approaches including News Content and Social Context. Various approaches towards fake social media news identification are proposed, including Data Orientation, Feature Orientation, Model Orientation, and Application Orientation. Depending on these approaches multiple systems have developed that concentrate on temporal features, psychology, or the data for a data-oriented approach. Much explored approaches are Feature Orientation that considers the content or the social context of the news. Depending on the dataset the Model is selected either to be Supervised, Unsupervised, or Semi-supervised <ns0:ref type='bibr' target='#b46'>(Shu et al., 2017)</ns0:ref>. Feature Selection is an important step while approaching fake news detection. Features are broadly categorized into content features and social context features by <ns0:ref type='bibr' target='#b6'>(Cao et al., 2018)</ns0:ref>. The content features include lexical, syntactic, and topic features whereas social context features include user features, propagation features, and temporal features. There is vast work done in detecting misinformation with various approaches, traditionally some classification methods used were Decision Tree & Bayesian Networks <ns0:ref type='bibr' target='#b7'>(Castillo, Mendoza, & Poblete, 2011)</ns0:ref>, Random Forest & SVM (Kwon, Cha, Jung, Chen, & Wang, 2013), Logistic Regression <ns0:ref type='bibr' target='#b7'>(Castillo et al., 2011)</ns0:ref> for the handcrafted features. The features like author, context, and writing style <ns0:ref type='bibr' target='#b38'>(Potthast, Kiesel, Reinartz, Bevendorff, & Stein, 2017)</ns0:ref> of the news can help in identifying the fake news, although writing style alone cannot be a good option. Linguistic signs may be used to identify language characteristics such as n-grams, punctuation, psycholinguistic characteristics, readability, etc. Classification based on the credibility of the person who liked it is an approach taken in some cases <ns0:ref type='bibr' target='#b46'>(Shu et al., 2017)</ns0:ref>. The conventional techniques of machine learning have often resulted in a high-dimensional representation of linguistic information leading to the curse of dimensionality where enormous sparse matrices need to be treated. This issue can be solved with the use of word embeddings, which gives us low dimensional distributed representations. Misinformation specifically a news item may constitute Words, Sentences, Paragraphs, and Images. For applying any AI technique on text firstly we need to format the input data into a proper representation that can be understood by the model we are designing. 
Different state-of-art representation techniques like one-hot encoding, word embeddings like Continuous Bag of Words and Skip-gram <ns0:ref type='bibr' target='#b32'>(Mikolov, Chen, Corrado, & Dean, 2013</ns0:ref>) that compute continuous vector representations of very big datasets of terms, GloVe is Global word representation vectors <ns0:ref type='bibr' target='#b36'>(Pennington, Socher, & Manning, 2014)</ns0:ref> global corpus statistics that train just on non-zero elements in a word-word matrix, and not on the entire sparse matrix or single background windows in a large corpus. BERT <ns0:ref type='bibr' target='#b10'>(Devlin, Chang, Lee, & Toutanova, 2018)</ns0:ref> bi-directional pre-training setup, using the transformer encoder. Open-AI GPT pre-training model internally using the transformer decoder concept. Pre-trained embeddings can be adapted to build a neural networkbased fake news detection model. Text data is a sequential time series data which has some dependencies between the previous and later part of the sentence. Recurrent Neural Networks has been widely used to solve NLP problems, traditionally encoder decoders architecture in Recurrent Neural Network was a good option where an input sequence is fed into the encoder to get the hidden representation which is further fed to the decoder and produce the output sequence. One step towards fake news detection was to detect stance <ns0:ref type='bibr' target='#b9'>(Davis & Proctor, 2017</ns0:ref>) that involves estimating the relative perspective (or stance) of two texts on a subject, argument, or problem. This can help in identifying the authenticity of the news article based on whether the headline agrees with, disagrees with, or is unrelated to the body of the article. Recurrent Neural Networks <ns0:ref type='bibr' target='#b46'>(Shu et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b31'>(Ma et al., 2016)</ns0:ref> to capture the variation of contextual information, CSI model composed of three modules <ns0:ref type='bibr' target='#b43'>(Ruchansky, Seo, & Liu, 2017)</ns0:ref> implements Recurrent Neural Network for capturing user activity's temporal pattern, learning the source characteristic based on user behavior, and classifying the article. Researchers have also investigated the Rumor form of the news that is circulated without confirmation or certainty to facts <ns0:ref type='bibr' target='#b11'>(DiFonzo & Bordia, 2007)</ns0:ref>. A rumor detection system <ns0:ref type='bibr' target='#b6'>(Cao et al., 2018)</ns0:ref> for Facebook (notify with a warning alert), Twitter(credibility rating is provided and the user is allowed to give feedback on it) and Weibo(users report fake tweets and elite users scrutinize and judge them) are in function.</ns0:p><ns0:p>As the news articles usually contain a huge amount of text, this makes the input sequence long enough. In such cases, the old information gets washed off and scattered focus over the sequences which is due to a lack of explicit word alignment during decoding. Theirs raised a need to solve these issues and the attention mechanism has done a good job. There are different flavors of attention mechanisms that came up depending on their use cases, first and very basic version i.e. the basic attention which extracts important elements from a sequence. Multi-dimensional attention captures the various types of interactions between different terms. Hierarchical attention extracts globally and locally important information. 
Self-attention <ns0:ref type='bibr' target='#b51'>(Vaswani et al., 2017)</ns0:ref> captures the deep contextual information within a sequence. Memory-based attention discovers the latent dependencies. Task-specific attention captures the important information specified by the task.</ns0:p><ns0:p>Singhania et.al. implemented a 3HAN hierarchical attention model <ns0:ref type='bibr' target='#b47'>(Singhania, Fernandez, & Rao, 2017</ns0:ref>) that has 3 layers for words, sentences, and headlines each using bi-directional GRUs of the network encoding and attention. Wang et.al. implemented Attention-based LSTM <ns0:ref type='bibr' target='#b53'>(Wang, Huang, Zhu, & Zhao, 2016)</ns0:ref> for aspect-level sentiment analysis that finds the aspects and their polarity for the sentence. Goldberg applied a novel design for the NLP task that incorporates an attention-like mechanism in a Convolutional Network <ns0:ref type='bibr' target='#b18'>(Goldberg, 2016)</ns0:ref>. Further enhancement with a deep attention model with RNN given by (Chen, Li, Yin, & Zhang, 2018) learns selective temporal hidden representations of the news item that bring together distinct features with a specific emphasis at the same time and generate hidden representation.</ns0:p><ns0:p>Convolutional Neural Networks were used in computer vision tasks but recently they have gained popularity in Natural Language processing tasks as well. CAMI <ns0:ref type='bibr' target='#b55'>(Yu, Liu, Wu, Wang, & Tan, 2017)</ns0:ref> tries to early detect the misinformation. It is carried out by dividing the events into phases and representing them using a paragraph vector <ns0:ref type='bibr' target='#b30'>(Le & Mikolov, 2014)</ns0:ref>. Automatic, identification of fake news based on geometric deep learning <ns0:ref type='bibr'>(Monti et al.,2019)</ns0:ref> generalizing classical CNNs to graphs. FNDNet <ns0:ref type='bibr' target='#b27'>(Kaliyar, Goswami, Narang, & Sinha, 2020)</ns0:ref> deep convolutional neural network. DMFN <ns0:ref type='bibr' target='#b35'>(Nguyen, Do, Calderbank, & Deligiannis, 2019)</ns0:ref> model capturing dependencies among random variables using a Markov random field. Pattern driven approach <ns0:ref type='bibr' target='#b56'>(Zhou & Zafarani, 2019)</ns0:ref> capturing the relation between news spreader and relation between the spreaders of that news item. A mutual evaluation model <ns0:ref type='bibr' target='#b25'>(Ishida & Kuraya, 2018</ns0:ref>) that dynamically builds a relational network model to identify credibility taking into consideration the consistency of the content. Without a dynamic relation network, the content dependent model would lead to a different score of the same article, since a different individual will have different perspectives. Several researchers have proposed various approaches to the detection of fake news as discussed in this section. Various classifiers and representation techniques are proposed. The reported accuracy for these models ranges from 85 to 90%. However, there is a scope for improving the accuracy of the fake news detection model.</ns0:p><ns0:p>From the above work, it can be observed that the a number of researchers have carried out a detailed study in finding out various linguistic features and using different combinations of features in classification tasks. Deep learning automatically learns features from data, instead of adopting handcrafted features, it makes the task easier and can handle a huge amount of data. 
We have used the deep learning approach for feature extraction and added a new feature that helps improve the understanding of the text under consideration namely the stance which estimates the relative perspective of two pieces of texts. The major contribution over the previous works lies in the addition of stance as a feature along with the state of art BERT Model. Recently BERT-based models are applied in NLP tasks, that is the hybrid of BERT and Artificial Intelligence techniques like RNN <ns0:ref type='bibr' target='#b59'>(Kula, Choraś, & Kozik, 2020)</ns0:ref>,CNN <ns0:ref type='bibr' target='#b58'>(Kaliyar, Goswami, & Narang, 2021)</ns0:ref> and both <ns0:ref type='bibr' target='#b57'>(Ding, Hu, & Chang, 2020)</ns0:ref>. BERT models have also proved its importance to deal with multi-modal news articles <ns0:ref type='bibr' target='#b60'>(Zhang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>We have considered cosine distance between the vectors of the news title and the news body as the similarity measure. In traditional literature, the stance is defined in several ways. E.g. stance is detected towards a particular topic <ns0:ref type='bibr' target='#b50'>(Sun, Wang, Zhu, & Zhou, 2018)</ns0:ref>, agreement or disagreement towards a particular claim <ns0:ref type='bibr' target='#b33'>(Mohtarami et al., 2018)</ns0:ref> and even attitude expressed in a text towards a target <ns0:ref type='bibr' target='#b1'>(Augenstein, Rocktäschel, Vlachos, & Bontcheva, 2016)</ns0:ref>. All of these have predefined categories like a negative, positive, neutral, agree, disagree, unrelated, for or against. Our intention here is to find the relation/similarity between the two text pieces(namely the title and the body of the text). Hence, we do not find the score towards a particular topic but the measure of similarity between the title and the body of the news. The reason we have made such a choice is that for the unseen data we are not already aware of the topic it is focusing on. And will make our system more generalized. Such an approach can identify how close or farther the title is to the text in the body of an article under consideration Due to this additional feature, our training data is better equipped for more accurate predictions. Also, with the help of state-of-art language model BERT, our model captures the semantics of the text efficiently with a multi-layer bidirectional transformer encoder which helps to learn deep bi-directional representations of text(article) and finetuning it on our training data to classify an article into fake or real, using the probability score our model assigns to it. In summary, the main contributions of this article are:</ns0:p><ns0:p>• Introducing stance as one of the features along with the content of the article to obtain stateof-the-art performance when detecting fake news using an AI model • Develop a model that captures the semantics of information using the pre-trained contextualized word embeddings BERT(Language Modelling Technique) • Experimentation and validation of the above approach on the benchmark dataset. The remaining article is structured as follows: Section 2 outlines the methodology we have adopted to design our system, Section 3 describes the experimental setup and parameters we are using, Section 4 describes our findings and discussion on them, Section 5 concludes our work.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>We are looking in this paper to build a classifier that detects fake news with better accuracy than already reported. We have experimented with multiple AI models and evaluated accuracies. Our model for fake news detection is based on the content features which use pre-trained embedding to better capture the contextual characteristics. The complete process is explained in the below subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pre-processing</ns0:head><ns0:p>It is observed that text data is not clean. The data is extracted from different sources and hence has different characteristics; to bring such data to a common ground, text pre-processing is an important step. Our pre-processing pipeline depends on the embeddings we will be using for our task. Since most of the embeddings do not provide vector values for special characters and punctuation, we need to clean the data by removing such tokens. Our pre-processing step involves the removal of punctuation, special characters, and extra spaces, and lowering the case of the text.</ns0:p><ns0:p>Architecture: Fig. 1 depicts the complete pipeline of our model.</ns0:p></ns0:div>
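To make the cleaning step above concrete, the following is a minimal sketch of such a pre-processing routine. It is not the authors' exact code; the regular expressions and the choice to keep only alphanumeric characters are assumptions made for illustration.

import re

def clean_text(text: str) -> str:
    # Lowercase, drop punctuation/special characters, collapse extra spaces.
    text = text.lower()
    text = re.sub(r'<[^>]+>', ' ', text)       # strip any leftover HTML tags
    text = re.sub(r'[^a-z0-9\s]', ' ', text)   # remove punctuation and special characters
    text = re.sub(r'\s+', ' ', text).strip()   # collapse repeated whitespace
    return text

print(clean_text("Breaking!!!  <b>Shocking</b> claim, read more at https://example.com ..."))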
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We use the McIntire Fake and Real News Dataset 6 , a quite large and balanced dataset that contains both fake stories and true news. McIntire used part of Kaggle's fake news collection for the fake news class, while articles from credible journalism organizations such as the New York Times, Wall Street Journal, Bloomberg, National Public Radio, and The Guardian were used for the real news class. The dataset contains 6335 news articles, out of which 3171 are genuine and 3164 are fake. The dataset is therefore almost equally distributed, with roughly 50-50% real labeled articles and fake labeled articles (Refer Fig. <ns0:ref type='figure'>2</ns0:ref>). In real-world applications the data may be skewed; however, for experimentation and dataset building we have considered a balanced dataset, which leads to a more robust and generalized model.</ns0:p></ns0:div>
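As a quick illustration of loading the dataset and verifying the roughly 50-50 class balance described above, the sketch below uses pandas. The file name and the column names ('title', 'text', 'label') are assumptions, since the exact CSV layout is not restated here.

import pandas as pd

# Hypothetical file and column names for the McIntire fake/real news CSV.
df = pd.read_csv("fake_or_real_news.csv")
df = df.dropna(subset=["title", "text", "label"])

print(len(df), "articles")
print(df["label"].value_counts(normalize=True))  # expect roughly 0.5 / 0.5 per class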
<ns0:div><ns0:head>Data filtering and feature Selection</ns0:head><ns0:p>Data filtering/cleaning is the task of preparing the data for encoding and training purposes.</ns0:p><ns0:p>Here we first remove extra spaces and punctuation from the text, as they do not play an important role in understanding the context. This module of our pipeline concludes with feature selection for training. We perform stance detection on the title and text to obtain a similarity score, which gives us an additional feature that helps our classifier perform better. We then encode our selected features for input to the models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Stance Calculation</ns0:head><ns0:p>We calculate the stance by estimating the relative perspective of two pieces of text concerning a subject, argument, or issue. It is a probability-like score that is used as an additional feature to detect misinformation. Following the sensationalism detection work of <ns0:ref type='bibr' target='#b12'>(Dong, Yao, Wang, Benatallah, & Huang, 2019)</ns0:ref>, we consider that the similarity between the article body and the article headline can be correlated with the article's credibility. The similarity is captured by first embedding the article body and the article headline onto the same space and computing the cosine distance as the similarity measure. To achieve this, we first tokenize each article's body and headline and then calculate the cosine distance between each body-headline pair. Cosine distance is a good similarity measure as it determines document similarity irrespective of size; this is because cosine similarity captures the angle between the document vectors and not their magnitude. Mathematically, it measures the cosine of the angle between two vectors projected in a multi-dimensional space:</ns0:p><ns0:formula xml:id='formula_0'>cos θ = (a · b) / (‖a‖ ‖b‖) = (∑_{i=1}^{n} a_i b_i) / ( √(∑_{i=1}^{n} a_i^2) · √(∑_{i=1}^{n} b_i^2) )</ns0:formula><ns0:p>Here, a · b = ∑_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + … + a_n b_n is the dot product of the two vectors.</ns0:p></ns0:div><ns0:div><ns0:head>Feature Encoding</ns0:head><ns0:p>Any AI model requires the input data to be in a numeric format. Different state-of-the-art representation techniques are available to perform such encoding, such as one-hot encoding, word embeddings like Continuous Bag of Words and Skip-gram, GloVe (Global Vectors for word representation), the Open-AI GPT pre-training model that internally uses the transformer decoder, and BERT (Bidirectional Encoder Representations from Transformers), a bi-directional pre-training model that uses the transformer encoder. Our model uses the smaller version of the BERT language model, namely Distil-BERT <ns0:ref type='bibr' target='#b44'>(Sanh, Debut, Chaumond, & Wolf, 2019)</ns0:ref>, which has about half the total number of parameters of the BERT base. In Distil-BERT, a triple loss combining language modeling, distillation, and a cosine-distance loss is used to exploit the inductive biases gained during pre-training by these types of models. We further fine-tune the model for our text classification task on our dataset.</ns0:p></ns0:div>
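A minimal sketch of the stance score as cosine similarity between headline and body is shown below. The paper embeds both texts in a shared space with Distil-BERT; for a self-contained illustration this sketch uses TF-IDF vectors instead, so the numbers are only indicative of the idea, not of the reported results.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def stance_score(title: str, body: str) -> float:
    # Embed title and body in the same vector space and take the cosine of the angle.
    vec = TfidfVectorizer().fit([title, body])
    X = vec.transform([title, body])
    return float(cosine_similarity(X[0], X[1])[0, 0])

print(stance_score("Aliens land in Paris",
                   "The city council discussed road repairs and next year's budget."))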
<ns0:div><ns0:head>Classification Models</ns0:head><ns0:p>We have evaluated several AI models, namely a simple Artificial Neural Network, Recurrent Neural Network variants (Long Short-Term Memory and Bidirectional LSTM), and a Convolutional Neural Network. 1. Long Short-Term Memory (LSTM): LSTM is chosen over a vanilla RNN as it succeeds in overcoming the issue of long-term dependencies by keeping a memory at all times. The output of the embedding layer is fed into an LSTM layer that consists of 64 units and is further passed through dense layers with sigmoid activation and binary cross-entropy for computation. 2. Bidirectional LSTM: This is an extension of the traditional LSTM that trains two LSTMs in opposite directions to enhance the model's ability to capture all the necessary information. Similar to the LSTM model it takes input from the embedding layer; our Bi-LSTM layer contains 64 units that are fed to the dense layers for computation. For comparison purposes, the activation and loss functions used are the same as in the previous model. 3. Convolutional Neural Network: Although convolutional neural networks were designed for computer vision tasks, they have recently given excellent results on NLP tasks as well. In computer vision the filters slide over patches of an image, whereas in NLP the filter slides over a few words at a time across the sentence matrix. This makes convolutional neural networks work well for classification tasks. We have therefore also implemented a CNN model that consists of a single Conv-1D layer with a kernel of size 5 and a max pooling layer. Further, a flatten layer and a fully connected layer with 64 nodes are used for computation. In all of the above models we have used a dense layer that operates linearly, where every input is related to each output by some weight. The loss function used is the cross-entropy loss, which measures the performance of a classification model whose output is a probability value between 0 and 1. The cross-entropy loss increases as the predicted probability diverges from the actual label. In the case of binary classification, where the number of classes equals 2, cross-entropy can be computed as:</ns0:p><ns0:formula xml:id='formula_1'>L = -( y log(p) + (1 - y) log(1 - p) )</ns0:formula><ns0:p>These models classify the news articles as fake or real using the sigmoid activation function.</ns0:p></ns0:div>
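The following is a Keras sketch of the CNN variant described above (a single Conv1D layer with kernel size 5, a max pooling layer, a flatten layer, a 64-unit dense layer, and a sigmoid output trained with binary cross-entropy). The vocabulary size, embedding dimension, and number of filters are assumptions, since they are not fixed in the text.

import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMB_DIM, MAX_LEN = 20000, 100, 512  # assumed sizes

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM, input_length=MAX_LEN),
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),  # single Conv-1D, kernel 5
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),    # fully connected layer with 64 nodes
    layers.Dense(1, activation="sigmoid"),  # fake vs. real
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()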
<ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>The framework is developed in Python and uses Python libraries such as Keras, NLTK, Pandas, Numpy, and scikit-learn. The dataset was divided into training, validation, and testing sets with the train_test_split functionality of scikit-learn. The training set comprised 70 percent of the data, with 15 percent used for validation and 15 percent for testing. Data pre-processing involved the use of the NLTK toolkit for the removal of HTML tags, punctuation, multiple spaces, etc. The distilbert-base-uncased model from Hugging Face is used to obtain the embeddings that are later fine-tuned on our dataset. We encode our data with a maximum length of 512, an embedding dimension of 768, a dropout of 0.1, and 6 layers to obtain the input ids and the mask ids. For classification purposes, we have used the LSTM, Bidirectional, Conv1D, and Dense layers from Keras. The number of units chosen for each of the layers was based on the experimentation carried out. For the GloVe model, vector dimensions of 100 and 300 were tried, and the 100-dimensional vectors gave better accuracy results. The loss function and activation function used were cross-entropy loss and sigmoid activation for all the models.</ns0:p></ns0:div>
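A sketch of the 70/15/15 split and of obtaining Distil-BERT token ids and attention masks with the Hugging Face transformers library is given below. The toy texts, the use of stratification, and the variable names are assumptions for illustration only.

from sklearn.model_selection import train_test_split
from transformers import DistilBertTokenizerFast

texts = ["stocks rally after earnings report", "celebrity spotted riding a dragon",
         "city opens new library branch", "miracle pill cures everything overnight"] * 5
labels = [0, 1, 0, 1] * 5  # 0 = real, 1 = fake (toy labels)

# 70% train, then split the remaining 30% evenly into validation and test.
X_train, X_tmp, y_train, y_tmp = train_test_split(texts, labels, test_size=0.30,
                                                  stratify=labels, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50,
                                                stratify=y_tmp, random_state=42)

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
enc = tokenizer(X_train, max_length=512, truncation=True, padding="max_length",
                return_tensors="np")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]
print(input_ids.shape, attention_mask.shape)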
<ns0:div><ns0:head>Performance Metrics</ns0:head><ns0:p>We have used the confusion matrix, accuracy, precision, recall, F1, and ROC to evaluate our model's efficiency <ns0:ref type='bibr' target='#b21'>(Hossin & Sulaiman, 2015)</ns0:ref>.</ns0:p><ns0:p>1. Confusion Matrix: The confusion matrix summarizes a classifier's predicted classifications against the actual classifications. 2. Accuracy: Accuracy is the proportion of correct predictions within the total number of examined cases. </ns0:p></ns0:div>
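These metrics can be computed with scikit-learn as in the short sketch below; the labels and probabilities shown are toy values used purely for illustration.

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]   # the model's sigmoid outputs
y_pred = [int(p >= 0.5) for p in y_prob]            # threshold at 0.5

print(confusion_matrix(y_true, y_pred))
print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
print("F1       ", f1_score(y_true, y_pred))
print("ROC AUC  ", roc_auc_score(y_true, y_prob))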
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>In this work, we have proposed an approach to upgrade the fake news identification system with the inclusion of an additional feature termed 'stance'. Stance helps us in understanding the relevance of the article title (i.e. the headline of the news) to the article body(i.e. the text of the news). We add this feature to our content features that are obtained from the pre-trained BERT model, which provides additional insight into the article. The AI models we have experimented with are ANN, LSTM, Bidirectional LSTM, and CNN. Results are obtained by training and testing these models with different vector representation techniques and even including the attention layer to some models. The results are presented in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. Table <ns0:ref type='table'>2</ns0:ref>. shows the classification results for our proposed model. The best results we obtained are with the usage of a pre-trained language model. Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> shows the accuracies with different settings as with GloVe Embeddings we attain an accuracy of 92.1% for the LSTM model. With the inclusion of the Attention layer to these models the accuracy shows some improvement of about 1%. Our proposed model adapting the BERT embedding to capture the contextual information from the articles have proved to perform better with an accuracy of 95.32%. We have experimented and obtained the results using the pre-trained BERT model and fine-tuned BERT model. The results we obtained via both these models demonstrate a negligible difference. One of the possible explanations for this could be that the BERT is trained on the English language corpus. The dataset we have used for experimentation is also in English and has a similar structure and features. To show the effect of stance as a feature we have experimented with BERT encoding, which builds a representation of the news title and news body along with the similarity between them, and demonstrated that it outperforms encoding the news title and news body alone. From the results, it can be observed that there is a considerable increase in the testing accuracy (Refer Table <ns0:ref type='table'>3</ns0:ref>.). We have dealt only with the content part of the article, the reason being when a news article is published and not much-circulated yet, the metadata such as reposts, likes, shares, etc. are not available. Then content can be the only parameter considered for fake news detection. The below plots give a clear view of the results obtained from our model (Refer Fig. <ns0:ref type='figure'>3</ns0:ref>). We have carried out a 5-fold Cross-validation resampling procedure to evaluate our model and make the results comparable with the other models on the same dataset <ns0:ref type='bibr'>(Refer Table. 4</ns0:ref>). We implemented a stratified k-fold cross validation, however observe a few misclassified samples in the test results. This is primarily due to the overlapping of features in the two classes and having unclear distinction due to that. To validate our approach, we carried out a comparison with pre-existing approaches by considering a benchmark dataset by G. McIntire Fake and Real News Dataset. Many have proposed a solution to the fake news detection problem using hand-crafted feature engineering and applying Machine Learning models like Naïve Bayes, Random Forest, Boosting, Support Vector Machine, Decision Tree, Logistic Regression. 
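The 5-fold stratified cross-validation mentioned above can be set up as in the following sketch. The classifier shown (logistic regression on TF-IDF features) is a lightweight stand-in for illustration, not the proposed BERT-based model, and the toy corpus is invented.

from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["team wins final", "shocking cure found", "rain expected tomorrow",
         "secret plot revealed"] * 10
labels = [0, 1, 0, 1] * 10

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(clf, texts, labels, cv=cv, scoring="accuracy")
print(scores, scores.mean())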
<ns0:ref type='bibr' target='#b40'>(Reddy, Raj, Gala, & Basava, 2020)</ns0:ref> reported an accuracy of 95% with a gradient boosting algorithm on the combination of stylometric and CBOW (Word2Vec) features. <ns0:ref type='bibr' target='#b2'>(Bali, Fernandes, Choubey, & Goel, 2019)</ns0:ref> used Sentiment polarity, Readability, Count of words, Word Embedding, and Cosine similarity as the features to discriminate fake news with machine learning models. He reported the highest accuracy of 87.3% with the XGBoost model. <ns0:ref type='bibr' target='#b14'>(Esmaeilzadeh, Peh, & Xu, 2019)</ns0:ref> uses the LSTM-encoder-decoder with attention to the text, headline, and self-generated summary of the text to obtain the features. He proved that the summary of the text gives better accuracy of 93.1%. Embedding methods such as LSTM, depth LSTM, LIWC CNN, and N-gram CNN were used <ns0:ref type='bibr' target='#b23'>(Huang & Chen, 2020</ns0:ref>) and weights were optimized using the Self-Adaptive Harmony Search (SAHS) algorithm with an initial accuracy of 87.7% that was increased to 91%. <ns0:ref type='bibr' target='#b28'>(Khan, Khondaker, Islam, Iqbal, & Afroz, 2019)</ns0:ref> in his research he used word count, average word length, article length, count of numbers, count of parts of speech(adjective), count of exclamation along with the sentiment feature, and n-gram GloVe embedding encoded features. His work reported accuracy of 90% with the Naïve Bayes classifier and 95% with the LSTM deep neural network. <ns0:ref type='bibr' target='#b3'>(Bhutani, Rastogi, Sehgal, & Purwar, 2019)</ns0:ref> reported an accuracy of 84.3% with the Naïve Bayes model and TF-IDF scores, sentiments, and Cosine similarity scores as features. <ns0:ref type='bibr' target='#b19'>(Gravanis, Vakali, Diamantaras, & Karadais, 2019)</ns0:ref> developed a content-based model and reported an accuracy of 89% with the Support Vector Machine. <ns0:ref type='bibr' target='#b17'>(George, Skariah, & Xavier, 2020)</ns0:ref> in his work used CNN for Linguistic features and Multi Headed self-Attention for the contextual features. Accuracies of 84.3% and 83.9% were achieved with machine learning models such as the Naïve Bayes classifier and Random forest respectively. With deep learning models like LSTM and FastText embedding accuracy reported was 94.3%. With the increasing creation and availability of misinformation, automatic feature extraction with the help of Deep Learning algorithms has also been experimented. A comparison of other work with our model is shown in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>Information disorder and the spread of fake news is a highly prevalent and challenging problem in this digital era. The harmful potential of fake news for social stability and the psychological health of the masses is undisputed and has been proven time and again. Identifying and detecting misinformation is the primary step in controlling its spread and combating its harmful effects on society. To this end, in this paper we have proposed a methodology to identify fake news based on stance detection in addition to other standard features. The content of an article is the basic informative piece and plays a significant role in assigning credibility to the article. We propose a model that uses language features based on the article's content to discriminate fake news from real news. Our model tries to detect fake articles at a stage when they are yet to propagate in the social network. To make the detection more precise we have added a stance feature to the dataset. This stance value has helped us understand the content by finding the relevance between the article headline and the article body. Along with this additional feature, we have learned the language representation with the help of the BERT technique. The transfer learning concept that injects external knowledge gives a better understanding of our content. It is observed that our system has shown improvement in the results as compared to the other models. Previous work was conducted with handcrafted linguistic features, stylometric features, TF-IDF scores, and n-gram features, as well as automatic feature extraction from the content of the articles. Evaluation of our model is done on Kaggle's open-source dataset of news articles. The results presented demonstrate the performance of various deep learning architectures for misinformation detection, wherein the BERT-embeddings-based CNN architecture provides the highest performance of 95% accuracy with comparable precision and recall. Besides, we have also compared our approach with other pre-existing approaches and shown that our proposed model provides superior performance as compared to the other methods. One limitation is that it is essential to have the required features in the dataset; without them, this approach will not work effectively. We will extend this work in the future to overcome this limitation.</ns0:p><ns0:p>On social media platforms, images, videos, and audio have become part of news articles to make them more informative and, at times, more sensational. Fake news articles are usually intertwined with factual information, which makes interpretation of the contextual information difficult. As an extension of this work, image, video, and audio can also be used in content-based detection systems to make them closer to perfect.</ns0:p></ns0:div>
Table 5 (continued), reported accuracies (%): <ns0:ref type='bibr' target='#b3'>(Bhutani et al., 2019)</ns0:ref> 84.3, 83.9; <ns0:ref type='bibr' target='#b19'>(Gravanis et al., 2019)</ns0:ref> 70, 89; <ns0:ref type='bibr'>(Bharadwaj & Shao, 2019)</ns0:ref> 90.7, 94.7, 92.7; <ns0:ref type='bibr' target='#b17'>(George et al., 2020)</ns0:ref> 84.3, 83.9, 94.3; <ns0:ref type='bibr' target='#b14'>(Esmaeilzadeh et al., 2019)</ns0:ref> 92.1, 93.1; <ns0:ref type='bibr' target='#b23'>(Huang & Chen, 2020)</ns0:ref> 84.9, 87.7, 91; <ns0:ref type='bibr' target='#b28'>(Khan et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>3. precision: Precision tells about what proportion of predicted Positives is truly Positive. 4. recall: It tells us what proportion of real positives is graded correctly.5. F1 Score: It gives the harmonic mean of precision and recall. 6. ROC: ROC demonstrates how well the percentage of the positives are isolated from the negative groups. These metrics helped us analyze the results we gained from our model. Table1. Depicts the values for accuracies during training, validation, and testing, along with the precision, recall, F1, and the ROC.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,386.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,231.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>is the dot product of the two vectors.𝑎 . 𝑏 ∑ 𝑛 1 𝑎 𝑖 𝑏 𝑖 𝑎 1 𝑏 1 + 𝑎 2 𝑏 2 + … + 𝑎 𝑛 𝑏 𝑛 Feature Encoding Any AI model requires the input data to be in number format. Different state-of-art representation techniques like one-hot encoding, word embeddings like Continuous Bag of Words and Skip-</ns0:figDesc><ns0:table /><ns0:note>6 https://github.com/GeorgeMcIntire/fake_real_news_dataset PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52762:2:0:NEW 5 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Performance of different AI models</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Models</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell><ns0:cell>Precision (%)</ns0:cell><ns0:cell>Recall (%)</ns0:cell><ns0:cell>F1 (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Tokenizer</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>86.6</ns0:cell><ns0:cell>85.1</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>86.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>84.9</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>85.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>93.0</ns0:cell><ns0:cell>93.0</ns0:cell><ns0:cell>93.0</ns0:cell><ns0:cell>93.01</ns0:cell></ns0:row><ns0:row><ns0:cell>GloVe embeddings</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>92.1</ns0:cell><ns0:cell>91.7</ns0:cell><ns0:cell>92.7</ns0:cell><ns0:cell>92.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>93.9</ns0:cell><ns0:cell>92.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>91.6</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>90.9</ns0:cell></ns0:row><ns0:row><ns0:cell>GloVe embeddings and attention mechanism</ns0:cell><ns0:cell>LSTM Bi-LSTM</ns0:cell><ns0:cell>92.1 91.9</ns0:cell><ns0:cell>91.7 90.2</ns0:cell><ns0:cell>92.7 93.9</ns0:cell><ns0:cell>92.2 92.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>91.6</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>90.9</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT embeddings</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>91.16</ns0:cell><ns0:cell>91.01</ns0:cell><ns0:cell>91.01</ns0:cell><ns0:cell>91.01</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bi-LSTM</ns0:cell><ns0:cell>93.05</ns0:cell><ns0:cell>88.76</ns0:cell><ns0:cell>88.76</ns0:cell><ns0:cell>93.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>95.32</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>95.31</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52762:2:0:NEW 5 Mar 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table : 5</ns0:head><ns0:label>:</ns0:label><ns0:figDesc>Performance of various Fake News Identification models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Models</ns0:cell><ns0:cell>NB</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell cols='3'>SVM MLP Boost</ns0:cell><ns0:cell>LSTM Bi-LSTM CNN</ns0:cell></ns0:row><ns0:row><ns0:cell>(Reddy et al., 2020)</ns0:cell><ns0:cell>86</ns0:cell><ns0:cell>82.5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>95</ns0:cell></ns0:row><ns0:row><ns0:cell>(Bali et al., 2019)</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>62</ns0:cell><ns0:cell>72.8</ns0:cell><ns0:cell>87.3</ns0:cell></ns0:row><ns0:row><ns0:cell>(</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://ethicaljournalismnetwork.org/tag/fake-news/page/2 2 https://en.unesco.org/fightfakenews 3 https://www.bannerflow.com/blog/marketing-fake-news-dangerous-game/ PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52762:2:0:NEW 5 Mar 2021)Manuscript to be reviewed Computer Science</ns0:note>
<ns0:note place='foot' n='4'>https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news 5 https://guides.emich.edu/newseval PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52762:2:0:NEW 5 Mar 2021)</ns0:note>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52762:2:0:NEW 5 Mar 2021) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Authors would like to thank the reviewers and the editor for the suggestions in second round of review. The paper has improved to a great extent due to these valuable suggestions. The editor comments and response to them is presented below:
Editor Comments and Author responses:
The paper has improved substantially from its previous revision, as acknowledged by the reviewers. There are however some final revisions required prior to acceptance:
1. Discuss recent literature using BERT-based models (see references suggested by one of the reviewers).
Response : following is added in the manuscript at line no. 269 to 272. Also the references section is modified accordingly as per the suggested references by reviewer 1.
Recently BERT-based models are applied in NLP tasks, that is the hybrid of BERT and Artificial Intelligence techniques like RNN[2],CNN[3] and both[4]. BERT models have also proved its importance to deal with multi-modal news articles[1].
2. Perform error analysis to provide more detailed insights into the results.
Response: To analyse our results we make use of stratified k-fold cross validation. Our model is robust, however, in some cases misclassification is observed. This is possible due to overlapping of features in both the classes. In the future enhancement of this work we will try to resolve this issue to make our system near to perfect. Following lines are added at line no : 462-465.
We implemented a stratified k-fold cross validation, however observe a few misclassified samples in the test results. This is primarily due to the overlapping of features in the two classes and having unclear distinction due to that.
3. Either perform experiments on an additional dataset or acknowledge the limitations of experimenting on a single dataset.
Response : We have experimented with benchmark dataset by G. McIntire Fake and Real News Dataset alone as this dataset provide the necessary features that we are using to calculate the stance score. And have compared the results with the experiments researchers have published on the same dataset. It necessary to have the required features extracted from the data in order to successfully implement our framework. This is a limitation of this approach. As a future work we shall try and use different dataset that have such features or collect more samples that can be incorporated in the dataset we have used. Following has been included in the conclusion section (line 522-524).
One of the limitations is that it is essential to have required features in the dataset. Without this, this approach will not work effectively. We will extend this work in the future to overcome this limitation.
" | Here is a paper. Please give your review comments after reading it. |
49 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Online reviews regarding different products or services have become the main source to determine public opinions. Consequently, manufacturers and sellers are extremely concerned with customer reviews as these have a direct impact on their businesses.</ns0:p><ns0:p>Unfortunately, to gain profit or fame, spam reviews are written to promote or demote targeted products or services. This practice is known as review spamming. In recent years, Spam Review Detection problem (SRD) has gained much attention from researchers, but still there is a need to identify review spammers who often work collaboratively to promote or demote targeted products. It can severely harm the review system. This work presents the Spammer Group Detection (SGD) method which identifies suspicious spammer groups based on the similarity of all reviewer's activities considering their review time and review ratings. After removing these identified spammer groups and spam reviews, the resulting non-spam reviews are displayed using diversification technique. For the Diversification, this study proposed Diversified Set of Reviews (DSR) method which selects diversified set of top-k reviews having positive, negative, and neutral reviews/feedback covering all possible product features. Experimental evaluations are conducted on Roman Urdu and English real-world review datasets. The results show that the proposed methods outperformed the existing approaches when compared in terms of accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Customer reviews have become the major source to collect different opinions about products and services. These reviews can affect daily life decisions and professional activities: e.g., which restaurant is good, which car to purchase, which product to buy and which doctor to consult. Online reviews can be posted as genuine reviews or spam reviews. Spam reviews are usually written by individuals/spammers to highlight any product or service by giving spam reviews. Spam reviews may directly interpret financial advantages and losses for a company. For example, the large number of favourable reviews about products and services may attract more customers and negative reviews are often a reason for decline in the sale <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Distinguishing fake reviewers from the genuine reviewer in an online forum is a challenging and open research issue. Therefore, in recent years, Spam Review Detection (SRD) problem has gained much attention in the research community. It is believed that without solving this significant problem, the review websites could become a place full of lies and completely useless <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. A recent survey suggests that online reviews for purchasing products or services play a critical role in the decisions of potential buyers <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. It has been observed that 70 percent of customers trust in the reviews written by other customers, while 90 percent read these reviews before making financial decisions <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>.</ns0:p><ns0:p>A spammer group refers to a group of reviewers who works together for writing spam reviews to promote or demote a set of targeted products. Spammer groups are very damaging as they can produce a huge amount of spam reviews collectively. When a group is working collaboratively towards a product, it can take control of the sentiment of the customer for the product. The group of reviewers is usually represented as a set of reviewer-ids. These ids can belong to a single person having multiple ids, multiple persons with single ids and/or combination of both <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>.</ns0:p><ns0:p>In the last few years, writing spam reviews has become a business. Spammers do not write a single review. They usually write many spam reviews for different products to earn money. So, collective behaviors of a group working together on several products can be difficult to identify the spam reviews.</ns0:p><ns0:p>Most of the existing studies have identified spam reviews by utilizing the individual spammer behavioral features <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>[8] <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>.</ns0:p><ns0:p>To the best of our knowledge, the spammer group detection task has not been thoroughly studied for reviews and English reviews to increase customer confidence regarding product purchases in the South Asia, particularly in Pakistan.</ns0:p><ns0:p>It is a common practice that the reviewers usually read two or three reviews from the top reviews and decide about the purchase of the product/service. These top reviews usually contain positive/negative reviews and may not present a complete view of all reviewer's feedback <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> . 
Figure <ns0:ref type='figure' target='#fig_10'>1</ns0:ref> presents an example scenario of reviews and compares the presentation of non-diversified reviews with diversified reviews. Figure <ns0:ref type='figure' target='#fig_10'>1</ns0:ref>(a) displays all-positive top-k reviews regarding a mobile model, which shows a bias toward promoting the product, whereas Figure <ns0:ref type='figure' target='#fig_10'>1</ns0:ref>(b) presents an assorted representation of positive, negative, and neutral reviews covering all possible features of the same mobile model. Diversified reviews are more reliable and useful for the user to make a decision <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. The second objective of this research is to present a diversified view of top-k non-spam reviews having positive, negative, and neutral sentiments covering all possible product features. This study proposes a novel Spammer Group Detection (SGD) method to find suspicious spammer groups that work together to produce spam reviews. SGD works in several phases. First, it produces a co-reviewer graph to identify reviewers whose reviews are similar in terms of review post time and review rating. It then utilizes the Structural Clustering Algorithm for Networks (SCAN) to find candidate spam groups. Next, it analyzes various individual spammer behavioral features and group spammer behavioral features to calculate the spam score of each candidate group's reviews. Finally, using a predefined threshold value, a group having a spam score higher than the threshold is labeled as a spam group, and all reviews written by that group are assumed to be spam. The training and testing of the proposed SGD method are conducted using deep learning classifiers: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Bidirectional Encoder Representations from Transformers (BERT).</ns0:p><ns0:p>In addition to finding non-spam reviews using the SGD method, this work also presents a novel diversification method, named Diversified Set of Reviews (DSR), which selects a diversified set of top-k non-spam reviews having positive, negative, and neutral reviews/feedback covering all possible product features. The DSR method is used to present the non-spam reviews identified by the SGD method. The DSR approach works in several phases: first, the review dataset of the product or service is divided into three categories based on positive, negative, and neutral sentiments; then, a diversified feature extraction process is applied for each product or service so that expanded features are extracted. In the end, all three diversified categories are combined to display the top-k diversified non-spam reviews of a particular product or service, having varied sentiments and expanded product features. The DSR method saves time for the reviewers, who can decide about products and services without reading all reviews.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The outcome of this research is a proposed framework which first identifies spammers and spam reviews using the SGD method and then applies the DSR method to produce a diversified set of top-k non-spam reviews. The framework helps in identifying group spammers and displays the identified non-spam reviews in a diversified manner. To the best of the researchers' knowledge, this is the first study of its kind to identify spammer groups and display a diversified set of top-k non-spam reviews about products and services.</ns0:p><ns0:p>Main contributions of the study are as follows:</ns0:p><ns0:p>1. Formulated a co-reviewer graph to identify the connection between different reviewers. 4. Utilized deep learning classifiers (CNN, LSTM, GRU, and BERT) for training and testing the proposed SGD method. 5. Proposed a novel diversification method (DSR) to obtain a diversified set of reviews. The rest of the paper is organized as follows. The Introduction is followed by Section 2, which presents related work regarding group spam detection and existing diversification techniques. Section 3 describes the statistics of the datasets used for this research. Section 4 elaborates the proposed SGD and DSR methods. Section 5 describes the experimental evaluation of the proposed methods. Finally, Section 6 concludes the work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Literature Review</ns0:head><ns0:p>This study has reviewed the literature from two perspectives (a) identifying the spammer group in the Spam Review Detection (SRD) domain and (b) diversification techniques used in the domain of information retrieval. The aim is to determine the novel contributions of the proposed work by comparing it with prior studies.</ns0:p><ns0:p>i.</ns0:p></ns0:div>
<ns0:div><ns0:head>Group Spammer Detection Method</ns0:head><ns0:p>In this subsection, existing studies of group spam detection have been reviewed and analyzed. Mukherjee et al., <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> conducted the first study for detecting a group of spam reviewers working together. They used the frequent itemset mining method to get candidate groups and proposed GSRank framework for identifying the spam groups.</ns0:p><ns0:p>Allah bakhsh et al., <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> also used frequent item mining techniques. Spammer behavioral features like review time and rating scores were used to detect group spammers. They used the Linear Discriminant Analysis (LDA) model by boosting the count of malicious reviewers based on the burstiness of reviews and rating scores. A Spammer Group Detection (PSGD) method was introduced by Zhang et al., <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, which used a supervised learning approach for spotting spammer groups in online review systems. They used frequent item mining to get candidate spammer groups. Then, the Naive Bayesian and Expectation-Maximum (EM) algorithms were used for classification and identification of spammer groups. They performed their experiment on Amazon.cn dataset. Zhou et al., <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> identified spammer groups by using self-similarity and clustering coefficient methods.</ns0:p><ns0:p>They performed their experiments on Dianping dataset and observed that the clustering coefficient has the best indicator for detecting spammer groups. Rayana et al., <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> proposed a framework called SPEAGLE which used metadata (review text and spammer behavior) of reviews and relational data (review network). This framework can identify fake reviews, spammers, and target products. They also introduced a lightweight version of SPEAGLE called SPLITE which used a subset of features to avoid computational overhead. Li et al., <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> proposed an algorithm to detect individual and group spammers. They proposed Labelled Hidden Markov Modal (LHMM) to identify spammers. They extended their modal to Coupled Hidden Markov Modal (CHMM), which has two parallel HMMs. It represented posting behavior and co-bursting signals. They used hidden states to make a co-bursting network of reviewers to detect spammers who work in a group. Kaghazgaran et al., <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> proposed a framework called TwoFace using a neighborhoodbased method to spot spammer groups in an online review system. First, they exploited different crowdsourcing websites and selected Rapid Workers to get information about their activities of Amazon products in which they were targeted. Next, they have identified product ids from the amazon dataset for products mentioned in crowdsourcing activities. Later, they get a list of reviewers who have written reviews about these products and found reviews of all such reviewers who write a review on those products. After that, they have identified all those reviewers who have written reviews on the same product and created a co-reviewer graph. The model, then, applied the trust rank algorithm, which is based on the PageRank algorithm, to find ranking scores of different suspicious groups. They used a different machine learning algorithm to classify suspicious spammer groups. 
Zhang et al., <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> proposed the CONSGD method, which used cosine patterns and a heterogeneous information network to detect spammer groups. To find tight candidate spammer groups, they used an FP-Growth-like algorithm to mine cosine patterns. They restricted the tightness of the extracted groups with a low cosine threshold value to achieve efficiency. Xu et al., <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> proposed a ranking-based model that marked top-ranked groups as spammer groups. In a similar context, Hu et al., <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> used the CPM method to find spammer groups over a continuously changing review stream.</ns0:p><ns0:p>Considering the existing work on spam group detection, most of the related studies <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>[14][15] <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> have used spammer behavioral features to detect spam groups. On the other hand, some researchers used graph-based techniques to identify suspicious spammer groups with little focus on spammer behavioral features <ns0:ref type='bibr'>[17][18]</ns0:ref>[19] <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>[21] <ns0:ref type='bibr'>[22][23]</ns0:ref>. This research aims to develop a framework that uses both behavioral and graph features. First, it creates connections between suspicious reviewers based on the similarity of their behavioral features and then scans the identified suspicious reviewers to detect spammer groups.</ns0:p><ns0:p>ii.</ns0:p></ns0:div>
<ns0:div><ns0:head>Diversification Method</ns0:head><ns0:p>This subsection analyzes existing diversification methods used for information retrieval. The first study, about the recommender system using diversification technique, was introduced by Ziegler et al., <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>. They considered top-N reviews and proposed an approach based on clustering, which selects a small subset of reviews that cover better-diversified opinions and high-quality attributes. However, this method used a limited number of reviews, so it is difficult to assure that all required sentiments were considered. Naveed et al., <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref> proposed FREuD method which is based on latent topics. The limitation of the proposed method was that it assigned equal weightage to both the negative and positive sentiment.</ns0:p><ns0:p>Guzman et al., <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> applied different weights to the sentiments and allowed stakeholders to assign desired importance to the sentiments. They proposed a diverse method, which retrieved a set of reviews that represented the diversified opinions of users. Moreover, they have also grouped reviews with similar attributes and sentiments.</ns0:p><ns0:p>Naveed et al., <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> used probabilistic topic modeling for diversification. They extracted the features of product reviews and displayed the diversified reviews based on these extracted features.</ns0:p><ns0:p>Based on the reviewed literature, it has been observed there exist very limited studies which considered review diversification problem out of which most of the related studies <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>[25] <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> have selected diversified set of top-k reviews having positive and negative sentiments based on search query.</ns0:p><ns0:p>On the other hand, the study <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> used features based approach for displaying top K reviews using search query. However, these existing studies either identifies sentiments or product feature using search queries and no existing study combined product features and sentiments to display diversified review without considering search queries. The aim of this study is to develop a method which can display reviews in a diversified manner such that the presented reviews represent positive, negative and neutral sentiments covering all related features about product and services. To obtain this objective, this study proposed a novel diversification method (DSR) to display diversified set of reviews using sentiment analysis and product features.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54398:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Review Datasets</ns0:head><ns0:p>This study has utilized two datasets: a) the Yelp (spam and non-spam reviews) real-world dataset about hotels and restaurants, which was provided by Rayana et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>; Table <ns0:ref type='table'>1</ns0:ref> presents the summary of the Yelp dataset; and b) a Roman Urdu product reviews real-world dataset, which was scraped from Daraz 1 using the Python library Scrapy.</ns0:p></ns0:div>
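To illustrate the scraping step for the Roman Urdu reviews, below is a minimal Scrapy spider sketch. The start URL and CSS selectors are hypothetical placeholders, since the real page structure of the Daraz site is not described here; only the general shape of such a spider is shown.

import scrapy

class DarazReviewSpider(scrapy.Spider):
    name = "daraz_reviews"
    # Placeholder URL; the real crawl targets Daraz product review pages.
    start_urls = ["https://www.daraz.pk/some-product/"]

    def parse(self, response):
        # Hypothetical selectors for reviewer id, rating, time, and review text.
        for review in response.css("div.review-item"):
            yield {
                "reviewer_id": review.css("span.reviewer::text").get(),
                "rating": review.css("span.rating::attr(data-score)").get(),
                "review_time": review.css("span.time::text").get(),
                "review_text": review.css("div.content::text").get(),
            }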
<ns0:div><ns0:head n='4.'>Proposed Methods</ns0:head><ns0:p>This research proposed two methods: (i) Spammer Group Detection (SGD) method which detects suspicious groups of reviewers, who write spam reviews to promote or demote the targeted products and services, (ii) Diversified Set of Reviews (DSR) method which selects a diversified set of top-k non-spam reviews having positive, negative, and neutral sentiments. Furthermore, it covers all possible features about the product or service.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.'>Proposed Spammer Group Detection (SGD) Method</ns0:head><ns0:p>This section explains the proposed Spammer Group Detection (SGD) method. The framework of the proposed spam review detection method is described in Figure <ns0:ref type='figure' target='#fig_12'>2</ns0:ref>. The execution of SGD starts with the Daraz (Roman Urdu reviews) and Yelp (English reviews) datasets. The proposed framework is divided into three phases. In the first phase, a co-reviewer graph of suspicious reviewers is generated based on identified similar behavioral features. The advantage of the co-reviewer graph is that it links suspicious reviewers who appear similar based on the products they reviewed. In the second phase, the Structural Clustering Algorithm for Networks (SCAN) uses a clustering approach to identify candidate spammer groups. In the third phase, a spam score is calculated for these groups based on individual and group spammer behavioral features. Groups with a spam score below a specific threshold are dropped from the candidate spam groups. Moreover, all reviews posted by members of an identified spam group are treated as spam reviews.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1.'>Co-Reviewer Graph</ns0:head><ns0:p>In this section, the procedure to generate the co-reviewer graph is presented. It is represented by a graph G = (V, E), where the vertices V represent the reviewers and the edges E represent the similarity between two reviewers. For an edge E, a value of 0 represents no similarity between reviewers whereas a value of 1 means that the two reviewers are similar. The value of the edge E between two reviewers A and B is calculated using Equation (1) as follows: </ns0:p><ns0:formula xml:id='formula_0'>𝜆(𝐴,𝐵) = { 0, ∀ 𝑝 ∈ 𝑃 𝐴 ∩ 𝑃 𝐵 , 𝐶𝑅𝑆(𝐴,𝐵,𝑝) = 0 1, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒<ns0:label>(1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>CRS(𝐴,𝐵,𝑝) = { 0, (|𝑡 𝑝 𝐴 -𝑡 𝑝 𝐵 | > 𝛼) 𝑂𝑅 (|𝑅 𝑝 𝐴 -𝑅 𝑝 𝐵 | ≥ 𝛽) 1, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Here 𝑡 𝑝 𝐴 is the time when reviewer A reviewed product p, 𝑡 𝑝 𝐵 is the time when reviewer B reviewed product p, and 𝛼 is the user-defined threshold on review time. Through experimental evaluation and analysis of the dataset, it has been observed that most spammers are active for a short period and post their reviews within at most the first couple of days, because spammers want to write spam reviews as soon as possible to create a positive impression of the product. Different threshold values of 𝛼 were evaluated experimentally: when the threshold was increased beyond 2, the number of similar reviewers increased considerably and included reviewers who are probably genuine users who have written only one or two reviews, whereas with a threshold of 1 hardly any similar reviewers were found. Therefore, the optimal value of 𝛼 is specified as 2 days. The threshold 𝛽 applies to the reviewers' rating scores and is set to 2 based on the observations of the existing study by Rayana et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. 𝑅 𝑝 𝐴 is the rating given by reviewer A to product p and 𝑅 𝑝 𝐵 is the rating given by reviewer B to product p. A spammer group tries either to promote or to demote a product and therefore gives an extreme rating that differs from other reviewers: when promoting a product the goal is a rating of 4 or 5, whereas when defaming a product a 1- or 2-star rating is given. For this reason, reviewers with a rating difference of 2 or more are not considered part of the same spammer group. If the threshold 𝛽 were set above 2, two reviewers who rated a product 2 and 5 stars, respectively, would be treated as similar reviewers, which is clearly not the case. On the other hand, if the rating difference were decreased below 2, a large number of candidate spammer groups would be created. Therefore, the optimal value of 𝛽 is set to 2. The Co-Review Similarity (CRS) is calculated based on the following conditions: first, the difference in review posting time of the two reviewers is compared, and then the difference in their ratings is compared. If the difference in posting time is larger than the user-defined threshold 𝛼, or the difference in ratings is greater than or equal to the threshold 𝛽, then CRS is assigned a value of 0, which represents no similarity between the two reviewers. 
Otherwise, if neither condition holds, CRS is assigned a value of 1, which indicates that the two reviewers are similar and behave in almost the same way.</ns0:p></ns0:div>
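As an illustration of Equations (1) and (2), the following sketch builds the co-reviewer graph from plain review records. It assumes each reviewer's reviews are available as a dictionary keyed by product (with rating and timestamp) and uses NetworkX to hold the graph; the data layout and helper names are assumptions, not part of the original implementation.

```python
# Minimal sketch of co-reviewer graph construction (Equations 1-2), assuming each
# review record carries a rating and a timestamp. Thresholds follow the paper:
# alpha = 2 days, beta = 2 stars.
from datetime import timedelta
from itertools import combinations
import networkx as nx  # assumption: NetworkX is used to hold the graph

ALPHA = timedelta(days=2)
BETA = 2

def crs(review_a, review_b):
    """Co-Review Similarity for one commonly reviewed product (Equation 2)."""
    time_diff = abs(review_a["time"] - review_b["time"])
    rating_diff = abs(review_a["rating"] - review_b["rating"])
    return 0 if (time_diff > ALPHA or rating_diff >= BETA) else 1

def build_co_reviewer_graph(reviews_by_reviewer):
    """reviews_by_reviewer: dict reviewer -> {product: {'rating': int, 'time': datetime}}."""
    graph = nx.Graph()
    graph.add_nodes_from(reviews_by_reviewer)
    for a, b in combinations(reviews_by_reviewer, 2):
        common = set(reviews_by_reviewer[a]) & set(reviews_by_reviewer[b])
        # Equation 1: the edge is 1 unless CRS is 0 for every commonly reviewed product.
        if common and any(crs(reviews_by_reviewer[a][p], reviews_by_reviewer[b][p]) for p in common):
            graph.add_edge(a, b)
    return graph
```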
<ns0:div><ns0:head n='4.1.2.'>Candidate Spam Groups using SCAN Algorithm</ns0:head><ns0:p>In this section, candidate spam groups are generated from the co-reviewer graph using the Structural Clustering Algorithm for Networks (SCAN). The SCAN algorithm was proposed by Xu et al. <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref> for identifying clusters, hubs, and outliers in datasets. However, this research uses SCAN (Figure <ns0:ref type='figure' target='#fig_14'>3</ns0:ref>) for identifying only clusters, where these clusters are treated as candidate spam groups.</ns0:p><ns0:p>A vertex 𝑣 ∈ 𝑉, representing a reviewer, is called a core if it has a similar structure (commonly reviewed products in this case) with at least n vertices (or n reviewers) in its neighborhood. The structural similarity of a vertex 𝑣 with its neighbor 𝑥 is calculated using Equation (3) as follows:</ns0:p><ns0:p>𝑆𝑡𝑟𝑢𝑐𝑡𝑢𝑟𝑎𝑙𝑆𝑖𝑚𝑖𝑙𝑎𝑟𝑖𝑡𝑦(𝑣,𝑥) = (𝑛𝑜. 𝑜𝑓 𝑠ℎ𝑎𝑟𝑒𝑑 𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑟𝑠 𝑏𝑒𝑡𝑤𝑒𝑒𝑛 𝑣 𝑎𝑛𝑑 𝑥) / ((𝑛𝑜. 𝑜𝑓 𝑣's 𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑟𝑠) × (𝑛𝑜. 𝑜𝑓 𝑥's 𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑟𝑠)) (3)</ns0:p><ns0:p>This structural similarity score quantifies how similar the structures of two vertices are: the higher the score, the more similar the structures of the two vertices, and vice versa.</ns0:p><ns0:p>For a vertex to be considered a core, its structural similarity score must be greater than a specified threshold 𝛾 with a minimum of n neighbors. After experimental evaluation, the value of 𝛾 is specified as 0.5 and the value of n is taken as 3. The similarity threshold 𝛾 = 0.5 was set to have confidence that both reviewers have reviewed at least half of the same products. For example, if one reviewer has reviewed 4 products whereas the other has reviewed only 1 of those, the two should not be treated as spammer-neighbors; 𝛾 therefore cannot be set below 0.5, because that would include many such instances. On the other hand, when 𝛾 was increased above 0.5, only a few neighbors who reviewed similar products were found. Similarly, the minimum neighbor count is set to n = 3. If only 1 or 2 neighbors were required, almost all reviewers would be included in a spammer group, which does not match real scenarios because there are always some genuine reviewers of a product who should not be treated as spammers. On the other hand, if n is increased beyond 3, only sparse spammer groups are identified. For these reasons, a vertex 𝑣 is treated as a core if it has a similarity score higher than 𝛾 = 0.5 with at least n = 3 neighbors. If a vertex 𝑥 ∈ 𝑉 is in the neighborhood of a core 𝑣, it is called a Directly Reachable Structure (DRS). Moreover, the connection between vertices v and x has value 1 as computed in the co-reviewer graph. Figure <ns0:ref type='figure' target='#fig_14'>3</ns0:ref> elaborates the working of the Structural Clustering Algorithm for Networks (SCAN). First, all vertices are labeled as unclassified (Line 1). SCAN classifies these vertices into members or non-members. For every unclassified vertex (Line 2), it is checked whether it is a core (Line 3); if so, a new cluster-ID is generated (Line 4). Once a cluster-ID is generated for this identified core, all neighbors of the core are inserted into a queue 'Q' (Line 5). 
After inserting all the neighbors of the core into Q (Line 6), every element in Q is used to explore the Directly Reachable Structure (DRS) vertices from it; the identified DRS vertices are then placed in 'R' (Line 7). Thereafter, each vertex 𝑥 in R is checked for neighbors that have a structural similarity score greater than the specified threshold 𝛾 and are not yet assigned to any other cluster (Line 9). Such neighbors of 𝑥 are inserted into 𝑄 so that the cluster can grow from those neighbors (Line 10). The classification status of 𝑥 is then checked and, if it is still unclassified, the current cluster-ID is assigned to it (Line 12). If a vertex is not identified as a core in Line 3, it is labeled as a non-member (Lines 14-15) so that it is not checked again for being a core; this is done to reduce the time complexity.</ns0:p></ns0:div>
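The clustering phase described above can be sketched as follows. This is a simplified SCAN-style implementation that follows the thresholds reported in the paper (γ = 0.5, n = 3) and the structural similarity of Equation (3); it is illustrative only and omits the hub/outlier handling of the original SCAN algorithm.

```python
# Simplified sketch of the SCAN-style clustering used to form candidate spam groups.
# `graph` is the co-reviewer graph (e.g., a networkx.Graph); gamma and the minimum
# neighbor count follow the thresholds reported in the paper (0.5 and 3).
from collections import deque

GAMMA = 0.5
MIN_NEIGHBORS = 3

def structural_similarity(graph, v, x):
    """Equation (3): shared neighbors of v and x over the product of their degrees."""
    shared = len(set(graph[v]) & set(graph[x]))
    denom = len(graph[v]) * len(graph[x])
    return shared / denom if denom else 0.0

def similar_neighbors(graph, v):
    return [x for x in graph[v] if structural_similarity(graph, v, x) > GAMMA]

def is_core(graph, v):
    return len(similar_neighbors(graph, v)) >= MIN_NEIGHBORS

def scan_candidate_groups(graph):
    labels, clusters, next_id = {}, {}, 0
    for v in graph:
        if v in labels or not is_core(graph, v):
            continue
        next_id += 1
        clusters[next_id] = set()
        queue = deque(similar_neighbors(graph, v) + [v])
        while queue:                       # grow the cluster from reachable vertices
            x = queue.popleft()
            if x in labels:
                continue
            labels[x] = next_id
            clusters[next_id].add(x)
            if is_core(graph, x):          # only cores extend the cluster further
                queue.extend(n for n in similar_neighbors(graph, x) if n not in labels)
    return clusters
```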
<ns0:div><ns0:head n='4.1.3.'>Spam Score using Spammer Behavioral Features</ns0:head><ns0:p>This section describes the third phase, which calculates the spam score of the candidate spam groups generated by the SCAN algorithm (Figure <ns0:ref type='figure' target='#fig_14'>3</ns0:ref>). The spam score of every candidate group is calculated using various individual and group spammer behavioral features. The values of these behavioral features are calculated independently and their average is assigned to the candidate group as its spam score. In this research, a total of eight spammer behavioral features (a combination of individual and group spammer behavioral features) are used to assign a spam score to every group. In addition, a loss function is used to reduce the contingency of small groups <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>. The loss function is defined in Equation (<ns0:ref type='formula' target='#formula_2'>4</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_2'>𝐿(𝑔) = 1 / (1 + 𝑒 -(|𝑅 𝑔 | + |𝑃 𝑔 | -3) )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where |𝑅 𝑔 | is the number of reviewers in the group and |𝑃 𝑔 | is the number of products in the group.</ns0:p><ns0:p>i. Individual Spammer Behavioral Features. In this work, three individual spammer behavioral features are used.</ns0:p></ns0:div>
<ns0:div><ns0:head>Time Burstiness (BST)</ns0:head><ns0:p>Usually, spammers target a product over a short period to achieve their goals. The time burstiness of a reviewer r is defined in Equation (<ns0:ref type='formula' target='#formula_3'>5</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_3'>𝐵𝑆𝑇(𝑟) = { 0, 𝐿(𝑟) -𝐹(𝑟) > 𝜎 ; 1 -(𝐿(𝑟) -𝐹(𝑟)) / 𝜎, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where 𝐿(𝑟) is the date of the latest review by r, 𝐹(𝑟) is the date of the first review by r, and 𝜎 is the user-defined time threshold, specified as 3 days. While reviewing a product, spammers are generally active for a short span of time and, once their goal is achieved, they stop generating reviews for the product. If the threshold value is increased beyond 3, many genuine users are also included among the candidate spammers, whereas decreasing it below 3 returned very few spam reviewers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Maximum Number of Reviews (MNR)</ns0:head><ns0:p>Generally, spammers tend to post a large number of reviews in a single day. The MNR of a reviewer r is defined in Equation (<ns0:ref type='formula' target='#formula_4'>6</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_4'>𝑀𝑁𝑅(𝑟) = 𝑚𝑎𝑥𝑉 𝑟 / 𝑚𝑎𝑥 𝑟 ∈ 𝑅 (𝑚𝑎𝑥𝑉 𝑟 )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where 𝑉 𝑟 is the number of reviews posted by r in a day, normalized by the largest number of reviews posted in a single day by any reviewer in the review set.</ns0:p></ns0:div>
<ns0:div><ns0:head>Average Rating Deviation (ARD)</ns0:head><ns0:p>A spammer usually gives a rating that differs from the genuine reviewers' ratings, because the purpose of the spammer is to project the product falsely in either a positive or a negative sense. The average rating deviation is defined in Equation (<ns0:ref type='formula' target='#formula_5'>7</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_5'>𝐴𝑅𝐷(𝑟) = 𝑎𝑣𝑔 𝑝 ∈ 𝑃 𝑟 |𝛿 𝑝 𝑟 -𝛿 𝑝 | / 5<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where 𝑃 𝑟 is the set of products reviewed by reviewer r, 𝛿 𝑝 𝑟 is the rating score given by r to product p, and 𝛿 𝑝 is the average rating score of product p over all reviewers. This value is normalized by the maximum rating deviation, i.e., 5 in a 5-star rating system.</ns0:p><ns0:p>ii. Group Spammer Behavioral Features. A total of five group spammer behavioral features are used in this work. The symbols used in describing the group spammer behavioral features are listed in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
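To make the three individual behavioral features concrete, the sketch below computes BST, MNR and ARD (Equations 5-7) for a single reviewer from plain review records. The record layout and function names are assumptions introduced for illustration only.

```python
# Sketch of the individual spammer behavioral features (Equations 5-7).
# Each review is assumed to be a dict: {'product': ..., 'rating': 1..5, 'date': date}.
from datetime import timedelta

SIGMA = timedelta(days=3)  # time threshold used for BST in the paper

def time_burstiness(reviews):
    """BST: 0 if the reviewer's activity span exceeds sigma, otherwise 1 - span/sigma."""
    dates = [r["date"] for r in reviews]
    span = max(dates) - min(dates)
    return 0.0 if span > SIGMA else 1.0 - span / SIGMA

def max_number_of_reviews(reviews, global_max_per_day):
    """MNR: the reviewer's maximum reviews in one day, normalized by the dataset maximum."""
    per_day = {}
    for r in reviews:
        per_day[r["date"]] = per_day.get(r["date"], 0) + 1
    return max(per_day.values()) / global_max_per_day

def average_rating_deviation(reviews, avg_product_rating):
    """ARD: average absolute deviation from each product's mean rating, normalized by 5."""
    deviations = [abs(r["rating"] - avg_product_rating[r["product"]]) for r in reviews]
    return (sum(deviations) / len(deviations)) / 5.0
```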
<ns0:div><ns0:head>Review Tightness (RT)</ns0:head><ns0:p>The review tightness of a group reflects how densely the reviewers of a candidate spammer group have reviewed the group's products. It is defined in Equation (<ns0:ref type='formula'>8</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_6'>𝑅𝑇(𝑔) = |𝑉 𝑔 | / (|𝑅 𝑔 ||𝑃 𝑔 |) (8)</ns0:formula><ns0:p>where |𝑉 𝑔 | is the number of reviews in group g and |𝑅 𝑔 ||𝑃 𝑔 | is the cardinality of the Cartesian product of the reviewer set and the product set in group g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Product Tightness (PT)</ns0:head><ns0:p>Generally, the spammers in a spam group target specific products, so product tightness is an important spammer behavioral feature. It represents the overlap of the products reviewed by the reviewers of a candidate spammer group and is defined in Equation (<ns0:ref type='formula' target='#formula_8'>9</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_8'>𝑃𝑇(𝑔) = | ∩ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 | / | ∪ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 |<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where | ∩ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 | is the number of products reviewed in common by all members of the group and | ∪ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 | is the number of products reviewed by any member of the group.</ns0:p></ns0:div><ns0:div><ns0:head>Rating Variance (RV)</ns0:head><ns0:p>The members of a candidate spammer group usually give similar ratings to the reviewed products. This type of spammer behavior can be identified by calculating the rating variance, which is defined in Equation (10) as follows:</ns0:p><ns0:formula xml:id='formula_11'>𝑅𝑉(𝑔) = 2 (1 -1 / (1 + 𝑒 -𝑎𝑣𝑔 𝑝 ∈ 𝑃 𝑔 𝑆 2 (𝑝,𝑔) )) 𝐿(𝑔)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where 𝑆 2 (𝑝,𝑔) is the variance of the rating scores of product p given by the reviewers of group g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Group Size (GS)</ns0:head><ns0:p>It has been observed in an existing study <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> that spammer groups of 2 or 3 reviewers are usually formed by coincidence, with no intention or common interest in writing spam reviews. Larger groups, however, are usually formed with the intention of writing spam reviews to target a product or service. Therefore, group size is a useful feature for observing the behavior of candidate spammer groups, and it is worthwhile to give more weight to larger groups. The group size indicator is defined in Equation (<ns0:ref type='formula' target='#formula_12'>11</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_12'>𝐺𝑆(𝑔) = 1 / (1 + 𝑒 -(|𝑅 𝑔 | -3) )<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>where |𝑅 𝑔 | is the number of reviewers in group g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Reviewer Ratio (RR)</ns0:head><ns0:p>In a candidate spammer group, if some products are reviewed by one reviewer while the other reviewers of the same group have not posted any reviews about those products, this represents suspicious behavior by the group's reviewers. The reviewer ratio is therefore calculated to assess this behavior and is given by Equation (<ns0:ref type='formula' target='#formula_13'>12</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_13'>𝑅𝑅(𝑔) = 𝑚𝑎𝑥 𝑝 ∈ 𝑃 𝑔 |𝑅 𝑔 𝑝 | / |𝑅 𝑝 |<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>where |𝑅 𝑔 𝑝 | is the number of reviewers in group g who reviewed product p and |𝑅 𝑝 | is the total number of reviewers who reviewed product p.</ns0:p><ns0:p>Based on the calculated values of these behavioral features (Equations 5-12), the spam score of each candidate spammer group is calculated by averaging the eight behavioral features. This spam score reflects the suspiciousness of a candidate spammer group: the higher the spam score, the more likely the group contains spam reviewers. Through experimental evaluation, a threshold value of 0.6 is defined for the spam score and is used to identify suspicious spammer groups.</ns0:p><ns0:p>While analyzing the existing datasets, it was observed that generally 10-15% of all reviews are spam, so this study decided to follow the same ratio. When the threshold was set to 0.4-0.5, all candidate groups were treated as spam, which labeled almost 40% of the reviews as spam. On the other hand, increasing this value above 0.6 resulted in only a few candidate spam groups, which produced less than 5% of the total reviews as spam. Therefore, the spam score threshold was set to 0.6, which labeled 13.51% of the reviews as spam and provided optimal results.</ns0:p><ns0:p>The threshold value can also vary depending on the application: for example, if an application needs to identify as many spam reviews as possible, the threshold should be set to a relatively small value. After identifying the suspicious spammer groups, all reviews posted by the members of these groups are labeled as spam, which results in the labeled dataset.</ns0:p></ns0:div>
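The group-level features and the final spam score can be sketched as follows. The group representation and helper names are assumptions; the formulas follow Equations (4) and (8)-(12), and the score is the average of the eight behavioral features compared against the 0.6 threshold.

```python
# Sketch of the group-level features (Equations 4, 8-12) and the final spam score.
# A group g is assumed to be a dict with sets of 'reviewers', 'products' and 'reviews'.
import math
from statistics import pvariance, mean

def loss(group):                      # Equation 4: dampens very small groups
    return 1.0 / (1.0 + math.exp(-(len(group["reviewers"]) + len(group["products"]) - 3)))

def review_tightness(group):          # Equation 8
    return len(group["reviews"]) / (len(group["reviewers"]) * len(group["products"]))

def product_tightness(products_per_reviewer):   # Equation 9: dict reviewer -> set of products
    sets = list(products_per_reviewer.values())
    return len(set.intersection(*sets)) / len(set.union(*sets))

def rating_variance(group, group_ratings_by_product):  # Equation 10
    # group_ratings_by_product[p]: ratings given to product p by members of this group
    avg_var = mean(pvariance(group_ratings_by_product[p]) for p in group["products"])
    return 2.0 * (1.0 - 1.0 / (1.0 + math.exp(-avg_var))) * loss(group)

def group_size(group):                # Equation 11
    return 1.0 / (1.0 + math.exp(-(len(group["reviewers"]) - 3)))

def reviewer_ratio(group, reviewers_of_product):        # Equation 12
    return max(len(group["reviewers"] & reviewers_of_product[p]) / len(reviewers_of_product[p])
               for p in group["products"])

def spam_score(individual_scores, group_scores, threshold=0.6):
    """Average of the eight behavioral features; groups at or above the threshold are spam."""
    score = mean(individual_scores + group_scores)
    return score, score >= threshold
```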
<ns0:div><ns0:head n='4.2.'>Proposed Diversified Set of Reviews (DSR) Method</ns0:head><ns0:p>In this section, a novel Diversified Set of Reviews (DSR) method is proposed, which returns a compact set of diversified non-spam reviews having positive, negative, and neutral sentiments and covering the maximum number of features. In contrast to earlier techniques, this work does not only retrieve reviews based on diversified sentiments (i.e., positive, negative, and neutral) but also displays reviews covering all possible product features. Product features represent the important components of the product about which customers write reviews. For example, if a laptop is the product, its features include battery life, display screen, RAM, performance, and so on. The proposed DSR approach displays diversified reviews based on features and sentiments simultaneously. Figure <ns0:ref type='figure' target='#fig_16'>4</ns0:ref> represents the framework of the DSR method, in which the dataset of non-spam reviews obtained through the SGD method (Section 4.1) is used as the input of DSR. In this approach, the review dataset of a specific product is first divided into three categories based on positive, negative and neutral sentiments, and the diversified feature extraction process is then applied to these categories separately. In the feature extraction process, a feature set is prepared for each product review. For example, consider the product review 'When I received the device and hit the power button, it didn't turn on easily. After setting it up, I notice that the volume up key doesn't work and the volume was raised randomly'. In this review, 'power button' and 'volume up key' are features. Similarly, feature sets of all reviews of all products are prepared. For a specific product, three feature sets are prepared based on its positive, negative and neutral categories.</ns0:p><ns0:p>Next, each review of a product is assigned a weight based on its features using Equation (<ns0:ref type='formula'>13</ns0:ref>), which is then used to calculate its utility score. All reviews are evaluated by comparing their utility scores to select the top k reviews of the product. Figure <ns0:ref type='figure' target='#fig_18'>5</ns0:ref> explains this process in detail. Finally, the three diversified subsets of a product (positive, negative and neutral) are combined to display the top k diversified reviews with varied sentiments and expanded product features. The sentiments and features of the reviews are extracted using Python's TextBlob library <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_18'>5</ns0:ref> elaborates the working of the DSR algorithm, which begins with the set of reviews having positive, negative, and neutral sentiments (Line 1). The diversified review result set 𝑆 is initially set to empty (Line 2). The algorithm considers one type of sentiment (positive, negative, or neutral) at a time and iterates over all three sentiments (Line 3). For example, if there are 25 positive, 15 negative and 20 neutral reviews in dataset R, then in the first iteration the value of k covers all positive reviews (Line 4), in the next iteration all negative reviews, and in the last iteration all neutral reviews. In the next step, the feature set F is formulated, which consists of all features described in the reviews (Line 5). The diversified set for every sentiment is selected and retrieved separately and is stored in set 𝑠 (Line 6). The loop iterates through the count k of the specific sentiment (Line 7). For instance, if the number of positive reviews to be returned is 25, this loop runs 25 times, and in every iteration it retrieves one review to be added to the set 𝑠. For each review (Line 8), the addressed features of the review are examined; features that exist in the feature list 𝐹 are added into a list (Line 9). 
To maximize diversification, the features 𝑓 * (𝑟) selected from set F for review 𝑟 𝑖 are not considered again for the next review 𝑟 𝑗 . The weights of these features are calculated using Equation (<ns0:ref type='formula'>13</ns0:ref>), where the weight 𝑤(𝑓) of a feature is its frequency 𝑐(𝑓) in the feature list 𝐹 divided by the highest frequency of any feature in 𝐹. The calculated weights of the features addressed by a review are then summed up as the utility 𝑈 of the review r (Line 10). After the utility score of each review is calculated, the review 𝑟 with the maximum utility is added to 𝑠 (Lines 12-13) and is subsequently discarded from the review dataset 𝑅 (Line 14). Moreover, the features addressed in 𝑟 are eliminated from the feature list 𝐹 (Line 15) so that these features do not contribute to the utility of any other review, thereby maximizing diversity. This updates the feature list after every selection; the advantage is that the remaining unaddressed features are also considered for display in the top-k diversified set of reviews. The feature list is regenerated for every sentiment-based category of the dataset, i.e., positive, negative, and neutral. Once the sub-set 𝑠 for a specific sentiment is retrieved, it is appended to the original diversified set 𝑆 (Line 17).</ns0:p></ns0:div>
<ns0:div><ns0:p>This diversified set of reviews is returned and presented to the end-user as the top-k diversified reviews, consisting of positive, negative, and neutral reviews covering all possible product features. Figure <ns0:ref type='figure' target='#fig_20'>6</ns0:ref> represents the main contribution and framework of this study, which identifies spammer groups and presents the remaining non-spam reviews in a diversified format. The execution of the framework starts with the Yelp and Daraz datasets, which are described in Section 3.</ns0:p><ns0:p>Next, the proposed Spammer Group Detection (SGD) method highlights the spammers and spam reviews; its working is described in Section 4.1. The output of the SGD method is the set of non-spam reviews from the Yelp and Daraz datasets. These non-spam reviews are the input of the Diversified Set of Reviews (DSR) method, whose working is described in Section 4.2. The output of the DSR method is a diversified set of top-k non-spam reviews having positive, negative, and neutral feedback and covering all possible product features.</ns0:p></ns0:div>
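The greedy selection performed by the DSR algorithm (Figure 5) can be sketched for a single sentiment category as follows. Each review is assumed to carry a set of extracted features; the weighting follows Equation (13) and covered features are removed after each selection, as described above. The names and data layout are assumptions, not the authors' implementation.

```python
# Sketch of the greedy diversified selection (Figure 5 / Equation 13) for one
# sentiment category. Each review is assumed to carry a set of extracted features.
from collections import Counter

def feature_weights(remaining_features, reviews):
    """Equation 13: w(f) = c(f) / max_{f' in F} c(f'), counted over still-uncovered features."""
    counts = Counter(f for r in reviews for f in r["features"] if f in remaining_features)
    top = max(counts.values(), default=1)
    return {f: c / top for f, c in counts.items()}

def select_diversified(reviews, k):
    selected, remaining = [], list(reviews)
    features = set(f for r in reviews for f in r["features"])   # feature list F
    for _ in range(min(k, len(remaining))):
        weights = feature_weights(features, remaining)
        # Utility of a review = sum of the weights of its still-uncovered features.
        best = max(remaining, key=lambda r: sum(weights.get(f, 0.0) for f in r["features"]))
        selected.append(best)
        remaining.remove(best)
        features -= set(best["features"])    # covered features are not rewarded again
    return selected

# The full DSR output concatenates the selections made separately for the
# positive, negative and neutral subsets of the non-spam reviews.
```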
<ns0:div><ns0:head n='5.'>Results and Discussion</ns0:head><ns0:p>This study is evaluated from the following two perspectives: (i) evaluation of the proposed Spammer Group Detection (SGD) method using four deep learning classifiers (CNN, LSTM, GRU and BERT) in terms of accuracy in spam review detection, and (ii) evaluation of the proposed Diversified Set of Reviews (DSR) method in terms of the diversity of the returned set of reviews. These evaluation results are presented in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.'>Evaluation of Spammer Group Detection (SGD) Method using Deep Learning Classifiers</ns0:head><ns0:p>This section describes the evaluation of the proposed SGD method, which identifies suspicious spammer groups and spam reviews, using deep learning classifiers. It presents the analysis of different parameter settings to find the optimal parameters that can be used in the deep learning classifiers for training and testing the SGD method. The study uses standard evaluation measures <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref> to analyze the performance of the proposed SGD method: precision, recall, F1-score and accuracy.</ns0:p><ns0:p>The deep learning classifiers used for training and testing the proposed SGD method are LSTM, GRU, CNN and BERT. In addition, k-fold cross-validation (k = 5) is used to validate the accuracy of the proposed method. The datasets (Daraz and Yelp) are split in a ratio of 80 to 20 for training and testing so that more data can be used to train the deep learning classifiers <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>. The experimental evaluation of SGD is performed in three phases: (i) in the first phase, different parameter settings are analyzed to obtain optimized hyperparameters for the proposed deep learning-based SGD method; (ii) in the second phase, the SGD method is evaluated using different deep learning classifiers to analyze its accuracy; (iii) finally, the performance of the proposed SGD method is compared with existing approaches using different datasets.</ns0:p></ns0:div>
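A sketch of this evaluation protocol is given below: an 80/20 split, 5-fold cross-validation on the training portion, and the four reported metrics computed with scikit-learn. The `build_classifier` argument is a placeholder for any of the deep learning models; the exact training interface used in the paper is not specified, so this is illustrative only.

```python
# Sketch of the evaluation protocol: 80/20 train-test split, 5-fold cross-validation
# and the standard metrics (precision, recall, F1, accuracy). X and y are assumed to be
# NumPy arrays; build_classifier() is a stand-in for CNN/LSTM/GRU/BERT model factories.
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

def evaluate(X, y, build_classifier):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # 5-fold cross-validation on the training portion.
    fold_accuracies = []
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X_train):
        model = build_classifier()
        model.fit(X_train[train_idx], y_train[train_idx])
        fold_accuracies.append(accuracy_score(y_train[val_idx], model.predict(X_train[val_idx])))

    # Final evaluation on the held-out 20% test set.
    model = build_classifier()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    return {
        "cv_accuracy": float(np.mean(fold_accuracies)),
        "precision": precision_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
        "accuracy": accuracy_score(y_test, y_pred),
    }
```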
<ns0:div><ns0:head n='5.1.1'>Analysis of Hyperparameters</ns0:head><ns0:p>In the first set of experiments, several initial parameters were considered, as listed in Table <ns0:ref type='table'>4</ns0:ref>. These initial parameters are used as the starting point of the analysis to find optimized hyperparameters of the deep learning classifiers on the Daraz and Yelp datasets. Table <ns0:ref type='table'>4</ns0:ref> lists the values of these initial parameters, which are used in the analysis of the LSTM, GRU, CNN and BERT deep learning classifiers. The deep learning classifiers require numerical values as input; therefore, the review text needs to be converted into a numerical form <ns0:ref type='bibr'>[6][33]</ns0:ref>. In this study, the Daraz review dataset is converted into numerical form using Term Frequency-Inverse Document Frequency (TF-IDF) vectorization, whereas the Yelp review dataset is initialized by looking up the corresponding word embeddings using Google's Word2vec tool 2 with an embedding dimension of 300. The following subsections describe the detailed experiments on the different parameters that helped in optimizing the deep learning classifiers.</ns0:p></ns0:div>
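The two text-representation steps can be sketched as follows: TF-IDF vectors for the Roman Urdu (Daraz) reviews and 300-dimensional Google News word2vec embeddings for the Yelp reviews. The embedding file name is the standard public distribution (an assumption), and the per-review averaging shown here is a simplification; the sequence models in the paper would instead consume per-token embeddings.

```python
# Sketch of the text-to-numeric conversion described above: TF-IDF for Daraz reviews
# and pretrained 300-d word2vec embeddings for Yelp reviews.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import KeyedVectors

def tfidf_features(daraz_texts, max_features=4000):
    vectorizer = TfidfVectorizer(max_features=max_features)
    return vectorizer.fit_transform(daraz_texts), vectorizer

def word2vec_features(yelp_texts, path="GoogleNews-vectors-negative300.bin"):
    w2v = KeyedVectors.load_word2vec_format(path, binary=True)   # 300-d embeddings
    def embed(text):
        vectors = [w2v[w] for w in text.split() if w in w2v]
        return np.mean(vectors, axis=0) if vectors else np.zeros(300)
    return np.vstack([embed(t) for t in yelp_texts])
```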
<ns0:div><ns0:head>i. Activation Function</ns0:head><ns0:p>The activation function takes the output signal from the previous node (layer) and converts it into a usable form that can be taken as input by the next node (layer). This study first analyzes different non-linear activation functions (tanh, relu and sigmoid) on the different deep learning classifiers. Next, based on the experimental results, the best-performing activation function is used in each deep learning classifier for training and testing the proposed SGD method. Figure <ns0:ref type='figure' target='#fig_22'>7</ns0:ref> presents the effect of the different activation functions on the accuracy of the deep learning classifiers on the Daraz and Yelp datasets.</ns0:p><ns0:p>ii. Optimization Method. Deep learning classifiers usually incur a certain loss or error during training <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref>, which is calculated using a cost function. The purpose of the optimization method is to train the classifier effectively so that the error or loss is minimized. This study analyzes different optimization methods (SGD, RMSProp, Adagrad, Adadelta, Adam, Adamax and Nadam) on the different deep learning classifiers. Based on the experimental results, the best-performing optimization method is used in each deep learning classifier for training and testing the proposed SGD method. Figure <ns0:ref type='figure' target='#fig_24'>8</ns0:ref> presents the effect of the different optimization methods on the accuracy of the deep learning classifiers.</ns0:p><ns0:p>iii. Dropout Function. Deep learning classifiers are prone to overfitting, especially on low volumes of data <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref>. Therefore, the dropout function is used to mitigate the overfitting problem. Figure <ns0:ref type='figure' target='#fig_26'>9</ns0:ref> presents the effect of different dropout values on the accuracy of the deep learning classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head>iv. Number of Units</ns0:head><ns0:p>The architecture of a deep learning classifier is generally controlled by the number of units (layers) and the number of nodes in each hidden unit <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_29'>10</ns0:ref> presents the effect of different numbers of units (50, 100, 150 and 200) on the accuracy of the deep learning classifiers on the Daraz and Yelp datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>v. Number of Features</ns0:head><ns0:p>In deep learning classifiers, the words in the datasets are mostly represented as features <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref>. The number of features fed into a classifier needs to be limited to the most frequently occurring words rather than taking all features; this helps to reduce the overfitting problem. Figure <ns0:ref type='figure' target='#fig_30'>11</ns0:ref> presents the effect of the number of features on the accuracy of the deep learning classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Optimized Parameters for Deep Learning Classifiers</ns0:head><ns0:p>A comprehensive experimental evaluation was presented in Section 5.1.1 to find the optimized hyperparameters of the deep learning classifiers. After analyzing the different parameter settings, the final set of optimized parameters for the proposed deep learning-based SGD method is shown in Table <ns0:ref type='table'>5</ns0:ref>. These parameter settings are used for the training and testing of the deep learning classifiers (CNN, LSTM, GRU and BERT).</ns0:p></ns0:div>
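As an illustration of where these tuned hyperparameters enter a model, the sketch below builds an LSTM classifier in Keras with configurable activation, optimizer, dropout rate, number of units and vocabulary size (number of features). The exact layer stack of the paper's models is not specified, so this architecture is an assumption used only to show the role of each parameter.

```python
# Illustrative Keras sketch showing where the tuned hyperparameters plug in:
# activation, optimizer, dropout rate, number of units and number of features.
# The layer stack is an assumption, not the paper's exact architecture.
import tensorflow as tf

def build_lstm_classifier(num_features=4000, embedding_dim=300,
                          units=100, dropout_rate=0.3,
                          activation="sigmoid", optimizer="adam"):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=num_features, output_dim=embedding_dim),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(1, activation=activation),  # spam / not-spam output
    ])
    model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"])
    return model
```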
<ns0:div><ns0:head n='5.1.2'>Analysis of deep learning classifiers using the SGD method</ns0:head><ns0:p>In this subsection, the deep learning classifiers are evaluated in terms of the accuracy achieved by each classifier. Table <ns0:ref type='table'>6</ns0:ref> shows that the CNN classifier performs better than the LSTM and GRU classifiers on the Daraz and Yelp datasets. It has been observed in the literature <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref> that CNN may perform better for text classification; this observation applies to this study because the classifiers operate on review (text) datasets about products and services. The CNN classifier uses kernels that slide along the features and weights, and this mechanism may also suit the utilized datasets. The literature also indicates that the LSTM classifier processes features or words using a sequential learning method [39], which may be less suited to the utilized datasets; therefore, CNN produces better accuracy than LSTM. The GRU classifier uses internal memory to store and filter information through its update and reset gates <ns0:ref type='bibr'>[40][41]</ns0:ref>, which may explain why it produces a better accuracy score than the LSTM classifier. Finally, it has been observed in the literature that BERT performs better in applications where a search query is matched against results using Natural Language Processing (NLP) and may be best suited to analyzing the sentiments (positive or negative) of reviews <ns0:ref type='bibr' target='#b43'>[42]</ns0:ref>. These observations are less favourable for this study, since the framework is based on the identification of group spammers using individual and group spammer behavioral features; therefore, BERT does not achieve a better accuracy score than the other classifiers.</ns0:p><ns0:p>Table <ns0:ref type='table'>6</ns0:ref> also shows that the deep learning classifiers produce better accuracy on the Yelp dataset (355,210 reviews) than on the Daraz dataset (3,922 reviews). The reason for the weaker performance on the Daraz dataset is that the deep learning classifiers use word2vec, which relies on semantics to encode words and therefore requires a large corpus to build a good vocabulary. As the Daraz dataset does not provide as large a corpus of Roman Urdu reviews as the Yelp dataset provides for English, the classifiers produce better accuracy results on the Yelp dataset than on the Daraz dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.3'>Performance Comparison with Existing Approaches using Different Datasets</ns0:head><ns0:p>In this subsection, we present a comparison of the proposed SGD method with previous studies that use various datasets for group spam detection. The comparison is presented in Table <ns0:ref type='table'>7</ns0:ref>, which demonstrates the effectiveness of the proposed approach in terms of accuracy on the Amazon and Yelp datasets. The Amazon dataset is a real-world product review dataset; to conduct this comparison, this study utilized 15,342 reviews by 10,522 reviewers of 5,312 products from the Amazon dataset. The Yelp dataset contains 355,210 reviews by 74,446 reviewers of 2,488 hotels and restaurants. The accuracy results presented in Table <ns0:ref type='table'>7</ns0:ref> show that the proposed Spammer Group Detection (SGD) method outperforms the existing methods <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>[17] <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. Mukherjee et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> utilized individual and group behavioral features to identify spam reviews on the Amazon dataset and obtained an accuracy of 86%; in comparison, the proposed approach achieved an accuracy of 91% on the Amazon dataset. Next, we compared the proposed approach with Kaghazgaran et al. <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>, who utilized a neighbourhood-based method to spot spammer groups in an online review system on the Amazon dataset; the proposed SGD method achieves an improved accuracy of 91% compared with their 89%. Finally, we compared the proposed approach, which utilizes linguistic and behavioral features, on the Yelp dataset: the proposed SGD method improves the accuracy to 86%, compared with the 79% achieved by Rayana et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. Table <ns0:ref type='table'>7</ns0:ref> presents a comprehensive comparison that validates the improved performance of the proposed approach over existing studies on different datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.'>Evaluation of Diversified Set of Reviews (DSR) Method</ns0:head><ns0:p>In this section, the proposed DSR method is evaluated in terms of presenting reviews in a diversified manner, representing positive, negative, and neutral sentiments and covering all related features of the product or service. Figure <ns0:ref type='figure' target='#fig_32'>12</ns0:ref> shows the review presentation before and after applying the DSR method, using real reviews from the Yelp dataset. Figure <ns0:ref type='figure' target='#fig_32'>12</ns0:ref>(a) shows the non-spam reviews of a specific hotel located in New York, whereas Figure <ns0:ref type='figure' target='#fig_32'>12</ns0:ref>(b) presents the top k non-spam reviews with positive, negative, and neutral feedback covering all possible features of the hotel after applying the DSR method. For the reader's convenience, the features of each review are highlighted in bold. In this example the top k value has been set to 5, displaying 2 positive reviews, 2 negative reviews and 1 neutral review.</ns0:p><ns0:p>The current research proposes 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 to analyze the performance of the DSR method. 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 is calculated on the basis of the features addressed in each review, and there is a direct relation between 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 and review diversification: a higher 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 represents more diversified top k reviews. The impact score of a feature is reduced if the feature appears more than once in the top k reviews. For example, if a feature occurs once in the selected reviews its score is 1, but if it occurs 3 times its score is reduced to 1/3 = 0.33. The impact score is calculated by Equation (14) as follows:</ns0:p><ns0:p>𝐼𝑚𝑝𝑎𝑐𝑡 𝑆𝑐𝑜𝑟𝑒 (𝑓) = 1 / (𝑐𝑜𝑢𝑛𝑡 𝑜𝑓 𝑓 𝑖𝑛 𝑆) (14)</ns0:p><ns0:p>The Review Diversity Score of a review sums the impact scores of the features it addresses: the higher the score, the more features are covered by the review and the more diversified that review is with respect to the remaining diversified set of reviews. The mathematical representation of the Review Diversity Score is given in Equation (15):</ns0:p><ns0:formula xml:id='formula_14'>𝑅𝑒𝑣𝑖𝑒𝑤 𝐷𝑖𝑣𝑒𝑟𝑠𝑖𝑡𝑦 𝑆𝑐𝑜𝑟𝑒 (𝑟) = ∑ 𝑓 ∈ 𝑟 𝐼𝑚𝑝𝑎𝑐𝑡𝑆𝑐𝑜𝑟𝑒(𝑓)<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>For calculating 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒, the review diversity scores (Equation <ns0:ref type='formula' target='#formula_14'>15</ns0:ref>) are normalized by dividing them by the maximum review diversity score in the review set. Finally, these normalized scores are summed to obtain the 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 of the diversified set, as given in Equation (16):</ns0:p><ns0:formula xml:id='formula_15'>𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 = ∑ 𝑟 ∈ 𝑆 𝑅𝑒𝑣𝑖𝑒𝑤𝐷𝑖𝑣𝑒𝑟𝑠𝑖𝑡𝑦𝑆𝑐𝑜𝑟𝑒(𝑟) / 𝑚𝑎𝑥 𝑟 ∈ 𝑆 (𝑅𝑒𝑣𝑖𝑒𝑤 𝐷𝑖𝑣𝑒𝑟𝑠𝑖𝑡𝑦 𝑆𝑐𝑜𝑟𝑒 (𝑟))<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>Table <ns0:ref type='table'>8</ns0:ref> shows the experimental evaluation conducted on a product from the Daraz dataset and a service from the Yelp dataset. First, this study selected the Daraz product with the maximum number of reviews, 80 in total, comprising 27 negative, 41 positive and 12 neutral reviews. The diversified set obtained for this product achieved a 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 of 7.14. Similarly, the Yelp service selected for experimental evaluation contains a total of 574 reviews, including 325 positive, 212 negative and 37 neutral reviews, and achieved a 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒 of 10.88, which is better than the Daraz 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒. 
The reason is that the Yelp service has more reviews than the Daraz product, which produces a richer and more diversified set of features for analysis.</ns0:p></ns0:div>
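The diversity evaluation of Equations (14)-(16) can be sketched as follows; the review representation (a set of extracted features per review) is an assumption carried over from the earlier sketches.

```python
# Sketch of the diversity evaluation (Equations 14-16): impact score per feature,
# review diversity score per review, and the normalized DivScore of the selected set S.
from collections import Counter

def div_score(selected_reviews):
    # Equation 14: the impact of a feature is 1 / (its count in the selected set S).
    counts = Counter(f for r in selected_reviews for f in r["features"])
    impact = {f: 1.0 / c for f, c in counts.items()}

    # Equation 15: a review's diversity score sums the impact of its features.
    review_scores = [sum(impact[f] for f in r["features"]) for r in selected_reviews]

    # Equation 16: normalize by the maximum review diversity score and sum.
    max_score = max(review_scores)
    return sum(s / max_score for s in review_scores)
```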
<ns0:div><ns0:head n='6.'>Conclusion</ns0:head><ns0:p>This study proposed the Spammer Group Detection (SGD) method and the Diversified Set of Reviews (DSR) method and evaluated them on the real-world Daraz and Yelp datasets. The SGD method uses linguistic, individual spammer behavioral and group spammer behavioral features to calculate the spam score of each candidate group and thereby identify groups of spammers; deep learning classifiers were used for training and testing the proposed SGD method. The proposed DSR method applies a diversification technique to present a diversified set of top-k reviews with positive, negative, and neutral feedback covering all possible features of a specific product or service. The study combines the SGD and DSR methods into a single framework: the non-spam reviews output by the SGD method are used as input to the DSR method, and the outcome of the framework is the set of non-spam reviews of a specific product displayed in a diversified manner. The findings of this research have practical implications for improving the trustworthiness of online products and services for Roman Urdu and English reviews presented in a diversified manner. In future work, additional attributes such as the email ID, IP address and sign-in location of the reviewer may be added to the dataset to improve the accuracy of the spam review detection model. Another future direction is to include location-dependent behavioral features of reviewers for more in-depth identification of spam reviews.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Equation (13): 𝑤(𝑓) = 𝑐(𝑓) / 𝑚𝑎𝑥 𝑓 ' ∈ 𝐹 𝑐(𝑓 ' ), where 𝑐(𝑓) is the frequency of feature f (whose weight is to be calculated) in set F and 𝑚𝑎𝑥 𝑓 ' ∈ 𝐹 𝑐(𝑓 ' ) is the highest frequency of any feature 𝑓 ' available in the feature list 𝐹.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>(a) presents the experimental results of different activation functions applied to deep learning classifiers utilizing Daraz and Yelp datasets. It has been observed from Figure 7 (b) that on Daraz dataset, sigmoid function performs better for CNN and GRU classifiers whereas, relu performs better for LSTM classifier and tanh performs better for BERT classifier. It has also been observed from Figure 7 (c) that on Yelp dataset, the sigmoid function performs better for LSTM, GRU and LSTM classifiers while, tanh performs better for BERT classifier. Therefore, this study utilized best performing activation function in a deep learning classifier to obtain an input signal for the next node using the output of the previous node.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>(a) presents the experimental results of different optimization methods applied to deep learning classifiers utilizing Daraz and Yelp datasets. It has been observed from Figure 8 (b) that on Daraz dataset, Adam performs better for LSTM, GRU and BERT classifiers whereas, Nadam performs better for CNN classifier. It has also been observed from Figure 8 (c) that on Yelp dataset, the Adamax optimization method performs better for LSTM and GRU classifiers while, RMSProp performs better for CNN classifier and SGD performs better for BERT classifier. This study utilized best performing optimization function in deep learning classifiers to effectively train the model such that the error or loss is minimized.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>(a) presents the experimental results of different dropout rates applied to deep learning classifiers utilizing Daraz and Yelp datasets. It has been observed from Figure 9 (b) that on Daraz and Yelp (Figure 9 (c)) datasets, dropout values between (0.2 to 0.5) tends to show good results. Therefore, this study utilized this dropout rate (0.2 to 0.5) in deep learning classifiers to effectively handle the overfitting problem.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>(a) presents the experimental results of adopting different numbers of units (50, 100, 150 and 200) in the deep learning classifiers on the Daraz and Yelp datasets. Through experimental evaluation, no significant change was observed when the number of units was increased, as presented in Figure 10 (b) and (c).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>(a) presents the experimental results of utilizing several features (1000, 2000, 3000 and 4000) on deep learning classifiers using Daraz and Yelp datasets. It has been observed from Figure 11 (b) that on Daraz dataset feature set of 3000 to 4000 words performed better for LSTM, GRU and BERT classifiers. On the other hand, on Yelp dataset (Figure 11 (c)), feature set of 2000 words performed better for LSTM and GRU classifiers and 4000 words performed better for BERT classifier. Through experimental evaluations on CNN classifier, it is observed that applying hyperparameter (number of features) on CNN decreases its accuracy value. Based on this analysis, this study utilized best performing feature set (highlighted in table of Figure 11) in deep learning classifiers to overcome the overfitting problem.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>The execution of the DSR method works in two phases, taking as input the non-spam reviews obtained through the SGD method from the Daraz and Yelp datasets. 1) In the sentiment analysis phase, this study uses Python's TextBlob library 3 to obtain the positive, negative, and neutral sentiments of the Daraz and Yelp reviews. 2) In the feature extraction phase, features are extracted from the review datasets using two different methods: i) for the Daraz dataset, a list of unique features or words is generated programmatically, and almost 7,344 unique Roman Urdu words or features are considered for further evaluation; ii) for the Yelp dataset, this study uses TextBlob for feature extraction. After sentiment analysis and feature extraction of both non-spam review datasets (Daraz and Yelp), the Diversified Set of Reviews (DSR) algorithm is used to present reviews in a diversified manner such that the presented reviews represent positive, negative and neutral sentiments covering all related features of the specific product or service. This study initially sets the top k reviews to k = 10, comprising 4 positive, 4 negative and 2 neutral reviews; this top k value can be adjusted according to the requirements of the end-user.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Example of all positive reviews and diversified reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 describes the example of all positive reviews and top K diversified reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Framework of proposed Spammer Group Detection (SGD) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 explains the proposed Spammer Group Detection (SGD) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Process of structural clustering algorithm for network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 defines structural clustering algorithm for network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Framework of Diversified Set of Reviews (DSR) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 explains the proposed Diversified Set of Reviews (DSR) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Procedure to find Diversified Set of Reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 explains the method to find the Diversified Set of Reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: The framework of the proposed study using SGD and DRS methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 describes the proposed framework utilizing the SGD and DSR methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Effect of different activation functions on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 presents the accuracy of deep learning classifiers using different activation functions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Effect of different optimization methods on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 presents the accuracy of deep learning classifiers using different optimization methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Effect of different dropout values on accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9 presents the accuracy of deep learning classifiers using different dropout values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_28'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Effect of the number of units on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head>Figure 10 presents</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10 presents the accuracy of deep learning classifiers using different number of units.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_30'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Effect of the number of features on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_31'><ns0:head>Figure 11 presents</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11 presents the accuracy of deep learning classifiers using different features.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_32'><ns0:head>Figure 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12: Example of before and after applying DSR method using Yelp dataset.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>2. Utilized the SCAN algorithm to identify candidate spam groups. 3. Employed different individual and group spammer behavioral features to calculate the spam score of each group. 4. Utilized four different deep learning classifiers, namely CNN, LSTM, GRU and BERT, for training and testing of the proposed SGD method.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Table 2 presents the summary of the Daraz dataset. This study utilized the Daraz dataset, which contains product reviews spanning February 2016 to January 2020, and the Yelp dataset, which contains reviews about hotels and restaurants spanning March 2011 to August 2014. For this study, we removed reviewers who posted fewer than two reviews and products with fewer than three reviews from the Daraz and Yelp datasets. We also removed junk characters, numerical values and stop words from the review text, and discarded reviews containing fewer than five words. After preprocessing, the Daraz dataset is reduced to 3,922 reviews and the Yelp dataset to 355,210 reviews.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Table 3 represents the list of notations used in describing individual spammer behavioral features.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='1'>www.daraz.pk</ns0:note>
<ns0:note place='foot' n='2'>https://code.google.com/archive/p/word2vec</ns0:note>
<ns0:note place='foot' n='3'>https://pypi.org/project/textblob/</ns0:note>
<ns0:note place='foot' n='2'>Table 2: Detailed distribution of Daraz dataset used in proposed methods.</ns0:note>
<ns0:note place='foot'>Table 8 header: Positive Reviews, Negative Reviews, Neutral Reviews, and 𝐷𝑖𝑣𝑆𝑐𝑜𝑟𝑒.</ns0:note>
</ns0:body>
" | "Original Manuscript ID: 54398
Original Article Title: “Spammer group detection and diversification of customers' reviews”
(#CS-2020:10:54398: 0:1: REVIEW)
To: PeerJ Computer Science Editor
Dear Editor,
Thank you for reviewing our manuscript. All authors have checked the manuscript and agreed to the resubmission. The manuscript has been thoroughly revised to address all of the reviewers' comments. We are uploading the updated manuscript (a highlighted version and a clean version) prepared according to the template, together with a point-by-point response to the reviewers' comments.
Best regards,
Naveed Hussain et al.
Reviewer 1
Reviewer#1, Concern # 1: The data is relatively old to infer that the technique is valid nowadays.
Author response: Thank you for your in-depth study of the manuscript. This study is based on two datasets.
1. The Yelp real-world dataset contains English reviews about hotels and restaurants written by American reviewers. The main issue in the domain of spam review detection is that real-world labeled (spam and not-spam) datasets are not publicly available [1]. A study conducted by Rayana et al. [2] utilized an unlabeled real-world Yelp dataset for spam review detection and devised a method to label spam and not-spam reviews in the Yelp dataset. Therefore, this study utilized the labeled Yelp dataset provided by Rayana et al. [2]. The Yelp dataset used in this study contains 355,210 reviews about hotels and restaurants spanning from March 2011 to August 2014. Further, the Yelp dataset used in this study is now publicly available to researchers on Kaggle (https://www.kaggle.com/naveedhn/yelp-review-with-sentiments-and-features).
2. The Daraz real-world dataset contains Roman Urdu product reviews written by South Asian reviewers, particularly from Pakistan. The reviews were scraped from daraz.pk using the Python library Scrapy. Moreover, the Daraz product review dataset is publicly available to researchers on Kaggle (https://www.kaggle.com/naveedhn/daraz-roman-urdu-reviews). The Daraz dataset utilized in this study contains 3,922 product reviews spanning from February 2016 to January 2020.
Your concern is that the dataset is relatively old to infer that the technique is valid nowadays.
First, in the domain of spam review detection, it is very difficult to find an up-to-date, real-world labeled review dataset. Second, the literature review shows that researchers mainly consider two directions for the identification of spam reviews [1]: 1) analyzing the review text using tokenization, transformation and feature selection, and 2) analyzing reviewer behavior, such as time burstiness, maximum number of reviews and average rating deviation. These two existing directions for finding spammers and spam reviews do not depend upon the date and time of the posted reviews. Moreover, the proposed SGD technique uses reviewer behaviors to identify spammer groups, which is independent of the time frame of the review dataset. Therefore, to the best of the researchers' knowledge, the six-year-old Yelp dataset does not substantially affect the validity of the proposed method. Moreover, the psychology behind writing spam reviews remains the same as it was six years ago.
Author action: The manuscript has been updated in Section 3, lines (201-202).
Reviewer#1, Concern#2: we can think of an 'Amazon Verified Purchase' to simplify certain computations. But, in general, I would appreciate if you can do or, at least, infer how it should work nowadays.
Author response: Thank you for your suggestion. The Amazon product review dataset, available from this link (http://jmcauley.ucsd.edu/data/amazon/links.html), is a large, real-world product review dataset. It contains the following attributes.
• reviewerID - ID of the reviewer, e.g. A2SUAM1J3GNN3B
• asin - ID of the product, e.g., 0000013714
• reviewerName - name of the reviewer
• helpful - helpfulness rating of the review, e.g., 2/3
• reviewText - text of the review
• overall - rating of the product
• summary - summary of the review
• unixReviewTime - time of the review (unix time)
• reviewTime - time of the review (raw)
In the above dataset, there is no attribute from which we can determine whether a review is an Amazon Verified Purchase. On the other hand, the Daraz.pk website allows only those customers who have purchased a particular product to write reviews about it. Therefore, this study utilized only verified-purchase Roman Urdu product reviews from Daraz. Unfortunately, manufacturers can still find ways for spammers to write spam reviews; for example, they can pay spammers to purchase the product and then write spam reviews about it.
It was observed from the literature review that two major product and service websites, Amazon and Yelp, have properly implemented spam review detection systems [1][3]. Yelp occasionally displays spam reviews on its website to discourage spammers. The spam review filtering algorithms of these two businesses (Amazon and Yelp) are trade secrets [3]. In our previous research (mentioned below), we utilized Amazon and Yelp product/service review datasets, and the proposed algorithms achieved better results when the analysis was performed using spammer behavioral features for the identification of spam reviews [4][5].
Following are our two published studies:
1. This study [4] proposed two different spam review detection methods: (1) Spam Review Detection using the Behavioral Method, which utilizes thirteen different spammer behavioral features to calculate a review spam score that is then used to identify spammers and spam reviews, and (2) Spam Review Detection using the Linguistic Method, which works on the content of the reviews and utilizes transformation, feature selection and classification to identify spam reviews. Experimental evaluations were conducted on a real-world Amazon review dataset comprising 26.7 million reviews and 15.4 million reviewers. The evaluations conducted using the behavioral method achieved better accuracy for the identification of spam reviews.
2. This study [5] utilized the Yelp real-world dataset and achieved better accuracy using individual spammer behavioral features for the identification of spam reviews.
From the above observations, it can be inferred that the Amazon and Yelp algorithms also analyze spammer behavioral features, such as rating deviation, activity window and the ratio of positive reviews, for the identification of spam reviews.
Reviewer#1, Concern # 3: Google has presented BERT and I do not know how it overcomes or facilitate a better detection of spammer groups than the latter. I would appreciate if you can tell something about that. For instance, the use of BERT with your technique, from the hyperparameters to the implementation of both techniques.
Author response: Thank you for your valuable suggestion. Your suggestion has improved the overall presentation of the Results and Discussion (Section 5). This study has now implemented the BERT classifier. Table 6 shows the analysis of the BERT classifier against the other deep learning classifiers. The analysis of the classifiers is described in Section 5.1.2, lines (568-573). The BERT classifier is also properly included in the Analysis of Hyperparameters (Section 5.1.1). The analysis to find the optimized hyperparameters for the deep learning classifiers (CNN, LSTM, GRU and BERT) using the Daraz and Yelp datasets is presented in Figures 7, 8, 9, 10 and 11.
It has been observed from the literature review that BERT performs better for applications where a search query is matched against results using Natural Language Processing (NLP), and it may also be best suited for analyzing the sentiments (positive or negative) of reviews [6]. These observations are not favorable for this study, as the framework is based on the identification of group spammers using individual and group spammer behavioral features. Therefore, BERT does not achieve a better accuracy score compared to the other classifiers.
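For reference, a BERT classifier of the kind discussed above can be fine-tuned with the standard Hugging Face transformers setup. The snippet below is only a minimal sketch of such a setup: the bert-base-uncased checkpoint, the toy example reviews and the default hyperparameters are assumptions for illustration and are not the configuration reported in the revised manuscript.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Binary labels follow the SGD output: 0 = non-spam, 1 = spam (toy examples only).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

reviews = ["Clean rooms and friendly staff.", "Best product ever, everyone must buy now!!!"]
labels = torch.tensor([0, 1])

encoded = tokenizer(reviews, padding=True, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**encoded, labels=labels)      # returns both loss and logits
outputs.loss.backward()                        # an optimizer step would follow during fine-tuning
predictions = outputs.logits.argmax(dim=-1)    # predicted class for each review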
Author action: The manuscript has been updated by including the BERT classifier in Section 5.1.2, lines (568-573). Figures 7, 8, 9, 10 and 11 have been redrawn after including the BERT classifier.
Reviewer#1, Concern # 4: You propose two methods that I can see and understand both, but they seem like two different types of work and I don’t see the connection between these two. For instance, at the beginning of Section 4. I would appreciate a connection between both or in the conclusions. Regarding to the latter, I humbly think it should be rewritten to take into account the usefulness of both techniques working as they are and together.
Author response: Thank you for your suggestion. This study now includes the framework of the proposed model, described in Figure 6, which shows the connection between the SGD and DSR methods. Moreover, it has also been described how the SGD and DSR methods work together to find spammer groups and display the top-K non-spam reviews in a diversified form. These descriptions have been included at the end of Section 4, lines (455-463), and in the conclusion section (Section 6).
Author action: The manuscript has been updated in Section 4 and Section 6, lines (455-463). Figure 6 has also been included in the manuscript, which describes the complete working of the proposed method.
Reviewer 2
Reviewer#2, Concern # 1: In the introduction section, the paper gave a nice overview of the SGD method. It would be better if the authors could enrich the description about the review diversification part (line 93-95).
Author response: Thank you for your suggestion. In the introduction section, we have now included a detailed description of the diversification part (lines 88-103).
Author action: The manuscript has been updated by including a detailed description of the diversification part (lines 88-103).
Reviewer#2, Concern # 2: in literature review, the paper presented existing clustering methods and their comparison with a proposal for review spammer detection. It would be better if a comparison between the proposed review diversification method and existing ones (line 185-189).
Author response: Thank you for your suggestion. The manuscript has been updated at lines (189-199) after adding a comparison between the proposed review diversification method and the existing ones.
Author action: The manuscript has been updated at lines (189-199).
Reviewer#2, Concern # 3: Section 4.2 and Figure 4 presented that the review diversification method used features. However, it is unclear what the features were. Were these features word-based features (from looking at the excel sheet provided in the supplemental material)? It is good to see the authors provided an excel sheet showing the features for the Daraz dataset. It would be better if the authors can describe in details how they used the features in the paper.
Author response: Thank you for your suggestion. Yes, these features are word-based features. Product features represent the important components of the product about which customers are writing reviews. For example, if a laptop is the product, its features include its battery, display screen, RAM, performance, etc. An example of the word-based features is included in the updated manuscript at lines (415-418). This example defines and explains features in product reviews. The manuscript has also been updated by adding more description of how the features were used in the feature extraction process in Section 4.2, lines (412-424), and Figure 4.
Author action: The manuscript has been updated by adding an example of word-based features at lines (415-418). A description of how the word-based features were used in the paper has been added to the manuscript at lines (412-424).
Reviewer#2, Concern # 4: The paper provided detailed parameter analysis of tuning different deep learning models. However, it is unclear to me why the authors introduced deep learning models for evaluation and how the deep learning models were used to evaluate the SGD method.
Author response: Thank you for your concern. Feature engineering is a major task in machine learning. With traditional machine learning classifiers such as Support Vector Machines and Naïve Bayes, we have to perform the feature engineering task separately, while in deep learning the feature engineering is performed automatically. The deep learning architecture is also flexible; optimized hyperparameters of the deep learning classifiers can be selected according to the nature of the dataset. Therefore, in Section 5.1.1 an analysis is performed to find the optimized hyperparameters of the deep learning classifiers using the Daraz and Yelp datasets. How the deep learning models were used to evaluate the SGD method is explained in Section 5.1, lines (476-481). The SGD method is used to label the review dataset as spam and not-spam. Next, the deep learning classifiers are used to validate the accuracy of the labeled review dataset (spam, not-spam) using a training and testing procedure.
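As an illustration of the training and testing procedure described above (deep learning classifiers validating the SGD-produced spam/not-spam labels), the sketch below shows one possible setup. The file name, column names, vocabulary size and other hyperparameters are assumptions made for this example and are not the settings reported in the manuscript.

import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import layers, models

# 'labeled_reviews.csv' with 'text' and 'label' (0 = non-spam, 1 = spam) is a hypothetical
# file standing in for the SGD-labeled dataset.
df = pd.read_csv("labeled_reviews.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42)

tok = Tokenizer(num_words=20000)
tok.fit_on_texts(X_train)
train_seq = pad_sequences(tok.texts_to_sequences(X_train), maxlen=100)
test_seq = pad_sequences(tok.texts_to_sequences(X_test), maxlen=100)

# A small LSTM classifier; CNN, GRU or BERT variants would be swapped in at the same place.
model = models.Sequential([
    layers.Embedding(20000, 128),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_seq, y_train.to_numpy(), validation_split=0.1, epochs=3, batch_size=64)
print(model.evaluate(test_seq, y_test.to_numpy()))   # loss and accuracy against the SGD labels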
Reviewer#2, Concern # 5: Were spammer scores described in Section 4.1.3 used as the ground truth labels which were further used for training and testing the deep learning models?
Author response: Thank you for your concern and clarification. Yes, spammer scores, described in Section 4.1.3, are used as the ground truth labels which were further used for training and testing the deep learning models.
Reviewer#2, Concern # 6: In Section 4.1, SGD needs several parameters, e.g., threshold of review time and threshold of review rating (Equation (2)), minimum number of neighbors (Equation (3)), threshold of review time (Equation (5)), and threshold of spammer group identification (line 358-359). The paper mentioned those parameters were determined through experimental evaluations. However, it is unclear how those parameters were tuned.
Author response: Thank you for your comment. The mentioned parameters are tuned using experimental evaluations. The details of how these parameters were tuned to obtain optimal threshold values have been updated in the manuscript as follows.
• Threshold of review time (Equation 2) has been updated in Section 4.1.1, lines (244-252).
• Threshold of review rating (Equation 2) has been updated in Section 4.1.1, line (253-263).
• Threshold of minimum number of neighbors (Equation 3) has been updated in Section 4.1.2, line (283-295).
• Threshold of review time (Equation 5) has been updated in Section 4.1.3, line (329-333).
• Threshold of spammer group identification has been updated in Section 4.1.3, line (390-397).
Author action: The manuscript has been updated at lines (244-252), (253-263), (283-295), (329-333) and (390-397).
Reviewer#2, Concern # 7: For the review diversification evaluation, the results could be more convincing if they are compared with existing methods described in the literature review section similar to what the authors did for spammer group detection comparison.
Author response: Thank you for your comment. There exists very limited work studying review diversification. Most of the existing studies have used a query-based approach to display diversified views of products. Moreover, these studies have worked with different evaluation measures than the proposed work. It has been observed from the literature review that most of the existing studies used the Discounted Cumulative Gain (DCG) measure for the evaluation of the diversity problem, but DCG calculates the degree to which the diversified results are relevant to the search query [7]. The proposed study does not use any search query to match against the results because it displays product-based top-K reviews. We have proposed a novel measure called DivScore (Equation 16), which has not been used by any existing work to display a diversified view. Therefore, the results of the DSR method cannot be compared with the existing literature, as the way these works calculate diversified reviews does not match ours. For example, a search-query-based technique takes a query such as “Search all mobile phone models which have a 4000 mAh battery and 64 GB internal memory”. The search-query-based technique will select the models relevant to the query and display diversified reviews comparing all mobile phone models having the mentioned specifications. In comparison, the proposed DSR method displays diversified top-K reviews of a specific mobile phone (product) without comparing it with other mobile phones (products).
Reviewer#2, Concern # 8: Please provide the reference of the standard evaluation measure for diversification “divScore” (line 551-552).
Author response: Thank you for your comment. DivScore is an evaluation measure proposed by this study. The DivScore is calculated on the basis of the features addressed in each review. There exists a relation between DivScore and review diversification: the higher the value of DivScore, the more diversified the top-K reviews. It was a typo, and the manuscript has been updated at lines (623-625).
Author action: The manuscript has been updated at lines (623-625).
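Because Equation (16) itself is not reproduced in this letter, the snippet below is only a hypothetical coverage-style score written to illustrate the idea stated above (a higher value for top-K sets that address more product features and all three sentiments). It is not the DivScore formula from the manuscript, and the equal weighting of the two terms is an assumption.

def coverage_style_score(top_k_reviews, all_features):
    """Each review is a dict: {'features': set_of_feature_words, 'sentiment': 'pos'|'neg'|'neu'}."""
    covered = set().union(*(r["features"] for r in top_k_reviews))
    sentiments = {r["sentiment"] for r in top_k_reviews}
    feature_coverage = len(covered) / max(len(all_features), 1)
    sentiment_coverage = len(sentiments) / 3.0
    return 0.5 * feature_coverage + 0.5 * sentiment_coverage     # assumed equal weights

top_k = [
    {"features": {"battery", "price"}, "sentiment": "neg"},
    {"features": {"display", "camera"}, "sentiment": "pos"},
    {"features": {"delivery"}, "sentiment": "neu"},
]
print(coverage_style_score(top_k, {"battery", "price", "display", "camera", "delivery", "ram"}))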
Reviewer#2, Concern # 9: Thanks for providing the dataset. However, it is unclear how the datasets provided in the supplemental material were used for experiments. There are few places in the paper mentioning the dataset statistics, e.g., line 202-203, line 507-508, line 518-520, and line 566-572. However, I found it is difficult to match those statistics against those in the excel sheets provided in the supplemental material. The Yelp dataset for review diversification is missing.
Author response: Thank you for your in-depth study of the manuscript. The Daraz dataset utilized in this study contains 3,922 product reviews, whereas the Yelp dataset contains 355,210 reviews. Due to the limited space provided by the Manuscript Submission System, we were not able to upload the complete datasets. For this reason, the statistics mentioned in the manuscript did not match the uploaded datasets. We have now uploaded the complete Daraz (https://www.kaggle.com/naveedhn/daraz-roman-urdu-reviews) and Yelp (https://www.kaggle.com/naveedhn/yelp-review-with-sentiments-and-features) datasets on Kaggle. The uploaded Yelp dataset has also been updated to contain the review features and sentiments for the DSR method, as were available in the Daraz dataset. Now the statistics of the Daraz and Yelp datasets mentioned in the manuscript at lines (210-211), (574-575), and (587-588) are aligned. As for the objection concerning lines (566-572) of the previous version of the manuscript, this study (in the DSR method) selects from the datasets a single product/service having the maximum number of reviews to calculate the DivScore. Therefore, the total number of reviews presented in Section 5.2 corresponds to a single product/service having the maximum reviews, which is selected for the experimental evaluation of the DivScore. The details of the experimental setting for the DSR method are described at lines (640-646).
Author action: The manuscript has been updated at line (210-211), line (574-575), and line (587-588).
Reviewer#2, Concern # 10: Were the reviews in Figure 1 obtained before and after running your diversification algorithm? If not, could you provide examples before and after you run your review diversification algorithm? How does your algorithm help to present a comprehensive overview of a product or service features?
Author response: Thank you for your comment and suggestion. Figure 1 presents a generic view to introduce the problem of diversification; it is not obtained by applying our proposed DSR method. As per your suggestion, we have included an example scenario by implementing our DSR algorithm and presenting the reviews before and after running it. Figure 12 represents the working of the proposed DSR method. Figure 12(a) shows some non-spam reviews of a specific hotel located in New York before applying DSR, whereas Figure 12(b) presents a comprehensive view of the top-K reviews showing all sentiments, with the product features highlighted in bold. The example has been included in the manuscript at lines (617-622).
Author action: The manuscript has been updated at line (617-622).
References
[1]
Naveed Hussain, Hamid Turab Mirza, Ghulam Rasool, Ibrar Hussain, and Mohammad Kaleem. 'Spam review detection techniques: A systematic literature review.' Applied Sciences 9, no. 5 (2019): 987.
[2]
Rayana, Shebuti, and Leman Akoglu. 'Collective opinion spam detection: Bridging review networks and metadata.' In Proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining, pp. 985-994. 2015.
[3]
Mukherjee, Arjun, Vivek Venkataraman, Bing Liu, and Natalie Glance. 'What yelp fake review filter might be doing?' In Proceedings of the International AAAI Conference on Web and Social Media, vol. 7, no. 1. 2013.
[4]
Naveed Hussain, Hamid Turab Mirza, Ibrar Hussain, Faiza Iqbal, and Imran Memon. 'Spam review detection using the linguistic and spammer Behavioral methods.' IEEE Access 8 (2020): 53801-53816.
[5]
Naveed Hussain, Hamid Turab Mirza, Faiza Iqbal, Ibrar Hussain, and Mohammad Kaleem. 'Detecting Spam Product Reviews in Roman Urdu Script.' The Computer Journal (2020).
[6]
Zhao, Lingyun, Lin Li, and Xinhao Zheng. 'A BERT based sentiment analysis and key entity detection approach for online financial texts.' arXiv preprint arXiv:2001.05326 (2020).
[7]
Adnan Abid, Naveed Hussain, Kamran Abid, Farooq Ahmad, Muhammad Shoaib Farooq, Uzma Farooq, Sher Afzal Khan, Yaser Daanial Khan, Muhammad Azhar Naeem, and Nabeel Sabir. 'A survey on search results diversification techniques.' Neural Computing and Applications 27, no. 5 (2016): 1207-1229.
" | Here is a paper. Please give your review comments after reading it. |
50 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Online reviews regarding different products or services have become the main source for determining public opinion. Consequently, manufacturers and sellers are extremely concerned with customer reviews, as these have a direct impact on their businesses.</ns0:p><ns0:p>Unfortunately, to gain profit or fame, spam reviews are written to promote or demote targeted products or services. This practice is known as review spamming. In recent years, the Spam Review Detection (SRD) problem has gained much attention from researchers, but there is still a need to identify review spammers who often work collaboratively to promote or demote targeted products, which can severely harm the review system. This work presents the Spammer Group Detection (SGD) method, which identifies suspicious spammer groups based on the similarity of all reviewers' activities, considering their review time and review ratings. After removing the identified spammer groups and spam reviews, the resulting non-spam reviews are displayed using a diversification technique. For the diversification, this study proposes the Diversified Set of Reviews (DSR) method, which selects a diversified set of top-k reviews having positive, negative, and neutral feedback covering all possible product features. Experimental evaluations are conducted on Roman Urdu and English real-world review datasets. The results show that the proposed methods outperformed the existing approaches when compared in terms of accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Customer reviews have become the major source to collect different opinions about products and services. These reviews can affect daily life decisions and professional activities: e.g., which restaurant is good, which car to purchase, which product to buy and which doctor to consult. Online reviews can be posted as genuine reviews or spam reviews. Spam reviews are usually written by individuals/spammers to highlight any product or service by giving spam reviews. Spam reviews may directly interpret financial advantages and losses for a company. For example, the large number of favourable reviews about products and services may attract more customers and negative reviews are often a reason for decline in the sale <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Distinguishing fake reviewers from the genuine reviewer in an online forum is a challenging and open research issue. Therefore, in recent years, Spam Review Detection (SRD) problem has gained much attention in the research community. It is believed that without solving this significant problem, the review websites could become a place full of lies and completely useless <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. A recent survey suggests that online reviews for purchasing products or services play a critical role in the decisions of potential buyers <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. It has been observed that 70 percent of customers trust in the reviews written by other customers, while 90 percent read these reviews before making financial decisions <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>.</ns0:p><ns0:p>A spammer group refers to a group of reviewers who works together for writing spam reviews to promote or demote a set of targeted products. Spammer groups are very damaging as they can produce a huge amount of spam reviews collectively. When a group is working collaboratively towards a product, it can take control of the sentiment of the customer for the product. The group of reviewers is usually represented as a set of reviewer-ids. These ids can belong to a single person having multiple ids, multiple persons with single ids and/or combination of both <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>.</ns0:p><ns0:p>In the last few years, writing spam reviews has become a business. Spammers do not write a single review. They usually write many spam reviews for different products to earn money. So, collective behaviors of a group working together on several products can be difficult to identify the spam reviews.</ns0:p><ns0:p>Most of the existing studies have identified spam reviews by utilizing the individual spammer behavioral features <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>[8] <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>.</ns0:p><ns0:p>To the best of our knowledge, the spammer group detection task has not been thoroughly studied for the South Asian languages, specifically for Roman Urdu reviews. Roman Urdu is used for the Urdu language written in Roman script (using English alphabets) <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. 
For example, Urdu sentence:</ns0:p><ns0:p>‫ﮨﮯ'‬ ‫ﺯﯾﺎﺩﮦ‬ ‫ﻗﯿﻤﺖ‬ ‫ﮐﯽ‬ ‫ﺍﺱ‬ ‫ﺳﮯ‬ ‫ﻟﺤﺎﻅ‬ ‫ﺍﺱ‬ ‫ﺍﻭﺭ‬ ‫ﮨﮯ‬ ‫ﮐﻢ‬ ‫ﺑﮩﺖ‬ ‫ﺩﻭﺭﺍﻧﯿﮧ‬ ‫ﮐﺎ‬ ‫ﺑﯿﭩﺮﯼ‬ ‫ﮐﯽ‬ ‫'ﻣﻮﺑﺎﺋﻞ‬ will be written in Roman Urdu as 'Mobile ki battery ka dorania both kam hai aur is lehaz se es ki qeemat both ziada hai' whereas in English it will be translated as 'Mobile's battery duration is too short, and compared to this its price is very high.' Roman Urdu uses English alphabets and Urdu semantics. The first objective to conduct this research is to identify spammer groups and spam reviews in Roman Urdu</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54398:2:0:NEW 8 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science reviews and English reviews to increase customer confidence regarding product purchases in the South Asia, particularly in Pakistan.</ns0:p><ns0:p>It is a common practice that the reviewers usually read two or three reviews from the top reviews and decide about the purchase of the product/service. These top reviews usually contain positive/negative reviews and may not present a complete view of all reviewer's feedback <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> . Figure <ns0:ref type='figure' target='#fig_14'>1</ns0:ref> presents an example scenario of reviews and compares the presentation of non-diversified reviews with diversified reviews. Figure <ns0:ref type='figure' target='#fig_14'>1</ns0:ref>(a) displays all the positive top-k reviews regarding a mobile model which shows the bias toward promoting the product whereas Figure <ns0:ref type='figure' target='#fig_14'>1</ns0:ref>(b) presents an assorted representation of positive, negative, and neutral reviews covering all possible features of the same mobile model. The diversified reviews are more reliable and useful for the user to make the decision <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. The second objective of this research is to present a diversified view of top-k non-spam reviews having positive, negative, and neutral sentiments covering all possible product features. This study proposed a novel Spammer Group Detection (SGD) method to find out suspicious spammer groups that work together to produce spam reviews. SGD works in several phases. First, it produces co-reviewer graphs for identification of reviewers having similarity in reviews based on review post time and review rating. It, then, utilizes the Structural Clustering Algorithm for Networks (SCAN) algorithm to find out candidate spam groups. Next, it analyzes various individual spammer behavioral features and group spammer behavioral features to calculate the spam score of each candidate group reviews. Finally, using a predefined threshold value, the group having a higher spam score than the threshold is labeled as spam group and assumed that all reviews written by that group are spam. The training and testing of the proposed SGD method are conducted using deep learning classifiers:</ns0:p><ns0:p>Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Bidirectional Encoder Representations from Transformers (BERT).</ns0:p><ns0:p>In addition to finding non-spam reviews using SGD method, this work also presents a novel diversification method; named as Diversified Set of Reviews (DSR), which selects diversified set of top-k non-spam reviews having positive, negative, and neutral reviews/feedback covering all possible product features. 
The DSR method is used to present the non-spam reviews identified by the SGD method. The DSR approach works in several phases: first, the review dataset of the product or service is divided into three categories based on positive, negative and neutral sentiments; then, the diversified feature extraction process is applied for each product or service so that expanded features are extracted. In the end, all three diversified categories are combined to display the top-K diversified non-spam reviews of a particular product or service, having varied sentiments and expanded product features. The DSR method saves time for the reviewers, who can decide about products and services without reading all reviews.</ns0:p>
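A minimal sketch of the DSR flow just described is given below, assuming English review text. TextBlob's polarity score and a simple noun-based notion of product features are stand-ins chosen for illustration; they are not the sentiment and feature-extraction steps actually used by the DSR method.

from textblob import TextBlob

def sentiment_bucket(text):
    polarity = TextBlob(text).sentiment.polarity
    return "pos" if polarity > 0.1 else "neg" if polarity < -0.1 else "neu"

def features(text):
    # treat nouns as a rough stand-in for product features (battery, price, delivery, ...)
    return {word.lower() for word, tag in TextBlob(text).tags if tag.startswith("NN")}

def dsr_top_k(non_spam_reviews, k=6):
    buckets = {"pos": [], "neg": [], "neu": []}
    for review in non_spam_reviews:
        buckets[sentiment_bucket(review)].append(review)
    selected, covered = [], set()
    while len(selected) < k and any(buckets.values()):
        for bucket in ("pos", "neg", "neu"):                 # round-robin over sentiments
            if buckets[bucket] and len(selected) < k:
                # greedily take the review that adds the most not-yet-covered features
                best = max(buckets[bucket], key=lambda r: len(features(r) - covered))
                buckets[bucket].remove(best)
                selected.append(best)
                covered |= features(best)
    return selected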
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The outcome of this research is a proposed framework which, first, identify spammer and spam reviews using SGD method and then apply DSR method to produce a diversified set of top k non-spam reviews. The framework helps in identifying group spammers and displays identified non-spam reviews in diversified manner. To the best of the researchers' knowledge, this is the first study of its kind to identify spammer group and display diversified set of top K non-spam reviews about products and services.</ns0:p><ns0:p>Main contributions of the study are as follows:</ns0:p><ns0:p>1. Formulated a co-reviewer graph to identify the connection between different reviewers. and testing the proposed SGD method. 5. Proposed a novel diversification method (DSR) to obtain a diversified set of reviews. The rest of the paper is organized as follows. Introduction section is followed by section 2 which presents related work regarding group spam detection and existing diversification techniques. Section 3 describes the statistics of the datasets used for this research. Section 4 elaborates the proposed SGD and DSR methods. Section 5 describes the experimental evaluation of the proposed methods. Finally, Section 6 concludes the work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Literature Review</ns0:head><ns0:p>This study has reviewed the literature from two perspectives (a) identifying the spammer group in the Spam Review Detection (SRD) domain and (b) diversification techniques used in the domain of information retrieval. The aim is to determine the novel contributions of the proposed work by comparing it with prior studies.</ns0:p><ns0:p>i.</ns0:p></ns0:div>
<ns0:div><ns0:head>Group Spammer Detection Method</ns0:head><ns0:p>In this subsection, existing studies of group spam detection have been reviewed and analyzed. Mukherjee et al., <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> conducted the first study for detecting a group of spam reviewers working together. They used the frequent itemset mining method to get candidate groups and proposed GSRank framework for identifying the spam groups.</ns0:p><ns0:p>Allah bakhsh et al., <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> also used frequent item mining techniques. Spammer behavioral features like review time and rating scores were used to detect group spammers. They used the Linear Discriminant Analysis (LDA) model by boosting the count of malicious reviewers based on the burstiness of reviews and rating scores. A Spammer Group Detection (PSGD) method was introduced by Zhang et al., <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, which used a supervised learning approach for spotting spammer groups in online review systems. They used frequent item mining to get candidate spammer groups. Then, the Naive Bayesian and Expectation-Maximum (EM) algorithms were used for classification and identification of spammer groups. They performed their experiment on Amazon.cn dataset. Zhou et al., <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> identified spammer groups by using self-similarity and clustering coefficient methods.</ns0:p><ns0:p>They performed their experiments on Dianping dataset and observed that the clustering coefficient has the best indicator for detecting spammer groups. Rayana et al., <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> proposed a framework called SPEAGLE which used metadata (review text and spammer behavior) of reviews and relational data (review network). This framework can identify fake reviews, spammers, and target products. They also introduced a lightweight version of SPEAGLE called SPLITE which used a subset of features to avoid computational overhead. Li et al., <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> proposed an algorithm to detect individual and group spammers. They proposed Labelled Hidden Markov Modal (LHMM) to identify spammers. They extended their modal to Coupled Hidden Markov Modal (CHMM), which has two parallel HMMs. It represented posting behavior and co-bursting signals. They used hidden states to make a co-bursting network of reviewers to detect spammers who work in a group. Kaghazgaran et al., <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> proposed a framework called TwoFace using a neighborhoodbased method to spot spammer groups in an online review system. First, they exploited different crowdsourcing websites and selected Rapid Workers to get information about their activities of Amazon products in which they were targeted. Next, they have identified product ids from the amazon dataset for products mentioned in crowdsourcing activities. Later, they get a list of reviewers who have written reviews about these products and found reviews of all such reviewers who write a review on those products. After that, they have identified all those reviewers who have written reviews on the same product and created a co-reviewer graph. The model, then, applied the trust rank algorithm, which is based on the PageRank algorithm, to find ranking scores of different suspicious groups. They used a different machine learning algorithm to classify suspicious spammer groups. 
Zhang et al., <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> proposed a CONSGD method that used a cosine pattern and heterogeneous information network method to detect spammer groups. To find a tight spammer group candidate, they used the FP-Growth-like algorithm to find cosine patterns. They restricted the tightness of extracted groups with a low cosine threshold value to achieve efficiency. Xu et al., <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science model marked top-ranked groups as spammer groups. In a similar context, Hu et al., <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref> used the CPM method to find spammer groups with the infinite change in the review stream.</ns0:p><ns0:p>Considering the existing work on spam group detection, most of the related studies <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>[14][15] <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> have used spammer behavioral features to detect spam groups. On the other hand, some researchers used graph-based techniques to identify suspicious spammer groups with a little focus on spammer behavioral features <ns0:ref type='bibr'>[17][18]</ns0:ref>[19] <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>[21] <ns0:ref type='bibr'>[22][23]</ns0:ref>. This research aims to develop a framework that will use both behavioral and graph features. First, it creates connections between suspicious reviewers based on the similarity of their behavioral features and then scans the identified suspicious reviewers to detect spammer groups.</ns0:p><ns0:p>ii.</ns0:p></ns0:div>
<ns0:div><ns0:head>Diversification Method</ns0:head><ns0:p>This subsection analyzes existing diversification methods used for information retrieval. The first study, about the recommender system using diversification technique, was introduced by Ziegler et al., <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref>. They considered top-N reviews and proposed an approach based on clustering, which selects a small subset of reviews that cover better-diversified opinions and high-quality attributes. However, this method used a limited number of reviews, so it is difficult to assure that all required sentiments were considered. Naveed et al., <ns0:ref type='bibr' target='#b24'>[25]</ns0:ref> proposed FREuD method which is based on latent topics. The limitation of the proposed method was that it assigned equal weightage to both the negative and positive sentiment.</ns0:p><ns0:p>Guzman et al., <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> applied different weights to the sentiments and allowed stakeholders to assign desired importance to the sentiments. They proposed a diverse method, which retrieved a set of reviews that represented the diversified opinions of users. Moreover, they have also grouped reviews with similar attributes and sentiments.</ns0:p><ns0:p>Naveed et al., <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> used probabilistic topic modeling for diversification. They extracted the features of product reviews and displayed the diversified reviews based on these extracted features.</ns0:p><ns0:p>Based on the reviewed literature, it has been observed there exist very limited studies which considered review diversification problem out of which most of the related studies <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref>[25] <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> have selected diversified set of top-k reviews having positive and negative sentiments based on search query.</ns0:p><ns0:p>On the other hand, the study <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> used features based approach for displaying top K reviews using search query. However, these existing studies either identifies sentiments or product feature using search queries and no existing study combined product features and sentiments to display diversified review without considering search queries. The aim of this study is to develop a method which can display reviews in a diversified manner such that the presented reviews represent positive, negative and neutral sentiments covering all related features about product and services. To obtain this objective, this study proposed a novel diversification method (DSR) to display diversified set of reviews using sentiment analysis and product features. Manuscript to be reviewed Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Review Datasets</ns0:head><ns0:p>This study has utilized two datasets: a) the Yelp (spam and non-spam reviews) real-world dataset about hotels and restaurants, which was provided by Rayana et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>; Table <ns0:ref type='table'>1</ns0:ref> presents the summary of the Yelp dataset; and b) a Roman Urdu product review real-world dataset, which was scraped from Daraz 1 using the Python library Scrapy; Table <ns0:ref type='table'>2</ns0:ref> presents the summary of the Daraz dataset. The Daraz dataset contains product reviews from a time span of February 2016 to January 2020, while the Yelp dataset contains reviews about hotels and restaurants spanning from March 2011 to August 2014. For this study, we removed those reviewers who posted fewer than two reviews, and we removed those products which have fewer than three reviews from the Daraz and Yelp datasets. This study also removed junk characters, numerical values and stop words from the review text and removed those reviews which contained fewer than five words. After preprocessing, the Daraz dataset is reduced to 3,923 reviews and the Yelp dataset is reduced to 355,210 reviews.</ns0:p></ns0:div>
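A sketch of the preprocessing steps described in this section is shown below. The column names, file name and the tiny stop-word list are assumptions for illustration; in practice a full English or Roman Urdu stop-word list would be supplied.

import re
import pandas as pd

STOP_WORDS = {"the", "a", "is", "and", "hai", "ki", "ka"}     # illustrative placeholder only

def clean(text):
    text = re.sub(r"[^A-Za-z\s]", " ", str(text))              # drop junk characters and digits
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    return " ".join(tokens)

def preprocess(df):
    # keep reviewers with at least two reviews and products with at least three reviews
    df = df[df.groupby("reviewer_id")["review_id"].transform("count") >= 2]
    df = df[df.groupby("product_id")["review_id"].transform("count") >= 3]
    df = df.assign(text=df["review_text"].map(clean))
    return df[df["text"].str.split().str.len() >= 5]           # drop reviews shorter than five words

reviews = preprocess(pd.read_csv("daraz_reviews.csv"))          # hypothetical file name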
<ns0:div><ns0:head n='4.'>Proposed Methods</ns0:head><ns0:p>This research proposed two methods: (i) Spammer Group Detection (SGD) method which detects suspicious groups of reviewers, who write spam reviews to promote or demote the targeted products and services, (ii) Diversified Set of Reviews (DSR) method which selects a diversified set of top-k non-spam reviews having positive, negative, and neutral sentiments. Furthermore, it covers all possible features about the product or service.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.'>Proposed Spammer Group Detection (SGD) Method</ns0:head><ns0:p>This section explains the proposed Spammer Group Detection (SGD) method. The framework of the proposed spam review detection method is described in Figure <ns0:ref type='figure' target='#fig_16'>2</ns0:ref>. The execution of the SGD starts with Daraz (Roman Urdu reviews) dataset. The proposed framework is divided into three phases. In the first phase, the co-reviewer graph of suspicious reviewers is generated which is based on identified similar behavioral features. The advantage of a co-reviewer graph is that it will create the linkage between the suspicious reviewers which are appeared to be similar based on products reviewed. In the second phase, Structural Clustering Algorithm for Networks (SCAN) utilizes a clustering approach to identify candidate spammer groups. In the third phase, the spam score is calculated for these groups based on individual and group spammer behavioral features. The groups having spam score less than a specific threshold are dropped from candidate spam groups. Moreover, it has been assumed that all the reviews of the identified spam group are considered as spam review.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1.'>Co-Reviewer Graph</ns0:head><ns0:p>In this section, the procedure to generate the Co-Reviewer graph is presented. It is represented by</ns0:p><ns0:p>Graph G = (V, E) where vertices V represent the reviewers and edges E represent the similarity between two reviewers. For edge E, a value of 0 represents no similarity between reviewers whereas the value of 1 means that two reviewers are similar. The value of edge E between two reviewers A and B is calculated using Equation (1) as follows: </ns0:p><ns0:formula xml:id='formula_0'>𝜆(𝐴,𝐵) = { 0, ∀ 𝑝 ∈ 𝑃 𝐴 ∩ 𝑃 𝐵 , 𝐶𝑅𝑆(𝐴,𝐵,𝑝) = 0 1, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒<ns0:label>(1</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>CRS(𝐴,𝐵,𝑝) = { 0, (|𝑡 𝑝 𝐴 -𝑡 𝑝 𝐵 | > 𝛼) 𝑂𝑅 (|𝑅 𝑝 𝐴 -𝑅 𝑝 𝐵 | ≥ 𝛽) 1, 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Where represents the time when reviewer A reviewed product p whereas represents the time 𝑡 𝑝 𝐴 𝑡 𝑝 𝐵 when reviewer B reviewed the product p. is the user-defined threshold of review time. Through 𝛼 experimental evaluations and analysis of the dataset, it has been observed that most spammers are active for a short period and post their reviews for at most first few couple of days. This is because spammers want to write spam reviews as soon as possible to give positive impact of the product. The researchers have experimentally evaluated different threshold values of . When threshold value has been increased 𝛼 by 2 then the number of similar reviewers is considerably increased. This also includes those reviewers which are probably the real users and not spammers and have written only one or two reviews. When the threshold value of 1 was used, hardly any similar reviewers were found. Therefore, the optimal value of 𝛼 is specified as 2-days. The threshold value represents reviewer's rating score which is defined as 2 based on the 𝛽 observations of existing study by Sabuti et al., <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. is the rating given by reviewer A on product p and 𝑅 𝑝 𝐴 is the rating score given by reviewer B on product p. Spammer group either tries to promote or demote 𝑅 𝑝 𝐵 the product and therefore give an extreme rating which seems different than other reviewers. If they are promoting a product their goal will be to assign a rating of 4 or 5 whereas in the case of defaming a product, they will give 1 or 2-star rating to the product. This is the reason reviewers with a rating difference of 2 or more are not considered as part of any spammer group. If the threshold value of is set 𝛽 to more than 2, then two reviewers who have rated a product 2 and 5 stars respectively will be treated as Manuscript to be reviewed</ns0:p><ns0:p>Computer Science similar reviewers which is not true at all. On the other hand, if the rating difference is decreased from 2 i.e., is set to less than 2, then a large portion of candidate spammer groups will be created. Therefore, 𝛽 the optimal value of is set to 2. 𝛽 The calculation of Co-Review Similarity (CRS) is performed based on the following conditions. First, the difference in review posting time of two reviewers is compared and then the difference of the reviewer rating is compared. If review posting time is larger than a user-defined threshold or the 𝛼 difference of reviewers rating is greater than threshold , then a value of 0 is given to CRS which 𝛽 represents no similarity between two reviewers. 
On the other hand, if these two conditions become false then a value of 1 is given to CRS which shows that these reviewers are similar to each other and are behaving almost the same way.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.2.'>Candidate Spam Groups using SCAN Algorithm</ns0:head><ns0:p>In this section, candidate spam groups are generated from a co-reviewer graph using the Structural Clustering Algorithm for Networks (SCAN) algorithm. SCAN algorithm was proposed by Xu et al., <ns0:ref type='bibr' target='#b27'>[28]</ns0:ref> for identifying clusters, hubs, and outliers in datasets. However, this research has utilized a SCAN Algorithm (Figure <ns0:ref type='figure' target='#fig_18'>3</ns0:ref>) for identifying only clusters where these clusters are represented as candidate spam groups.</ns0:p><ns0:p>A vertex , representing a reviewer, is called a core if it has a similar structure (commonly 𝑣 ∈ 𝑉 reviewed products in this case) with at least n vertices (or n reviewers) in its neighborhood. Structural similarity of a vertex with its neighbor is calculated using Equation (3) as follows:</ns0:p><ns0:p>𝑣 𝑥 𝑆𝑡𝑟𝑢𝑐𝑡𝑢𝑟𝑎𝑙𝑆𝑖𝑚𝑖𝑙𝑖𝑎𝑟𝑦(𝑣,𝑥) = 𝑛𝑜. 𝑜𝑓 𝑠ℎ𝑎𝑟𝑒𝑑 𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑟𝑠 𝑏𝑒𝑡𝑤𝑒𝑒𝑛 𝑣 𝑎𝑛𝑑 𝑥 (𝑛𝑜.𝑜𝑓𝑣 ' 𝑠𝑛𝑒𝑖𝑔ℎ𝑏𝑜𝑟𝑠) × (𝑛𝑜.𝑜𝑓𝑥 ' 𝑠𝑛𝑒𝑖𝑔𝑏𝑜𝑟𝑠)</ns0:p><ns0:p>(3) This structural similarity score identifies the similarity structure of two vertices with each other. The higher the structural similarity score is, the more similar structure these two vertices have and vice versa.</ns0:p><ns0:p>For a vertex to be considered as the core, the structural similarity score must be greater than a specified threshold with a minimum n number of neighbors. After experimental evaluation, the value of is 𝛾 𝛾 specified as 0.5 and the value of n is taken as 3. A similarity score = 0.5 was set to have confidence that 𝛾 both the reviewers have reviewed at-least half of the similar products. For example, if a reviewer has reviewed 4 products whereas the other has reviewed only 1 of those, then both of these should not be treated as spammer-neighbors. For this reason, the value of cannot be considered as less than 0.5 𝛾 because it may include many such instances. On the other hand, when the value of was increased from 𝛾 0.5, only a few such neighbors were found who reviewed similar products. Similarly, the count of minimum neighbor is set to 3. The reason is that if we consider only 1 or 2 neighbors, almost all the reviewers will be included in the spammer group which does not exist in real scenario as there are always Manuscript to be reviewed</ns0:p><ns0:p>Computer Science some genuine reviewers of the product which should not be treated like spammers. On the other hand, if n is increased from 3, then sparse spammer groups will be identified. This is the reason, a vertex can be 𝑣 treated as a core if it has a similarity score higher than =0.5 with at least n=3 neighbors. If a vertex 𝛾 is in the neighborhood of a core , then it is called Direct Reachable Structure (DRS). Moreover, 𝑥 ∈ 𝑉 𝑣 the connection of vertices v and x has value 1 as computed in the co-reviewer graph. Figure <ns0:ref type='figure' target='#fig_18'>3</ns0:ref> elaborates the working of Structural Clustering Algorithm for Networks (SCAN). First, all the vertices are labeled as unclassified (Line 1). SCAN Algorithm classifies these vertices into members or non-members. For every unclassified vertex (Line 2), it is checked that if it is a core (Line 3), if yes, then a new cluster-ID is generated (Line 4). Once a cluster-ID is generated based on this identified core, all the neighbors of the core are inserted into a queue 'Q' (Line 5). 
After inserting all the neighbors of core in Q (Line 6), every element in Q is used to explore Directly Reachable Structure (DRS) vertices from it. These identified DRS are, then, placed in 'R' (Line 7). Thereafter, each vertex in R is checked for having neighbors with a structural similarity score greater than a specified threshold which are still not 𝛾 assigned to any other cluster (Line 9). Such neighbors of are inserted into with the intention that this 𝑥 𝑄 cluster can grow from those neighbors (Line 10). Then the classification status of is checked and if it is 𝑥 unclassified yet, the current cluster-ID is assigned to it (Line 12). In case a vertex is not identified as core by Line 3 then it is labeled as a non-member (Lines 14-15) so that it should not be checked again for being the core. This is specified to minimize the time complexity.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.3.'>Spam Score using Spammer Behavioral Features</ns0:head><ns0:p>This section describes the third phase which calculates the spam score of candidate spam groups as generated by the SCAN Algorithm (Figure <ns0:ref type='figure' target='#fig_18'>3</ns0:ref>). The spam score of every candidate group is calculated using various individual spammer and group spammer behavioral features. The values of these behavioral features are calculated independently and then the average score of these behavioral features is assigned to that candidate group. In this research, a total of eight spammer behavior features (the combination of individual and group spammer behavioral features) are used to assign a spam score to every group. In addition to these spammer behavioral features, a loss function is also used to reduce the contingency of small groups <ns0:ref type='bibr' target='#b28'>[29]</ns0:ref>. The loss function is defined in Equation ( <ns0:ref type='formula' target='#formula_2'>4</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_2'>𝐿(𝑔) = 1 1 + 𝑒 -(|𝑅 𝑔 | + |𝑃 𝑔 | -3)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Where is the number of reviewers in the group and represents the number of products in the group. 𝑅 𝑔 𝑃 𝑔 i.</ns0:p><ns0:p>Individual Spammer Behavioral Features In this work, three individual spammer behavioral features are used. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Time Burstiness (BST)</ns0:head><ns0:p>Usually, spammers target a product in a short period to achieve their goals. Time Burstiness of a reviewer r is defined in Equation ( <ns0:ref type='formula' target='#formula_3'>5</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_3'>𝐵𝑆𝑇(𝑟) = { 0,𝐿(𝑟) -𝐹(𝑟) > 𝜎 1 - 𝐿(𝑟) -𝐹(𝑟) 𝛽 , 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where is the date of the latest review by r, is the date of the first review by r and is the user-𝐿 ( 𝑟 ) 𝐹 ( 𝑟 ) 𝜎 defined time threshold specified as 3 days. The spammers while reviewing a product, are generally active for a shorter span of time and once their goal is achieved, they do not generate reviews for the product. If the threshold value is increased from 3, then many real users are also included in the candidate-spammers whereas, decreasing this value from 3 returned very few spam reviewers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Maximum Number of Reviews (MNR)</ns0:head><ns0:p>Generally, spammers tend to post a larger number of reviews in a single day. MNR for a reviewer r is defined in Equation ( <ns0:ref type='formula' target='#formula_4'>6</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_4'>MNR(r) = max V_r / max_{r ∈ R}(max V_r)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Where V_r is the number of reviews posted by r in a day, and it is normalized by the maximum number of reviews per day over all reviewers in the review set.</ns0:p></ns0:div>
<ns0:div><ns0:head>Average Rating Deviation (ARD)</ns0:head><ns0:p>Mostly, a spammer gives a different rating from the genuine reviewer's ratings because the purpose of the spammer is a false projection of a product either in a positive or negative sense. Average rating deviation is defined in Equation ( <ns0:ref type='formula' target='#formula_5'>7</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_5'>𝐴𝑅𝐷(𝑟) = 𝑎𝑣𝑔 𝑝 ∈ 𝑃 𝑟 𝛿 𝑝 𝑟 -𝛿 𝑝 𝑟 5<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Where is the set of products reviewed by reviewer r, represents rating score given by r to the 𝑃 𝑟 𝛿 𝑝 𝑟 product p and represents the average rating score of product p given by all reviewers. This value is 𝛿 𝑝 𝑟 then normalized by the maximum rating deviation i.e., 5 in case of a 5-star rating system.</ns0:p><ns0:p>ii. Groups Spammer Behavioral Features A total of five group spammer behavioral features are used in this work. The symbols used in describing group spammer behavioral features is represented in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
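A compact sketch of the three individual behavioural features defined above (Equations 5-7) is given below. The per-reviewer review dates, ratings and aligned product-average ratings are assumed to be prepared beforehand; σ = 3 days follows the text, and the denominator β in the burstiness formula is kept exactly as printed in Equation (5).

SIGMA_DAYS, BETA = 3, 2

def time_burstiness(review_dates):
    span = (max(review_dates) - min(review_dates)).days          # L(r) - F(r)
    return 0.0 if span > SIGMA_DAYS else 1.0 - span / BETA       # as written in Equation (5)

def max_reviews_per_day(review_dates, dataset_max_per_day):
    per_day = {}
    for d in review_dates:
        per_day[d.date()] = per_day.get(d.date(), 0) + 1
    # Equation (6): normalised by the largest single-day review count over all reviewers
    return max(per_day.values()) / dataset_max_per_day

def avg_rating_deviation(own_ratings, product_avg_ratings):
    deviations = [abs(r - avg) / 5.0 for r, avg in zip(own_ratings, product_avg_ratings)]
    return sum(deviations) / len(deviations)                     # Equation (7), 5-star system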
<ns0:div><ns0:head>Review Tightness (RT)</ns0:head><ns0:p>Review tightness of a group is defined as the similarity of reviews by the reviewers of a candidate spammer group. It is defined in Equation (8) as follows:</ns0:p><ns0:formula xml:id='formula_6'>RT(g) = |V_g| / (|R_g| |P_g|)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>Where |V_g| represents the number of reviews in group g, whereas |R_g| |P_g| is the cardinality of the Cartesian product of the reviewer set and the product set in group g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Product Tightness (PT)</ns0:head><ns0:p>Generally, in a spam group, the spammers target some specific products therefore the product tightness is an important spammer behavioral feature. It represents the similarity of products reviewed by reviewers of candidate spammer group. It is defined in Equation ( <ns0:ref type='formula' target='#formula_8'>9</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_8'>𝑃𝑇(𝑔) = | ∩ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 | | ∪ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 |<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>Where represents the number of the common product reviewed by the members in the group</ns0:p><ns0:formula xml:id='formula_9'>| ∩ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 |</ns0:formula><ns0:p>and represents all products reviewed by all members of the group.</ns0:p><ns0:formula xml:id='formula_10'>| ∪ 𝑟 ∈ 𝑅 𝑔 𝑃 𝑟 | Rating Variance (RV)</ns0:formula><ns0:p>The members of a candidate spammer group usually give a similar rating to the reviewed products.</ns0:p><ns0:p>This type of spammer behavior can be identified by calculating Rating Variance which is defined in Equation <ns0:ref type='bibr' target='#b9'>(10)</ns0:ref> as follows:</ns0:p><ns0:formula xml:id='formula_11'>𝑅𝑉(𝑔) = 2 ( 1 -1 1 + 𝑒 -𝑎𝑣𝑔 𝑝 ∈ 𝑃 𝑔 𝑆 2 (𝑝,𝑔) ) 𝐿(𝑔)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Where represents the variance of the rating scores of product p by all reviewers of group g. 𝑆 2 ( 𝑝,𝑔 )</ns0:p></ns0:div>
<ns0:div><ns0:head>Group Size (GS)</ns0:head><ns0:p>It has been observed in an existing study <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> that spammer groups of 2 or 3 reviewers are usually formed by coincidence, with no intention or common interest to write spam reviews. However, larger groups are usually formed with the intention of writing spam reviews to target a product or service. Therefore, group size is a good feature for observing the behavior of candidate spammer groups, and it is worthwhile to give more weight to larger groups. The group size indicator is defined in Equation (<ns0:ref type='formula' target='#formula_12'>11</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_12'>GS(g) = \frac{1}{1 + e^{-(|R_g| - 3)}}<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>where |R_g| is the number of reviewers in group g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Reviewer Ratio (RR)</ns0:head><ns0:p>In a candidate spammer group, if some products are reviewed by one reviewer while the other reviewers of the same group have not posted any reviews about those products, then this represents suspicious behavior of the reviewers of the candidate spammer group. The Reviewer Ratio is therefore calculated to assess this behavior. It is represented by Equation (<ns0:ref type='formula' target='#formula_13'>12</ns0:ref>) as follows:</ns0:p><ns0:formula xml:id='formula_13'>RR(g) = \max_{p \in P_g} \frac{|R_g^p|}{|R_p|}<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>where |R_g^p| is the number of reviewers in group g who reviewed product p and |R_p| is the total number of reviewers who reviewed product p.</ns0:p><ns0:p>Based on the calculated values of these behavioral features (Equations 5-12), the spam score of each candidate spammer group is calculated by averaging the eight behavioral features. The spam score reflects the suspiciousness of a candidate spammer group: the higher the spam score, the more likely the group contains spam reviewers. Through experimental evaluation, a threshold of 0.6 is defined for the spam score and is used to identify suspicious spammer groups.</ns0:p><ns0:p>While analyzing the existing datasets, it was observed that generally 10-15% of all reviews are spam, so this study decided to follow the same ratio. When the threshold was set to 0.4-0.5, all candidate groups were treated as spam, which resulted in almost 40% of the reviews being labeled as spam. On the other hand, increasing the threshold above 0.6 resulted in only a few candidate spam groups, which produced less than 5% of the total reviews as spam. Therefore, the spam score threshold was set to 0.6, which labeled 13.51% of the reviews as spam and provided optimal results.</ns0:p><ns0:p>The threshold value can also vary depending on the application. For example, when an application needs to identify as many spam reviews as possible, the threshold should be set relatively low. After identifying suspicious spammer groups, all reviews posted by the members of these groups are labeled as spam, which results in the labeled dataset.</ns0:p></ns0:div>
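The remaining two group features and the averaging step could be sketched as follows; group_reviewers_of and all_reviewers_of are assumed dictionaries mapping each product to the set of group members and to the set of all reviewers of that product, respectively, and the 0.6 threshold is the value reported above.

```python
import math

def group_size(n_reviewers):
    # GS(g) = 1 / (1 + exp(-(|R_g| - 3)))  (Eq. 11)
    return 1.0 / (1.0 + math.exp(-(n_reviewers - 3)))

def reviewer_ratio(group_reviewers_of, all_reviewers_of):
    # RR(g) = max over products of |R_g^p| / |R_p|  (Eq. 12)
    return max(len(group_reviewers_of[p]) / len(all_reviewers_of[p])
               for p in group_reviewers_of)

def spam_score(feature_values, threshold=0.6):
    # Average of the eight behavioural feature values; groups whose score
    # reaches the threshold are flagged as suspicious spammer groups.
    score = sum(feature_values) / len(feature_values)
    return score, score >= threshold
```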
<ns0:div><ns0:head n='4.2.'>Proposed Diversified Set of Reviews (DSR) Method</ns0:head><ns0:p>In this section, a novel Diversified Set of Reviews (DSR) method is proposed, which returns a compact set of diversified non-spam reviews having positive, negative, and neutral sentiments and covering the maximum number of features. In contrast to earlier techniques, this work not only retrieves reviews with diversified sentiments (i.e., positive, negative, and neutral) but also displays reviews covering all possible product features. Product features represent important components of the product</ns0:p></ns0:div>
<ns0:div><ns0:p>about which customers are writing reviews. For example, if a laptop is the product, its features include battery life, display screen, RAM, performance, and so on. The proposed DSR approach displays diversified reviews based on features and sentiments simultaneously. Figure <ns0:ref type='figure' target='#fig_20'>4</ns0:ref> represents the framework of the DSR method and shows that the review dataset (non-spam reviews), obtained through the SGD method (Section 4.1), is used as the input of DSR. In this approach, the review dataset of a specific product is first divided into three categories based on positive, negative, and neutral sentiments; then the diversified feature extraction process is applied to these categories separately. In the feature extraction process, a feature set is prepared for each product review. For example, consider the product review 'When I received the device and hit the power button, it didn't turn on easily. After setting it up, I notice that the volume up key doesn't work and the volume was raised randomly'. In this review, 'power button' and 'volume up key' are features. Similarly, feature sets are prepared for all reviews of all products. For a specific product, three feature sets are prepared based on its positive, negative, and neutral categories.</ns0:p><ns0:p>Next, each review of a product is assigned a weight based on its features using Equation (<ns0:ref type='formula'>13</ns0:ref>), which is then used to calculate its utility score. All reviews are evaluated by comparing their utility scores to select the top k reviews of the product. Figure <ns0:ref type='figure' target='#fig_22'>5</ns0:ref> explains this process in detail. Finally, the three diversified datasets of a product (positive, negative, and neutral) are combined to display the top K diversified reviews having varied sentiments and expanded product features. The sentiments and features of the reviews are extracted using Python's built-in library TextBlob <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_22'>5</ns0:ref> elaborates the working of the DSR algorithm, which begins with the set of reviews R containing positive, negative, and neutral reviews (Line 1). The diversified result set S is initially set to empty (Line 2). The algorithm considers one type of sentiment (positive, negative, or neutral) at a time and iterates over all three sentiments (Line 3). For example, if the dataset R contains 25 positive, 15 negative and 20 neutral reviews, then in the first iteration the value of k covers all positive reviews (Line 4), in the next iteration all negative reviews, and in the last iteration all neutral reviews. In the next step, the feature set F is formulated, which consists of all features described by the reviews (Line 5). The diversified set for each sentiment is selected and retrieved separately and is stored in the set s (Line 6). The loop iterates through the count of the specific sentiment, k (Line 7). For instance, if the number of positive reviews to be returned is 25, this loop runs 25 times and in every iteration retrieves one review to be added to the set s. For each review (Line 8), the features addressed by the review are observed; if these features also exist in the feature list F, they are added into a list (Line 9).
To maximize the diversification process, the features f*(r) selected from set F for review r_i are not considered again for the next review r_j. The weights of these features are calculated using Equation (<ns0:ref type='formula'>13</ns0:ref>), where w(f) is the weight of a feature, c(f) represents the frequency of feature f (whose weight is to be calculated) in set F, and max_{f' \in F} c(f') is the highest frequency of any feature f' available in the feature list F. The calculated weights of the features are then summed up as the utility U(r) of the review r (Line 10). After calculating the utility score for each review, the review r with the maximum utility is added into s (Lines 12-13) and subsequently discarded from the review dataset R (Line 14). Moreover, the features addressed in r are also eliminated from the feature list F (Line 15), so that these features are not added to the utility of any other review, thereby maximizing diversity. This updates the feature list after every selection. The advantage of updating the feature list is that the remaining unaddressed features are also considered for display in the top-k diversified set of reviews. The feature set is regenerated for every sentiment-based category of the dataset, i.e., positive, negative, and neutral. Once the subset for a specific sentiment is retrieved, it is appended to the original diversified set S (Line 17).</ns0:p></ns0:div>
<ns0:div><ns0:p>This diversified set of reviews is returned and presented to the end-user as the top-k diversified reviews, which consist of positive, negative, and neutral reviews covering all possible product features. Figure <ns0:ref type='figure' target='#fig_24'>6</ns0:ref> represents the main contribution and framework of this study, which identifies spammer groups and presents the non-spam reviews in a diversified format. The execution of the framework starts with the Yelp and Daraz datasets. The Daraz dataset is initially unlabeled; the proposed Spammer Group Detection (SGD) method is used to label the Daraz dataset and highlight the spammers and spam reviews. The Yelp dataset is already labelled; thus, the SGD method is not applied to it. The complete description of the datasets is given in Section 3, and the working of the SGD method is described in Section 4.1. Next, the labelled datasets are fed into the deep learning classifiers for training and testing. The output of the deep learning models is the non-spam reviews from the Yelp and Daraz datasets. These non-spam reviews are passed to the Diversified Set of Reviews (DSR) method, whose complete working is described in Section 4.2. The output of the DSR method is a diversified set of top-k non-spam reviews having positive, negative, and neutral reviews/feedback covering all possible product features.</ns0:p></ns0:div>
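The selection loop of Figure 5 can be summarised by the hedged Python sketch below. It is not the authors' implementation: reviews are assumed to arrive as (text, feature-set) pairs, TextBlob's polarity score is used for the sentiment split with illustrative cut-offs, and plain counts of still-uncovered features stand in for the frequency-based weights of Equation (13).

```python
from textblob import TextBlob

def sentiment_bucket(text):
    # TextBlob polarity lies in [-1, 1]; the cut-off values are illustrative.
    polarity = TextBlob(text).sentiment.polarity
    return 'positive' if polarity > 0.1 else 'negative' if polarity < -0.1 else 'neutral'

def dsr_select(reviews, k):
    """Greedy selection for one sentiment category.

    `reviews` is a list of (text, feature_set) pairs; a review's utility is
    the number of still-uncovered features it addresses.
    """
    selected, remaining = [], list(reviews)
    uncovered = set().union(*[feats for _, feats in reviews])
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda rv: len(rv[1] & uncovered))
        selected.append(best)
        remaining.remove(best)
        uncovered -= best[1]   # a covered feature adds no further utility
    return selected
```

Running dsr_select once per sentiment bucket and concatenating the three results mirrors the way the final top-k diversified set is assembled.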
<ns0:div><ns0:head n='5.'>Results and Discussion</ns0:head><ns0:p>This study is evaluated from the following two perspectives: (i) evaluation of the proposed Spammer Group Detection (SGD) method using four deep learning classifiers (CNN, LSTM, GRU and BERT) in terms of accuracy in spam review detection, and (ii) evaluation of the proposed Diversified Set of Reviews (DSR) method in terms of a diversified set of reviews. These evaluation results are presented in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.'>Evaluation of Spammer Group Detection (SGD) Method using Deep Learning Classifiers</ns0:head><ns0:p>This section describes the evaluation of the proposed SGD method, which identifies suspicious spammer groups and spam reviews utilizing deep learning classifiers. It presents the analysis of different parameter settings to find optimal parameters that can be used in the deep learning classifiers for training and testing of the SGD method. The study has used standard evaluation measures <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref> to analyze the performance of the proposed SGD method: precision, recall, F1-score and accuracy.</ns0:p><ns0:p>The deep learning classifiers used for training and testing of the proposed SGD method are LSTM, GRU, CNN and BERT. In addition, K-fold cross-validation (k = 5) is used to validate the accuracy of the proposed method. The datasets (Daraz and Yelp) are split in a ratio of 80 to 20 for training and testing so that more data can be utilized to train the deep learning classifiers <ns0:ref type='bibr' target='#b31'>[32]</ns0:ref>. The experimental evaluation of SGD is performed in three phases: (i) in the first phase, different parameter settings are analyzed to obtain optimized hyperparameters for the proposed deep learning-based SGD method; (ii) in the second phase, the SGD method is evaluated using different deep learning classifiers to analyze its accuracy; (iii) finally, the performance of the proposed SGD method is compared with existing approaches using different datasets.</ns0:p></ns0:div>
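A minimal scikit-learn sketch of this evaluation protocol is given below; X and y stand for the encoded reviews and their SGD-derived labels, and the random seeds are arbitrary choices rather than values from the paper.

```python
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

def split_80_20(X, y):
    # 80/20 train-test split of the encoded reviews X and their spam labels y
    return train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

def evaluate(y_true, y_pred):
    # The four standard measures reported for the SGD method
    return {'precision': precision_score(y_true, y_pred),
            'recall': recall_score(y_true, y_pred),
            'f1': f1_score(y_true, y_pred),
            'accuracy': accuracy_score(y_true, y_pred)}

# 5-fold cross-validation object used to validate classifier accuracy
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
```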
<ns0:div><ns0:head n='5.1.1'>Analysis of Hyperparameters</ns0:head><ns0:p>In the first set of experiments, several initial parameters have been considered, as listed in Table <ns0:ref type='table'>4</ns0:ref>. These initial parameters are used as the starting point of the analysis to find optimized hyperparameters of the deep learning classifiers using the Daraz and Yelp datasets. Table <ns0:ref type='table'>4</ns0:ref> presents the values of these initial parameters, which are used in the analysis of the LSTM, GRU, CNN and BERT deep learning classifiers. Deep learning classifiers require numerical values as input; therefore, the review text needs to be converted into numerical form <ns0:ref type='bibr'>[6][33]</ns0:ref>. In this study, the Daraz review dataset is converted into numeric form using Term Frequency-Inverse Document Frequency (TF-IDF) vectorization, whereas the Yelp review dataset is initialized by finding the corresponding word embeddings using Google's Word2vec tool 2 with a dimension of 300. The following subsections describe the comprehensive experimental details of the different parameters that helped in optimizing the deep learning classifiers.</ns0:p></ns0:div>
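The two encodings described above could be reproduced roughly as follows; the function names, the averaging of word vectors and the embedding file path are assumptions for illustration, since the paper feeds the 300-dimensional embeddings to its networks in its own way.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import KeyedVectors

def tfidf_encode(review_texts, max_features=4000):
    # Daraz (Roman Urdu) reviews: TF-IDF vectors over the most frequent terms
    vectorizer = TfidfVectorizer(max_features=max_features)
    return vectorizer.fit_transform(review_texts), vectorizer

def load_word2vec(path='GoogleNews-vectors-negative300.bin'):
    # Yelp (English) reviews: 300-dimensional Google News word2vec embeddings
    return KeyedVectors.load_word2vec_format(path, binary=True)

def embed_review(text, w2v, dim=300):
    # Average the word vectors of a review purely for illustration
    vectors = [w2v[w] for w in text.split() if w in w2v]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)
```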
<ns0:div><ns0:head>i. Activation Function</ns0:head><ns0:p>The activation function takes the output signal from the previous node (layer) and converts it into a usable form that can be taken as input to the next node (layer). This study first analyzes different non-linear activation functions (tanh, relu and sigmoid) on different deep learning classifiers. Then, based on the experimental results, the best performing activation function is utilized in the deep learning classifiers for training and testing of the proposed SGD method. Figure 7 (a) presents the experimental results of the different activation functions applied to the deep learning classifiers utilizing the Daraz and Yelp datasets. It has been observed from Figure 7 (b) that on the Daraz dataset the sigmoid function performs better for the CNN and GRU classifiers, whereas relu performs better for the LSTM classifier and tanh performs better for the BERT classifier. It has also been observed from Figure 7 (c) that on the Yelp dataset the sigmoid function performs better for the LSTM and GRU classifiers, while tanh performs better for the BERT classifier. Therefore, this study utilized the best performing activation function in each deep learning classifier to obtain the input signal for the next node from the output of the previous node.</ns0:p></ns0:div>
<ns0:div><ns0:head>ii. Optimization Method</ns0:head><ns0:p>Deep learning classifiers usually have a certain loss or error during training <ns0:ref type='bibr' target='#b33'>[34]</ns0:ref>. This loss or error is calculated using a cost function. The purpose of the optimization method is to train the classifier effectively so that the error or loss is minimized. This study analyzes different optimization methods (SGD, RMSProp, Adagrad, Adadelta, Adam, Adamax and Nadam) on different deep learning classifiers. Based on the experimental results, the best performing optimization method is utilized in the deep learning classifiers for training and testing of the proposed SGD method. Figure <ns0:ref type='figure' target='#fig_28'>8</ns0:ref> (a) presents the experimental results of the different optimization methods applied to the deep learning classifiers utilizing the Daraz and Yelp datasets. It has been observed from Figure 8 (b) that on the Daraz dataset Adam performs better for the LSTM, GRU and BERT classifiers, whereas Nadam performs better for the CNN classifier. It has also been observed from Figure 8 (c) that on the Yelp dataset the Adamax optimization method performs better for the LSTM and GRU classifiers, while RMSProp performs better for the CNN classifier and SGD performs better for the BERT classifier. This study utilized the best performing optimization method in each deep learning classifier to train the model effectively such that the error or loss is minimized.</ns0:p></ns0:div>
<ns0:div><ns0:head>iii. Dropout Function</ns0:head><ns0:p>Deep learning classifiers usually suffer from overfitting, especially on a low volume of data <ns0:ref type='bibr' target='#b34'>[35]</ns0:ref>. Therefore, the dropout function is used to overcome the overfitting problem. Figure <ns0:ref type='figure' target='#fig_30'>9</ns0:ref> (a) presents the experimental results of different dropout rates applied to the deep learning classifiers utilizing the Daraz and Yelp datasets. It has been observed from Figure 9 (b) for the Daraz dataset and Figure 9 (c) for the Yelp dataset that dropout values between 0.2 and 0.5 tend to show good results. Therefore, this study utilized dropout rates in the range 0.2 to 0.5 in the deep learning classifiers to handle the overfitting problem effectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>iv. Number of Units</ns0:head><ns0:p>The architecture of a deep learning classifier is generally controlled by the number of units (layers) and the number of nodes in each hidden unit [36]. Figure 10 (a) presents the experimental results of adopting different numbers of units (50, 100, 150 and 200) in the deep learning classifiers utilizing the Daraz and Yelp datasets. Through experimental evaluation, no significant change has been observed after adding more units, as presented in Figure 10 (b) and (c).</ns0:p></ns0:div>
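The hyperparameter analysis above (activation, optimizer, dropout) can be organised as a simple sweep. The following hedged Keras sketch uses a small dense network as a stand-in for the LSTM, GRU, CNN and BERT architectures actually evaluated; the layer sizes, the 4000-dimensional input and the commented-out training call are assumptions for illustration only.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout

def build_classifier(input_dim, activation='sigmoid', optimizer='adam', dropout=0.3):
    # Generic binary spam/not-spam classifier exposing the three parameters under study
    model = Sequential([
        Dense(100, activation=activation, input_shape=(input_dim,)),
        Dropout(dropout),              # 0.2-0.5 worked best in the analysis above
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
    return model

def sweep(input_dim=4000):
    # Illustrative sweep over the settings analysed in Figures 7-9
    for act in ('tanh', 'relu', 'sigmoid'):
        for opt in ('sgd', 'rmsprop', 'adagrad', 'adadelta', 'adam', 'adamax', 'nadam'):
            for rate in (0.2, 0.3, 0.4, 0.5):
                model = build_classifier(input_dim, activation=act, optimizer=opt, dropout=rate)
                # model.fit(X_train, y_train, validation_split=0.1, ...) and record accuracy
```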
<ns0:div><ns0:head>v. Number of Features</ns0:head><ns0:p>In deep learning classifiers, the words in the datasets are mostly represented as features <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref>. The number of features fed to the classifier needs to be limited to the most frequently occurring words rather than taking all features. This helps to reduce the overfitting problem. Figure <ns0:ref type='figure' target='#fig_35'>11</ns0:ref> (a) presents the experimental results of utilizing different numbers of features (1000, 2000, 3000 and 4000) in the deep learning classifiers using the Daraz and Yelp datasets. It has been observed from Figure 11 (b) that on the Daraz dataset a feature set of 3000 to 4000 words performs better for the LSTM, GRU and BERT classifiers. On the other hand, on the Yelp dataset (Figure 11 (c)), a feature set of 2000 words performs better for the LSTM and GRU classifiers, and 4000 words performs better for the BERT classifier. Through experimental evaluation of the CNN classifier, it is observed that applying this hyperparameter (number of features) to CNN decreases its accuracy. Based on this analysis, this study utilized the best performing feature set (highlighted in the table of Figure 11) in the deep learning classifiers to overcome the overfitting problem.</ns0:p></ns0:div>
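As a complement to the vectorization sketch above, the following snippet shows one assumed way of limiting the vocabulary to the most frequent words with the Keras Tokenizer; num_words and maxlen are illustrative values rather than the paper's settings.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def encode_with_vocab_limit(texts, num_words=3000, maxlen=200):
    # Keep only the num_words most frequent words as features (1000-4000 analysed above)
    tokenizer = Tokenizer(num_words=num_words)
    tokenizer.fit_on_texts(texts)
    sequences = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=maxlen)
    return sequences, tokenizer
```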
<ns0:div><ns0:head>Optimized Parameters for Deep Learning Classifiers</ns0:head><ns0:p>A comprehensive experimental evaluation is presented in Section 5.1.1 to find the optimized hyperparameters of the deep learning classifiers. After analyzing the different parameter settings, the final set of optimized parameters for the proposed deep learning-based SGD method is shown in Table <ns0:ref type='table'>5</ns0:ref>. These parameter settings are used for the training and testing of the deep learning classifiers (CNN, LSTM, GRU and BERT).</ns0:p></ns0:div>
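To illustrate how a classifier could be assembled once the optimized parameters are fixed, the following is a minimal Keras sketch of a CNN text classifier whose kernel slides over word features; the filter count, kernel size and other values here are assumptions, not the Table 5 settings.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dropout, Dense

def build_cnn(vocab_size=3000, embed_dim=300, dropout=0.3):
    # Illustrative CNN for binary spam classification of tokenized reviews
    model = Sequential([
        Embedding(vocab_size, embed_dim),
        Conv1D(128, 5, activation='relu'),   # the kernel slides over word features
        GlobalMaxPooling1D(),
        Dropout(dropout),
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
```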
<ns0:div><ns0:head n='5.1.2'>Analysis of Deep Learning Classifiers</ns0:head><ns0:p>In this subsection, the deep learning classifiers are evaluated in terms of the accuracy achieved by each classifier. Table <ns0:ref type='table'>6</ns0:ref> shows that the CNN classifier performs better than the other two classifiers (LSTM and GRU) on the Daraz and Yelp datasets. It has been observed from the literature review <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref> that CNN may perform better for text classification. This observation can be applied in this study, and the CNN classifier can perform better because it utilized a review (text) dataset about products and services. The CNN classifier uses a kernel that slides along the features and the weights, and this mechanism may also suit the utilized datasets. It is observed from the literature review that the LSTM classifier processes the features or words using a sequential learning method <ns0:ref type='bibr' target='#b39'>[39]</ns0:ref>, which may not suit the utilized datasets.</ns0:p><ns0:p>Therefore, CNN produces better accuracy compared to LSTM. It was also observed from the literature review that the GRU classifier uses internal memory for storing and filtering information through its update and reset gates <ns0:ref type='bibr'>[40][41]</ns0:ref>. Therefore, this feature can produce a better accuracy score compared to the LSTM classifier. It has been observed from the literature review that BERT performs better for applications where</ns0:p></ns0:div>
<ns0:div><ns0:p>a search query is used to display matching results using Natural Language Processing (NLP), and it may also be best suited to analyzing the sentiments (positive or negative) of reviews <ns0:ref type='bibr' target='#b42'>[42]</ns0:ref>. These observations are not favourable for this study, as the framework is based on the identification of group spammers using individual and group spammer behavioral features. Therefore, BERT does not achieve a better accuracy score compared to the other classifiers. Table <ns0:ref type='table'>6</ns0:ref> also shows that the deep learning classifiers produce better accuracy results on the Yelp dataset (355,210 reviews) than on the Daraz dataset (3,923 reviews). The reason for the weaker performance on the Daraz dataset is that the deep learning classifiers use word2vec, which utilizes semantics to encode the words and requires a big corpus to build a good vocabulary for encoding. As the Daraz dataset does not provide as big a corpus of Roman Urdu reviews as the Yelp dataset, the deep learning classifiers produce better accuracy results on the Yelp dataset than on the Daraz dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.3'>Performance Comparison with Existing Approaches using Different Datasets</ns0:head><ns0:p>In this subsection, we present a comparison of the proposed SGD method with previously available studies that use various datasets for group spam detection. This comparison is presented in Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>, which demonstrates the effectiveness of the proposed approach in terms of achieved accuracy using the Amazon and Yelp datasets. The Amazon dataset is a real-world product review dataset; to conduct this comparison, this study utilized 15,342 reviews, 10,522 reviewers and 5,312 products from the Amazon dataset. The Yelp dataset contains 355,210 reviews by 74,446 reviewers of 2,488 hotels and restaurants. The accuracy results presented in Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref> show that the proposed Spammer Group Detection (SGD) method outperforms the existing methods <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>[17] <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>. Mukherjee et al. <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> utilized spammer and group behavioral features to identify spam reviews on the Amazon dataset and obtained an accuracy of 86%, whereas the proposed approach achieved an accuracy of 91% when implemented on the Amazon dataset. Next, we compared our proposed approach with Kaghazgaran et al. <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>, who utilized a neighbourhood-based method on the Amazon dataset to spot spammer groups in an online review system; the proposed SGD method achieves an improved accuracy of 91% compared to their accuracy of 89%. Finally, we compared the proposed approach, which utilizes linguistic and behavioral features, on the Yelp dataset. The results show that the proposed SGD method improves the accuracy to 86% compared with the approach proposed by Shebuti et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>, which achieved an accuracy of 79%.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.'>Evaluation of Diversified Set of Reviews (DSR) Method</ns0:head><ns0:p>In this section, the proposed DSR method is evaluated in terms of presenting reviews in a diversified manner, representing positive, negative, and neutral sentiments and covering all related features of a product or service. The DSR method works in two phases and takes as input the non-spam reviews obtained using the SGD method from the Daraz and Yelp datasets. 1) For the sentiment analysis phase, this study utilizes Python's built-in library TextBlob 3 for the Daraz and Yelp datasets to obtain the positive, negative, and neutral sentiments of the reviews. 2) In the feature extraction phase, features are extracted from the review datasets using two different methods: i) for the Daraz dataset, a list of unique features or words is generated programmatically, and almost 7,344 unique Roman Urdu words or features are considered for further evaluation; ii) for the Yelp dataset, this study utilizes Python's built-in library TextBlob for feature extraction. After performing sentiment analysis and feature extraction on both review datasets (Daraz and Yelp) with non-spam reviews, the Diversified Set of Reviews (DSR) algorithm is used to present reviews in a diversified manner such that the presented reviews represent positive, negative and neutral sentiments covering all related features of a specific product or service. This study initially sets the top k reviews (k = 10) considering 4 positive, 4 negative and 2 neutral reviews; this top k value can be adjusted according to the requirements of the end-user.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_36'>12</ns0:ref> presents the working of the proposed DSR method using real reviews from the Yelp dataset. Figure <ns0:ref type='figure' target='#fig_36'>12</ns0:ref>(a) shows non-spam reviews of a specific hotel located in New York, whereas Figure <ns0:ref type='figure' target='#fig_36'>12</ns0:ref>(b) presents the top K non-spam reviews having positive, negative, and neutral reviews/feedback covering all possible features of the hotel after applying the DSR method. For the reader's convenience, the features of each review are highlighted in bold. In this example, the top K value has been set to 5, and it displays 2 positive reviews, 2 negative reviews and 1 neutral review.</ns0:p><ns0:p>The current research proposes DivScore to analyze the performance of the DSR method. The DivScore is calculated on the basis of the features addressed in each review. There is a relation between DivScore and review diversification: a higher DivScore represents more diversified top K reviews. The impact score of a feature is reduced if the feature appears more than once in the top k reviews. For example, if a feature occurs once in the review set, its score is 1, but if it occurs 3 times, the score of the feature is reduced to 1/3 = 0.33. The impact score is calculated by Equation (<ns0:ref type='formula'>14</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_14'>Impact\ Score(f) = \frac{1}{\text{count of } f \text{ in } S}<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>In the above equation, S represents the diversified set of reviews. After calculating the impact score for all the features, the scores of the features in a review are used to calculate the Review Diversity Score (Equation <ns0:ref type='formula' target='#formula_15'>15</ns0:ref>) for that review. The higher the review diversity score, the more features are addressed by that review and the more diversified that review is with respect to the remaining diversified set of reviews. The mathematical representation of the Review Diversity Score is given in Equation (<ns0:ref type='formula' target='#formula_15'>15</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_15'>Review\ Diversity\ Score(r) = \sum_{f \in r} Impact\ Score(f)<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>To calculate DivScore, the review diversity scores (Equation <ns0:ref type='formula' target='#formula_15'>15</ns0:ref>) are normalized by dividing them by the maximum review diversity score in the review set. Finally, these normalized scores are summed to obtain the DivScore for the diversified set. Equation (<ns0:ref type='formula' target='#formula_16'>16</ns0:ref>) is used to calculate DivScore.</ns0:p><ns0:formula xml:id='formula_16'>DivScore = \sum_{r \in S} \frac{Review\ Diversity\ Score(r)}{\max_{r \in S}(Review\ Diversity\ Score(r))}<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>Table <ns0:ref type='table' target='#tab_6'>8</ns0:ref> shows the experimental evaluation conducted on the products/services of the Daraz and Yelp datasets. For this evaluation, the top ten products and services having the maximum number of reviews have been selected from the Daraz and Yelp datasets, because these can represent the maximum number of features about the products and services. The diversified set obtained for each of these products achieved a DivScore, which is displayed in Table <ns0:ref type='table' target='#tab_6'>8</ns0:ref> for all ten products. It can be observed that the services of the Yelp dataset achieved a better DivScore than the products of the Daraz dataset. The reason is that the total number of reviews per service in the Yelp dataset is larger than the total number of reviews per product in the Daraz dataset, which produces a richer and more diversified set of features for analysis.</ns0:p></ns0:div>
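Equations (14)-(16) translate directly into a few lines of Python. The sketch below assumes the features of each selected review have already been extracted into sets; it is illustrative, not the authors' implementation.

```python
from collections import Counter

def div_score(selected_feature_sets):
    """DivScore of a diversified set S (Equations 14-16).

    `selected_feature_sets` holds one feature set per selected review.
    """
    counts = Counter(f for feats in selected_feature_sets for f in feats)
    impact = {f: 1.0 / c for f, c in counts.items()}                                    # Eq. 14
    review_scores = [sum(impact[f] for f in feats) for feats in selected_feature_sets]  # Eq. 15
    top = max(review_scores)
    return sum(score / top for score in review_scores)                                  # Eq. 16
```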
<ns0:div><ns0:head n='6.'>Conclusion</ns0:head><ns0:p>This study proposed the Spammer Group Detection (SGD) method and the Diversified Set of Reviews (DSR) method and evaluated them on real-world datasets, namely Daraz and Yelp. The SGD method used linguistic, behavioral, and group spammer behavioral features to calculate the spam score of each group and identify groups of spammers, and deep learning classifiers were used for training and testing the proposed SGD method. The proposed DSR method applied a diversification technique to present a diversified set of top-k reviews having positive, negative, and neutral feedback covering all possible product features about a specific product or service. The study proposed a framework that combines the SGD method with the DSR method: the non-spam reviews output by the SGD method are used as input to the DSR method, and the outcome of the framework is the non-spam reviews of a specific product displayed in a diversified manner. The findings of this research provide a practical implication for improving the trustworthiness of online products and services for Roman Urdu and English reviews. In future work, additional attributes such as the email id, IP address, and signed-in location of the reviewer may be added to the dataset to improve the accuracy of the spam review detection model. Moreover, another future direction is to include location-dependent behavioral features of reviewers for in-depth identification of spam reviews. Figure <ns0:ref type='figure' target='#fig_36'>12</ns0:ref> shows the review presentation before and after applying the DSR method on the Yelp dataset.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Example of all positive reviews and diversified reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 describes the example of all positive reviews and top K diversified reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Framework of proposed Spammer Group Detection (SGD) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 explains the proposed Spammer Group Detection (SGD) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Process of structural clustering algorithm for network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 illustrates the structural clustering algorithm for networks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Framework of Diversified Set of Reviews (DSR) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 explains the proposed Diversified Set of Reviews (DSR) method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Procedure to find Diversified Set of Reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 explains the method to find the Diversified Set of Reviews.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: The framework of the proposed study using SGD and DRS methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 describes the proposed framework utilizing the SGD and DSR methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Effect of different activation functions on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 presents the accuracy of deep learning classifiers using different activation functions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_28'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Effect of different optimization methods on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 presents the accuracy of deep learning classifiers using different optimization methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_30'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Effect of different dropout values on accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_31'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9 presents the accuracy of deep learning classifiers using different dropout values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_32'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Effect of the number of units on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_33'><ns0:head>Figure 10 presents</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10 presents the accuracy of deep learning classifiers using different number of units.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_34'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Effect of the number of features on the accuracy of deep learning classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_35'><ns0:head>Figure 11 presents</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11 presents the accuracy of deep learning classifiers using different features.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_36'><ns0:head>Figure 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12: Example of before and after applying DSR method using Yelp dataset.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>2. Utilized the SCAN algorithm to identify candidate spam groups. 3. Employed different individual and group spammer behavioral features to calculate the spam score of each group. 4. Utilized four different deep learning classifiers, namely CNN, LSTM, GRU and BERT, for training</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Table 3 represents the list of notations used in describing individual spammer behavioral features.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54398:2:0:NEW 8 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>presents a</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54398:2:0:NEW 8 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>shows performance evaluation of proposed DSR on Daraz and Yelp reviews.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54398:2:0:NEW 8 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Performance evaluation of proposed DSR on Daraz and Yelp reviews.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Top Products</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Yelp Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Daraz Dataset</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>and Services</ns0:cell><ns0:cell>Positive Reviews</ns0:cell><ns0:cell>Negative Reviews</ns0:cell><ns0:cell>Neutral Review</ns0:cell><ns0:cell>Total Reviews</ns0:cell><ns0:cell>𝑫𝒊𝒗𝑺𝒄𝒐𝒓𝒆</ns0:cell><ns0:cell>Positive Reviews</ns0:cell><ns0:cell>Negative Reviews</ns0:cell><ns0:cell>Neutral Review</ns0:cell><ns0:cell>Total Reviews</ns0:cell><ns0:cell>𝑫𝒊𝒗𝑺𝒄𝒐𝒓𝒆</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>6282</ns0:cell><ns0:cell>274</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell>6648</ns0:cell><ns0:cell>10.8</ns0:cell><ns0:cell>41</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>7.1</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>4968</ns0:cell><ns0:cell>722</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>5818</ns0:cell><ns0:cell>9.9</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell>6.9</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>3714</ns0:cell><ns0:cell>225</ns0:cell><ns0:cell>53</ns0:cell><ns0:cell>3992</ns0:cell><ns0:cell>8.8</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>6.5</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>3127</ns0:cell><ns0:cell>221</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>3382</ns0:cell><ns0:cell>8.7</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>6.4</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>2695</ns0:cell><ns0:cell>169</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell>2940</ns0:cell><ns0:cell>8.1</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>6.3</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>2596</ns0:cell><ns0:cell>278</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>2917</ns0:cell><ns0:cell>8.1</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>6.2</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>2500</ns0:cell><ns0:cell>261</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>2800</ns0:cell><ns0:cell>7.9</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>6.2</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>2288</ns0:cell><ns0:cell>257</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>2588</ns0:cell><ns0:cell>7.6</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>6.0</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>2378</ns0:cell><ns0:cell>158</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>2576</ns0:cell><ns0:cell>7.6</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>5.8</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>2328</ns0:cell><ns0:cell>137</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>2493</ns0:cell><ns0:cell>7.5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>5.7</ns0:cell>
</ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>www.daraz.pk</ns0:note>
<ns0:note place='foot' n='2'>https://code.google.com/archive/p/word2vec</ns0:note>
<ns0:note place='foot' n='3'>https://pypi.org/project/textblob/</ns0:note>
<ns0:note place='foot' n='2'>Table 2: Detailed distribution of Daraz dataset used in proposed methods.</ns0:note>
</ns0:body>
" | "Original Manuscript ID: 54398
Original Article Title: “Spammer group detection and diversification of customers' reviews”
(#CS-2020:10:54398: 1:1: REVIEW)
To: PeerJ Computer Science Editor
Dear Editor,
Thank you for reviewing our manuscript. All authors have checked the manuscript and have agreed to the resubmission. The manuscript has been thoroughly revised and addresses all reviewer comments. We are uploading an updated manuscript (a highlighted new version and a clean new version) according to the template. We have also uploaded a point-by-point response to the reviewer comments.
Best regards,
Naveed Hussain et al.
Reviewer 2
Reviewer#2, Concern # 1: Why do you use deep learning models for evaluation? In my opinion, you should use the ground truth --- the spammer scores described in Section 4.1.3 for evaluation. Is it because you have limited ground truth data? Then you will need to rely on an automatic metric to approximate the ground truth. How much ground truth data do you have? You use the ground truth to train your deep learning models, right? What is the performance of the deep learning models? If the deep learning model performs not well, it is not a good approximation of ground truth. If the deep learning model works well, why don't you just use them for spammer detection, as you mentioned they have advantages of using non-handcrafted features.
Author response: Thank you for your concern. The proposed study is based on supervised learning and a binary classification problem (i.e., spam and non-spam reviews); therefore, deep learning models are required for evaluation. For the classification of spam and non-spam reviews, we need a labelled dataset with the labels spam and non-spam. This study utilized two datasets: a) the Yelp real-world labelled dataset (spam and non-spam reviews) about hotels and restaurants, which was provided by Rayana et al. [1], and b) a real-world Roman Urdu product review dataset, which was scraped from Daraz1 using the Python library Scrapy. To the best of the researchers' knowledge, this is the first study of its kind that utilizes a Roman Urdu dataset for the identification of spammer groups. For this reason, we could not find a labelled Roman Urdu review dataset. For this study, we devised the SGD method, which calculates a spam score to label spam and non-spam reviews in the Daraz dataset. On the other hand, the Yelp dataset was already labelled.
We have designed a figure (shown below) to describe the whole process of identifying spammers and non-spam reviews and to elaborate the use of deep learning models for evaluation. This figure also explains the connection between the SGD method (spam score) and the deep learning models. The study proposed the SGD method for labelling the Daraz dataset using the spam score, which produces a ground-truth dataset; the details of how the spam score is calculated are given in Section 4.1.3. We cannot use a spam score-based evaluation because no existing labelled Roman Urdu review dataset about products is available that could be used for the identification of spammer groups. To use the spam score for evaluation, we would need a pre-labelled Roman Urdu review dataset against which the proposed SGD-based labelling could be compared. Moreover, to calculate precision, recall, F1-score and accuracy, we require true positives, true negatives, false positives and false negatives, which is not possible in this case for the reason mentioned above. Therefore, the SGD method (spam score) can only be used for labelling the unlabelled Daraz dataset. As the Yelp dataset was already labelled, the SGD method was not applied to it.
As shown in the figure below, the labelled datasets (Daraz and Yelp) are then fed into deep learning models for training and testing to measure the performance of the models. Deep learning classifiers are used because the trained classifiers can predict the class label (spam or non-spam) for incoming reviews about products and services.
Following are the responses to your questions.
i. How much ground truth data do you have?
The Daraz dataset has 3,923 reviews and Yelp dataset has 355,210 reviews.
ii. You use the ground truth to train your deep learning models, right?
Yes, this study used the ground truth to train the deep learning models. Deep learning models are trained using 80% of real-world ground truth Yelp and Daraz datasets. The testing of these models is performed on remaining 20% of the datasets.
iii. What is the performance of the deep learning models?
The best performing deep learning classifier is the Convolutional Neural Network (CNN). It achieved 81.31% accuracy on the Daraz dataset and 89.41% accuracy on the Yelp dataset. Table 6 of the manuscript shows the performance comparison of the deep learning classifiers on the Daraz and Yelp datasets.
iv. If the deep learning model works well, why don't you just use them for spammer detection, as you mentioned they have advantages of using non-handcrafted features?
Deep learning models perform automatic linguistic feature engineering based on the review text only; they do not automatically detect spammers based on reviewer behavior. For the identification of spammer groups, we need to analyze the behavior of the reviewers, such as time burstiness, the maximum number of reviews and the average rating deviation. Therefore, we have used the spam score to identify spam reviews based on the behaviors of the reviewers. This study proposed eight different individual and group spammer behavioral features for the identification of spammers and spam reviews, as mentioned in Section 4.1.3.
Author action:
Manuscript has been updated from line (455 to 466). It elaborates the complete framework of proposed study. Figure 6 has been updated accordingly.
Reviewer#2, Concern#2: The size for the Daraz dataset should be 3,923. Why did you select only one single product for the review diversification task in Section in Section 5.2? Your proposed method would be better justified and generalized by using reviews from more products.
Author response: Thank you for your valuable suggestion. Your suggestion has improved the overall presentation of Section 5.2. The size of the Daraz dataset has been updated on lines 211 and 578, and Table 2 has been updated accordingly. We have conducted the experimental evaluation of the DSR method using the top ten products and services from the Daraz and Yelp datasets; the results are presented in Table 8. The manuscript has been updated from lines 643-651.
Author action:
Manuscript has been updated from line 643-651, Table 8 and Table 2.
References
[1]
Rayana, Shebuti, and Leman Akoglu. 'Collective opinion spam detection: Bridging review networks and metadata.' In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 985-994, 2015.
" | Here is a paper. Please give your review comments after reading it. |
51 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Global routing is an important step in Very Large Scale Integration (VLSI) design. As the best model for global routing, the X-architecture Steiner Minimal Tree (XSMT) performs well in wire length optimization. XSMT belongs to the non-Manhattan structural model, and its construction cannot be completed in polynomial time, so the generation of an XSMT is an NP-hard problem. In this paper, an X-architecture Steiner Minimal Tree algorithm based on Multi-strategy optimization Discrete Differential Evolution (XSMT-MoDDE) is proposed. Firstly, an effective encoding strategy, a fitness function for XSMT, and a population initialization strategy are proposed to record the structure of the XSMT, evaluate its cost, and obtain better initial particles, respectively. Secondly, an elite selection and cloning strategy, multiple mutation strategies, and an adaptive learning factor strategy are presented to improve the search process of the discrete differential evolution algorithm. Thirdly, an effective refining strategy is proposed to further improve the quality of the final Steiner tree. Finally, the results of the comparative experiments show that XSMT-MoDDE obtains the shortest wire length so far and achieves a better optimization degree on larger-scale problems.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>At present, VLSI technology is developing at high speed. Initially, the models used to solve the global routing problem were based on the Manhattan structure <ns0:ref type='bibr' target='#b12'>(Held et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Siddiqi and Sait, 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chu and Wong, 2007)</ns0:ref>. In this structure, each pin can only be connected in two directions: horizontal and vertical. As this structure developed, limitations in interconnect wire length optimization appeared, and in practice there is still considerable room to optimize the wire length of the Steiner Minimal Tree (SMT). Wire length has a decisive influence on chip performance. For this reason, the non-Manhattan structure, which can make full use of the routing resources and shorten the wire length, has become the mainstream model for global routing <ns0:ref type='bibr' target='#b50'>(Zhu et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b51'>Zhuang et al., 2020;</ns0:ref><ns0:ref type='bibr'>Zhang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The X-architecture Steiner Minimal Tree (XSMT) is a representative model of the non-Manhattan structure <ns0:ref type='bibr' target='#b5'>(Coulston, 2003;</ns0:ref><ns0:ref type='bibr' target='#b3'>Chiang and Chiang, 2002)</ns0:ref>. The SMT problem is to find a minimum connection tree over a given set of pins by introducing additional Steiner points <ns0:ref type='bibr' target='#b21'>(Liu et al., 2014b)</ns0:ref>. Because an SMT cannot be constructed in polynomial time, how to construct an SMT quickly and effectively is a key issue in the VLSI manufacturing process <ns0:ref type='bibr' target='#b20'>(Liu et al., 2015b</ns0:ref><ns0:ref type='bibr' target='#b24'>, 2019)</ns0:ref>. Heuristic search algorithms have a strong ability to solve NP-hard problems <ns0:ref type='bibr' target='#b16'>(Liu et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b18'>, 2020a)</ns0:ref>. DE is a global optimization algorithm proposed by Storn and Price in 1997 <ns0:ref type='bibr' target='#b30'>(Storn and Price, 1997)</ns0:ref>.</ns0:p><ns0:p>Each particle in DE corresponds to a solution vector, and the main process is composed of three steps: mutation, crossover, and selection. The DE algorithm has many advantages, such as robustness, reliability, a simple algorithm structure, and few control parameters, and it has been widely applied in global optimization <ns0:ref type='bibr' target='#b46'>(Zhao et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b10'>Ge et al., 2017)</ns0:ref>, artificial intelligence <ns0:ref type='bibr' target='#b47'>(Zhao et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b34'>Tang et al., 2020c)</ns0:ref>, bioinformatics <ns0:ref type='bibr'>(Zhang et al., 2020)</ns0:ref>, signal processing <ns0:ref type='bibr' target='#b41'>(Yin et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b44'>Zhang et al., 2017)</ns0:ref>, machine design <ns0:ref type='bibr' target='#b48'>(Zhou et al., 2018)</ns0:ref>, and other fields <ns0:ref type='bibr' target='#b28'>(Ren et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b32'>Tang et al., 2020b)</ns0:ref>. The generation strategy of the trial vector and the setting of the control parameters greatly affect the performance of the DE algorithm.
Many scholars have improved DE algorithm in these directions, and it has made great progress in recent years. DE was originally proposed for continuous problems and can not be directly used to solve discrete problems such as XSMT, therefore, this paper explores and formulates a Discrete Differential Evolution (DDE) algorithm for solving XSMT problem.</ns0:p><ns0:p>For this reason, this paper proposes X-architecture Steiner Minimal Tree algorithm based on Multistrategy optimization Discrete Differential Evolution (XSMT-MoDDE). Firstly, we design an encoding strategy, a fitness function of XSMT, and a population initialization strategy based on Prim algorithm for DDE algorithm to record the structure of XSMT, evaluate XSMT and obtain high quality initial solution, respectively. Secondly, we design an elite selection and cloning strategy, a multiple mutation strategy, and an adaptive learning factor strategy to optimize the search process. At the end of the algorithm, an effective refining strategy is proposed to improve the quality of the final XSMT.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head></ns0:div>
<ns0:div><ns0:head>Research status of RSMT and XSMT</ns0:head><ns0:p>Optimizing the wire length of SMT is a popular research direction, and there are many important research achievements. In <ns0:ref type='bibr' target='#b31'>Tang et al. (2020a)</ns0:ref>, three kinds of sub problems and three kinds of general routing methods in Steiner tree construction were analyzed, and the research progress in two new technology modes was analyzed <ns0:ref type='bibr' target='#b31'>(Tang et al., 2020a)</ns0:ref>. <ns0:ref type='bibr'>Chen et al. (2020)</ns0:ref> introduced five commonly used swarm intelligence technologies and related models, as well as three classic routing problems: Steiner tree construction, global routing, and detailed routing. On this basis, the research status of Steiner minimum tree construction, wire length driven routing, obstacle avoidance routing, timing driven routing, and power driven routing were summarized <ns0:ref type='bibr'>(Chen et al., 2020)</ns0:ref>. In <ns0:ref type='bibr' target='#b15'>Liu et al. (2011)</ns0:ref>, Rectilinear Steiner Minimal Tree (RSMT) based on Discrete Particle Swarm Optimization (DPSO) algorithm was proposed to effectively optimize the average wire length <ns0:ref type='bibr' target='#b15'>(Liu et al., 2011)</ns0:ref>. <ns0:ref type='bibr' target='#b13'>Liu et al. (2014a)</ns0:ref> proposed a multi-layer obstacle avoidance RSMT construction method based on geometric reduction method <ns0:ref type='bibr' target='#b13'>(Liu et al., 2014a)</ns0:ref>. <ns0:ref type='bibr' target='#b43'>Zhang et al. (2016)</ns0:ref> proposed a heuristic for constructing a RSMT with slew constraints to maximize routing resources over obstacles <ns0:ref type='bibr' target='#b43'>(Zhang et al., 2016)</ns0:ref>. <ns0:ref type='bibr' target='#b36'>Teig (2002)</ns0:ref> adopted XSMT, which is superior to RSMT in terms of average wire length optimization <ns0:ref type='bibr' target='#b36'>(Teig, 2002)</ns0:ref>. In <ns0:ref type='bibr' target='#b49'>Zhu et al. (2005)</ns0:ref>, an XSMT construction method was proposed by side substitution and triangle contraction methods <ns0:ref type='bibr' target='#b49'>(Zhu et al., 2005)</ns0:ref>. <ns0:ref type='bibr' target='#b23'>Liu et al. (2020c)</ns0:ref> constructed a multi-layer global router based on the X-architecture. Compared with other global routers, it had better performance in overflow and wire length <ns0:ref type='bibr' target='#b23'>(Liu et al., 2020c)</ns0:ref>. <ns0:ref type='bibr' target='#b21'>Liu et al. (2014b)</ns0:ref> proposed a PSO-based multi-layer obstacle-avoiding XSMT, which used an effective penalty mechanism to help particles to avoid obstacles <ns0:ref type='bibr' target='#b21'>(Liu et al., 2014b)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b22'>Liu et al. (2020b)</ns0:ref>, a novel DPSO and multi-stage transformation were used to construct XSMT and RSMT. The simulation results on industrial circuits showed that this method could obtain high-quality routing solutions <ns0:ref type='bibr' target='#b22'>(Liu et al., 2020b)</ns0:ref>. <ns0:ref type='bibr'>Chen et al. (2020)</ns0:ref> proposed an XSMT construction algorithm based on Social Learning Particle Swarm Optimization (SLPSO), which can effectively balance the exploration and exploitation capabilities <ns0:ref type='bibr'>(Chen et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The present situation of DE and DDE algorithm</ns0:head><ns0:p>DE algorithm has high efficiency and powerful search ability in solving continuous optimization problems.</ns0:p><ns0:p>In the past 20 years after its emergence, many scholars have proposed improved versions of DE algorithm.</ns0:p><ns0:p>These improvements better balance the exploitation and exploration ability of DE, and show strong optimization ability on many problems.</ns0:p><ns0:p>An Self-adaptive DE (SaDE) algorithm was proposed in <ns0:ref type='bibr' target='#b26'>Qin et al. (2008)</ns0:ref>. In different stages of the evolution process, the value of control parameters is adjusted according to experience, which saves the trial and error cost of developers in the process of adjusting parameters <ns0:ref type='bibr' target='#b26'>(Qin et al., 2008)</ns0:ref>. <ns0:ref type='bibr' target='#b27'>Rahnamayan et al. (2008)</ns0:ref> proposed an algorithm for accelerating DE, using opposition-based DE and opposition-based learning methods to initialize population and realize generation jumping to accelerate convergence of DE <ns0:ref type='bibr' target='#b27'>(Rahnamayan et al., 2008)</ns0:ref>. Subsequently, <ns0:ref type='bibr' target='#b37'>Wang et al. (2011a)</ns0:ref> proposed an improved version of accelerated DE, which could be used to solve high-dimensional problems <ns0:ref type='bibr' target='#b37'>(Wang et al., 2011a)</ns0:ref>. <ns0:ref type='bibr' target='#b40'>Wang et al. (2011b)</ns0:ref> proposed Composite DE (CoDE). The algorithm proposed three generation strategies of trial vector and three control parameter settings, and randomly combined the generation strategies and control parameters. The experimental results showed that the algorithm had strong competitiveness <ns0:ref type='bibr' target='#b40'>(Wang et al., 2011b)</ns0:ref>. <ns0:ref type='bibr' target='#b39'>Wang et al. (2015)</ns0:ref> combined adaptive DE algorithm with Back Propagation Neural Network (BPNN) to improve its prediction accuracy <ns0:ref type='bibr' target='#b39'>(Wang et al., 2015)</ns0:ref>.</ns0:p><ns0:p>DDE algorithm is a derivative of DE, which can solve discrete problems. Many existing results have applied DDE algorithm to solve practical problems. In <ns0:ref type='bibr' target='#b25'>Pan et al. (2008)</ns0:ref>, DDE was used to solve the permutation flow shop scheduling problem with the total flow time criterion. For the total flow time criterion, its performance is better than the PSO algorithm proposed by predecessors <ns0:ref type='bibr' target='#b25'>(Pan et al., 2008)</ns0:ref>. In <ns0:ref type='bibr' target='#b35'>Tasgetiren et al. (2010)</ns0:ref>, an ensemble of DDE (eDDE) algorithms with parallel populations was presented.</ns0:p><ns0:p>eDDE uses different parameter sets and crossover operators for each parallel population, and each parallel parent population has to compete with the offspring populations produced by this population and all other parallel populations <ns0:ref type='bibr' target='#b35'>(Tasgetiren et al., 2010)</ns0:ref>. <ns0:ref type='bibr' target='#b6'>Deng and Gu (2012)</ns0:ref> presented a Hybrid DDE (HDDE) algorithm for the no-idle permutation flow shop scheduling problem with makespan criterion. 
A new acceleration method based on network representation was proposed and applied to HDDE, and the local search of the inserted neighborhood in HDDE was effectively improved to balance global search and local development <ns0:ref type='bibr' target='#b6'>(Deng and Gu, 2012)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>PRELIMINARIES XSMT problem</ns0:head><ns0:p>Unlike the traditional Manhattan structure, which only allows horizontal and vertical connections, the XSMT problem adds two connection directions of 45° and 135° <ns0:ref type='bibr' target='#b14'>(Liu et al., 2012</ns0:ref><ns0:ref type='bibr' target='#b19'>, 2015a)</ns0:ref>. Given a set of pins, p_i denotes the i-th pin to be connected, and its corresponding coordinate is (x_i, y_i). Given 5 pins, the corresponding coordinates are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, and the corresponding pin layout is shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p></ns0:div>
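To make the later strategies concrete, the following C++ sketch shows one possible in-memory representation of the pin set of Table 1, together with the Manhattan distance that later serves as the edge weight for the Prim-based initialization. The struct and identifier names are our own illustration, not identifiers from the paper's implementation.

```cpp
#include <cstdlib>
#include <vector>

// Illustrative representation of a pin (names are ours, not the paper's).
struct Pin {
    int x, y;  // coordinates of the pin on the routing plane
};

// Manhattan distance between two pins; it is used later as the edge weight
// when the initial spanning trees are built with Prim algorithm.
inline int manhattan(const Pin& a, const Pin& b) {
    return std::abs(a.x - b.x) + std::abs(a.y - b.y);
}

// The five pins of Table 1 (p1..p5).
const std::vector<Pin> kTable1Pins = {
    {1, 22}, {5, 5}, {12, 10}, {18, 3}, {22, 16}
};
```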
<ns0:div><ns0:head>Differential evolution algorithm</ns0:head><ns0:p>DE algorithm is a heuristic search algorithm based on modern intelligence theory. The particles of population cooperate and compete with each other to determine the search direction.</ns0:p></ns0:div>
<ns0:div><ns0:head>The update process of DE</ns0:head><ns0:p>Initialization of the population: N particles are randomly generated, and the dimension of each particle is D. For example, X_i^0 represents particle i, X_L is the lower limit of the D-dimensional particles, and X_H is the upper limit of the D-dimensional particles. The corresponding initialization method is as follows:</ns0:p><ns0:formula xml:id='formula_0'>X_i^0 = X_L + random(0, 1) × (X_H − X_L)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Mutation operator: In the g-th iteration, the mutation operator randomly selects three mutually distinct particles X_a^g, X_b^g, and X_c^g from the population and generates Particle V_i^g according to the following mutation formula:</ns0:p><ns0:formula xml:id='formula_1'>V_i^g = X_a^g + F × (X_b^g − X_c^g)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where F is a learning factor, F ∈ [0, 2].</ns0:p></ns0:div>
<ns0:div><ns0:p>Crossover operator: In the process of crossover, the value of each dimension is selected from Particle X_i^g or Particle V_i^g. The probability of selection is cr. The formula of crossover is as follows:</ns0:p><ns0:formula xml:id='formula_3'>u_i^j = v_i^j, if rand(0, 1) ≤ cr;  u_i^j = x_i^j, otherwise<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where j represents the dimension, cr is the crossover probability, and cr ∈ [0, 1].</ns0:p><ns0:p>Selection operator: It adopts a greedy strategy in the process of selection, that is, it keeps the particle with the better fitness value. The formula is as follows:</ns0:p><ns0:formula xml:id='formula_4'>X_i^{g+1} = V_i^g, if f(V_i^g) < f(X_i^g);  X_i^{g+1} = X_i^g, otherwise<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where f(X) represents the fitness value of Particle X, and the fitness function is defined separately for each problem.</ns0:p></ns0:div>
<ns0:div><ns0:head>The flow of DE algorithm</ns0:head><ns0:p>Step 1. Initialize the population according to Formula 1, and initialize the parameters of DE algorithm.</ns0:p><ns0:p>Step 2. Calculate the fitness value of each particle in the population according to fitness function.</ns0:p><ns0:p>Step 3. During each iteration, mutation operation is performed on particles according to Formula 2 or other mutation operators to produce mutated particles.</ns0:p><ns0:p>Step 4. Check whether the algorithm reaches the termination condition. If so, the algorithm is terminated. Otherwise, return to Step 2 and update the related parameters.</ns0:p></ns0:div>
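The update process and flow above correspond to the classical continuous DE. The following C++ sketch assembles Formulas 1-4 and Steps 1-4 into one loop, using a placeholder sphere fitness function and illustrative parameter handling; it is only a reference point for the discrete variant developed in the next section, not the paper's implementation.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Classical continuous DE (Formulas 1-4 and Steps 1-4). The sphere function
// and the fixed random seed are placeholders for illustration only.
double sphere(const std::vector<double>& x) {
    double s = 0.0;
    for (double v : x) s += v * v;
    return s;
}

std::vector<double> runClassicDE(int N, int D, double XL, double XH,
                                 double F, double cr, int generations) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::uniform_int_distribution<int> pickParticle(0, N - 1);
    std::uniform_int_distribution<int> pickDim(0, D - 1);

    // Step 1 / Formula 1: random initialization inside [XL, XH].
    std::vector<std::vector<double>> P(N, std::vector<double>(D));
    for (auto& x : P)
        for (double& xi : x) xi = XL + u01(rng) * (XH - XL);

    for (int g = 0; g < generations; ++g) {      // Step 4: termination check
        for (int i = 0; i < N; ++i) {
            // Step 3 / Formula 2: mutation with three mutually distinct particles.
            int a, b, c;
            do { a = pickParticle(rng); } while (a == i);
            do { b = pickParticle(rng); } while (b == i || b == a);
            do { c = pickParticle(rng); } while (c == i || c == a || c == b);

            std::vector<double> u(D);
            const int jrand = pickDim(rng);      // keep at least one mutated dimension
            for (int j = 0; j < D; ++j) {
                const double v = P[a][j] + F * (P[b][j] - P[c][j]);
                // Formula 3: binomial crossover between X_i and V_i.
                u[j] = (u01(rng) <= cr || j == jrand) ? v : P[i][j];
            }
            // Step 2 + Formula 4: greedy selection on the fitness value.
            if (sphere(u) < sphere(P[i])) P[i] = u;
        }
    }
    return *std::min_element(P.begin(), P.end(),
        [](const std::vector<double>& l, const std::vector<double>& r) {
            return sphere(l) < sphere(r);
        });
}
```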
<ns0:div><ns0:head>XSMT-MODDE ALGORITHM Encoding strategy</ns0:head><ns0:p>Property 1. The encoding strategy of edge-point pairs is suitable for the DDE algorithm, and it can record the structure of an XSMT well.</ns0:p><ns0:p>Suppose there are n pin points in the pin graph; the corresponding Steiner tree then has n − 1 edges and n − 1 PS points. Number each pin, determine an edge by recording its two endpoints, and add one digit to record the selection method of the edge. Finally, one value is appended at the end to represent the fitness value of the particle, so the final encoding length is 3 × (n − 1) + 1. The Steiner tree corresponding to the pins in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> is shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, and the corresponding encoding is: 1 3 1 2 3 0 4 5 0 3 4 3 46.284.</ns0:p></ns0:div>
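A possible C++ rendering of this edge-point-pair encoding is sketched below; the type names are ours, and the flatten() helper only reproduces the textual form of the example encoding.

```cpp
#include <vector>

// Sketch of the edge-point-pair encoding. For n pins the particle stores n-1
// edges; each edge keeps its two endpoint numbers and one digit for the
// connection selection (0-3), and one trailing value holds the fitness,
// giving the encoding length 3*(n-1)+1 described above.
struct EncodedEdge {
    int start;      // pin number of one endpoint
    int end;        // pin number of the other endpoint
    int selection;  // connection method: 0, 1, 2 or 3
};

struct Particle {
    std::vector<EncodedEdge> edges;  // the n-1 edge-point pairs
    double fitness;                  // last slot of the encoding

    // Flatten to the textual form used in the paper, e.g.
    // "1 3 1 2 3 0 4 5 0 3 4 3 46.284" for the tree of Figure 3.
    std::vector<double> flatten() const {
        std::vector<double> code;
        for (const EncodedEdge& e : edges) {
            code.push_back(e.start);
            code.push_back(e.end);
            code.push_back(e.selection);
        }
        code.push_back(fitness);
        return code;
    }
};
```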
<ns0:div><ns0:head>Fitness function</ns0:head><ns0:p>Property 2. The wire length of XSMT is a key factor that affects global routing results, and the fitness value based on the wire length of XSMT can make the algorithm go in the direction of optimal wire length to the greatest extent.</ns0:p></ns0:div>
<ns0:div><ns0:p>In the edge set of an XSMT, every edge belongs to one of the following four types: horizontal, vertical, 45° diagonal, or 135° diagonal; by rotation (for example, turning a 135° diagonal counterclockwise by 45° into a horizontal line), the four types of edges can be reduced to two types. Make the starting point number of every edge smaller than its ending point number, sort all edges according to the starting point number, and subtract the overlapping parts of the edges. At this point, the total wire length of the XSMT is obtained.</ns0:p><ns0:p>The quality of an XSMT is determined by its total wire length: the smaller the wire length, the higher the quality of the XSMT. Therefore, the fitness value used by XSMT-MoDDE is the total wire length of the particle. The fitness function of XSMT-MoDDE is shown in Formula 5.</ns0:p><ns0:p>fitness(T_x) = ∑_{e_i ∈ T_x} length(e_i)  (5)</ns0:p></ns0:div>
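The following sketch illustrates Formula 5 under our geometric reading of Definitions 1-5: selections 2 and 3 are rectilinear L-shapes of length dx + dy, while selections 0 and 1 route the whole horizontal span on a diagonal segment plus a vertical remainder. The subtraction of overlapping segments between neighbouring edges described above is omitted for brevity, so this is an approximation rather than the paper's exact wire-length routine.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Builds on the Pin and Particle sketches above.
double edgeLength(const Pin& a, const Pin& b, int selection) {
    const double dx = std::abs(a.x - b.x);
    const double dy = std::abs(a.y - b.y);
    if (selection == 2 || selection == 3)
        return dx + dy;                              // horizontal + vertical (L-shape)
    return std::sqrt(2.0) * dx + std::abs(dy - dx);  // diagonal + vertical remainder
}

// Formula 5: fitness(T_x) = sum of the lengths of all edges of the tree
// (smaller is better). Pin numbers in the encoding are 1-based.
double fitness(const Particle& p, const std::vector<Pin>& pins) {
    double total = 0.0;
    for (const EncodedEdge& e : p.edges)
        total += edgeLength(pins[e.start - 1], pins[e.end - 1], e.selection);
    return total;
}
```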
<ns0:div><ns0:head>Initialization</ns0:head><ns0:p>Property 3. Prim algorithm can search for an edge subset that not only includes all the vertices of a connected graph but also minimizes the sum of the weights of the edges in the subset. Selecting different starting points can yield edge subsets with the same total weight but different structures. Prim algorithm is therefore used to initialize the population, so that the particles in the population are diverse while the solution space is reduced at the same time.</ns0:p><ns0:p>The traditional DE algorithm directly uses Formula 1 to initialize the population. However, for XSMT, initializing each particle with a purely random strategy (i.e., randomly selecting a point as the root and using a backtracking method to randomly select edges to build a legal tree) leads to a solution space that is too large for the algorithm to converge well. Therefore, this paper uses Prim algorithm to construct a Minimum Spanning Tree (MST) to initialize the population. The weight of each edge in the MST is the Manhattan distance between the two pins. Each particle randomly selects a starting point s to generate an MST and randomly selects a connection method for each edge of the MST.</ns0:p><ns0:p>The relevant pseudo code is shown in Algorithm 1, where T is the edge set of the MST, s is the starting point, U is the point set of the MST, V is the pin set, P is the population, and N is the population size. Lines 1-18 form the function that generates the MST. Lines 2-3 randomly select a starting point s and add it to the set U. Line 4 initializes the edge set T. Line 6 selects a visited point i from the set U, and Line 7 sets the minimum cost to infinity. Lines 8-13 select an unvisited point j from the adjacent points of point i; the edge ij with the least cost is selected and added to set T, and point j is marked as visited and added to set U. The MST algorithm ends when the set U is the same as the set V, and Line 17 returns a randomly generated MST. Lines 21-24 construct the population, where each initial particle is an MST generated by the function PRIMALGORITHM.</ns0:p></ns0:div>
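A compact C++ sketch of Algorithm 1 is given below. It grows an MST over Manhattan distances from a random starting pin and attaches a random connection selection to every edge; the O(n^2) Prim variant and the identifier names are our own choices, reusing the Pin/EncodedEdge/Particle sketches and the manhattan() helper introduced earlier.

```cpp
#include <algorithm>
#include <limits>
#include <random>
#include <vector>

Particle randomPrimParticle(const std::vector<Pin>& pins, std::mt19937& rng) {
    const int n = static_cast<int>(pins.size());
    std::uniform_int_distribution<int> startDist(0, n - 1);
    std::uniform_int_distribution<int> selDist(0, 3);

    std::vector<bool> inTree(n, false);
    std::vector<int> bestCost(n, std::numeric_limits<int>::max());
    std::vector<int> parent(n, -1);
    bestCost[startDist(rng)] = 0;             // random starting pin s (Lines 2-3)

    Particle particle;
    for (int step = 0; step < n; ++step) {
        int i = -1;                           // cheapest pin not yet in the tree
        for (int v = 0; v < n; ++v)
            if (!inTree[v] && (i == -1 || bestCost[v] < bestCost[i])) i = v;
        inTree[i] = true;
        if (parent[i] != -1)                  // record the edge with a random selection
            particle.edges.push_back({std::min(parent[i], i) + 1,
                                      std::max(parent[i], i) + 1,
                                      selDist(rng)});
        for (int v = 0; v < n; ++v) {         // relax neighbours (Lines 8-13)
            const int w = manhattan(pins[i], pins[v]);
            if (!inTree[v] && w < bestCost[v]) { bestCost[v] = w; parent[v] = i; }
        }
    }
    particle.fitness = 0.0;                   // filled in later by the fitness function
    return particle;
}

// Lines 21-24: the population consists of N independently generated particles.
std::vector<Particle> initPopulation(const std::vector<Pin>& pins, int N,
                                     std::mt19937& rng) {
    std::vector<Particle> P;
    for (int k = 0; k < N; ++k) P.push_back(randomPrimParticle(pins, rng));
    return P;
}
```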
<ns0:div><ns0:head>Elite selection and cloning strategy</ns0:head><ns0:p>Property 4. This strategy proposes two set-based particle mutation strategies, which can mutate elite particles in a very short time. The elite particles are cloned and mutated, and the optimal particle is selected based on a greedy strategy, so that a high-quality elite buffer is constructed in a short time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Brief description</ns0:head><ns0:p>The elite selection and cloning strategy consists of four steps: selection, cloning, mutation, and extinction.</ns0:p><ns0:p>Part of particles in the population are selected as elite particles, and then the elite particles are cloned to form cloned population. Cloned particles randomly mutate into mutated particles. Mutated particles are selected to enter the elite buffer according to extinction strategy. The elite buffer has the same size as the population and participates in the subsequent process of DE.</ns0:p><ns0:p>The elite selection and cloning strategy can effectively expand the search range of DDE, improve the global search ability of the algorithm, avoid falling into local peaks to a certain extent, and prevent the algorithm from premature convergence.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm flow</ns0:head><ns0:p>(1) Selection: Sort population according to fitness value, and select the first n particles to form an elite population, n = k × N. k is elite ratio, and the best result can be obtained when k is selected as 0.2 after experimental verification.</ns0:p><ns0:p>(2) Cloning: Clone the particles of the elite population to form a cloned population C. The number of cloned particles is calculated according to Formula 6. T ← / 0 5:</ns0:p><ns0:formula xml:id='formula_5'>while (U! = V ) do 6: choose point i ∈ U 7: mincost ← ∞ 8: for k ∈ V −U do 9: if cost(i, k) < mincost then 10: mincost ← cost(i, k) 11: j ← k 12: end if 13:</ns0:formula><ns0:p>end for 14:</ns0:p><ns0:p>T ∪ {(i, j)} return P 26: end function</ns0:p><ns0:formula xml:id='formula_6'>N i = round N i (<ns0:label>6</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>)</ns0:formula><ns0:p>where i is rank of the particle in original population, and round() is rounding down function.</ns0:p><ns0:p>(3) Mutation: The mutation strategy adopts connection method mutation or topology mutation, and two strategies are shown in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_7'>4A</ns0:ref> shows the mutation process of connection method. The connection method of Line AB is changed from selection 3 to selection 0. Figure <ns0:ref type='figure' target='#fig_7'>4B</ns0:ref> shows the mutation process of topology. Line AB is selected to be disconnected and then connected to Line BC. Each cloned particle is assigned to a mutation strategy to form a mutated particle.</ns0:p><ns0:p>For particles that adopt connection method, randomly select a edges, and the value of a is determined according to the number of edges, as shown in Formula 7, where n is the number of pins. Then change the connection method of the selected edge.</ns0:p><ns0:formula xml:id='formula_8'>a = max 1, round n − 1 10 (7)</ns0:formula><ns0:p>For particles that adopt topology mutation, one edge is randomly disconnected in XSMT to form two sub-XSMTs, and then respectively select a point from the two sub-XSMTs to connect. This process adopts the idea of Disjoint Set Union (DSU) to ensure that a legal tree is obtained after mutation.</ns0:p><ns0:p>(4) Extinction: Select the trial elite particle m best with the best fitness value in the mutated population.</ns0:p><ns0:p>If f (m best ) is better than f (g best ), then m best will be added to the elite buffer, and all other particles will die, otherwise, all particles in the mutation population will die. If the elite buffer is full, the particle with the worst fitness value will be popped and new particle will be pushed.</ns0:p></ns0:div>
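The four steps above can be sketched as follows. Only the connection-method mutation is implemented here; the DSU-based topology mutation and the exact elite-buffer bookkeeping are simplified, and evaluate() stands for the Formula 5 fitness. The Particle type is the sketch introduced earlier.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <random>
#include <vector>

void eliteSelectionAndCloning(std::vector<Particle>& population,
                              std::vector<Particle>& eliteBuffer,
                              const Particle& gBest,
                              double (*evaluate)(const Particle&),
                              std::mt19937& rng) {
    const int N = static_cast<int>(population.size());
    const double k = 0.2;                              // elite ratio from the paper

    // (1) Selection: sort by fitness and keep the first k*N particles.
    std::sort(population.begin(), population.end(),
              [](const Particle& a, const Particle& b) { return a.fitness < b.fitness; });
    const int eliteCount = std::max(1, static_cast<int>(k * N));

    std::uniform_int_distribution<int> selDist(0, 3);
    Particle mBest{};
    double mBestFit = std::numeric_limits<double>::max();

    for (int rank = 1; rank <= eliteCount; ++rank) {
        // (2) Cloning: Formula 6, N_i = round(N / i) clones for the particle of rank i.
        const int clones = static_cast<int>(std::lround(static_cast<double>(N) / rank));
        for (int c = 0; c < clones; ++c) {
            Particle mutant = population[rank - 1];
            // (3) Mutation: change the selection of a = max(1, round((n-1)/10)) edges.
            const int a = std::max(1, static_cast<int>(std::lround(mutant.edges.size() / 10.0)));
            std::uniform_int_distribution<int> edgeDist(0, static_cast<int>(mutant.edges.size()) - 1);
            for (int t = 0; t < a; ++t)
                mutant.edges[edgeDist(rng)].selection = selDist(rng);
            mutant.fitness = evaluate(mutant);
            if (mutant.fitness < mBestFit) { mBestFit = mutant.fitness; mBest = mutant; }
        }
    }
    // (4) Extinction: the best mutant survives only if it beats the global best;
    // when the buffer (same size as the population) is full, the worst entry is replaced.
    if (mBestFit < gBest.fitness) {
        if (eliteBuffer.size() >= static_cast<size_t>(N)) {
            auto worst = std::max_element(eliteBuffer.begin(), eliteBuffer.end(),
                [](const Particle& a, const Particle& b) { return a.fitness < b.fitness; });
            *worst = mBest;
        } else {
            eliteBuffer.push_back(mBest);
        }
    }
}
```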
<ns0:div><ns0:head>Novel multiple mutation strategy</ns0:head><ns0:p>Property 5. The three novel mutation strategies proposed in this paper introduce the idea of set operations.</ns0:p><ns0:p>Under the premise of reasonable computing time, through adjusting edge set of current particle and edge set of other particle, some substructures in XSMT are changed to search for a better combination of substructures.</ns0:p><ns0:p>In DE algorithm, there are six commonly used mutation strategies <ns0:ref type='bibr' target='#b7'>(Epitropakis et al., 2011)</ns0:ref>, and each strategy uses different basis vectors and differential vectors. The mutation formulas are shown below.</ns0:p><ns0:formula xml:id='formula_9'>V g i = X r1 g + F(X r2 g − X r3 g ) (8) V g i = X r1 g + F 1 (X r2 g − X r3 g ) + F 2 (X r4 g − X r5 g ) (9) V g i = X best g + F(X r1 g − X r2 g ) (10) V g i = X best g + F 1 (X r1 g − X r2 g ) + F 2 (X r3 g − X r4 g ) (11) V g i = X g i + F(X best g − X g i ) (12) V g i = X r0 g + F 1 (X best g − X r0 g ) + F 2 (X r1 g − X r2 g )<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>where X g r represents a random particle in population, X g best represents the global optimal solution, and F represents learning factor. for i ← 1 to n do 14:</ns0:p><ns0:p>M ← / 0 15:</ns0:p><ns0:p>for j ← 1 to n/i do 16:</ns0:p><ns0:p>method ← random(0, 1) 17:</ns0:p><ns0:p>if method == 0 then m ← connection method mutation()</ns0:p><ns0:formula xml:id='formula_10'>18:</ns0:formula><ns0:p>else m ← topology mutation() return E 28: end function</ns0:p></ns0:div>
<ns0:div><ns0:head>Two operating rules</ns0:head><ns0:p>In XSMT-MoDDE algorithm, a particle represents a XSMT. Addition and subtraction operations in the above mutation formulas cannot be directly used in discrete problems. This paper defines two new calculation methods (Definition 6-7).</ns0:p><ns0:p>A is the edge set of particle X 1 , B is the edge set of particle X 2 , and the full set is A ∪ B. There are two definitions as follows: </ns0:p></ns0:div>
<ns0:div><ns0:head>Three mutation strategies</ns0:head><ns0:p>In Mutation Strategy 1, basis vector is selected as current particle, and there are two differential vectors.</ns0:p><ns0:p>The differential vector of the first stage is generated by the difference between the current particle and the corresponding local historical optimal particle, and Particle T is obtained by Formula 14. The differential vector in the second stage is generated by the difference between Particle T and the global optimal particle, and target mutated Particle V g i is obtained by Formula 15.</ns0:p><ns0:p>T = X g i ⊕ F X g pbest ⊙ X g i (14) Manuscript to be reviewed In Mutation Strategy 2, basis vector is still current particle, and there are two differential vectors.</ns0:p><ns0:formula xml:id='formula_11'>V g i = T ⊕ F X g gbest ⊙ T<ns0:label>(</ns0:label></ns0:formula><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The differential vector in the first stage is generated by the difference between random particle and the corresponding local historical optimal particle, and Particle T is calculated by Formula 16. The differential vector in the second stage is generated by the difference between the random particle and global optimal particles, and target mutated Particle V g i is obtained by Formula 17.</ns0:p><ns0:p>T = X g i ⊕ F X g pbest ⊙ X g r (16)</ns0:p><ns0:formula xml:id='formula_12'>V g i = T ⊕ F X g gbest ⊙ X g r<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>In Mutation Strategy 3, basis vector is current particle, and the differential vector is generated by the difference between the current particle and random particle in the population, and the mutated Particle V g i is obtained by Formula 18.</ns0:p><ns0:formula xml:id='formula_13'>V g i = X g i ⊕ F X g i ⊙ X g r<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>Mutation Strategy 1 can make particles obtain the partial structure of global optimal particle and the historical local optimal particle, and inherit the characteristics of the two optimal particles, which is a greedy strategy. The implementation of Mutation Strategy 3 can expand the search space and make the mutation direction completely get rid of the structure of the optimal particles, which is suitable for the early stage of iteration and increases the exploration ability of the algorithm. The exploratory ability of Mutation Strategy 2 is between Mutation Strategy 1 and Mutation Strategy 3.</ns0:p><ns0:p>In multiple mutation strategy, the iterative process is divided into two stages by setting a threshold.</ns0:p><ns0:p>Three mutation strategies in the early stage are selected with equal probability, and the Mutation Strategy V ← / 0 3:</ns0:p><ns0:p>for i ← 1 to m do 4:</ns0:p><ns0:p>for j ← 1 to N do 5:</ns0:p><ns0:formula xml:id='formula_14'>if i <= t × N then s ← random(1, 2, 3) 6: else s ← random(1, 2) 7: end if 8: if s == 1 then v ← Mutation1(P[ j]) 9: else if s == 2 then v ← Mutation2(P[ j]) 10: else if s == 3 then v ← Mutation3(P[ j]) 11: end if 12: V [ j] ← v 13:</ns0:formula><ns0:p>end for 14:</ns0:p><ns0:p>end for 15:</ns0:p><ns0:p>return V 16: end function</ns0:p></ns0:div>
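A sketch of the two set operators of Definitions 6-7 and of Mutation Strategy 1 (Formulas 14-15) is given below for the special case F = 1. Edges are reduced to ordered pin pairs, connection selections and the F-scaling of Definitions 8-10 are ignored, and edges of the difference vector are added in set order rather than randomly; it illustrates the operator composition, not the paper's full implementation.

```cpp
#include <set>
#include <utility>
#include <vector>

using EdgePair = std::pair<int, int>;     // (smaller pin number, larger pin number)
using EdgeSet = std::set<EdgePair>;

// Definition 6: A (.) B is the symmetric difference (A u B) - (A n B).
EdgeSet symmetricDifference(const EdgeSet& A, const EdgeSet& B) {
    EdgeSet out;
    for (const EdgePair& e : A) if (!B.count(e)) out.insert(e);
    for (const EdgePair& e : B) if (!A.count(e)) out.insert(e);
    return out;
}

// Small union-find (DSU) used to keep the result a legal tree.
struct DSU {
    std::vector<int> p;
    explicit DSU(int n) : p(n) { for (int i = 0; i < n; ++i) p[i] = i; }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b; return true;
    }
};

// Definition 7: A (+) B. Start from C = A - B and keep adding edges of B until
// the n pins are connected into one legal spanning tree.
EdgeSet treeMerge(const EdgeSet& A, const EdgeSet& B, int n) {
    EdgeSet C;
    DSU dsu(n + 1);                        // pins are numbered from 1
    int components = n;
    for (const EdgePair& e : A)
        if (!B.count(e) && dsu.unite(e.first, e.second)) { C.insert(e); --components; }
    for (const EdgePair& e : B) {
        if (components == 1) break;
        if (dsu.unite(e.first, e.second)) { C.insert(e); --components; }
    }
    return C;
}

// Mutation Strategy 1 with F = 1:  T = X_i (+) (X_pbest (.) X_i),
//                                  V = T   (+) (X_gbest (.) T).
EdgeSet mutationStrategy1(const EdgeSet& xi, const EdgeSet& pbest,
                          const EdgeSet& gbest, int n) {
    EdgeSet T = treeMerge(xi, symmetricDifference(pbest, xi), n);
    return treeMerge(T, symmetricDifference(gbest, T), n);
}
```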
<ns0:div><ns0:head>Adaptive learning factor</ns0:head><ns0:p>Property 6. Learning factor is a key parameter to determine the performance of DDE algorithm, which has a decisive influence on the exploitation and exploration ability of algorithm. This paper proposes an adaptive learning factor based on set operation for the first time to effectively balance the search ability of XSMT-MoDDE algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Operating rule for learning factors</ns0:head><ns0:p>As shown in Formula 2, the learning factor F acts on the difference vector and controls the global search capability of DDE algorithm <ns0:ref type='bibr' target='#b38'>(Wang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Gong et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b0'>Brest et al., 2006)</ns0:ref>. In discrete problems, simple multiplication operation cannot be used. This paper redefines the * operation in Formula 2.</ns0:p><ns0:p>Definition 8. F * X g best ⊙ X g r F < 1. Randomly eliminate n edges {e 1 , e 2 , ..., e n } from the edge set of difference particles, where e i ∈ X g best and e i / ∈ X g i , and the value of n is calculated by Formula 19.</ns0:p><ns0:p>Definition 9. F * X g best ⊙ X g r F > 1. Randomly eliminate n edges {e 1 , e 2 , ..., e n } from the edge set of difference particles, where e i ∈ X g i and e i / ∈ X g best , and the value of n is calculated by Formula 20.</ns0:p><ns0:p>Definition 10. F * X g best ⊙ X g r F = 1. No changes are made to the edge set.</ns0:p><ns0:formula xml:id='formula_15'>n = round (1 − F) × |X g best | (19) n = round (F − 1) × |X g i | (<ns0:label>20</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>)</ns0:formula><ns0:p>where |X| represents the number of edge of Particle X.</ns0:p></ns0:div>
<ns0:div><ns0:head>Adaptive update process</ns0:head><ns0:p>Each Particle X_i corresponds to an adaptive learning factor F_i, which is initialized to 1. After each selection operation, the Parameter F_i is updated.</ns0:p><ns0:p>(1) Calculate the reference Parameter r, r ← k × f_best + 1, where k is 0.001 and f_best is the fitness value of the global optimal particle;</ns0:p><ns0:p>(2) Calculate the difference value ∆ between the fitness value f_i of X_i^g and the fitness value f_best of X_best^g;</ns0:p><ns0:p>(3) Update F_i; the update formula is as follows:</ns0:p><ns0:formula xml:id='formula_17'>F_i = F_i + 0.05, if ∆ > r;  F_i = F_i − 0.05, if ∆ ≤ r<ns0:label>(21)</ns0:label></ns0:formula><ns0:p>When the fitness value f_i is close enough to f_best, F_i is reduced to preserve the particle structure to a greater extent; otherwise, F_i is increased to expand the global search capability.</ns0:p></ns0:div>
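A minimal sketch of this adaptive update, with k = 0.001 as stated above (the function name is ours):

```cpp
// Adaptive learning-factor update (Formula 21); fBest is the fitness of the
// global optimal particle and fi the fitness of the current particle.
double updateLearningFactor(double Fi, double fi, double fBest) {
    const double k = 0.001;
    const double r = k * fBest + 1.0;            // (1) reference parameter r
    const double delta = fi - fBest;             // (2) gap to the global best
    return (delta > r) ? Fi + 0.05 : Fi - 0.05;  // (3) Formula 21
}
```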
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 4 Refining strategy Require: X, n Ensure: R 1: function REFINING(X, n)</ns0:p><ns0:formula xml:id='formula_18'>2:</ns0:formula><ns0:p>R ← / 0 3:</ns0:p><ns0:formula xml:id='formula_19'>for i ← 1 to n do 4: d ← CalculateDegree(X i ) 5:</ns0:formula><ns0:p>Length ← 0 6:</ns0:p><ns0:p>Substructure ← / 0 7:</ns0:p><ns0:p>for j ← 1 to 4 d do 8:</ns0:p><ns0:p>s ← GetSubstructure()</ns0:p><ns0:formula xml:id='formula_20'>9:</ns0:formula><ns0:p>l ← GetCommonWireLength() There may still be space for optimization for the optimal particles at the end of iteration. In order to search for a better result, a refining strategy is proposed. The steps of algorithm are as follows:</ns0:p><ns0:p>(1) Calculate degree of each Point p i in the optimal particle. The degree is defined as the number of edges connected to point, denoted as d i ;</ns0:p><ns0:p>(2) There are 4 kinds of edges in X-architecture. If the degree of Point p i is d i , there are 4 d i types of substructures corresponding to the point. The set of all substructures corresponding to Point p i is S, and edge Set E is obtained when the substructures corresponding to Points p 1 − p i−1 have been determined.</ns0:p><ns0:p>Calculate common wire length l between Substructure s i in Set S and Set E, select Substructure s i corresponding to the largest l, and add the edges of s i to the Set E. The algorithm ends until all points have been visited.</ns0:p><ns0:p>The pseudo code of the refining strategy algorithm is shown in Algorithm 4, where X represents the target particle obtained by the XSMT-MoDDE algorithm, n represents the point number of XSMT, and R represents the refined particle. Line 2 initializes Set R. Lines 3-20 search for the optimal substructure corresponding to each point. Line 4 calculates the degree of Point p i , Line 5 initializes maximum common wire length, and Line 6 initializes the optimal substructure set. Lines 7-14 calculate common wire length and update the largest common wire length. Lines 15-19 store the edges in the optimal substructure into Set R.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related parameters</ns0:head><ns0:p>The main parameters of the algorithm in this paper include the population size n, the number of iterations m, the threshold t, the learning factor F, and the crossover probability cr.</ns0:p><ns0:p>In the proposed algorithm, n is 50, m is 500, and t is 0.4. The adaptive strategy of the learning factor F has been described in detail in Section 3.6. The crossover probability cr also adopts an adaptive strategy, which is as follows:</ns0:p><ns0:formula xml:id='formula_21'>cr_i = cr_l + (cr_u − cr_l) × (f_i − f_min) / (f_max − f_min), if f_i > f_avg;  cr_i = cr_l, otherwise<ns0:label>(22)</ns0:label></ns0:formula><ns0:p>where cr_l = 0.1, cr_u = 0.6, f_i represents the fitness value of the current particle, f_min represents the minimum historical fitness value, f_max represents the maximum historical fitness value, and f_avg represents the average historical fitness value.</ns0:p></ns0:div>
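A minimal sketch of Formula 22, reading the lower bound written in the text as cr_l = 0.1 and the upper bound as cr_u = 0.6 (the function name is ours):

```cpp
// Adaptive crossover probability (Formula 22); fAvg, fMin and fMax are the
// average, minimum and maximum historical fitness values.
double adaptiveCrossover(double fi, double fAvg, double fMin, double fMax) {
    const double crL = 0.1, crU = 0.6;
    if (fi > fAvg && fMax > fMin)
        return crL + (crU - crL) * (fi - fMin) / (fMax - fMin);
    return crL;
}
```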
<ns0:div><ns0:head>The algorithm flow of XSMT-MoDDE</ns0:head><ns0:p>The algorithm flow chart of XSMT-MoDDE is shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>, and the detailed flow is as follows:</ns0:p><ns0:p>(1) Initialize threshold, population size, adaptive learning factor F, and adaptive crossover probability cr.</ns0:p><ns0:p>(2) Use Prim algorithm to construct initial particles and generate initial population.</ns0:p><ns0:p>(3) Check the current stage: early stage or late stage of iteration.</ns0:p><ns0:p>(4) Select a mutation strategy from the corresponding mutation strategy pool according to the current stage. Obtain the mutated particles according to the mutation strategy.</ns0:p><ns0:p>(5) Obtain the trial particles according to the crossover operator.</ns0:p><ns0:p>(6) Obtain the next generation of particles according to the selection operator.</ns0:p><ns0:p>(7) Adopt elite selection and cloning strategy, and update the elite buffer after four steps of selection, clone, mutation, and extinction.</ns0:p><ns0:p>(8) Update adaptive learning factor and adaptive crossover probability by Formula 21 and Formula 22.</ns0:p><ns0:p>(9) Check the number of iterations, and end the iteration if the termination condition is met, otherwise, return to Step (3).</ns0:p><ns0:p>(10) At the end of XSMT-MoDDE algorithm, refining strategy is adopted to obtain the target solution.</ns0:p></ns0:div>
<ns0:div><ns0:head>Complexity analysis of XSMT-MoDDE algorithm</ns0:head><ns0:p>Property 8. When the population size is m and the number of pins is n, the time complexity of one iteration is O(mnlogn).</ns0:p></ns0:div>
<ns0:div><ns0:head>Complexity analysis of multiple mutation operator</ns0:head><ns0:p>The mutation process is divided into two stages. First, difference vector is constructed, and then difference vector and the basis vector are used to construct the trial particles.</ns0:p><ns0:p>Construction of difference vector: Sort the edges of two edge sets according to the number of edge start point, and use binary search to construct the non-common edges. The complexity of this process is O(nlog(n)), and the non-common edge set is the difference vector.</ns0:p><ns0:p>Construction of mutation particle: Construct the difference set of basis vector and difference vector according to the above-mentioned similar idea. Then the edges in the difference set are stored in DSU, and edges are randomly selected from difference vector to be added to DSU until a complete tree is constructed. The time complexity of this process is O(nlog(n)).</ns0:p></ns0:div>
<ns0:div><ns0:head>Complexity analysis of elite selection and cloning strategy</ns0:head><ns0:p>A minimum heap is established according to the fitness value of particles, and the heap top is selected for cloning each time. The time complexity required for this process is O(n).</ns0:p><ns0:p>The mutation process adopts connection method mutation and topology mutation. The connection method mutation selects two different edges randomly from the edge set to modify the connection method of the edges. The time complexity required is O(1). In topology mutation, one edge is randomly disconnected to form two sub-XSMTs, which are recorded using the DSU. It takes O(nlog(n)) time to construct two sub-XSMTs with DSU, and randomly select one point from each of two sub-XSMTs to establish connection, this process takes O(1) time.</ns0:p><ns0:p>The particles obtained by the elite selection and cloning strategy need to be stored in an elite buffer with a size of m. The population particles and the particles of elite buffer participate in mutation, crossover, and selection operations together.</ns0:p></ns0:div>
<ns0:div><ns0:head>Complexity analysis of refining strategy</ns0:head><ns0:p>The degree of Point i is recorded as d i . We always keep d i within 4, even if there is a minimum probability greater than 4, only four connected edges will be considered in refining strategy. The adjacent edges of a point select a connection method respectively to form a substructure. An X-architecture edge has four selection methods, so one point corresponds to 4 d i substructures, where 4 d i ≤ 256.</ns0:p><ns0:p>Refining strategy takes out the optimal particle constructed by XSMT-MoDDE algorithm, enumerates substructures for each point of the particle, and obtain the substructure with the largest common wire length. So for the case of n points, the required time is ∑ n i=1 d i × 4 d i .</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>The proposed XSMT-MoDDE has been implemented in the C++ language on a Windows computer with a 3.5 GHz Intel CPU. To compare the experimental results fairly, we run all programs in the same experimental environment and use the same benchmarks from GEO and IBM. The population size and iteration count of all heuristic algorithms are set to 50 and 500, respectively. The calculation formula of the optimization rate is shown in Formula 23.</ns0:p><ns0:formula xml:id='formula_22'>rate = (b − a) / b × 100%<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>where a is the experimental result of the XSMT-MoDDE algorithm, and b is the experimental result of the other algorithm.</ns0:p></ns0:div>
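For reference, Formula 23 amounts to the following small helper, where a is the XSMT-MoDDE result and b the baseline result:

```cpp
// Formula 23: optimization rate of XSMT-MoDDE (result a) over a baseline (result b).
double optimizationRate(double a, double b) {
    return (b - a) / b * 100.0;   // percentage reduction
}
```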
<ns0:div><ns0:head>Verify the effectiveness of multi-strategy optimization</ns0:head><ns0:p>Experiment 1: In order to verify the effectiveness of the multi-strategy optimization DDE algorithm in constructing XSMT, this experiment will compare the results of XSMT-MoDDE algorithm and XSMT-DDE algorithm. Experimental results are shown in Table <ns0:ref type='table' target='#tab_6'>2 and Table 3</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Verify the effectiveness of refining strategy</ns0:head><ns0:p>Experiment 2: In order to verify the effectiveness of the refining strategy, this experiment will compare the results of refined XSMT-MoDDE algorithm and XSMT-MoDDE algorithm. The experiment result is shown in Table <ns0:ref type='table' target='#tab_9'>4 and Table 5</ns0:ref>. Table <ns0:ref type='table' target='#tab_8'>4</ns0:ref> is the optimization results of wire length, and Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>. As a typical heuristic search algorithm, Differential Evolution (DE) algorithm has shown good optimization effect in many practical engineering problems. Therefore, based on DE algorithm, this paper designs relevant strengthening strategies to PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57834:1:1:NEW 5 Mar 2021) Manuscript to be reviewed Computer Science construct XSMT.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Distribution of pins.</ns0:figDesc><ns0:graphic coords='4,226.12,138.26,244.81,163.10' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Four selections for connection method. (A)Selection 0; (B)Selection 1; (C)Selection 2; (D)Selection 3.</ns0:figDesc><ns0:graphic coords='5,204.52,63.78,288.00,189.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Definition 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pseudo-Steiner point. Except for pin points, other join points are called Pseudo-Steiner points, denoted as PS points. Definition 2. Selection 0. As shown in Figure 2A, draw the vertical edge from A to point PS, and then draw the X-architecture edge from PS to B. Definition 3. Selection 1. As shown in Figure 2B, draw the X-architecture edge from A to point PS, and then draw the vertical edge from PS to B. Definition 4. Selection 2. As shown in Figure 2C, draw the vertical edge from A to PS, and then draw the horizontal edge from PS to B. Definition 5. Selection 3. As shown in Figure 2D, draw the horizontal edge from A to PS, and then draw the vertical edge from PS to B.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Steiner tree.</ns0:figDesc><ns0:graphic coords='6,226.12,63.78,244.80,163.35' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 1 Initialization strategy based on Prim algorithm. Require: V, N; Ensure: P. 1: function PRIMALGORITHM(V) 2: s ← random()/(maxnum + 1) × n +</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Two ways of mutation. (A)Connection method mutation; (B)Topology mutation.</ns0:figDesc><ns0:graphic coords='9,211.72,63.78,273.60,253.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Definition 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>A ⊙ B. ⊙ is expressed as finding the symmetric difference of A and B, which is (A ∪ B) − (A ∩ B), as shown in Figure 5A. Definition 7. A ⊕ B. First calculate Set C, C = A − B, and then add the edges of Set B to Set C until Set C can form a legal tree, as shown in Figure 5B.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Operation process of two new operators. (A)A ⊙ B; (B)A ⊕ B.</ns0:figDesc><ns0:graphic coords='11,204.52,63.78,288.00,222.55' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>3</ns0:head><ns0:label /><ns0:figDesc>is cancelled in the later stage. The pseudo-code of multiple mutation strategy is shown in Algorithm 3, where P represents population, N represents the size of the population, m represents the number of iterations, t represents threshold, and V represents mutated population. Line 5 judges whether the current iteration is in the early stage of the iteration. If it is in the early stage of the iteration, Mutation Strategy 1, Mutation Strategy 2, and Mutation Strategy 3 are adopted. Line 6 determines whether the current iteration is in the later stage of the iteration. If it is in the latter stage, Mutation Strategy 1 and Mutation Strategy 2 are adopted.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Algorithm flowchart.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Steiner tree generated by XSMT-MoDDE. (A)Steiner tree with 500 pins; (B)Steiner tree with 1000 pins.</ns0:figDesc><ns0:graphic coords='19,141.73,63.78,413.55,195.30' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,168.52,63.78,360.00,274.81' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Coordinate information of pins.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Pin</ns0:cell><ns0:cell>p 1</ns0:cell><ns0:cell>p 2</ns0:cell><ns0:cell>p 3</ns0:cell><ns0:cell>p 4</ns0:cell><ns0:cell>p 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Coordinate</ns0:cell><ns0:cell>(01, 22)</ns0:cell><ns0:cell>(05, 05)</ns0:cell><ns0:cell>(12, 10)</ns0:cell><ns0:cell>(18, 03)</ns0:cell><ns0:cell>(22, 16)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Table 2 is the optimization results of wire length, and Table3is the optimization results of standard deviation. The results show that multi-strategy optimization can achieve an average wire length optimization rate of 2.35% and a standard deviation optimization rate of 95.69%. This experiment proves that multi-strategy optimization has a powerful effect on wire length reduction, and at the same time greatly increases the stability of DDE.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell><ns0:cell /><ns0:cell cols='2'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell>Circuit</ns0:cell><ns0:cell cols='4'>Pins XSMT-DDE XSMT-MoDDE Reduction(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>16956</ns0:cell><ns0:cell>16900</ns0:cell><ns0:cell>0.33%</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>18083</ns0:cell><ns0:cell>18023</ns0:cell><ns0:cell>0.33%</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>19430</ns0:cell><ns0:cell>19397</ns0:cell><ns0:cell>0.17%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>25728</ns0:cell><ns0:cell>25614</ns0:cell><ns0:cell>0.44%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>32434</ns0:cell><ns0:cell>32171</ns0:cell><ns0:cell>0.81%</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>49103</ns0:cell><ns0:cell>48090</ns0:cell><ns0:cell>2.06%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>57386</ns0:cell><ns0:cell>56397</ns0:cell><ns0:cell>1.72%</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>70407</ns0:cell><ns0:cell>68917</ns0:cell><ns0:cell>2.12%</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>145183</ns0:cell><ns0:cell>139871</ns0:cell><ns0:cell>3.66%</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>410</ns0:cell><ns0:cell>146680</ns0:cell><ns0:cell>141571</ns0:cell><ns0:cell>3.48%</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>160031</ns0:cell><ns0:cell>154406</ns0:cell><ns0:cell>3.51%</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>232057</ns0:cell><ns0:cell>220577</ns0:cell><ns0:cell>4.95%</ns0:cell></ns0:row><ns0:row><ns0:cell>Average</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1.97%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57834:1:1:NEW 5 Mar 2021)</ns0:cell><ns0:cell /><ns0:cell>14/21</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Average wire length optimization results of multi-strategy optimization.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Circuit</ns0:cell><ns0:cell cols='4'>Pins XSMT-DDE XSMT-MoDDE Reduction(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>198</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>94.95%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>343</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>85.13%</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>1036</ns0:cell><ns0:cell>147</ns0:cell><ns0:cell>85.81%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>1082</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>90.57%</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>1905</ns0:cell><ns0:cell>279</ns0:cell><ns0:cell>85.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>3221</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>96.27%</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>410</ns0:cell><ns0:cell>3222</ns0:cell><ns0:cell>178</ns0:cell><ns0:cell>94.48%</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>3193</ns0:cell><ns0:cell>139</ns0:cell><ns0:cell>95.65%</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>3977</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>97.33%</ns0:cell></ns0:row><ns0:row><ns0:cell>Average</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>93.80%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Standard deviation optimization results of multi-strategy optimization.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Table5is the optimization results of standard deviation. The results show that refining strategy can achieve an average wire length optimization rate of 0.50% and a standard deviation optimization rate of 37.30%. From the experimental results and the above complexity analysis, it can be seen that after XSMT-MoDDE algorithm is over, refining strategy only takes a short time to obtain a lot of optimization of wire length and standard deviation. Regardless of whether refining strategy is added or not, both can always obtain accurate solutions in circuits with less than 10 pins. Refining strategy has more significant optimization effects in larger circuits.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithm comparison experiment</ns0:cell></ns0:row><ns0:row><ns0:cell>Experiment 3: To compare the performance of XSMT-MoDDE algorithm with other heuristic algorithms,</ns0:cell></ns0:row><ns0:row><ns0:cell>we compare the results of XSMT constructed by MoDDE algorithm, DDE algorithm, Artificial Bee</ns0:cell></ns0:row><ns0:row><ns0:cell>Colony (ABC) algorithm, and Genetic Algorithm (GA). The experimental results are shown in Table 6,</ns0:cell></ns0:row></ns0:table><ns0:note>, and Table8. XSMT-MoDDE compares with XSMT-DDE, XSMT-ABC, and XSMT-GA, the average wire length is reduced by 2.40%, 1.74%, and 1.77%, the optimal wire length is reduced by 1.26%, 1.55%, and 1.77%, and the standard deviation is reduced by 95.65%, 33.52%, and 28.61%. Experimental results show that XSMT-MoDDE is better than XSMT-DE, XSMT-ABC, and XSMT-GA in both the wire length and standard deviation indicators. Compared with other algorithms, this algorithm still has excellent stability on the basis of having better wire length results.15/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57834:1:1:NEW 5 Mar 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Average wire length optimization results of refining strategy.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Circuit</ns0:cell><ns0:cell cols='4'>Pins XSMT-DDE Refining Reduction(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>20.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>56.86%</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>147</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>19.05%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>170</ns0:cell><ns0:cell>136</ns0:cell><ns0:cell>20.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>279</ns0:cell><ns0:cell>187</ns0:cell><ns0:cell>32.97%</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>52.50%</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>410</ns0:cell><ns0:cell>178</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>68.54%</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>139</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>64.03%</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>115</ns0:cell><ns0:cell>113</ns0:cell><ns0:cell>1.74%</ns0:cell></ns0:row><ns0:row><ns0:cell>Average</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>37.30%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Standard deviation optimization results of refining strategy.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Circuit</ns0:cell><ns0:cell>Pins</ns0:cell><ns0:cell /><ns0:cell cols='2'>Mean value</ns0:cell><ns0:cell /><ns0:cell cols='2'>Reduction(%)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>DDE</ns0:cell><ns0:cell>ABC</ns0:cell><ns0:cell>GA</ns0:cell><ns0:cell>MoDDE</ns0:cell><ns0:cell>DDE</ns0:cell><ns0:cell>ABC</ns0:cell><ns0:cell>GA</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>16956</ns0:cell><ns0:cell>16918</ns0:cell><ns0:cell>16918</ns0:cell><ns0:cell>16900</ns0:cell><ns0:cell cols='2'>0.33% 0.00% 0.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>18083</ns0:cell><ns0:cell>18041</ns0:cell><ns0:cell>18041</ns0:cell><ns0:cell>18023</ns0:cell><ns0:cell cols='2'>0.33% 0.10% 0.10%</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>19430</ns0:cell><ns0:cell>19696</ns0:cell><ns0:cell>19696</ns0:cell><ns0:cell>19397</ns0:cell><ns0:cell cols='2'>0.17% 1.52% 1.52%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>25728</ns0:cell><ns0:cell>25919</ns0:cell><ns0:cell>25989</ns0:cell><ns0:cell>25624</ns0:cell><ns0:cell cols='2'>0.40% 1.14% 1.40%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>32434</ns0:cell><ns0:cell>32488</ns0:cell><ns0:cell>32767</ns0:cell><ns0:cell>32091</ns0:cell><ns0:cell cols='2'>1.06% 1.22% 2.06%</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>49103</ns0:cell><ns0:cell>48940</ns0:cell><ns0:cell>48997</ns0:cell><ns0:cell>48090</ns0:cell><ns0:cell cols='2'>2.06% 1.74% 1.85%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>57386</ns0:cell><ns0:cell>57620</ns0:cell><ns0:cell>57476</ns0:cell><ns0:cell>56105</ns0:cell><ns0:cell cols='2'>2.23% 2.63% 2.39%</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>70407</ns0:cell><ns0:cell>70532</ns0:cell><ns0:cell>70277</ns0:cell><ns0:cell>68457</ns0:cell><ns0:cell cols='2'>2.77% 2.94% 2.59%</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='5'>400 145183 141835 141823 138512</ns0:cell><ns0:cell cols='2'>4.59% 2.40% 2.40%</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='5'>410 146680 143642 143445 140359</ns0:cell><ns0:cell cols='2'>4.31% 2.29% 2.15%</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell cols='5'>500 160031 156457 156394 152649</ns0:cell><ns0:cell cols='2'>4.61% 2.43% 2.39%</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell cols='5'>1000 232057 222547 222487 217060</ns0:cell><ns0:cell cols='2'>5.90% 2.47% 2.44%</ns0:cell></ns0:row><ns0:row><ns0:cell>Average</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>2.40% 1.74% 1.77%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison results of average wire length in GEO dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Circuit</ns0:cell><ns0:cell>Pins</ns0:cell><ns0:cell /><ns0:cell cols='2'>Best value</ns0:cell><ns0:cell /><ns0:cell cols='2'>Reduction(%)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>DDE</ns0:cell><ns0:cell>ABC</ns0:cell><ns0:cell>GA</ns0:cell><ns0:cell>MoDDE</ns0:cell><ns0:cell>DDE</ns0:cell><ns0:cell>ABC</ns0:cell><ns0:cell>GA</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>16918</ns0:cell><ns0:cell>16918</ns0:cell><ns0:cell>16918</ns0:cell><ns0:cell>16900</ns0:cell><ns0:cell cols='2'>0.11% 0.11% 0.11%</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>18041</ns0:cell><ns0:cell>18041</ns0:cell><ns0:cell>18041</ns0:cell><ns0:cell>18023</ns0:cell><ns0:cell cols='2'>0.10% 0.10% 0.10%</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>19415</ns0:cell><ns0:cell>19696</ns0:cell><ns0:cell>19696</ns0:cell><ns0:cell>19397</ns0:cell><ns0:cell cols='2'>0.09% 1.52% 1.52%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>25627</ns0:cell><ns0:cell>25627</ns0:cell><ns0:cell>25897</ns0:cell><ns0:cell>25605</ns0:cell><ns0:cell cols='2'>0.09% 0.09% 1.13%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>32209</ns0:cell><ns0:cell>32344</ns0:cell><ns0:cell>32767</ns0:cell><ns0:cell>32091</ns0:cell><ns0:cell cols='2'>0.37% 0.78% 2.06%</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>47987</ns0:cell><ns0:cell>48637</ns0:cell><ns0:cell>48783</ns0:cell><ns0:cell>47975</ns0:cell><ns0:cell cols='2'>0.03% 1.36% 1.66%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>56408</ns0:cell><ns0:cell>57227</ns0:cell><ns0:cell>57445</ns0:cell><ns0:cell>55919</ns0:cell><ns0:cell cols='2'>0.87% 2.29% 2.66%</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>68829</ns0:cell><ns0:cell>70382</ns0:cell><ns0:cell>70092</ns0:cell><ns0:cell>68039</ns0:cell><ns0:cell cols='2'>1.15% 3.33% 2.93%</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='5'>400 141967 141490 141467 138382</ns0:cell><ns0:cell cols='2'>2.53% 2.20% 2.18%</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='5'>410 144033 143310 143282 140179</ns0:cell><ns0:cell cols='2'>2.68% 2.18% 2.17%</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell cols='5'>500 156950 156034 156110 152591</ns0:cell><ns0:cell cols='2'>2.78% 2.21% 2.25%</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell cols='5'>1000 226654 222262 222285 216824</ns0:cell><ns0:cell cols='2'>4.34% 2.45% 2.46%</ns0:cell></ns0:row><ns0:row><ns0:cell>Average</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>1.26% 1.55% 1.77%</ns0:cell></ns0:row></ns0:table><ns0:note>16/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57834:1:1:NEW 5 Mar 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison results of best wire length in GEO dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Circuit Pins</ns0:cell><ns0:cell /><ns0:cell cols='2'>Standard deviation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Reduction(%)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='4'>DDE ABC GA MoDDE</ns0:cell><ns0:cell>DDE</ns0:cell><ns0:cell>ABC</ns0:cell><ns0:cell>GA</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100.00%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100.00%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100.00%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>198</ns0:cell><ns0:cell>148</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>95.96%</ns0:cell><ns0:cell>94.59%</ns0:cell><ns0:cell>82.61%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>343</ns0:cell><ns0:cell>118</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>93.59%</ns0:cell><ns0:cell>81.36%</ns0:cell><ns0:cell>51.11%</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>1036</ns0:cell><ns0:cell cols='2'>242 133</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>88.51%</ns0:cell><ns0:cell>50.83%</ns0:cell><ns0:cell>10.53%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>1082</ns0:cell><ns0:cell cols='2'>195 140</ns0:cell><ns0:cell>136</ns0:cell><ns0:cell>87.43%</ns0:cell><ns0:cell>30.26%</ns0:cell><ns0:cell>2.86%</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell cols='2'>100 1905</ns0:cell><ns0:cell>69</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>187</ns0:cell><ns0:cell cols='3'>90.18% -171.01% -66.96%</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='2'>400 3221</ns0:cell><ns0:cell cols='2'>200 170</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>98.23%</ns0:cell><ns0:cell>71.50%</ns0:cell><ns0:cell>66.47%</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='2'>410 3222</ns0:cell><ns0:cell cols='2'>146 122</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>98.26%</ns0:cell><ns0:cell>61.64%</ns0:cell><ns0:cell>54.10%</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell cols='2'>500 3193</ns0:cell><ns0:cell cols='2'>160 133</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>98.43%</ns0:cell><ns0:cell>68.75%</ns0:cell><ns0:cell>62.41%</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell cols='2'>1000 3977</ns0:cell><ns0:cell cols='2'>131 107</ns0:cell><ns0:cell>113</ns0:cell><ns0:cell>97.16%</ns0:cell><ns0:cell>13.74%</ns0:cell><ns0:cell>-5.61%</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>95.65%</ns0:cell><ns0:cell>33.52%</ns0:cell><ns0:cell>28.61%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison results of standard deviation in GEO dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Circuit</ns0:cell><ns0:cell>Nets</ns0:cell><ns0:cell>Pins</ns0:cell><ns0:cell /><ns0:cell>Value</ns0:cell><ns0:cell>Reduction(%)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>SAT</ns0:cell><ns0:cell>KNN</ns0:cell><ns0:cell>MoDDE</ns0:cell><ns0:cell>SAT</ns0:cell><ns0:cell>KNN</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>ibm01 11507 44266</ns0:cell><ns0:cell>61005</ns0:cell><ns0:cell>61071</ns0:cell><ns0:cell>56080</ns0:cell><ns0:cell>8.07% 8.17%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm02 18429 78171 172518 167359 154868</ns0:cell><ns0:cell>10.23% 7.46%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm03 21621 75710 150138 147982 133999</ns0:cell><ns0:cell>10.75% 9.45%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm04 26263 89591 164998 164828 149727</ns0:cell><ns0:cell>9.26% 9.16%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm06 33354 124299 289705 280998 256674</ns0:cell><ns0:cell>11.40% 8.66%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm07 44394 164369 368015 368015 335556</ns0:cell><ns0:cell>8.82% 8.82%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm08 47944 198180 431879 413201 371948</ns0:cell><ns0:cell>13.88% 9.98%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm09 53039 187872 418382 417543 382282</ns0:cell><ns0:cell>8.63% 8.44%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>ibm10 64227 269000 588079 589102 532644</ns0:cell><ns0:cell>9.43% 9.58%</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>10.05% 8.86%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison results of wire length in IBM dataset.</ns0:figDesc><ns0:table /><ns0:note>17/21PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57834:1:1:NEW 5 Mar 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Rebuttal Letter
First of all, we want to express our thanks to the associate editor and referees for their valuable comments and questions. Their requests for clarification have considerably improved our manuscript.
Editor's Decision
Based on the reviewers' comments, a minor revision is needed to improve the quality of the paper. In particular, the presented algorithm should be further explained in terms of implementation. The flow of the algorithm should be given in detail. In addition, the results should be compared with existing algorithms and some figures in the paper should be further explained as well to improve the readability of the manuscript.
Answer:
Thanks. According to your suggestions, we have made the following modifications:
(1) The description of the XSMT-MoDDE algorithm flow has been supplemented in Lines 358 to 371. The details have been marked in red font in the revision.
(2) Our algorithm is compared with the discrete differential evolution algorithm, the artificial bee colony algorithm, and the genetic algorithm. The experimental results are shown in Table 6 to Table 8.
(3) The specific process of Figure 4 has been supplemented in Lines 231 to 233. The details have been marked in red font in the revision.
Comments from the reviewers
Reviewer #1:
1.Basic reporting
This paper proposes an X-architecture Steiner minimal tree algorithm based on multi-strategy optimization discrete differential evolution. The layout of the article conforms to the norms, the language is fluent, and no grammatical errors have been found.
Answer:
Thank you for your recognition of our work.
2.Experimental design
The experimental design of this paper is complete and the experimental comparison is rich, and the experimental results well reflect the optimization degree of average wire length, optimal wire length, and standard deviation. Finally, the experimental part well reflects the effectiveness of the proposed algorithm.
Answer:
Thank you for your recognition of our work.
3.Validity of the findings
The proposed algorithm can effectively reduce the wire length of X-architecture Steiner minimal tree and can obtain the best results currently.
Answer:
Thank you for your recognition of our work.
4.Comments for the author
The construction of Steiner tree is a hot topic in related fields, and I give relevant revision opinions after reading this paper.
(1) The flow of XSMT-MoDDE algorithm can be more detailed.
(2) What is the future research work on this work? It is suggested to give future research work in Conclusion Section of the paper.
Answer:
Thanks. According to your suggestions, we have made the following modifications:
(1) The description of the XSMT-MoDDE algorithm flow has been supplemented in Lines 358 to 371. The details have been marked in red font in the revision.
(2) In the future, we will study the construction of obstacle-avoiding XSMTs using the multi-strategy optimization DDE. This future research direction has been added to the Conclusion Section; the specific content is in Lines 471 to 472, and the details have been marked in red font in the revision.
Reviewer #2:
1.Basic reporting
Global routing is an important link in Very Large Scale Integration (VLSI) design. As the best model of global routing, X-architecture Steiner Minimal Tree (XSMT) has a good performance in wire length optimization. For this reason, an X-architecture Steiner Minimal Tree algorithm based on Multi-strategy optimization Discrete Differential Evolution (XSMT-MoDDE) is proposed. This work proposes four strategies to optimize the DDE algorithm. The four strategies are somewhat innovative. As a whole, this paper has fluent grammar and rigorous logic, and has designed a wealth of experiments to verify the results.
After reading the paper, I give some additional revised opinions for reference: Firstly, are these optimization strategies only used for DDE algorithms, and can they be used by other evolutionary algorithms? Secondly, the mutation process in Figure 4 can be further explained. Thirdly, further investigation and citation for this paper are recommended for relevant references including VLSI routing.
Answer:
Thanks. According to your suggestions, we have made the following modifications:
(1) The novel multiple mutation strategy and the adaptive learning factor strategy are designed specifically for the DDE algorithm, so they are not suitable for other evolutionary algorithms. The elite selection and cloning strategy can be applied to other evolutionary algorithms and can be used at the end of each iteration. The refining strategy is suitable for any evolutionary algorithm used to solve the XSMT problem.
(2) The specific process of Figure 4 has been supplemented, which is described in Lines 231 to 233. The details have been marked in red font in the revision.
(3) References related to VLSI routing are added to the Related Work Section. The specific content is in Lines 69 to 91 and the details have been marked in red font in the revision.
2.Experimental design
This paper has verification experiments and comparative experiments, and the experimental design is more reasonable. Results of the comparative experiments prove that XSMT-MoDDE can get the shortest wire length so far, and achieve the better optimization degree in the larger-scale problem.
Answer:
Thank you for your recognition of our work.
3.Validity of the findings
Under the premise of reasonable time complexity, the constructed XSMT obtains the current optimal wire length which is the most important objective of Steiner Minimal Tree.
Answer:
Thank you for your recognition of our work.
4.Comments for the author
none
" | Here is a paper. Please give your review comments after reading it. |
52 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Deep learning based models are relatively large, and it is hard to deploy such models on resource-limited devices such as mobile phones and embedded devices. One possible solution is knowledge distillation, whereby a smaller model (student model) is trained by utilizing the information from a larger model (teacher model). In this paper, we present an outlook on knowledge distillation techniques applied to deep learning models. To compare the performance of different techniques, we propose a new metric, called the distillation metric, which compares knowledge distillation solutions based on model size and accuracy score. Based on the survey, some interesting conclusions are drawn and presented in this paper, including the current challenges and possible research directions.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Deep learning has succeeded in several fields such as Computer Vision (CV) and Natural Language Processing (NLP). This is due to the fact that deep learning models are relatively large and could capture complex patterns and features in data. But, at the same time, large model sizes lead to difficulties in deploying them on end devices. To solve this issue, researchers and practitioners have applied knowledge distillation on deep learning approaches for model compression. It should be emphasized that knowledge distillation is different from transfer learning. The goal of knowledge distillation is to provide smaller models that solve the same task as larger models <ns0:ref type='bibr' target='#b26'>(Hinton et al., 2015)</ns0:ref> (see figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>), whereas the goal of transfer learning is to reduce training time of models that solve a task similar to the task solved by some other model (cf. <ns0:ref type='bibr' target='#b54'>Pan and Yang (2009)</ns0:ref>). Knowledge distillation accomplishes its goal by altering loss functions of models being trained (student models) to account for output of hidden layers of pre-trained models (teacher models). On the other hand, transfer learning achieves its goal by initializing parameters of a model by learnt parameters of a pre-trained model.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021)</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>There are many techniques presented in the literature for knowledge distillation. As a result, there is a need to summarize them so that researchers and practitioners could have a clear understanding of the techniques. Also, it is worth noting here that knowledge distillation is one of the ways to compress a larger model into a smaller model with comparable performance. Other techniques for model compression include row-rank factorization, parameter sharing, transferred/compact convolutional filters, and parameter pruning as presented by <ns0:ref type='bibr' target='#b8'>(Cheng et al., 2017)</ns0:ref>. To the best of our knowledge, there is no separate published survey on knowledge distillation techniques which motivated us to present a comprehensive survey on recent knowledge distillation techniques for deep learning. Since there are many proposed knowledge distillation methods, we believe that they should be compared appropriately. Knowledge distillation approaches can be compared by several metrics such as reductions in model sizes, accuracy scores, processing times, and so on. Our main criteria are reductions in model sizes and accuracy scores.</ns0:p><ns0:p>Accordingly, we propose a metric-termed distillation metric-that takes into account the two criteria.</ns0:p><ns0:p>The main objectives of this work is to provide an outlook on the recent developments in knowledge distillations and to propose a metric for evaluating knowledge distillation approach in terms of reduction in size and performance. Also, the paper discuss some of the recent developments in the field in terms of understanding the knowledge distillation process and the challenges that need to be addressed. The rest of the paper is organized as follows: In Section 3, we provide a background on knowledge distillation.</ns0:p><ns0:p>In section 4, we present and discuss our proposed distillation metric. Section 5 contains the surveyed approaches and section 6 contains some applications of knowledge distillation. We provide our discussion on surveyed approaches and an outlook on knowledge distillation in section 7. Finally, we present our conclusions in section 8.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>SURVEY METHODOLOGY</ns0:head><ns0:p>We searched for papers on the topic of knowledge distillation in Google Scholar and selected those that were recent and not covered in previous surveys in the field. Moreover, the papers were shortlisted based on quality, judged by the publication venue (i.e., reputable journals and conferences), and on impact (i.e., citation count). Published works were searched using phrases containing keywords such as 'Knowledge Distillation', 'Knowledge Distillation in Deep Learning', and 'Model compression'. When a large number of papers were retrieved for a specific topic, those published in less relevant journals and conferences or with lower citation counts were excluded from the survey.</ns0:p><ns0:p>The available literature was broadly categorized into two sub-areas: techniques that use only soft labels to directly train the student models, and techniques that use knowledge from intermediate layers to train the student models, which may or may not also use the soft labels. Accordingly, the survey is structured into two major sections, each dealing with one of the broad categories. These sections are further divided into subsections for ease of readability and comprehensibility.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>BACKGROUND</ns0:head><ns0:p>Knowledge distillation was first introduced by <ns0:ref type='bibr' target='#b26'>Hinton et al. (2015)</ns0:ref>. The main goal of knowledge distillation is to produce smaller models (student models) to solve the same task as larger models (teacher models) with the condition that the student model should perform better than the baseline model. Baseline models are similar to the student models but trained without the help of a teacher model. The distilling process can be achieved by using the soft labels, the probability distribution predicted by the teacher, in addition to the hard label, the one-hot vector ground truth, to train a student model. In this case, the student is trained with a loss function that minimizes the loss between it's predictions and the hard and soft labels. Furthermore, one may distill the knowledge from the logits and feature maps of the teacher's intermediate layers. Logits are the output of a fully connected intermediate layer while feature maps are the output of a convolution layer. In this case, the loss function can be defined to minimize the difference between selected intermediate layers between the teacher and the student. The feature extractor part of a network, i.e., the stack of convolution layers, are referred to as backbone. There are no conventions that guide student models' sizes. For example, two practitioners might have student models with different sizes although they use the same teacher model. This situation is caused by different requirements in different domains, e.g., maximum allowed model size on some device.</ns0:p></ns0:div>
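<ns0:div><ns0:p>The soft-label training objective sketched above can be made concrete with a short example. The following is a minimal PyTorch-style sketch of the commonly used loss and is not taken from any specific surveyed paper; the temperature T and the weight lam are illustrative hyper-parameters, and the teacher is assumed to be kept frozen so that only the student's parameters are updated.</ns0:p><ns0:p>
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels, T=4.0, lam=0.5):
    # Supervised term: cross-entropy with the one-hot ground truth (hard labels).
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    # Distillation term: KL divergence between the temperature-softened student
    # and teacher distributions (soft labels); the T*T factor keeps gradient
    # magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction='batchmean') * (T * T)
    return lam * hard_loss + (1.0 - lam) * soft_loss
</ns0:p></ns0:div>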
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>There exist some knowledge distillation methods that target teacher and student networks having the same size (e.g., <ns0:ref type='bibr' target='#b75'>Yim et al. (2017)</ns0:ref>). In such case, the knowledge distillation process is referred to as self-distillation and it's purpose is to further improve the performance by learning additional features that could be missing in the student model due to the random initialization Allen-Zhu and <ns0:ref type='bibr' target='#b1'>Li (2020)</ns0:ref>. Although an algorithm is developed to distill knowledge from a teacher model to a student model having the same sizes, the same algorithm might be used to distill knowledge from a teacher to a smaller student. This is because, based on our survey, there is no restriction on model sizes, and it is up to model designers to map teacher's activations to student's. So, in general settings, knowledge distillation is utilized to provide smaller student models that have good maintainability of their teacher models' accuracy scores.</ns0:p><ns0:p>Consequently, one could compare different knowledge distillation algorithms by their reductions in model sizes. In addition, algorithms might be compared by how much accuracy they maintain as compared to teacher models. There is no rule that governs how much reduction is best for all cases. For instance, if one needs to apply a knowledge distillation algorithm, they need to compare the algorithm's performance, in terms of reductions in size and accuracy, to their system's requirements. Based on the requirements, they can decide which algorithm best fits their situation. To ease the process of comparison, we develop distillation metric which compares knowledge distillation algorithms based on model sizes and accuracy scores. For a detailed description, please refer to section 4.</ns0:p><ns0:p>There are different knowledge distillation approaches applied to deep learning models. For example, there exist approaches that distill knowledge from a single teacher to a single student. Also, other approaches distill knowledge from several teachers to a single student. Knowledge distillation could also be applied to provide an ensemble of student networks. In section 5, we present recent knowledge distillation approaches that are applied on deep learning based architectures.</ns0:p></ns0:div>
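<ns0:div><ns0:p>Since the comparisons in the remainder of this survey are based on reductions in model size and on maintained accuracy, it helps to state precisely how model size is measured. The following PyTorch-style helper is a small illustrative sketch; measuring size as the number of trainable parameters is an assumption, and size could equally be measured in bytes on disk.</ns0:p><ns0:p>
import torch.nn as nn

def num_parameters(model: nn.Module) -> int:
    # Total number of trainable parameters in a network.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def size_reduction(teacher: nn.Module, student: nn.Module) -> float:
    # Percentage reduction in parameter count of the student relative to the teacher.
    t, s = num_parameters(teacher), num_parameters(student)
    return 100.0 * (t - s) / t
</ns0:p></ns0:div>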
<ns0:div><ns0:head n='4'>DISTILLATION METRIC</ns0:head><ns0:p>We propose distillation metric to compare different knowledge distillation methods and to select suitable model for deployment from a number of student models of various sizes. The metric considers ratios of student network's size (first ratio) and accuracy score (second ratio) to teacher's. To have a good reduction in size, first ratio should be as small as possible. For a distillation method to have a good maintainability of accuracy, second ratio should be as close to 1 as possible. To satisfy these requirements, we develop the following equation:</ns0:p><ns0:formula xml:id='formula_0'>DS = α * ( student s teacher s ) + (1 − α) * (1 − student a teacher a )<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where DS stands for distillation score, student s and student a are student size and accuracy respectively, and teacher s and teacher a are teacher size and accuracy respectively. Parameter α ∈ [0, 1] is a weight to indicate importance of first and second ratio, i.e., size and accuracy. The weight is assigned by distillation designers based on their system's requirements. For example, if some system's requirements prefer small model sizes over maintaining accuracy, designers might have α > 0.5 that best satisfies their requirements.</ns0:p><ns0:p>It should be noted that when student accuracy is better than teacher's, then second ratio would be greater than 1. This causes the right operand of the addition operation (i.e., 1 -second ratio) to evaluate to a negative value. Hence, DS is decreased, and it could be less than zero especially if weight of second ratio is larger. This is a valid result since it indicates a very small value of first ratio compared to second ratio. On other words, this behaviour indicates a large reduction in model size while providing better accuracy scores than teacher model at the same time. As presented in section 5, a student model with a better accuracy is not a common case. It could be achieved, for example, by having an ensemble of student models.</ns0:p><ns0:p>Regarding the behaviour of distillation metric, it is as follows: The closer distillation score to 0, the better the knowledge distillation. To illustrate, an optimal knowledge distillation algorithm would provide a value that is very close to 0 for first ratio (e.g., student size is very small compared to teacher's), and it</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>would produce a value of 1 for second ratio (e.g., student and teacher networks have the same accuracy score). As a result, distillation score approaches 0 as the first ratio approaches 0, and the second ratio approaches 1.</ns0:p><ns0:p>To demonstrate the usage of distillation metric, we use the results reported in <ns0:ref type='bibr' target='#b68'>Walawalkar et al. (2020)</ns0:ref> using CIFAR100 dataset <ns0:ref type='bibr' target='#b36'>Krizhevsky (2009)</ns0:ref> and the Resnet44 architecture <ns0:ref type='bibr' target='#b23'>He et al. (2016)</ns0:ref> </ns0:p></ns0:div>
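<ns0:div><ns0:p>Equation (1) is simple to compute in practice. The sketch below implements the distillation score exactly as defined above; the example numbers are purely hypothetical and are not the figures reported by Walawalkar et al. (2020).</ns0:p><ns0:p>
def distillation_score(student_size, teacher_size, student_acc, teacher_acc, alpha=0.5):
    # Distillation score DS from Equation (1); lower values indicate a better
    # trade-off between size reduction and maintained accuracy.
    size_ratio = student_size / teacher_size        # first ratio
    accuracy_ratio = student_acc / teacher_acc      # second ratio
    return alpha * size_ratio + (1.0 - alpha) * (1.0 - accuracy_ratio)

# Hypothetical example: a student with 25% of the teacher's parameters
# that keeps about 97% of the teacher's accuracy.
print(distillation_score(student_size=0.66e6, teacher_size=2.64e6,
                         student_acc=0.68, teacher_acc=0.70))  # ~0.139
</ns0:p></ns0:div>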
<ns0:div><ns0:head n='5'>SURVEY</ns0:head><ns0:p>This section includes recent work that targets knowledge distillation in deep learning. It is divided into two categories. First category considers work that distills knowledge from the soft labels of the teacher model to train students. Soft labels refers to the output of the teacher model. In case of classification task, the soft labels represent the probability distribution among the classes for the input sample. Second category considers work that distills knowledge from other parts of the teacher model in addition or instead of the soft labels. Within each category, we further divide knowledge distillation methods into two sub-category:</ns0:p><ns0:p>1) offline distillation and 2) online distillation. In offline distillation, the knowledge distillation process is performed using a pre-trained teacher model. While online distillation is for methods that perform knowledge distillation while training the teacher model. The illustration of the two sub-category can be seen in figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. A summary can be found in figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. In this survey, our main criteria are reductions of sizes and accuracy scores of student models against the corresponding teacher models. Regarding experiment results for the surveyed work, they are presented in Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Techniques That Distill Knowledge from Teacher Soft Labels</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.1.1'>Offline Distillation</ns0:head><ns0:p>Fukuda et al. ( <ns0:ref type='formula'>2017</ns0:ref>) proposed a knowledge distillation approach by training a student model using multiple teacher models. Unlike other multi teacher approaches that average the output of the teacher models to create the soft labels and then used to train the student model <ns0:ref type='bibr' target='#b73'>(Wu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Chebotar and Waters, 2016;</ns0:ref><ns0:ref type='bibr' target='#b49'>Markov and Matsui, 2016)</ns0:ref>, The approach proposed by Fukuda et al. was to opt out of combining the teachers output distribution and to train the student on the individual output distribution.</ns0:p><ns0:p>The authors argued that this would help the student model to observe the input data from different angles and would help the model to generalize better.</ns0:p><ns0:p>While deep learning has achieved great success across a wide range of domains, it remains difficult to identify the reasoning behind model predictions, especially if models are complex. To tackle this issue, <ns0:ref type='bibr' target='#b19'>Furlanello et al. (2018)</ns0:ref> proposed an ensemble knowledge distillation method called Born-Again Neural Networks. The method considered the issue of teacher and student models having the same architecture (self distillation). The method first trained a teacher model normally. Then, it trained a student model using the ground truth and teacher's predictions. After that, it trained a second student model using the ground truth and previous student's predictions, and so on (see figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>). For instance, student i was trained by utilizing training labels and predictions of</ns0:p><ns0:formula xml:id='formula_1'>student i−1 for i ∈ [1, n],</ns0:formula><ns0:p>where n is the number of student models. When student models were used for prediction, their results were averaged. Furlanello et al. claimed that the method would produce better models since it was based on ensemble models, and a model was trained on training labels and predictions of a previously trained model. <ns0:ref type='bibr' target='#b59'>Polino et al. (2018)</ns0:ref> developed a knowledge distillation approach for quantized models. Quantized models are models whose weights are represented by a limited number of bits such as 2-bit or 4-bit integers. Quantized models are used to develop hardware implementations of deep learning architectures as they provide lower power consumption and lower processing times compared to normal models (fullprecision models) <ns0:ref type='bibr' target='#b10'>(Courbariaux et al., 2015)</ns0:ref>. The distillation approach had 2 variants. When tackling problems where only few samples are available, it can make models overfit easily. <ns0:ref type='bibr' target='#b35'>Kimura et al. (2019)</ns0:ref> proposed a method that allowed training networks with few samples while avoiding overfitting using knowledge distillation. In their approach, they first trained a reference model with few samples using Gaussian processes (GP) instead of neural network. Then, the samples used for training were augmented using inducing point method via iterative optimization. Finally, the student model was trained with the augmented data using loss function defined in the paper with the GP teacher model to be imitated by the student model. <ns0:ref type='bibr' target='#b53'>Nayak et al. 
(2019)</ns0:ref> proposed a method to train the student model without using any dataset or metadata. The method worked by extracting data from the teacher model through modeling the data distribution in the SoftMax space. Hence, new samples could be synthesized from the extracted information and used to train the student model. Unlike generative adversarial networks (GANs)</ns0:p><ns0:p>where they generates data that is similar to the real data (by fooling a discriminative network), here the synthesized data were generated based on triggering the activation of the neurons before the SoftMax function.</ns0:p><ns0:p>Wu et al. ( <ns0:ref type='formula'>2019</ns0:ref>) developed a multi-teacher distillation framework for action recognition. Knowledge was transferred to student by taking a weighted average of three teachers soft labels (see figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>). The three teachers are fed different inputs. The first teacher is fed with the residual frame, while the second teacher is fed with motion vector. The last teacher is fed with the I-frame image, similar to the student model.</ns0:p></ns0:div>
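<ns0:div><ns0:p>The weighted-average scheme described for Wu et al. (2019) can be expressed in a few lines. The sketch below is illustrative rather than the authors' implementation; the choice to average softened probabilities (rather than logits), the weights, and the temperature are all assumptions.</ns0:p><ns0:p>
import torch.nn.functional as F

def averaged_soft_labels(teacher_logits_list, weights, T=4.0):
    # Weighted average of several teachers' temperature-softened predictions.
    # The result can be used as the soft-label target in the usual
    # distillation loss (e.g., KL divergence against the student's output).
    return sum(w * F.softmax(logits / T, dim=1)
               for w, logits in zip(weights, teacher_logits_list))
</ns0:p></ns0:div>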
<ns0:div><ns0:head n='5.1.2'>Online Distillation</ns0:head><ns0:p><ns0:ref type='bibr' target='#b51'>Min et al. (2019)</ns0:ref> presented a technique called gradual distillation, arguing that quantized distillation indirectly results in a loss of accuracy and that it is difficult to train directly from the hard and soft labels. The gradual distillation approach trains the teacher model and the student model simultaneously. The output from the teacher's network at each step is used to guide the student learning. Accordingly, the loss function for the student's network has two components: the cross-entropy loss between the output of the student's network and the hard labels, and the cross-entropy loss between the student output and the teacher's target.</ns0:p><ns0:p>Training a compact student network to mimic a well-trained and converged teacher model can be challenging. The same rationale can be found in school curricula, where students at early stages are taught easy courses, with the difficulty increasing as they approach later stages. From this observation, <ns0:ref type='bibr' target='#b32'>Jin et al. (2019)</ns0:ref> proposed that instead of training student models to mimic converged teacher models, student models were trained on different checkpoints of teacher models until teacher models converged.</ns0:p><ns0:p>For selecting checkpoints, a greedy search strategy was proposed that finds efficient checkpoints that are easy for the student to learn. Once checkpoints were selected, a student model's parameters were optimized sequentially across checkpoints, while splitting data used for training across the different stages</ns0:p></ns0:div>
<ns0:div><ns0:head>7/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science depending on it's hardness defined by a hardness metric that was proposed by the authors.</ns0:p><ns0:p>An ensemble knowledge distillation approach named Online Knowledge Distillation with Diverse peers (OKDDip) was proposed by <ns0:ref type='bibr' target='#b6'>Chen et al. (2020)</ns0:ref>. OKDDip uses an ensemble of models as a teacher (named auxiliary peer) and a single model within the group as a student (named group leader). Unlike ONE, the ensemble of models can be independent models or have shared layers. Each model is trained to reduce the cross entropy between it's predictions and the ground truth. Additionally, each model will take a weighted average of predictions of all models in the ensemble and uses Kullback Leibler divergence loss function between it's prediction distribution and the weighted average of predictions of the ensemble.</ns0:p><ns0:p>Each auxiliary peer will assign different weights to all other auxiliary peer in the group to determine how the prediction distribution is aggregated. For the group leader, it will just take the average of the prediction of all auxiliary peer. The weight assignment process for the auxiliary peers takes the feature extracted for each peer and project it to two subspaces by applying linear transformation with learned weights. The weights for each peer is then calculated similar to the self-attention mechanism using the two projected subspaces <ns0:ref type='bibr' target='#b66'>Vaswani et al. (2017)</ns0:ref>.</ns0:p><ns0:p>Another ensemble knowledge distillation methods was proposed by <ns0:ref type='bibr' target='#b21'>Guo et al. (2020)</ns0:ref> named knowledge distillation via collaborative learning (KDCL). KDCL trains on input data that is distorted differently for each student in the ensemble. The cross entropy loss function between prediction and hard labels is used to train each student model in addition to the Kullback Leibler divergence loss between the prediction and the soft labels. The authors proposed four different methods to generate the soft labels. The first methods selects a single student probability distribution in the ensemble as soft label that produces the minimum cross entropy loss. The second method finds the best linear combination of the students logtis that minimizes the cross entropy loss through convex optimization and use it to generate the soft labels via softmax function. The third method subtracts the logit that corresponds to the target class from all logits for each student. Then, it constructs the ensemble logits by selecting the minimum logit for each class from all students in the ensemble which later is fed to softmax to create the soft labels. The fourth method of producing the soft labels takes the weighted average of students' outputs. The weight for each student is assigned after every training epoch and it is based on it's performance on the validation set.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> provides a summary of the presented work. It shows that best achieved reduction in size is by <ns0:ref type='bibr' target='#b51'>Min et al. (2019)</ns0:ref> with a reduction of 99.44% in number of parameters. We can observe from the table that the best approach in terms of maintaining accuracy is proposed by <ns0:ref type='bibr' target='#b35'>Kimura et al. 
(2019)</ns0:ref> with an increase in accuracy by 10.526%. However, their work utilizes knowledge distillation to overcome overfitting when dealing small amount of training samples. Furthermore, they used a Gaussian process as a teacher model which can explain the increase in accuracy of the student CNN model. Additionally, <ns0:ref type='bibr' target='#b35'>Kimura et al. (2019)</ns0:ref> approach helped the student model to generalize better on small number of training samples and achieved the highest increase of accuracy compared to the baseline model which overfitted on the training data. Manuscript to be reviewed</ns0:p></ns0:div>
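<ns0:div><ns0:p>The online methods above share a generic pattern in which the teacher and the student are updated in the same training pass, and the student's loss combines the hard labels with the teacher's current outputs. The following is a minimal sketch of one such joint step under several assumptions (a single teacher, a temperature-softened KL term for the soft targets, and equal loss weights); it is not the procedure of any specific surveyed paper.</ns0:p><ns0:p>
import torch.nn.functional as F

def online_distillation_step(teacher, student, teacher_opt, student_opt, x, y, T=3.0, lam=0.5):
    # Update the teacher on the hard labels as usual.
    teacher_logits = teacher(x)
    teacher_loss = F.cross_entropy(teacher_logits, y)
    teacher_opt.zero_grad()
    teacher_loss.backward()
    teacher_opt.step()

    # Update the student from the hard labels and the teacher's current outputs.
    student_logits = student(x)
    hard_loss = F.cross_entropy(student_logits, y)
    soft_loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                         F.softmax(teacher_logits.detach() / T, dim=1),
                         reduction='batchmean') * (T * T)
    student_loss = lam * hard_loss + (1.0 - lam) * soft_loss
    student_opt.zero_grad()
    student_loss.backward()
    student_opt.step()
    return teacher_loss.item(), student_loss.item()
</ns0:p></ns0:div>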
<ns0:div><ns0:head n='5.2'>Techniques That</ns0:head><ns0:p>Computer Science parameters on the backbone layers. The knowledge distillation process only happened during training of the backbone layers of the smaller student model, which allowed it to be trained on different dataset than the teacher model. <ns0:ref type='bibr' target='#b71'>Wang et al. (2019)</ns0:ref> proposed a distillation method for encoder-decoder networks that trained a student model by comparing its soft labels to a teacher's labels and the ground truth. Moreover, the student will also compare it's encoders outputs to that of the teacher. <ns0:ref type='bibr' target='#b24'>He et al. (2019)</ns0:ref> proposed to train an auto-encoder network to compress feature maps of the teacher.</ns0:p><ns0:p>The student is later trained to match the compressed feature maps of the teacher model. Additionally, the student was also trained to match it's feature map affinity matrix to the of the teacher model. This was needed because student network could not capture long-term dependencies due to its relatively small size. Using IRG of the teacher layers and student layers, a loss function was defined to help train the student model using the knowledge encapsulated in the IRG of the teacher. <ns0:ref type='bibr' target='#b78'>Zhou et al. (2018)</ns0:ref> proposed to train the teacher (named booster net) and the student (named lightweight net) together. This was done by sharing the backbone layers of the two models during training and then using a function where it contained the loss of the booster network, the loss of the lightweight network, and the mean square error between the logits before softmax activation of both networks. To prevent the objective function from hindering the performance of the booster network, a gradient block scheme was developed to prevent the booster network specific parameter from updating during the backpropagation of the objective function which would allow the booster network to directly learn from the ground truth labels. To improve their approach further, they used the knowledge distillation loss function from <ns0:ref type='bibr' target='#b26'>Hinton et al. (2015)</ns0:ref> in their objective function. normally with the ground truth label while the fusion classifier takes the feature maps of all models in the ensemble as an input and is also trained with the ground truth label. Furthermore, the ensemble models will distil it's knowledge to the fusion classifier in the form of the average of all predictions and to be used with Kullback Leibler divergence loss to transfer the knowledge of the ensemble to the fusion classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.2'>Online Distillation</ns0:head><ns0:p>Moreover, the fusion classifier will also distil it's knowledge back to the each model in the ensemble in the form of it's prediction distribution and to be used with Kullback Leibler divergence loss. This way, the knowledge distillation is mutual between the fusion classifier and the ensemble. After training, any model in the ensemble can be selected to be deployed or the whole ensemble with the fusion classifier can be deployed in case of lenient hardware constraints.</ns0:p></ns0:div>
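<ns0:div><ns0:p>A building block shared by many approaches in this category, whether offline or online, is matching intermediate feature maps of the student to those of the teacher, with a 1x1 convolution inserted when the two networks have different numbers of channels. The sketch below illustrates this common pattern under assumed tensor shapes (the two feature maps are assumed to share the same spatial resolution); it is not the implementation of any specific surveyed method.</ns0:p><ns0:p>
import torch.nn as nn
import torch.nn.functional as F

class FeatureMapDistiller(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 convolution (adaptation/transfer layer) mapping the student's
        # channel count to the teacher's.
        self.adapt = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_fmap, teacher_fmap):
        # Mean-squared error between the adapted student feature map and the
        # detached teacher feature map.
        return F.mse_loss(self.adapt(student_fmap), teacher_fmap.detach())
</ns0:p></ns0:div>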
<ns0:div><ns0:head>10/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b68'>Walawalkar et al. (2020)</ns0:ref> proposed to train an ensemble of models that is broken down into four blocks, where all models share the first block of layers. The first model in the ensemble is considered the teacher (termed pseudo teacher in the paper). For each successive models (student), the number of channels in their convolution layers is reduced by an increasing ratio to the teacher model. During deployment, any model in the ensemble can be selected depending on the hardware constraints or in cases of lenient constraints the whole ensemble can be deployed. In addition to training each model using cross entropy between predictions and ground truth, an intermediate loss function is used to distill the knowledge of the intermediate block of layers (feature maps) of the teacher model to each student model. Moreover, Kullback Leibler divergence loss is used between the model prediction and the average predictions of the whole ensemble. Since the number of channels of the student models and the teacher models is not the same, an adaptation layer (1 × 1 convolution) is used to map the student channels to the teacher channels. The intermediate loss function is a mean squared error between the feature maps of the teacher and student pair. <ns0:ref type='bibr' target='#b9'>Chung et al. (2020)</ns0:ref> proposed online Adversarial Feature map Distillation (AFD) that trains two network to mimic each other feature maps through adversarial loss. Aside from training using cross entropy loss on the ground truth and Kullback Leibler divergence loss between the logits of the two network, AFD trains a discriminator for each network that distinguishes between the feature map produced by the accompany network and other network. Each network in AFD is trained to fool it's corresponding discriminator and minimize the adversarial loss. This in turns will let the model to learn the feature map distribution of the other network. In case of training two network, one can be considered as the student (model with less parameters) and the other as teacher model (with more parameters) and both student and teacher model will learn from each other. Due to the difference in the number of channels of the feature maps between the two networks, a transfer layer is used to converts the number of channel of the student network to that of the teacher network.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> provides a summary of presented work. It shows that best approach in terms of size reduction is proposed by <ns0:ref type='bibr'>Li et al. (2019)</ns0:ref> with a reduction of 95.86% in size. The table shows that best approach in terms of maintaining accuracy is proposed by <ns0:ref type='bibr' target='#b25'>Heo et al. (2019)</ns0:ref> with an increase in accuracy of 6.191%.</ns0:p><ns0:p>However, their experiment conducted on a teacher model that is trained on and evaluated on two different datasets. Their experiment focused on combining knowledge transfer with knowledge distillation. As for improvement compared to the baseline model, the Layer Selectivity Learning (LSL) proposed by Li et al.</ns0:p><ns0:p>(2019) achieved the best improvement by 16.89% increase in accuracy. 
Traditionally deep learning models use to run on the Cloud computing platforms delivering the results to the smart devices over a network. Although this model is feasible in some situations, it is not preferred in many other situations where delay is not tolerable or data privacy is a concern. Moreover, unpredictable network connections between the cloud and the device can also pose significant challenges. Thus, running the deep learning system on the local devices is an important requirement in many domains and has a</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>APPLICATIONS OF KNOWLEDGE DISTILLATION</ns0:head></ns0:div>
<ns0:div><ns0:head>11/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Summary of knowledge distillation approaches that distills knowledge from parts other than or in addition to the soft labels of teacher models to be used for training the student model. In case of several students, results of student with largest size reduction are reported. In case of several datasets, dataset associated with lowest accuracy reduction is recorded. Baseline models had the same sizes as student models, but they were trained without teacher models. wide variety of applications including smart cities, self-driving cars, smart homes, medical devices, and entertainment <ns0:ref type='bibr' target='#b67'>Véstias et al. (2020)</ns0:ref>. Knowledge distillation allows developers to shrink down the size of deep learning models in order for them to fit into resource-limited devices having limited memory and power as illustrated in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>. In this section we present some typical applications of knowledge distillation based on recent literature.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b7'>Chen et al. (2019)</ns0:ref>, knowledge distillation was used to train a lightweight model for pedestrian detection which will enable fast pedestrian detection in smart vehicles with autonomous driving functionality. <ns0:ref type='bibr' target='#b31'>Janveja et al. (2020)</ns0:ref> presented a smartphone-based system for detecting driver fatigue based on frequency of yawning and the frequency of eye closure. <ns0:ref type='bibr' target='#b74'>Yang et al. (2018)</ns0:ref> presented the use of MobileNets in addition to Batch Normalization and Swish activation function (cf. <ns0:ref type='bibr' target='#b62'>Ramachandran et al. (2017)</ns0:ref>) to estimate the steering angle for the self-driving cars.</ns0:p><ns0:p>In the domain of healthcare, <ns0:ref type='bibr' target='#b14'>Esteva et al. (2017)</ns0:ref> presented an end-to-end deep CNN based system to classify different types of skin cancer from skin images. The paper proposed the idea of deploying the system on smart phones so that a large population can easily access the diagnostic services. <ns0:ref type='bibr' target='#b0'>Ahn et al. (2018)</ns0:ref> presented a CNN based deep learning system to assist in capsule endoscopy. The idea is to adaptively control the capsule's image capturing frequency and quality based on detecting damaged areas in a patient's small intestine. To adaptively control the capsule moving through a patient's intestine, the authors suggest pairing the capsule with an external device attached to the patient's waist which can process the incoming images in real-time and direct the capsule in terms of image frequency and quality. The authors identified some of the challenges that need to be addressed in order for the system to be practically in use. Among the challenges identified were the need for the system to be low latency and efficient in battery usage. This can be achieved in part by developing light-weight models using knowledge distillation techniques. <ns0:ref type='bibr' target='#b58'>Plötz and Guan (2018)</ns0:ref> proposed the use of deep learning trained on the cloud to be deployed on smart phones for human activity recognition (HAR) using the data available from smartphone sensors. The </ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>DISCUSSION AND OUTLOOK</ns0:head><ns0:p>The distillation score proposed in this work can not be used as a fair comparison between the different methods mentioned in this work. Each reported method utilizes different datasets, architectures and uses knowledge distillation for different applications. Reporting the reduction in model size as well as change in accuracy for a student model as compared to the corresponding teacher model is useful in our opinion. Although most authors report this information, some authors do not report either of the two pieces of information. Moreover, comparing the performance of a student model to a baseline model (e.g., trained-from-scratch model of comparable size to the student model) is also very informative, and we believe that it should be reported by authors.</ns0:p><ns0:p>Regarding the future of knowledge distillation, most researchers did not provide comments. Nevertheless, <ns0:ref type='bibr' target='#b59'>Polino et al. (2018)</ns0:ref> suggested the use of reinforcement learning to enhance development of student models. According to Polino et al., it is not clear how to develop student models that meet memory and processing time constraints. Building a program based on reinforcement learning such that its objective is to optimize memory and processing time requirements would ease development of student models.</ns0:p><ns0:p>In addition, most researchers focus on computer vision tasks. For instance, out of the surveyed work, few considered NLP tasks. Recently, several language models based on transformer architecture <ns0:ref type='bibr' target='#b66'>(Vaswani et al., 2017)</ns0:ref> have been proposed such as Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b13'>(Devlin et al., 2018)</ns0:ref>. These models have parameters in the order of hundreds of millions.</ns0:p><ns0:p>This issue has motivated several researchers to utilize knowledge distillation <ns0:ref type='bibr' target='#b64'>(Sanh et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b65'>Sun et al., 2019)</ns0:ref>. However, knowledge distillation has not been well investigated yet. Transformer based language models provide better results, in terms of accuracy scores and processing times, than Recurrent</ns0:p><ns0:p>Neural Networks (RNNs) <ns0:ref type='bibr' target='#b13'>(Devlin et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b61'>Radford et al., 2019)</ns0:ref>. As a result, it is important to study knowledge distillation on such models so that relatively small and high performance models could be developed.</ns0:p><ns0:p>The idea that knowledge distillation is a one-way approach of improving the performance of a student model utilizing a teacher model has led some researchers (e.g., <ns0:ref type='bibr' target='#b72'>Wang et al. (2018)</ns0:ref> Based on some recent works such as <ns0:ref type='bibr' target='#b28'>Hooker et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b29'>Hooker et al. ( , 2020))</ns0:ref>, measures like top-1 and top-5 accuracy masks some of the pitfalls of model compression techniques. The impact of model compression on true generalization capability of the compressed models are hidden by reporting models' performances using such measures. In general, difficult-to-classify samples are the ones which are more prone to under-perform on the compressed models. 
Thus, it seems that the systems' bias get further amplified which can be a major concern in many sensitive domains where these technologies will eventually be deployed such as health care and hiring. In addition, compressed models are less robust to changes in data. Addressing these concerns will be an important research direction in the area of model compression including knowledge distillation. One implication of the work is to report class-level performances instead of comparing one overall performance measure for the system such as accuracy. Macro-averaged F1</ns0:p><ns0:p>scores across all the classes may be a more useful performance measure than accuracy. Other appropriate measures need to be used for evaluation which can compare fairness and bias across the models. The authors presented two such measures in their work. Furthermore, it will be important to investigate these issues on more domains as the current papers looked mainly on the image classification problems. One approach that might mitigate the above mentioned problems is to use a modified loss function during the distillation process that penalizes label misalignment between the teacher and the student models (e.g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Joseph et al. (2020)).</ns0:head><ns0:p>Allen-Zhu and Li, in a recent paper Allen-Zhu and Li (2020), argues how knowledge distillation in neural networks works fundamentally different as compared to the traditional random feature mappings.</ns0:p><ns0:p>The authors put forward the idea of 'multiple views' of a concept in the sense that neural network, with its hierarchical learning, learns multiple aspects about a class. Some or all of these concepts are available in a given class sample. A distilled model is forced to learn most of these concepts from a teacher model using the soft labels or other intermediate representations during the distillation process. In addition, the student model learns its own concepts due to its random initialization. Now, in order to explain the findings of <ns0:ref type='bibr' target='#b28'>Hooker et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b29'>Hooker et al. ( , 2020))</ns0:ref>, it seems that some of the less prevalent concepts which were learnt by the teacher model are missed by the student model which gives rise to the biases in the student model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>CONCLUSIONS</ns0:head><ns0:p>We present several different knowledge distillation methods applied on deep learning architectures. Some of the methods produce more than 80% decrease in model sizes <ns0:ref type='bibr' target='#b24'>(He et al., 2019;</ns0:ref><ns0:ref type='bibr'>Li et al., 2019)</ns0:ref>. Some other methods provide around 50% size reductions, but they maintain accuracy scores of teacher models <ns0:ref type='bibr' target='#b59'>(Polino et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Gao et al., 2018)</ns0:ref>. In addition, there exist distillation approaches that result in student models with better accuracy scores than their teacher models <ns0:ref type='bibr' target='#b25'>(Heo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Furlanello et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Our criteria are reductions in models' sizes and accuracy scores. Consequently, we propose distillation metric which helps in comparing between multiple students of various sizes. We also highlight different contexts and objectives of some of the knowledge distillation methods such as limited or absence of the original dataset, improving interpretability, and combining transfer learning with knowledge distillation.</ns0:p><ns0:p>Moreover, knowledge distillation is a creative process. There are no rules that guide development of student models or mapping teacher's activations to student's although there have been some recent attempts to understand them is a deeper way. As a consequence, knowledge distillation highly depends on the domain where it is applied on. Based on requirements of the specific domain, model designers could develop their distillation. We advise designers to focus on simple distillation methods (or build a simpler version of some method) that target a relatively small number of student and teacher layers. This is an important step as it decreases time needed for designers to get familiar with different behaviours of different distillation methods on their domain. After that, they could proceed with more complex methods as they would have developed intuitions about how the methods would behave on their domain of application. As a result, they could eliminate some methods without having to try them. In addition, designers could utilize distillation metric to assess their evaluations. Moreover, other relevant measures should be used in evaluating a technique and using the accuracy measure may not be sufficient by itself. Some of the challenges in the area were also discussed in this paper in addition to possible future directions.</ns0:p></ns0:div>
<ns0:div><ns0:head>14/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A Generic illustration of knowledge distillation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Illustration of knowledge distillation using a pre-trained teacher model (offline) and knowledge distillation while training the teacher model simultaneously (online).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. A tree diagram illustrating the different knowledge distillation categories of methods and the different branches within each category.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b44'>Liu et al. (2018)</ns0:ref> proposed a method of converting deep neural networks to decision trees via knowledge distillation. The proposed approach consisted of training a Convolutional Neural Network (CNN) first with the given dataset. Using the feature set from the training dataset as input and the logits from the trained model as output, they trained a classification and regression trees (CART) model, where logits are scores before the SoftMax activations.</ns0:figDesc></ns0:figure>
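To make the decision-tree distillation of Liu et al. (2018) described above concrete, the following is a minimal sketch (our own illustration, not the authors' code) of fitting a CART-style regressor on a teacher's pre-softmax logits; the helper names, the flattening of the inputs, and the tree depth are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def distill_to_tree(teacher_logits_fn, X_train, max_depth=8):
    # The teacher's pre-softmax logits act as soft regression targets
    # (one output per class), as in the CART distillation described above.
    logits = teacher_logits_fn(X_train)                  # shape: (n_samples, n_classes)
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X_train.reshape(len(X_train), -1), logits)  # multi-output regression on logits
    return tree

def tree_predict(tree, X):
    # Class prediction = argmax over the regressed logit vector.
    return np.argmax(tree.predict(X.reshape(len(X), -1)), axis=1)
```

The student tree can then be evaluated against the teacher on held-out data to quantify how much accuracy is lost in exchange for an interpretable model.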
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of different types of knowledge distillation depending on the number of teachers and students.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b39'>Lan et al. (2018)</ns0:ref>, the authors proposed the On-the-fly Native Ensemble (ONE) knowledge distillation. ONE takes a single model and creates multiple branches where each branch can be considered as individual models. All the models share the same backbone layers. The ensemble of models is viewed as the teacher while a single branch is selected to be the student model. During training, the model is trained with three loss functions. The first loss function is the cross entropy between the predictions of each individual branch and the ground truth. The second loss function is the cross entropy between the prediction distribution of the ensemble of all models and the ground truth. The third loss function is the Kullback Leibler divergence between the prediction distribution of the whole ensemble and the individual branches. The prediction distribution of the ensemble of models is produced using a gating mechanism.<ns0:ref type='bibr' target='#b51'>Min et al. (2019)</ns0:ref> presented a technique called gradual distillation arguing that quantized distillation's indirectly results in loss of accuracy and it is difficult to train directly from the hard and soft labels.</ns0:figDesc></ns0:figure>
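The following is a hedged PyTorch sketch of the three ONE loss terms described above (Lan et al., 2018); the gating weights, temperature and reduction choices are illustrative assumptions rather than the authors' exact formulation.

```python
import torch.nn.functional as F

def one_distillation_loss(branch_logits, gate_weights, targets, T=3.0):
    # branch_logits: list of (batch, classes) tensors from the shared-backbone branches
    # gate_weights:  (batch, n_branches) tensor produced by the gating mechanism
    ensemble_logits = sum(gate_weights[:, i:i + 1] * z for i, z in enumerate(branch_logits))

    # 1) cross entropy between each individual branch and the ground truth
    ce_branches = sum(F.cross_entropy(z, targets) for z in branch_logits)
    # 2) cross entropy between the gated ensemble prediction and the ground truth
    ce_ensemble = F.cross_entropy(ensemble_logits, targets)
    # 3) KL divergence distilling the ensemble (teacher) back into each branch (student)
    teacher = F.softmax(ensemble_logits.detach() / T, dim=1)
    kl = sum(F.kl_div(F.log_softmax(z / T, dim=1), teacher, reduction='batchmean')
             for z in branch_logits) * (T * T)
    return ce_branches + ce_ensemble + kl
```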
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>.(2017) proposed that instead of using the original dataset used to train a teacher for transferring knowledge to a student model, a metadata which holds a summary of activations of the teacher model during training on the original dataset. The metadata includes top layer activation statistics, all layer's activation statistics, all-layers spectral activation record, and layer-pairs spectral activation record. Then using one of the collected metadata, we can capture the view of the teacher model of the dataset and hence we can reconstruct a new dataset that can be used to train a compact student model.<ns0:ref type='bibr' target='#b75'>Yim et al. (2017)</ns0:ref> proposed a two-stage distillation for CNNs. The first stage defines two matrices between the activations of two non-consecutive layers. The first matrix corresponded to teacher network, and the second matrix corresponded to student network. Then, the student was trained to mimic the teacher's matrix. After that, the second stage began by training the student normally.<ns0:ref type='bibr' target='#b20'>Gao et al. (2018)</ns0:ref> proposed to only train the backbone of a student model to mimic the feature extraction output of a teacher model. After that, the student model is trained on ground truth data while fixing8/18PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Unlike other knowledge distillation methods where neuron responses of teacher model is the focus when transferring knowledge to students,<ns0:ref type='bibr' target='#b25'>Heo et al. (2019)</ns0:ref> proposed to focus on transferring activation boundaries of teacher instead. Activation boundary is a hyperplane that decides whether the neurons are active or not. In<ns0:ref type='bibr' target='#b55'>Pan and Srikumar (2016)</ns0:ref>, decision boundary of neural network classifier was proven to be a combination of activation boundaries, which made them an important knowledge to be transferredto student model. Based on this, Heo et al. proposed an activation transfer loss that penalized when neurons activations of teacher and student were different in hidden layers. Since both teacher and student model, most likely, would not have the same number of neurons, Heo et al. utilized a connector function that converts the vector of neurons of student model to the same size of the vector of neurons in teacher model. By applying the proposed loss function, activation boundaries of teacher model were transferred to student model. Li et al. (2019) introduced the Layer Selectivity Learning (LSL) framework for knowledge distillation. In LSL framework, some intermediate layers are selected in both the teacher and student network. The selection process is done by feeding data to the teacher model and calculating the inter-layered Gram matrix and the layered inter-class Gram matrix using the feature vectors to find layers that are the most informative and discriminative across the different classes. The selection process can be applied to the student model by training it on a dataset alone to select the same number of intermediate layers. Once 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:1:1:NEW 9 Feb 2021) Manuscript to be reviewed Computer Science intermediate layers are selected from both networks and aligned, the student network is trained with an alignment loss function ,in addition with a loss function that minimizes the prediction loss, that minimizes the difference between the feature vectors of pairs of intermediate layers from the teacher and student network. The alignment loss function will force the student intermediate layers to mimic the intermediate layers of the teacher model. Since the feature vectors of a pair of intermediate layers of the teacher and student network will not have the same dimensions, the feature vector is fed to a fully connected layer that project the feature vectors to the same dimensions. Previous knowledge distillation approaches only considered the instance features (the soft output of the layer) to be transferred from the teacher model to the student model. This made it hard for student models to learn the relationship between the instance feature and the sample with different and compact model architecture. Liu et al. (2019) proposed representing the knowledge using an instance relation graph (IRG). For each layer in the model, an IRG was created where vertices represent the instance features and edges represent the instance relationship. Transformation function was defined to transform two IRG of adjacent layers into new IRG which contained the feature space knowledge of the two layers.</ns0:figDesc></ns0:figure>
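As a simplified sketch of the intermediate-layer alignment idea used by the LSL framework described above, the code below pairs selected student and teacher layers through a learned projection and adds an L2 alignment term to the usual prediction loss; the fully connected projection, the chosen layer pairs and the equal weighting are assumptions for illustration, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAligner(nn.Module):
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        # projects student features to the teacher's feature dimension
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_feat, teacher_feat):
        # alignment loss: L2 distance between projected student features and
        # the (frozen) teacher features of the paired layer
        return F.mse_loss(self.proj(student_feat), teacher_feat.detach())

def lsl_style_loss(student_logits, targets, aligners, student_feats, teacher_feats):
    loss = F.cross_entropy(student_logits, targets)   # prediction loss
    for aligner, s, t in zip(aligners, student_feats, teacher_feats):
        loss = loss + aligner(s, t)                   # per layer-pair alignment loss
    return loss
```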
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b77'>Zhang et al. (2019)</ns0:ref> proposed an online self-distillation method that trains a single model. The model's convolution layers are first divided into sections, and a branch is added after each shallow section that contains a bottleneck layer <ns0:ref type='bibr' target='#b23'>He et al. (2016)</ns0:ref>, a fully connected layer and a classifier. The added branches are only used during training and let each section act as a classifier. The deepest classifier (the original classifier after the last convolution layer) is considered the teacher model. The deepest classifier and each shallow classifier are trained using cross entropy between their predictions and the hard labels. Additionally, each shallow classifier is trained with a Kullback Leibler divergence loss that minimizes the difference between its prediction and the soft labels of the deepest classifier. Moreover, each shallow classifier is trained with an L2 loss between the feature maps of the deepest classifier and the feature maps of its bottleneck layer.<ns0:ref type='bibr' target='#b34'>Kim et al. (2019)</ns0:ref> proposed a learning framework termed Feature Fusion Learning (FFL) that can also act as a knowledge distillation framework. An ensemble of models with either similar or different architectures is used, in addition to a special model called the fusion classifier. If FFL is used for knowledge distillation, we can consider any single individual model in the ensemble as a student model while the whole ensemble and the fusion classifier act as the teacher. Each model in the ensemble is trained</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Use cases for knowledge distillation to deploy deep learning models on small devices with limited resources.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>The authors identify the challenge of dealing with resource constraints on these mobile devices and the use of knowledge distillation techniques to address some of these challenges. Czuszynski et al. (2018) presented hand-gesture recognition using recurrent neural networks deployed on smartphones. The idea of human activity recognition based on spatio-temporal features from IoT devices like a cup, a toothbrush and a fork was presented in Lopez Medina et al. (2019). Knowledge distillation was also used for training a small model for image classification which will help IoT-based security systems to detect intrusion (Wang et al. (2020)).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b3'>Blalock et al. (2020)</ns0:ref> discussed the difficulty of assessing the state-of-the-art in model pruning as a model compression technique. The authors also listed various reasons why it is difficult to compare different pruning techniques including the ambiguities related to the architecture used or the metrics used to report the result. The authors also presented a list of best practices and proposed an open source library as a benchmark to standardize the experiments and evaluations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>;<ns0:ref type='bibr' target='#b9'>Chung et al. (2020)</ns0:ref>;<ns0:ref type='bibr' target='#b34'>Kim et al. (2019)</ns0:ref>) to explore other collaborative learning strategies where learning is mutual between</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of knowledge distillation approaches that utilize soft labels of teachers to train student model. In case of several students, results of student with largest size reduction are reported. In case of several datasets, dataset associated with lowest accuracy reduction is recorded. Baseline models have the same size as student models, but they were trained without teacher models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Targeted Architecture</ns0:cell><ns0:cell cols='2'>Utilized Data</ns0:cell><ns0:cell>Reduction in Accu-</ns0:cell><ns0:cell cols='2'>Improvement</ns0:cell><ns0:cell>Reduction in</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy Compared to</ns0:cell><ns0:cell>in</ns0:cell><ns0:cell cols='2'>Accuracy</ns0:cell><ns0:cell>Size</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Teacher</ns0:cell><ns0:cell cols='2'>Compared</ns0:cell><ns0:cell>to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Baseline</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Offline Distillation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fukuda et al. (2017)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell cols='2'>Aurora (Hirsch and</ns0:cell><ns0:cell>0.782%</ns0:cell><ns0:cell>2.238%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Pearce, 2000)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Liu et al. (2018)</ns0:cell><ns0:cell>Decision tree</ns0:cell><ns0:cell cols='3'>MNIST (LeCun, 1998) 12.796%</ns0:cell><ns0:cell>1-5%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Furlanello et al. (2018) DenseNet (Huang et al.,</ns0:cell><ns0:cell>CIFAR-100</ns0:cell><ns0:cell /><ns0:cell>2.369% (increase)</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2017)</ns0:cell><ns0:cell cols='2'>(Krizhevsky, 2009)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Polino et al. (2018)</ns0:cell><ns0:cell>Wide ResNet (Zagoruyko</ns0:cell><ns0:cell>CIFAR-100</ns0:cell><ns0:cell /><ns0:cell>0.1813%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>52.87%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>and Komodakis, 2016)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Kurata and Audhkhasi</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>SWB 1</ns0:cell><ns0:cell /><ns0:cell>2.655%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>55.07%</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mun'im et al. (2019)</ns0:cell><ns0:cell>Seq2Seq</ns0:cell><ns0:cell>WSJ 2</ns0:cell><ns0:cell /><ns0:cell>8.264%</ns0:cell><ns0:cell>8.97%</ns0:cell><ns0:cell /><ns0:cell>89.88%</ns0:cell></ns0:row><ns0:row><ns0:cell>Kimura et al. (2019)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>MNIST</ns0:cell><ns0:cell /><ns0:cell>10.526% (increase)</ns0:cell><ns0:cell cols='2'>16.359%</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Nayak et al. 
(2019)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>MNIST</ns0:cell><ns0:cell /><ns0:cell>0.57%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>40%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wu et al. (2019)</ns0:cell><ns0:cell>ResNet (He et al., 2016)</ns0:cell><ns0:cell>HMDB51</ns0:cell><ns0:cell>(Kuehne</ns0:cell><ns0:cell>0.6193%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>58.31%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>et al., 2011)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Online Distillation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lan et al. (2018)</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell>CIFAR100,</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>6.64%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Min et al. (2019)</ns0:cell><ns0:cell>Micro CNN</ns0:cell><ns0:cell>Synthetic</ns0:cell><ns0:cell>Aperture</ns0:cell><ns0:cell>0.607%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>99.44%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Radar Images 3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Jin et al. (2019)</ns0:cell><ns0:cell>MobileNetV2 (Sandler</ns0:cell><ns0:cell cols='2'>ImageNet (Deng et al.,</ns0:cell><ns0:cell>9.644%</ns0:cell><ns0:cell>6.246%</ns0:cell><ns0:cell /><ns0:cell>70.66%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>et al., 2018)</ns0:cell><ns0:cell>2009)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chen et al. (2020)</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell>CIFAR100,</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>5.39%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Guo et al. (2020)</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell>CIFAR100,</ns0:cell><ns0:cell /><ns0:cell>1.59%</ns0:cell><ns0:cell>6.29%</ns0:cell><ns0:cell /><ns0:cell>34.29%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b50'>Mathur et al. (2017)</ns0:ref> presented a wearable vision system powered by deep learning that can process the camera images in real-time locally in the device for tasks such as face recognition, scene recognition, object detection, age and gender assessment from the face images, and emotion detection. Another work on object recognition on smartphones using deep learning systems was presented by<ns0:ref type='bibr' target='#b17'>Fang et al. Fang et al. (2018)</ns0:ref>.<ns0:ref type='bibr' target='#b4'>Chauhan et al. (2018)</ns0:ref> presented a RNN based deep learning system for user authentication using breathing based acoustics data. The trained system is evaluated on smartphones, smartwatches, and Raspberry Pi. The authors show that model compression can help reduce the memory size by a factor of five without any significant loss in accuracy.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Lane et al. (2015) presented an audio-sensing deep learning framework for smartphones which can</ns0:cell></ns0:row><ns0:row><ns0:cell>infer a number of situations such as the current environment (voice, music, water, and traffic), stress</ns0:cell></ns0:row><ns0:row><ns0:cell>detection, emotion recognition (anger, fear, neutral, sadness, and happiness), and speaker identification</ns0:cell></ns0:row><ns0:row><ns0:cell>using a smartphone's audio input.</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to the Reviewers’ Comments
We thank the editor and the reviewers for their efforts and for providing constructive
comments and feedback. We believe that the modification of the original manuscript to
address those remarks has significantly improved the manuscript. Following are the
highlights of the changes done in the revised manuscript. The detailed reply to reviewers’
comments follows:
1. Added a total of 8 figures grouped under 5 main figure captions.
2. Restructured the manuscript based on the reviewers’ feedback including addition of
subsections under the survey sections.
3. Added a new section titled “Applications of Knowledge Distillation in Deep Learning”.
4. Enhanced the section “Discussion and Outlook” to reflect on the recent advances in the field.
5. Added a total of 15 new papers to include recent works in the area in addition to those needed
to address the reviewers’ comments.
Reply to Reviewer #1:
This review studies techniques which
transfer the knowledge acquired by large
deep learning models to smaller models,
which can then be used in embedded and
mobile devices.
The level of English is more than
adequate, explanations are clear and
accessible to a broad range of readers.
It mostly respects the PeerJ standard
structure, with a few extra sections that
makes sense for the contents of the
overview. However, readers may
appreciate some more subdivisions in the
Survey section, which only has one
subheading and one sub-subheading (I
believe those should be at the same level
instead, this may be a formatting errata).
The acknowledgements section includes
funders.
The review applies and is accessible to
any deep learning practitioner, including
those who may not be specialized in the
topic but may want to embed a certain
level of intelligent behavior in a small
device, a situation where knowledge
distillation techniques are of interest. This
We thank the reviewer for his/her efforts
towards reviewing the manuscript and
providing valuable feedback.
Based on the feedback from the reviewer, we
have restructured the manuscript and have
added sub-divisions under the “Survey” section.
Now, the survey section is divided into two main
sub-sections and each sub-section is further
divided into two sub-sections. We have also fixed
the numbering issues in the updated manuscript.
We would like to thank the reviewer for his/her
efforts towards reviewing the manuscript and
providing encouraging feedback.
We have created a new section titled
“Applications of Knowledge Distillation in Deep
Learning” where we discuss examples of the
field has been reviewed recently but none
of those reviews are published in a peer-reviewed journal as of now, so this would
apparently be the first review of the topic
in a reliable source, since the topic itself is
also very recent.
The introduction of the manuscript
introduces the concepts appropriately,
but I think it is missing some examples as
to what tasks can be achieved with deep
learning in embedded/mobile devices
(e.g. fitness tracking, sensor data
compression?), since the main
justification for knowledge distillation is
the need of smaller deep learning models,
but there is no explanation for what
problems these models may solve
The content of the article is well within
the aims and scope of PeerJ Computer
Science. The described methodology in
order to collect studies and results seems
appropriate and rigorous. It is also
systematic, since it introduces an
objective metric for the fitness of different
algorithms to the problem, which takes
into account the reduction in size as well
as the preservation (or even
improvement) of accuracy. The value of
this metric is lower as performance in
both aspects improves. The metric is
relative to the sizes and accuracies of the
models, and does not directly depend on
the data used, but it is computed using
the metrics reported by the original
papers, so I am unsure about its ability to
compare those models. The authors could
justify briefly the level to which this
metric is independent of the datasets
used.
The survey seems diverse and
comprehensive, all methods are
sufficiently described and the
explanations are put together well,
including detailed information about the
experiments and results of each study.
There is, however, little to no visual aid to
complement the textual explanations. I
tasks and problems that can be solved with
knowledge distillation techniques.
The metric is independent of the datasets used as
it computes the relative reduction in the model
size and the relative change in the accuracy. We
agree with the reviewer that comparing two
different systems performing different tasks may
not be completely fair. The distillation metric
may be more useful to compare different sub-solutions for a given task and select the best
compromise between compression and accuracy.
We have added more clarification in the updated
manuscript on the interpretation of the results
from the metric and have removed the direct
comparisons of different works based on the
metric.
We thank the reviewer for pointing this out. As
suggested by the reviewer, we have added a total
of 8 figures in the updated manuscript grouped
into 5 main figures captions.
think a simple diagram outlining the
main components of a deep learning-based knowledge distillation model (i.e.
teacher, student, the flow of data and
weights, or how the student is trained)
would be very helpful to give the reader
an intuition on what all these proposals
have in common.
The discussion of the results is sound,
and several guidelines are provided on
how to improve works in the topic. Some
possible future directions are also
mentioned and appropriately cited. The
conclusions summarize the manuscript
correctly and attempt to guide the novel
reader on how to use these models.
We thank the reviewer for his/her efforts
towards reviewing the manuscript and
providing encouraging feedback.
We have updated the manuscript based on the
suggestions as detailed above.
In summary, my overall opinion of this
paper is very good, but I believe some
improvements could be made that would
make it easier to read and comprehend.
My suggestions are as follows: to extend
the introduction with applications of
deep learning in embedded devices,
better subdivisions of the Survey section,
and a diagram or two explaining the
common points of the inner workings of
these models
Reply to Reviewer #2:
The introduction could include some
additional sentences to explain the main
contributions and findings of the survey.
Section numbers in the last paragraph of
the introduction do not appear.
For a survey, the background could be
more formal, introducing key concepts
and definitions. The authors could also
detail the categories or perspectives for
the survey analysis, such as inputs,
algorithms, distillation/compression
approaches, outputs, etc.
We thank the reviewer for his/her efforts
towards reviewing the manuscript and
providing valuable feedback.
We have updated the introduction to add
explanations about the main contributions and
findings of the survey.
We thank the reviewer for pointing this out.
We have fixed this issue in the updated
manuscript.
We have updated the background section to
address the concerns of the reviewer by
introducing the key concepts and definitions in
addition to the categories or perspectives for the
survey analysis.
The title mentions “applications”, so I
would expect a specific section
summarizing current applications and
others the authors suggest could be
explored in the future. Some information
is given, e.g., used datasets in each paper,
but a section from the application
perspective could be more practical for
readers interested in particular domains.
Some specific sentences that authors
should clarify are:
- Section “survey”, line 136. Did the
authors exclude papers not presenting
evaluation metrics or they were only
discarded from the comparison?
- Section “survey”, line 142. Every neural
network -> every deep neural network
The survey methodology to search, select
and summarize the papers should be
improved. The authors only use one
source (Google Scholar), so many relevant
papers could be missing. It is not clear if
the search strings are independent or not.
The number of papers found, filtered out
and finally selected should be also
indicated. Usually, exclusion and
inclusion criteria are established to clearly
state the reasons why papers are
discarded and selected. Current quality
criteria seem pretty subjective, i.e., which
are minimum acceptable citation count,
or which are the “relevant” journals and
conferences. All this information is
necessary for replicability.
Reporting of each paper is quite
complete, but it is not easy to understand
how the authors have organized the
paragraphs of each category (soft labels,
transformation). Both sections are large to
read, so the authors could think if a
subdivision would fit, e.g., based on the
application, specificity (agnostic or
depend on network architecture),
We have created a new section titled
“Applications of Knowledge Distillation in Deep
Learning” where we discuss examples of the
tasks and problems that can be solved with
knowledge distillation techniques based on the
recent publications.
We have clarified the mentioned sentences in the
updated manuscript.
We did not exclude papers that do not present
the evaluation metrics. We have removed the
direct comparisons between different work using
our proposed distillation metric.
We have selected papers from 2016 and beyond
to include works that were not covered by
previous surveys. Google Scholar indexes
multiple sources including IEEE Xplore,
ScienceDirect, and Springer. Most of the scientific
papers in the area are covered by these sources.
We understand reviewer’s concern that the
current selection criteria are not completely
objective as done in a systematic literature
review or a mapping study.
We would like to point that the current paper is
not a comprehensive survey on the topic but an
outlook for the readers to have an overall
introduction to the topic and we have selected
representative works which covers the different
ideas within the topic. We have clarified this in
the updated manuscript and there is no mention
of phrases like "systematic literature review",
"systematic mapping study", and
“comprehensive survey”.
We thank the reviewer for his/her efforts
towards reviewing the manuscript and
providing encouraging feedback.
Based on the feedback of the reviewer, we have
restructured the survey. We have divided the
major sections in the Survey into further
subsections. We hope that the new organization
is both easy for the readers and more
meaningful, at the same time. We have also
purpose of the knowledge distillation
process…
The authors propose a metric to compare
knowledge distillation techniques, but it
is not evaluated for any of the surveyed
techniques. Having a new metric could be
very useful for researchers and adding a
short study showing how it is computed
and interpreted for a subset of techniques
would add value to the paper.
The authors compare and discuss the
distillation scores obtained by different
techniques as reported in the original
publications. However, it is not clear if all
these techniques are comparable, i.e, do
they comprise the same input deep
learning? I guess not, so averaging or
comparing achieved reduction and
accuracy improvement is a bit risky. The
authors could try to extract some
common behaviors among techniques
depending on the targeted architecture,
dataset/application, etc.
added some figures to further clarify the ideas
using illustrations.
Based on the feedback from the reviewer, we
have added a short study under the section
“Distillation Metric” which shows how the
metric can be computed and how the results can be
utilized in the updated manuscript.
We thank the reviewer for pointing this out. We
agree with the reviewer that comparing two
different systems performing different tasks may
not be completely fair. The distillation metric
may be more useful to compare different sub-solutions for a given task and select the best
compromise between compression and accuracy
or when two different works are using the same
initial architecture and the task.
We have added more clarification in the updated
manuscript on the interpretation of the results
from the metric and have removed the direct
comparisons of different works based on the
metric. Also, the issues of comparing different
distillation techniques have been discussed
under the “Discussion and Outlook” section.
" | Here is a paper. Please give your review comments after reading it. |
53 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Deep learning based models are relatively large, and it is hard to deploy such models on resource-limited devices such as mobile phones and embedded devices. One possible solution is knowledge distillation whereby a smaller model (student model) is trained by utilizing the information from a larger model (teacher model). In this paper, we present an outlook of knowledge distillation techniques applied to deep learning models. To compare the performances of different techniques, we propose a new metric called distillation metric which compares different knowledge distillation solutions based on models' sizes and accuracy scores. Based on the survey, some interesting conclusions are drawn and presented in this paper including the current challenges and possible research directions.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Deep learning has succeeded in several fields such as Computer Vision (CV) and Natural Language Processing (NLP). This is due to the fact that deep learning models are relatively large and could capture complex patterns and features in data. But, at the same time, large model sizes lead to difficulties in deploying them on end devices. To solve this issue, researchers and practitioners have applied knowledge distillation on deep learning approaches for model compression. It should be emphasized that knowledge distillation is different from transfer learning. The goal of knowledge distillation is to provide smaller models that solve the same task as larger models <ns0:ref type='bibr' target='#b26'>(Hinton et al., 2015)</ns0:ref> (see figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>); whereas, the goal of transfer learning is to reduce training time of models that solve a task similar to the task solved by some other model (cf. <ns0:ref type='bibr' target='#b54'>Pan and Yang (2009)</ns0:ref>). Knowledge distillation accomplishes its goal by altering loss functions of models being trained (student models) to account for output of hidden layers of pre-trained models (teacher models). On the other hand, transfer learning achieves its goal by initializing parameters of a model by learnt parameters of a pre-trained model. The feature extractor part of a network, i.e., the stack of convolution layers, are referred to as backbone.</ns0:p><ns0:p>There are no conventions that guide student models' sizes. For example, two practitioners might have student models with different sizes although they use the same teacher model. This situation is caused by different requirements in different domains, e.g., maximum allowed model size on some device.</ns0:p><ns0:p>There exist some knowledge distillation methods that target teacher and student networks having the same size (e.g., <ns0:ref type='bibr' target='#b75'>Yim et al. (2017)</ns0:ref>). In such cases, the knowledge distillation process is referred to as self-distillation and its purpose is to further improve the performance by learning additional features that could be missing in the student model due to the random initialization Allen-Zhu and <ns0:ref type='bibr' target='#b1'>Li (2020)</ns0:ref>. Although an algorithm is developed to distill knowledge from a teacher model to a student model having the same size, the algorithm can be used to distill knowledge from a teacher to a smaller student, as well. This is because, based on our survey, there is no restriction on model sizes, and it is up to model designers to map teacher's activations to student's. So, in general settings, knowledge distillation is utilized to provide smaller student models that have comparable accuracy scores to their corresponding teacher models. The distillation process can be performed in offline or online manner. In offline distillation, the knowledge distillation process is performed using a pre-trained teacher model. While online distillation is for methods that perform knowledge distillation while training the teacher model. The illustration of the two subcategories can be seen in figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p><ns0:p>Consequently, one could compare different knowledge distillation algorithms by their reductions in model sizes. In addition, algorithms might be compared by how much accuracy they maintain compared to teacher models. 
There is no rule that governs how much reduction is best for all cases. For instance, if one needs to apply a knowledge distillation algorithm, they need to compare the algorithm's performance, in terms of reductions in size and accuracy, to their system's requirements. Based on the requirements, they can decide which algorithm fits best in their situation. To ease the process of comparison, we developed a distillation metric which can compare knowledge distillation results based on model sizes and accuracy scores. For more details on the distillation metric, please refer to section 4.</ns0:p></ns0:div>
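To make the preceding description of knowledge distillation concrete, here is a minimal PyTorch sketch of the widely used soft-label distillation loss of Hinton et al. (2015); the temperature T and the mixing weight lam are illustrative choices, not values prescribed by the surveyed works.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, lam=0.9):
    # soft targets from the (frozen) teacher, softened by temperature T
    soft_targets = F.softmax(teacher_logits.detach() / T, dim=1)
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       soft_targets, reduction='batchmean') * (T * T)
    hard = F.cross_entropy(student_logits, targets)  # usual supervised loss
    return lam * distill + (1.0 - lam) * hard
```

Many of the offline methods surveyed in section 5 can be viewed as replacing or augmenting the soft-label term above with other forms of teacher knowledge, such as feature maps, activation boundaries or relation graphs.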
<ns0:div><ns0:p>There are different knowledge distillation approaches applied to deep learning models. For example, there exist approaches that distill knowledge from a single teacher to a single student, while other approaches distill knowledge from several teachers to a single student. Knowledge distillation can also be applied to provide an ensemble of student networks. In section 5, we present recent knowledge distillation approaches that are applied to deep learning based architectures.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DISTILLATION METRIC</ns0:head><ns0:p>We propose distillation metric to compare different knowledge distillation methods and to select suitable model for deployment from a number of student models of various sizes. The metric incorporates ratio of a student's size to teacher's size and student's accuracy score to teacher's accuracy score . To have a good reduction in size, first ratio should be as small as possible. For a distillation method to have a good maintainability of accuracy, second ratio should be as close to 1 as possible. To satisfy these requirements, we develop the following equation:</ns0:p><ns0:formula xml:id='formula_0'>DS = α * ( student s teacher s ) + (1 − α) * (1 − student a teacher a )<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where DS stands for distillation score, student s and student a are student size and accuracy respectively, and teacher s and teacher a are teacher size and accuracy respectively. Parameter α ∈ [0, 1] is a weight to indicate importance of first and second ratio, i.e., size and accuracy. The weight is assigned by distillation designers based on their system's requirements. For example, if some system's requirements prefer small model sizes over maintaining accuracy, designers might have α > 0.5 that best satisfies their requirements.</ns0:p><ns0:p>It should be noted that when a student's accuracy is better than its teacher, the second ratio would be greater than 1. This causes the right operand of the addition operation (i.e., 1 -second ratio) to evaluate to a negative value. Hence, DS is decreased, and it could be less than zero especially if weight of the second ratio is larger. This is a valid result since it indicates a very small value for the first ratio as compared to the second ratio. In other words, this behaviour indicates a large reduction in model size while, at the same time, providing better accuracy scores than the teacher model. As presented in section 5, a student model with a better accuracy is not a common case. It could be achieved, for example, by having an ensemble of student models.</ns0:p><ns0:p>Regarding the behaviour of the distillation metric, it is as follows: The closer the distillation score is to 0, the better the knowledge distillation. To illustrate, an optimal knowledge distillation algorithm would provide a value that is very close to 0 for the first ratio (e.g., the student's size is very small as compared to the teacher's size), and it would produce a value of 1 for second ratio (e.g., the student and the teacher models have same accuracy score). As a result, the distillation score approaches 0 as the first ratio approaches 0 and the second ratio approaches 1.</ns0:p><ns0:p>To demonstrate the usage of distillation metric, we use the results reported in <ns0:ref type='bibr' target='#b69'>Walawalkar et al. (2020)</ns0:ref> using CIFAR100 dataset <ns0:ref type='bibr' target='#b36'>Krizhevsky (2009)</ns0:ref> and the Resnet44 architecture <ns0:ref type='bibr' target='#b23'>He et al. (2016)</ns0:ref>. In their experiment, they trained four student models having relative sizes of 62.84%, 35.36%, 15.25% and 3.74%</ns0:p><ns0:p>as compared to the teacher model. The teacher model achieved 71.76% accuracy, while the students achieved 69.12%, 67.04%, 62.87% and 43.11% accuracy, respectively. Considering that the model accuracy and size reductions are equally important, we set α = 0.5. 
Calculating the distillation metric for the four student models, we get scores of 0.333, 0.210, 0.138 and 0.218, respectively. Based on these results, we can notice that the model with the relative size of 15.25% (100,650 parameters) has the best balance between size and accuracy as compared to the teacher model and the other student models. The surveyed approaches are summarized in Tables <ns0:ref type='table' target='#tab_3'>1 and 2</ns0:ref>.</ns0:p></ns0:div>
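The short helper below (our own illustration) computes DS from equation (1) and reproduces the scores in the worked example above; the teacher size is normalised to 1 so that the relative student sizes can be used directly.

```python
def distillation_score(student_size, teacher_size, student_acc, teacher_acc, alpha=0.5):
    size_ratio = student_size / teacher_size
    acc_ratio = student_acc / teacher_acc
    return alpha * size_ratio + (1.0 - alpha) * (1.0 - acc_ratio)

teacher_acc = 71.76
# (relative size, accuracy) of the four Resnet44 students of Walawalkar et al. (2020)
students = [(0.6284, 69.12), (0.3536, 67.04), (0.1525, 62.87), (0.0374, 43.11)]
for rel_size, acc in students:
    print(round(distillation_score(rel_size, 1.0, acc, teacher_acc), 3))
# prints approximately 0.333, 0.210, 0.138 and 0.218
```

Lower scores indicate a better size/accuracy trade-off, which is why the 15.25% student is preferred above.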
<ns0:div><ns0:head n='5'>SURVEY</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.1'>Techniques That Distill Knowledge From Soft Labels of the Teacher Models</ns0:head><ns0:p>5.1.1 Offline Distillation <ns0:ref type='bibr' target='#b19'>Fukuda et al. (2017)</ns0:ref> proposed a knowledge distillation approach by training a student model using multiple teacher models. Unlike other multi-teacher approaches that average the output of the teacher models to create the soft labels and then used to train the student model <ns0:ref type='bibr' target='#b73'>(Wu et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Chebotar and Waters, 2016;</ns0:ref><ns0:ref type='bibr' target='#b49'>Markov and Matsui, 2016)</ns0:ref>, the approach proposed by Fukuda et al. was to opt out of combining the teachers output distribution and to train the student on the individual output distribution.</ns0:p><ns0:p>The authors argued that this would help the student model to observe the input data from different angles and would help the model to generalize better.</ns0:p><ns0:p>While deep learning has achieved great success across a wide range of domains, it remains difficult to identify the reasoning behind model predictions, especially if models are complex. To tackle this issue, <ns0:ref type='bibr' target='#b20'>Furlanello et al. (2018)</ns0:ref> proposed an ensemble knowledge distillation method called Born-Again Neural Networks. The method considered the issue of teacher and student models having the same architecture (self distillation). The method first trained a teacher model using a standard approach. Then, it trained a student model using the ground truth and teacher's predictions. After that, it trained a second student model using the ground truth and previous student's predictions, and so on (see figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>). For instance, student i was trained by utilizing training labels and predictions of</ns0:p><ns0:formula xml:id='formula_1'>student i−1 for i ∈ [1, n],</ns0:formula><ns0:p>where n is the number of student models. When student models were used for prediction, their results were averaged. Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science models are models whose weights are represented by a limited number of bits such as 2-bit or 4-bit integers.</ns0:p><ns0:p>Quantized models are used to develop hardware implementations of deep learning architectures as they provide lower power consumption and lower processing times compared to normal models (full-precision models) <ns0:ref type='bibr' target='#b10'>(Courbariaux et al., 2015)</ns0:ref>. The distillation approach had 2 variants. When tackling problems where only few samples are available, it can make models overfit easily. <ns0:ref type='bibr' target='#b35'>Kimura et al. (2019)</ns0:ref> proposed a method that allowed training networks with few samples while avoiding overfitting using knowledge distillation. In their approach, they first trained a reference model with few samples using Gaussian processes (GP) instead of neural networks. Then, the samples used for training were augmented using inducing point method via iterative optimization. Finally, the student model was trained with the augmented data using loss function defined in the paper with the GP teacher model to be imitated by the student model. <ns0:ref type='bibr' target='#b53'>Nayak et al. (2019)</ns0:ref> proposed a method to train the student model without using any dataset or metadata. 
The method worked by extracting data from the teacher model through modeling the data distribution in the softmax space. Hence, new samples could be synthesized from the extracted information and used to train the student model. Unlike generative adversarial networks (GANs) where they generate data that is similar to the real data (by fooling a discriminative network), here the synthesized data was generated based on triggering the activation of the neurons before the softmax function.</ns0:p><ns0:p>Wu et al. ( <ns0:ref type='formula'>2019</ns0:ref>) developed a multi-teacher distillation framework for action recognition. Knowledge was transferred to the student by taking a weighted average of three teachers soft labels (see figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>). The three teachers are fed different inputs. The first teacher is fed with the residual frame, while the second teacher is fed with motion vector. The last teacher is fed with the I-frame image, similar to the student model.</ns0:p></ns0:div>
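A hedged sketch of the weighted-average soft-label construction used in such multi-teacher setups is given below; the teacher weights, temperature and loss mixing are illustrative assumptions, and each teacher is assumed to receive its own view of the input (residual frame, motion vector or I-frame).

```python
import torch
import torch.nn.functional as F

def multi_teacher_soft_labels(teacher_logits_list, weights, T=3.0):
    # weighted average of the teachers' temperature-softened distributions
    probs = [w * F.softmax(z.detach() / T, dim=1)
             for w, z in zip(weights, teacher_logits_list)]
    return torch.stack(probs).sum(dim=0)

def student_loss(student_logits, teacher_logits_list, targets,
                 weights=(0.4, 0.3, 0.3), T=3.0, lam=0.7):
    soft = multi_teacher_soft_labels(teacher_logits_list, weights, T)
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       soft, reduction='batchmean') * (T * T)
    return lam * distill + (1 - lam) * F.cross_entropy(student_logits, targets)
```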
<ns0:div><ns0:head n='5.1.2'>Online Distillation</ns0:head><ns0:p>In Training a compact student network to mimic a well-trained and converged teacher model can be challenging. The same rationality can be found in school-curriculum, where students at early stages are taught easy courses and further increase the difficulty as they approach later stages. From this observation, <ns0:ref type='bibr' target='#b32'>Jin et al. (2019)</ns0:ref> proposed that instead of training student models to mimic converged teacher models, student models were trained on different checkpoints of teacher models until teacher models converged.</ns0:p><ns0:p>For selecting checkpoints, a greedy search strategy was proposed that finds efficient checkpoints that are easy for the student to learn. Once checkpoints were selected, a student model's parameters were optimized sequentially across checkpoints, while splitting data used for training across the different stages depending on its hardness defined by a hardness metric that was proposed by the authors.</ns0:p><ns0:p>An ensemble knowledge distillation approach named Online Knowledge Distillation with Diverse peers (OKDDip) was proposed by <ns0:ref type='bibr' target='#b6'>Chen et al. (2020)</ns0:ref>. OKDDip uses an ensemble of models as a teacher (named auxiliary peer) and a single model within the group as a student (named group leader). Unlike ONE, the ensemble of models can be independent models or have shared layers. Each model is trained to reduce the cross entropy between its predictions and the ground truth. Additionally, each model will take a weighted average of predictions of all models in the ensemble and uses Kullback Leibler divergence loss function between its prediction distribution and the weighted average of predictions of the ensemble.</ns0:p><ns0:p>Each auxiliary peer will assign different weights to all other auxiliary peers in the group to determine how the prediction distribution is aggregated. For the group leader, it will just take the average of the prediction of all the auxiliary peers. The weight assignment process for the auxiliary peers takes the feature extracted for each peer and projects it to two sub-spaces by applying linear transformation with learned weights. The weights for each peer is then calculated similar to the self-attention mechanism using the projected sub-spaces <ns0:ref type='bibr' target='#b67'>Vaswani et al. (2017)</ns0:ref>.</ns0:p><ns0:p>Another ensemble knowledge distillation method was proposed by <ns0:ref type='bibr' target='#b22'>Guo et al. (2020)</ns0:ref> named Knowledge Distillation via Collaborative Learning (KDCL). KDCL trains on input data that is distorted differently for each student in the ensemble. The cross-entropy loss function between prediction and hard labels is used to train each student model in addition to the Kullback Leibler divergence loss between the prediction and the soft labels. The authors proposed four different methods to generate the soft labels. The first method selects a single student's probability distribution in the ensemble as soft label that produces the minimum cross entropy loss. The second method finds the best linear combination of the students logtis that minimizes the cross-entropy loss through convex optimization and use it to generate the soft labels via softmax function. The third method subtracts the logit that corresponds to the target class from all logits for each student. 
Then, it constructs the ensemble logits by selecting the minimum logit for each class from all the students in the ensemble which later is fed to softmax to create the soft labels. The fourth method of producing the soft labels takes the weighted average of students' outputs. The weight for each student is assigned after every training epoch and it is based on its performance on the validation set. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='formula'>2019</ns0:ref>) proposed a distillation method for encoder-decoder networks that trained a student model by comparing its soft labels to the teacher's labels and the ground truth. Moreover, the student will also compare its encoders outputs to that of the teacher. <ns0:ref type='bibr' target='#b24'>He et al. (2019)</ns0:ref> proposed to train an auto-encoder network to compress feature maps of the teacher.</ns0:p><ns0:p>The student is later trained to match the compressed feature maps of the teacher model. Additionally, the student was also trained to match its feature map affinity matrix to the of the teacher model. This was needed because student network could not capture long-term dependencies due to its relatively small size.</ns0:p><ns0:p>Unlike other knowledge distillation methods where neuron responses of teacher model is the focus when transferring knowledge to students, <ns0:ref type='bibr' target='#b25'>Heo et al. (2019)</ns0:ref> proposed to focus on transferring activation boundaries of teacher instead. Activation boundary is a hyperplane that decides whether the neurons are active or not. In <ns0:ref type='bibr' target='#b55'>Pan and Srikumar (2016)</ns0:ref>, decision boundary of neural network classifier was proven to be a combination of activation boundaries, which made them an important knowledge to be transferred In LSL framework, some intermediate layers are selected in both the teacher and the student network.</ns0:p><ns0:p>The selection process is done by feeding data to the teacher model and calculating the inter-layer Gram matrix and the layer inter-class Gram matrix using the feature vectors in order to find layers that are the most informative and discriminative across the different classes. The selection process can be applied <ns0:ref type='bibr' target='#b78'>Zhou et al. (2018)</ns0:ref> proposed to train the teacher (named booster net) and the student (named lightweight net) together. This was done by sharing the backbone layers of the two models during training and then using a function where it contained the loss of the booster network, the loss of the lightweight network, and the mean square error between the logits before softmax activation of both the networks. To prevent the objective function from hindering the performance of the booster network, a gradient block scheme was developed to prevent the booster network's specific parameter from updating during the backpropagation of the objective function which would allow the booster network to directly learn from the ground truth labels. To improve their approach further, the authors used the knowledge distillation loss function from <ns0:ref type='bibr' target='#b26'>Hinton et al. (2015)</ns0:ref> in their objective function. Manuscript to be reviewed</ns0:p></ns0:div>
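A simplified sketch of the shared-backbone joint objective of Zhou et al. (2018) described above is shown next; detaching the booster logits in the mean-square term stands in for the gradient block scheme, and the weighting factor is an assumption for illustration.

```python
import torch.nn.functional as F

def booster_lightweight_loss(booster_logits, light_logits, targets, beta=1.0):
    # both heads learn directly from the ground truth
    loss_booster = F.cross_entropy(booster_logits, targets)
    loss_light = F.cross_entropy(light_logits, targets)
    # hint term on the pre-softmax logits; detach() blocks gradients into the
    # booster so that it keeps learning only from the ground truth labels
    hint = F.mse_loss(light_logits, booster_logits.detach())
    return loss_booster + loss_light + beta * hint
```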
<ns0:div><ns0:head n='5.2.2'>Online Distillation</ns0:head><ns0:p>Computer Science distillation, we can consider any single individual model in the ensemble as a student model while the whole ensemble and the fusion classifier will act as the teacher. Each model in the ensemble is trained normally with the ground-truth labels while the fusion classifier takes the feature maps of all the models in the ensemble as an input in addition to the ground-truth labels. Furthermore, the ensemble models will distill its knowledge to the fusion classifier in the form of the average of all predictions to be used with Kullback Leibler divergence loss in order to transfer the knowledge of the ensemble to the fusion classifier.</ns0:p><ns0:p>Moreover, the fusion classifier will also distill its knowledge back to each model in the ensemble in the form of its prediction distribution to be used with Kullback Leibler divergence loss. This way, the knowledge distillation is mutual between the fusion classifier and the ensemble. After training, any model in the ensemble can be selected to be deployed or the whole ensemble with the fusion classifier can be deployed in case of lenient hardware constraints. both the student and the teacher model will learn from each other. Due to the difference in the number of channels of the feature maps between the two networks, a transfer layer is used to convert the number of channels of the student network to that of the teacher network.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> provides a summary of the presented works. It shows that the best approach in terms of size reduction is proposed by <ns0:ref type='bibr'>Li et al. (2019)</ns0:ref> with a reduction of 95.86% in size. The table also shows that the best approach in terms of maintaining accuracy is proposed by <ns0:ref type='bibr' target='#b25'>Heo et al. (2019)</ns0:ref> with an increase in accuracy of 6.191%. However, their experiments were conducted on a teacher model that is trained and evaluated on two different datasets. Their experiments focused on combining knowledge transfer with knowledge distillation. As for the improvement compared to the baseline model, the Layer Selectivity Learning (LSL) proposed by <ns0:ref type='bibr'>Li et al. (2019)</ns0:ref> achieved the best improvement of 16.89% increase in accuracy.</ns0:p></ns0:div>
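The mutual distillation between the ensemble and the fusion classifier in FFL can be sketched as below; this is an illustration under assumed shapes, not the authors' implementation, and the feature fusion step itself is abstracted away so that only the two KL terms and the supervised losses are shown.

```python
import torch
import torch.nn.functional as F

def ffl_losses(member_logits, fusion_logits, targets, T=3.0):
    # supervised cross entropy for every ensemble member and the fusion classifier
    ce = sum(F.cross_entropy(z, targets) for z in member_logits)
    ce = ce + F.cross_entropy(fusion_logits, targets)

    # ensemble -> fusion: the fusion classifier mimics the averaged member predictions
    avg_member = torch.stack([F.softmax(z.detach() / T, dim=1)
                              for z in member_logits]).mean(dim=0)
    to_fusion = F.kl_div(F.log_softmax(fusion_logits / T, dim=1), avg_member,
                         reduction='batchmean') * (T * T)

    # fusion -> ensemble: each member mimics the fusion classifier's predictions
    fusion_soft = F.softmax(fusion_logits.detach() / T, dim=1)
    to_members = sum(F.kl_div(F.log_softmax(z / T, dim=1), fusion_soft,
                              reduction='batchmean') for z in member_logits) * (T * T)

    return ce + to_fusion + to_members
```

After training, a single ensemble member can be deployed on its own, or the full ensemble with the fusion classifier can be kept when hardware constraints are lenient.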
<ns0:div><ns0:head n='6'>APPLICATIONS OF KNOWLEDGE DISTILLATION</ns0:head><ns0:p>Traditionally, deep learning models typically run on Cloud computing platforms delivering the results to the smart devices over a network. Although this model is feasible in some situations, it is not preferred in many other situations where delay is not tolerable or data privacy is a concern. Moreover, unpredictable network connections between the cloud and the device can also pose significant challenges. Thus, running the deep learning system on local devices is an important requirement in many domains and has a wide variety of applications including smart cities, self-driving cars, smart homes, medical devices, and entertainment <ns0:ref type='bibr' target='#b68'>Véstias et al. (2020)</ns0:ref>. Knowledge distillation allows developers to shrink down the size of deep learning models in order for them to fit into resource-limited devices having limited memory and power</ns0:p></ns0:div>
<ns0:div><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Summary of knowledge distillation approaches that distill knowledge from parts other than, or in addition to, the soft labels of the teacher models to be used for training the student models. In case of several students, results of the student with the largest size reduction are reported. In case of several datasets, the dataset associated with the lowest accuracy reduction is recorded. Baseline models have the same size as the corresponding student models, but they were trained without the teacher models.</ns0:p><ns0:p>as illustrated in Figure <ns0:ref type='figure' target='#fig_13'>5</ns0:ref>. In this section we present some typical applications of knowledge distillation based on the recent literature.</ns0:p><ns0:p>The authors identified some of the challenges that need to be addressed in order for the system to be of practical use. Among the challenges identified were the need for the system to have low latency and be efficient in battery usage. This can be achieved in part by developing light-weight models using knowledge distillation techniques. <ns0:ref type='bibr' target='#b58'>Plötz and Guan (2018)</ns0:ref> presented the use of a deep learning system trained on the cloud to be deployed on smart phones for human activity recognition (HAR) using the data available from smartphone sensors.</ns0:p><ns0:p>The authors identified the challenge of dealing with resource constraints on these mobile devices and the use of knowledge distillation techniques to address some of these challenges. <ns0:ref type='bibr' target='#b11'>Czuszynski et al. (2018)</ns0:ref> presented hand-gesture recognition using recurrent neural networks deployed on smartphones. The idea </ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>DISCUSSION AND OUTLOOK</ns0:head><ns0:p>The distillation score proposed in this work cannot be used for a fair comparison between the different methods mentioned in this work. Each reported method utilizes different datasets and architectures and uses knowledge distillation for different applications. <ns0:ref type='bibr' target='#b3'>Blalock et al. (2020)</ns0:ref> discussed the difficulty of assessing the state-of-the-art in model pruning as a model compression technique. The authors also listed various reasons why it is difficult to compare different pruning techniques, including the ambiguities related to the architecture used or the metrics used to report the results. The authors also presented a list of best practices and proposed an open source library as a benchmark to standardize the experiments and evaluations.</ns0:p><ns0:p>Reporting the reduction in model size as well as the change in accuracy for a student model as compared to the corresponding teacher model is useful in our opinion. Although most authors report this information, some authors do not report either of the two pieces of information. Moreover, comparing the performance of a student model to a baseline model (e.g., a trained-from-scratch model of comparable size to the student model) is also very informative, and we believe that it should be reported by authors.</ns0:p><ns0:p>Regarding the future of knowledge distillation, most researchers did not provide comments. Nevertheless, <ns0:ref type='bibr' target='#b59'>Polino et al. (2018)</ns0:ref> suggested the use of reinforcement learning to enhance the development of student models. According to the authors, it is not clear how to develop student models that meet memory and processing time constraints. Building a program based on reinforcement learning whose objective is to optimize memory and processing time requirements would ease the development of student models.</ns0:p><ns0:p>In addition, most researchers focus on computer vision tasks. For instance, out of the surveyed work, few considered NLP tasks. Recently, several language models based on the transformer architecture <ns0:ref type='bibr' target='#b67'>(Vaswani et al., 2017)</ns0:ref> have been proposed, such as Bidirectional Encoder Representations from Transformers (BERT) <ns0:ref type='bibr' target='#b14'>(Devlin et al., 2018)</ns0:ref>. These models have parameters in the order of hundreds of millions. This issue has motivated several researchers to utilize knowledge distillation <ns0:ref type='bibr' target='#b65'>(Sanh et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b66'>Sun et al., 2019)</ns0:ref>. However, knowledge distillation for these models has not yet been well investigated. Transformer-based language models provide better results in terms of accuracy scores and processing times compared to Recurrent Neural Networks (RNNs) <ns0:ref type='bibr' target='#b14'>(Devlin et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b61'>Radford et al., 2019)</ns0:ref>. As a result, it is important to study knowledge distillation on such models so that relatively small and high-performance models could be developed.</ns0:p><ns0:p>The idea that knowledge distillation is a one-way approach to improving the performance of a student model utilizing a teacher model has led some researchers (e.g., <ns0:ref type='bibr' target='#b72'>Wang et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>Chung et al. 
(2020)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Kim et al. (2019)</ns0:ref>) to explore other collaborative learning strategies where learning is mutual between teachers and students.</ns0:p><ns0:p>Based on some recent works such as <ns0:ref type='bibr' target='#b28'>Hooker et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b29'>Hooker et al. ( , 2020))</ns0:ref>, measures like top-1 and top-5 accuracy mask some of the pitfalls of model compression techniques. The impact of model compression on the true generalization capability of the compressed models is hidden by reporting models' overall performances using such measures. In general, difficult-to-classify samples are the ones which are more prone to under-perform on the compressed models. Thus, it seems that the systems' bias gets further amplified, which can be a major concern in many sensitive domains where these technologies will eventually be deployed, such as healthcare and hiring. In addition, compressed models are less robust to changes in data. Addressing these concerns will be an important research direction in the area of model compression, including knowledge distillation. One implication of this work is to report class-level performances instead of comparing one overall performance measure for the system, such as accuracy. Macro-averaged F1 scores across all the classes may be a more useful performance measure than accuracy. Other appropriate measures need to be used for evaluation, which can compare fairness and bias across the models. The authors presented two such measures in their work. Furthermore, it will be important to investigate these issues in more domains, as the current papers looked mainly at image classification problems. One approach that might mitigate the above-mentioned problems is to use a modified loss function during the distillation process that penalizes label misalignment between the teacher and the student models (e.g.</ns0:p></ns0:div>
<ns0:div><ns0:head>Joseph et al. (2020)).</ns0:head><ns0:p>In a recent paper, Allen-Zhu and Li (2020) argue that knowledge distillation in neural networks works fundamentally differently from traditional random feature mappings.</ns0:p><ns0:p>The authors put forward the idea of 'multiple views' of a concept, in the sense that a neural network, with its hierarchical learning, learns multiple aspects of a class. Some or all of these concepts are available in a given class sample. A distilled model is forced to learn most of these concepts from a teacher model using the soft labels or other intermediate representations during the distillation process. In addition, the student model learns its own concepts due to its random initialization. Now, in order to explain the findings of <ns0:ref type='bibr' target='#b28'>Hooker et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b29'>Hooker et al. ( , 2020))</ns0:ref>, it seems that some of the less prevalent concepts that were learnt by the teacher model are missed by the student model, which gives rise to increased biases in the student model.</ns0:p></ns0:div>
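To make the idea of penalizing label misalignment concrete, the sketch below adds a simple disagreement penalty to a standard soft-label distillation loss. This is an illustrative assumption of how such a loss could look, not the formulation of Joseph et al. (2020); the temperature and weighting coefficients are invented.

```python
# Sketch: distillation loss with an extra penalty on samples where the student's
# predicted label disagrees with the teacher's predicted label.
import torch
import torch.nn.functional as F

def misalignment_penalized_kd(student_logits, teacher_logits, labels,
                              T=4.0, alpha=0.5, beta=1.0):
    soft_t = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1), soft_t,
                  reduction='batchmean') * T * T          # standard soft-label term
    ce = F.cross_entropy(student_logits, labels)          # hard-label term
    # penalty applied only to the disagreeing samples
    disagree = (student_logits.argmax(1) != teacher_logits.argmax(1)).float()
    per_sample = F.cross_entropy(student_logits, teacher_logits.argmax(1),
                                 reduction='none')
    penalty = (disagree * per_sample).mean()
    return alpha * ce + (1 - alpha) * kd + beta * penalty

loss = misalignment_penalized_kd(torch.randn(8, 10), torch.randn(8, 10),
                                 torch.randint(0, 10, (8,)))
```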
<ns0:div><ns0:head n='8'>CONCLUSIONS</ns0:head><ns0:p>We present several different knowledge distillation methods applied to deep learning architectures. Some of the methods produce a more than 80% decrease in model size <ns0:ref type='bibr' target='#b24'>(He et al., 2019;</ns0:ref><ns0:ref type='bibr'>Li et al., 2019)</ns0:ref>. Some other methods provide around 50% size reductions, but they maintain the accuracy scores of the teacher models <ns0:ref type='bibr' target='#b59'>(Polino et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Gao et al., 2018)</ns0:ref>. In addition, there exist distillation approaches that result in student models with better accuracy scores than their corresponding teacher models <ns0:ref type='bibr' target='#b25'>(Heo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Furlanello et al., 2018)</ns0:ref>. Our criteria in the present study are based on reductions in models' sizes and accuracy scores. Consequently, we propose the distillation metric, which helps in comparing multiple students of various sizes. We also highlight different contexts and objectives of some of the knowledge distillation methods, such as limited availability or absence of the original dataset, improving interpretability, and combining transfer learning with knowledge distillation.</ns0:p></ns0:div>
<ns0:div><ns0:head>14/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Moreover, knowledge distillation is a creative process. There are no rules that guide the development of student models or the mapping of the teacher's activations to the student's, although there have been some recent attempts to understand them in a deeper way. As a consequence, knowledge distillation depends highly on the domain where it is applied. Based on the requirements of the specific domain, model designers could develop their own distillation approach. We advise designers to focus on simple distillation methods (or build a simpler version of some method) that target a relatively small number of student and teacher layers. This is an important step, as it decreases the time needed for designers to get familiar with the different behaviours of different distillation methods in their domain. After that, they could proceed with more complex methods, as they would have developed intuitions about how the methods would behave in their domain of application. As a result, they could eliminate some methods without having to try them. In addition, designers could utilize the distillation metric to assess their evaluations. Moreover, other relevant measures should be used in evaluating a technique, as the accuracy measure alone may not be sufficient.</ns0:p><ns0:p>Some of the challenges in the area were discussed in this paper, in addition to possible future directions.</ns0:p><ns0:p>Last but not least, we also discussed in this paper some of the practical applications of knowledge distillation in real-world problems.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Illustration of knowledge distillation using a pre-trained teacher model (offline) and knowledge distillation while training the teacher model simultaneously (online).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Liu et al. (2018) proposed a method of converting deep neural networks to decision trees via knowledge distillation. The proposed approach consisted of training a Convolutional Neural Network (CNN) first with the given dataset. Using the feature set from the training dataset as input and the logits from the trained model as output, they trained a classification and regression trees (CART) model, where logits are scores before the Softmax activations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of different types of knowledge distillation depending on the number of teachers and students.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>First variant was called quantized distillation, and it trained a quantized student model and a full-precision student model. The two models were trained according to true labels and the teacher's predictions. The main purpose of the full-precision model was to compute gradients and update the quantized model accordingly. As claimed byPolino et al., the reason behind this process was that there was no objective function that accounted for quantized weights. This issue motivated Polino et al. to develop the second variant of their knowledge distillation approach, and they called it differentiable quantization. They defined an objective function to address the issue of quantized weights. As a result, there would be no need for a full-precision student model.<ns0:ref type='bibr' target='#b39'>Kurata and Audhkhasi (2018)</ns0:ref> developed a distillation approach that targeted sequence models<ns0:ref type='bibr' target='#b2'>(Bahdanau et al., 2016)</ns0:ref> for speech recognition. The distillation goal was to transfer knowledge of a Bidirectional Long Short-Term Memory (BiLSTM) model to an LSTM model. This was achieved by considering teacher's soft labels and comparing outputs of three time steps of the teacher network to a single time step output of the student network. Furthermore,<ns0:ref type='bibr' target='#b52'>Mun'im et al. (2019)</ns0:ref> proposed a distillation approach for Seq2Seq speech recognition. The approach trained a student network to match teacher k-best outputs generated with beam search, where k is a hyper-parameter.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b40'>Lan et al. (2018)</ns0:ref>, the authors proposed the On-the-fly Native Ensemble (ONE) knowledge distillation. ONE takes a single model and creates multiple branches where each branch can be considered as individual models. All the models share the same backbone layers. The ensemble of models is viewed as the teacher while a single branch is selected to be the student model. During training, the model is trained with three loss functions. The first loss function is the cross entropy between the predictions of each individual branch and the ground truth. The second loss function is the cross entropy between the prediction distribution of the ensemble of all models and the ground truth. The third loss function is the Kullback Leibler divergence between the prediction distribution of the whole ensemble and the individual branches. The prediction distribution of the ensemble of models is produced using a gating mechanism.Min et al. (2019) presented a technique called gradual distillation arguing that quantized distillationindirectly results in loss of accuracy and it is difficult to train directly from the hard and soft labels.The gradual distillation approach trains the teacher model and the student model simultaneously. The output from the teacher's network at each step is used to guide the student learning. Accordingly, the7/18 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021)Manuscript to be reviewed Computer Science loss function for the student's network has two components: the cross-entropy loss between the output of the student's network and the hard labels, and the cross-entropy loss between the student output and the teacher's target.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>( 2019 )</ns0:head><ns0:label>2019</ns0:label><ns0:figDesc>with an increase in accuracy by 10.526%. However, their work utilizes knowledge distillation to overcome overfitting when dealing with small amount of training samples. Furthermore, they used a Gaussian process as a teacher model which can explain the increase in accuracy of the student CNN model. Additionally,<ns0:ref type='bibr' target='#b35'>Kimura et al. (2019)</ns0:ref> approach helped the student model to generalize better on small number of training samples and achieve the highest increase in accuracy compared to the baseline model which overfitted on the training data.5.2 Techniques That Distills Knowledge from Other Parts of the Teacher Model With orWithout Soft Labels5.2.1 Offline Distillation<ns0:ref type='bibr' target='#b47'>Lopes et al. (2017)</ns0:ref> proposed transferring knowledge to a student model using a metadata which holds a8/18PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>summary of activations of the teacher model during training on the original dataset, instead of using the original dataset used to train the teacher. The metadata includes top layer activation statistics, all layers activation statistics, all-layers spectral activation record, and layer-pairs spectral activation record. Then using one of the collected metadata, we can capture the view of the teacher model of the dataset and hence we can reconstruct a new dataset that can be used to train a compact student model.<ns0:ref type='bibr' target='#b75'>Yim et al. (2017)</ns0:ref> proposed a two-stage distillation for CNNs. The first stage defines two matrices between the activations of two non-consecutive layers. The first matrix corresponded to the teacher network, and the second matrix corresponded to the student network. Then, the student was trained to mimic the teacher's matrix. After that, the second stage began by training the student normally.<ns0:ref type='bibr' target='#b21'>Gao et al. (2018)</ns0:ref> proposed to only train the backbone of a student model to mimic the feature extraction output of a teacher model. After that, the student model is trained on ground truth data while freezing the parameters of the backbone layers. The knowledge distillation process only happened during training of the backbone layers of the smaller student model, which allowed it to be trained on different dataset than the teacher model.Wang et al. (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>to the student model. Based on this, Heo et al. proposed an activation transfer loss that penalized when 9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021) Manuscript to be reviewed Computer Science neurons activations of teacher and student were different in the hidden layers. Since both the teacher and the student model, most likely, would not have the same number of neurons, the authors utilized a connector function that converts the vector of neurons of the student model to be the same size as the vector of neurons in the teacher model. By applying the proposed loss function, activation boundaries of the teacher model were transferred to the student model. Li et al. (2019) introduced the Layer Selectivity Learning (LSL) framework for knowledge distillation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>to the student model by training it on a dataset alone in order to select the same number of intermediate layers. Once intermediate layers are selected from both the networks and aligned, the student network is trained with an alignment loss function, in addition to a loss function that minimizes the prediction loss, that minimizes the difference between the feature vectors of pairs of intermediate layers from the teacher and the student network. The alignment loss function will force the student's intermediate layers to mimic the intermediate layers of the teacher model. Since the feature vectors of a pair of intermediate layers of the teacher and student network will not have the same dimensions, the feature vector is fed to a fully-connected layer that projects the feature vectors to the same dimensions.Previous knowledge distillation approaches only considered the instance features (the soft output of the layer) to be transferred from the teacher model to the student model. This made it hard for student models to learn the relationship between the instance feature and the sample with a different and compact model architecture.<ns0:ref type='bibr' target='#b46'>Liu et al. (2019)</ns0:ref> proposed representing the knowledge using an instance relation graph (IRG). For each layer in the model, an IRG was created where vertices represent the instance features and edges represent the instance relationship. Transformation function was defined to transform two IRG of adjacent layers into new IRG which contained the feature-space knowledge of the two layers. Using IRG of the teacher layers and the student layers, a loss function was defined to help train the student model using the knowledge encapsulated in the IRG of the teacher.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b77'>Zhang et al. (2019)</ns0:ref> proposed an online self-distillation method that trains a single model. The model convolution layers is first divided into sections, where a branch is added after each shallow section that contains a bottleneck layer<ns0:ref type='bibr' target='#b23'>He et al. (2016)</ns0:ref>, fully connected layer and a classifier. The added branches are only used during training and it will let each section act as a classifier. The deepest classifier (original classifier after the last convolution layer) is considered the teacher model. The deepest classifier and each shallow classifier is trained using cross entropy between its prediction and the hard labels. Additionally, each shallow classifier is trained using Kullback Leibler divergence loss to minimize between its prediction and the soft labels of the deepest classifier. Moreover, each shallow classifier is trained using L2 loss between the feature maps of the deepest classifier and the feature maps of the bottleneck layer of each of the shallow classifiers.Kim et al. (2019) proposed a learning framework termed Feature Fusion Learning (FFL) that can also acts as a knowledge distillation framework. An ensemble of models with either similar or different architecture is used in addition to a special model called fusion classifier. If FFL is used for knowledge 10/18 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b69'>Walawalkar et al. (2020)</ns0:ref> proposed to train an ensemble of models that is broken down into four blocks, where all models share the first block of layers. The first model in the ensemble is considered the teacher (termed pseudo teacher in the paper). For each successive model (students), the number of channels in their convolution layers is reduced by an increasing ratio to the teacher model. During deployment, any model in the ensemble can be selected depending on the hardware constraints or, in cases of lenient constraints, the whole ensemble can be deployed. In addition to training each model using cross entropy between predictions and ground truth, an intermediate loss function is used to distill the knowledge of the intermediate block of layers (feature maps) of the teacher model to each of the student models. Moreover, Kullback Leibler divergence loss is used between the model prediction and the average predictions of the whole ensemble. Since the number of channels of the student models and the teacher model is not the same, an adaptation layer (1 × 1 convolution) is used to map the student channels to the teacher channels. The intermediate loss function is a mean squared error between the feature maps of the teacher and student pair.<ns0:ref type='bibr' target='#b9'>Chung et al. (2020)</ns0:ref> proposed online Adversarial Feature map Distillation (AFD) that trains two networks to mimic each other's feature maps through adversarial loss. Aside from training using the cross-entropy loss on the ground truth and Kullback Leibler divergence loss between the logits of the two networks, AFD trains a discriminator for each network that distinguishes between the feature map produced by the accompanying network and the other network. Each network in AFD is trained to fool its corresponding discriminator and minimize the adversarial loss. This in-turn will let the model learn the feature map distribution of the other network. In case of training two networks, one can be considered as the student (model with less parameters) and the other as the teacher model (with more parameters) and</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b7'>Chen et al. (2019)</ns0:ref>, knowledge distillation was used to train a lightweight model for pedestrian detection which will enable fast pedestrian detection in smart vehicles with autonomous driving functionality.<ns0:ref type='bibr' target='#b31'>Janveja et al. (2020)</ns0:ref> presented a smartphone-based system for detecting driver fatigue based on frequency of yawning and frequency of eye closure.<ns0:ref type='bibr' target='#b74'>Yang et al. (2018)</ns0:ref> presented the use of MobileNets in addition to Batch Normalization and Swish activation function (cf.<ns0:ref type='bibr' target='#b63'>Ramachandran et al. (2017)</ns0:ref>) to estimate the steering angle for self-driving cars.In the domain of healthcare,<ns0:ref type='bibr' target='#b15'>Esteva et al. (2017)</ns0:ref> presented an end-to-end deep CNN based system to classify different types of skin cancer from skin images. The paper proposed the idea of deploying the system on smart phones so that a large population can easily access the diagnostic services.<ns0:ref type='bibr' target='#b0'>Ahn et al. (2018)</ns0:ref> presented a CNN based deep learning system to assist in capsule endoscopy. The idea is to adaptively control the capsule's image capturing frequency and quality based on detecting damaged areas in a patient's small intestine. To adaptively control the capsule moving through a patient's intestine, the authors suggest pairing the capsule with an external device attached to the patient's waist which can process the incoming images in real-time and direct the capsule in terms of image frequency and quality.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Use cases for knowledge distillation to deploy deep learning models on small devices with limited resources.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Furlanello et al. claimed that the method would produce better models since it was based on</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>ensemble learning, and a model was trained on training labels and predictions of a previously trained</ns0:cell></ns0:row><ns0:row><ns0:cell>model.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Polino et al. (2018) developed a knowledge distillation approach for quantized models. Quantized</ns0:cell></ns0:row><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021)</ns0:cell><ns0:cell>5/18</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>provides a summary of the presented work. It shows that the best achieved reduction in size is by<ns0:ref type='bibr' target='#b51'>Min et al. (2019)</ns0:ref> with a reduction of 99.44% in the number of parameters. We can also observe from the table that the best approach in terms of maintaining accuracy is proposed byKimura et al. </ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of knowledge distillation approaches that utilize soft labels of teacher to train student model. In case of several students, results of student with largest size reduction are reported. In case of several datasets, dataset associated with the lowest accuracy reduction is recorded. Baseline models have the same size as the corresponding student models, but they were trained without the teacher models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Targeted Architecture</ns0:cell><ns0:cell cols='2'>Utilized Data</ns0:cell><ns0:cell>Reduction in Accu-</ns0:cell><ns0:cell cols='2'>Improvement</ns0:cell><ns0:cell>Reduction in</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>racy Compared to</ns0:cell><ns0:cell>in</ns0:cell><ns0:cell cols='2'>Accuracy</ns0:cell><ns0:cell>Size</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Teacher</ns0:cell><ns0:cell cols='2'>Compared</ns0:cell><ns0:cell>to</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Baseline</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Offline Distillation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fukuda et al. (2017)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell cols='2'>Aurora (Hirsch and</ns0:cell><ns0:cell>0.782%</ns0:cell><ns0:cell>2.238%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Pearce, 2000)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Liu et al. (2018)</ns0:cell><ns0:cell>Decision tree</ns0:cell><ns0:cell cols='3'>MNIST (LeCun, 1998) 12.796%</ns0:cell><ns0:cell>1-5%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Furlanello et al. (2018) DenseNet (Huang et al.,</ns0:cell><ns0:cell>CIFAR-100</ns0:cell><ns0:cell /><ns0:cell>2.369% (increase)</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2017)</ns0:cell><ns0:cell cols='2'>(Krizhevsky, 2009)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Polino et al. (2018)</ns0:cell><ns0:cell>Wide ResNet (Zagoruyko</ns0:cell><ns0:cell>CIFAR-100</ns0:cell><ns0:cell /><ns0:cell>0.1813%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>52.87%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>and Komodakis, 2016)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Kurata and Audhkhasi</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>SWB 1</ns0:cell><ns0:cell /><ns0:cell>2.655%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>55.07%</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mun'im et al. (2019)</ns0:cell><ns0:cell>Seq2Seq</ns0:cell><ns0:cell>WSJ 2</ns0:cell><ns0:cell /><ns0:cell>8.264%</ns0:cell><ns0:cell>8.97%</ns0:cell><ns0:cell /><ns0:cell>89.88%</ns0:cell></ns0:row><ns0:row><ns0:cell>Kimura et al. (2019)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>MNIST</ns0:cell><ns0:cell /><ns0:cell>10.526% (increase)</ns0:cell><ns0:cell cols='2'>16.359%</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Nayak et al. 
(2019)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>MNIST</ns0:cell><ns0:cell /><ns0:cell>0.57%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>40%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wu et al. (2019)</ns0:cell><ns0:cell>ResNet (He et al., 2016)</ns0:cell><ns0:cell>HMDB51</ns0:cell><ns0:cell>(Kuehne</ns0:cell><ns0:cell>0.6193%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>58.31%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>et al., 2011)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Online Distillation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lan et al. (2018)</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell>CIFAR100,</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>6.64%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Min et al. (2019)</ns0:cell><ns0:cell>Micro CNN</ns0:cell><ns0:cell>Synthetic</ns0:cell><ns0:cell>Aperture</ns0:cell><ns0:cell>0.607%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell /><ns0:cell>99.44%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Radar Images 3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Jin et al. (2019)</ns0:cell><ns0:cell>MobileNetV2 (Sandler</ns0:cell><ns0:cell cols='2'>ImageNet (Deng et al.,</ns0:cell><ns0:cell>9.644%</ns0:cell><ns0:cell>6.246%</ns0:cell><ns0:cell /><ns0:cell>70.66%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>et al., 2018)</ns0:cell><ns0:cell>2009)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chen et al. (2020)</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell>CIFAR100,</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>5.39%</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Guo et al. (2020)</ns0:cell><ns0:cell>ResNet</ns0:cell><ns0:cell>CIFAR100,</ns0:cell><ns0:cell /><ns0:cell>1.59%</ns0:cell><ns0:cell>6.29%</ns0:cell><ns0:cell /><ns0:cell>34.29%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='18'>/18 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53968:2:0:NEW 26 Feb 2021) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Response to the Reviewers’ Comments
We would like to thank the editor and the reviewers for their efforts and for providing
constructive comments and feedback throughout the review process. We believe that the
modification of the manuscript to address those remarks has further improved it.
Detailed replies to reviewers’ comments follows:
Reply to Reviewer #1:
This overview paper studies techniques which
transfer the knowledge acquired by large deep
learning models to smaller models, which can then be
used in embedded and mobile devices.
The level of English is adequate, apart from some
minor grammatical errata that should be edited out:
e.g. 'The main objectives of this work' -> 'The main
objective of this work'; 'Also, the paper discuss' ->
'The paper also discusses'; 'deep learning models use
to run' -> 'deep learning models are used to run' (or
just 'typically run').
This study fits within the scope of the journal and
there is no recent peer-reviewed review of the topic,
to my knowledge. The overall structure of the paper
has been improved and is now easier to follow.
Diagrams have been added to complement method
descriptions.
Overall, the paper has received many improvements
and the contents are now well organized and present
the whole picture notably better. The only
modification I would recommend is a quick revision
for grammar mistakes such as those I marked above,
for greater clarity. Apart from that, I would consider
that this overview reaches publication quality.
We thank the reviewer for his/her
efforts towards reviewing the
manuscript and providing valuable
feedback.
We thank the reviewer for pointing
these errors. We fixed them in the
updated manuscript. In addition, we
proofread the manuscript to fix
grammar mistakes elsewhere, as
well.
We would like to thank the reviewer
for his/her efforts towards
reviewing the manuscript and
providing encouraging feedback. We
did a revision of the manuscript to
fix the grammar mistakes and to
increase the readability of the paper.
Reply to Reviewer #2:
As suggested, the authors have extended the
background to include some of the concepts used along
the paper. I would also move the definition of online
and offline distillation (including figure 2) that now
appears at the beginning of section 5 (lines 173-176) to
the background.
We thank the reviewer for his/her
efforts towards reviewing the
manuscript and providing valuable
feedback.
We have updated the background
to move the definition of online
The authors have also included a new section
summarizing the applications, as requested. This
section contributes making the paper more complete.
and offline distillation including
figure 2.
We would like to thank the
reviewer for providing
encouraging feedback.
New figures are quite useful to understand the
background concepts and the categories used to classify
the papers.
The manuscript still contains some grammar mistakes
(a few are listed below), so proof-reading is highly
recommended before publication.
- Section 1: The main objectives of this works is => are
- Section1: Also, the paper discuss => discusses
- Section 3: It’s purposes => its purposes
- Section 5: the two sub-category => subcategories
- Section 6: deep learning models use to run => are
usually run ?
- Section 6: To be practically in use => To be of practical
use
- Section 6: To be low latency => To have low latency
I am not fully satisfied with the answer given by the
authors about the survey methodology. Even if they do
not want to conduct a systematic literature search, the
process followed to find and select the papers should
be better explained in the manuscript. It seems that the
survey is focused on recent works not included in
previous surveys, so the covered period of time should
be given. The names of the journals and conferences
considered as “relevant”, as well as the minimum
citation count, should be reported as well. Even though
these criteria might not be valid for a systematic review,
the reader has the right to know how the authors
choose papers. Otherwise, the “overview” of the area is
strongly biased by the authors’ interest on certain
papers, but the reader is not aware of it.
The new organization of the survey section has greatly
contributed readability.
The authors have successfully addressed my comments
about the validity of findings.
We thank the reviewer for pointing
these errors. We fixed them in the
updated manuscript. In addition,
we proofread the manuscript to fix
grammar mistakes elsewhere, as
well.
Based on the feedback from the
reviewer, we have updated the
survey methodology by adding
details on the paper selection
criteria including the publication
year and the venue.
We would like to thank the
reviewer for his/her efforts
towards reviewing the manuscript
and providing encouraging
feedback.
" | Here is a paper. Please give your review comments after reading it. |
54 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cyber-attacks have become one of the biggest problems of the world. They cause serious financial damage to countries and people every day. The increase in cyber-attacks also brings along cyber-crime. The key factors in the fight against crime and criminals are identifying the perpetrators of cyber-crime and understanding the methods of attack.</ns0:p><ns0:p>Detecting and avoiding cyber-attacks are difficult tasks. However, researchers have recently been solving these problems by developing security models and making predictions through artificial intelligence methods. Although a high number of crime prediction methods are available in the literature, they fall short in predicting cyber-crime and cyber-attack methods. We tackle this problem to identify an attack and the perpetrator of such an attack, using actual data (the type of crime, gender of the perpetrator, damage, and method of attack) acquired from the reports filed with forensic units by persons who were exposed to cyber-attacks. In this paper, we analyze cyber-crimes in two different models with machine-learning methods and predict the effect of the defined features on the detection of the cyber-attack method and the perpetrator. We used eight machine-learning methods in our approach and concluded that their accuracy ratios were close. The Support Vector Machine (Linear) was found to be the most successful in predicting the cyber-attack method, with an accuracy rate of 95.02%. In the first model, we could predict with high accuracy which attacks the victims were likely to be exposed to.</ns0:p><ns0:p>Logistic Regression was the leading method in detecting attackers, with an accuracy rate of 65.42%. In the second model, we predicted whether the perpetrators could be identified by comparing their characteristics. Our results reveal that the probability of a cyber-attack decreases as the education and income level of the victim increase. We believe that cyber-crime units will use the model we propose. It will also facilitate the detection of cyber-attacks and make the fight against these attacks more effective and easier.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>I. INTRODUCTION</ns0:head><ns0:p>Nowadays, it has become exceedingly difficult to ensure the security of our systems including both corporate and personal data. Major countries, such as the United States and the United Kingdom, struggle with cyber-attacks and crimes by producing various security strategies <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Countries are striving to ensure security in cyber space and adapt to this field <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. It is of vital importance for countries to protect their critical infrastructures such as chemical, financial, health, energy sectors and even nuclear power plants in some countries <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. Due to millions of cyberattacks, financial losses significantly increase day by day <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. In 2020, data stolen from the information system of Airbus Company were put on the dark web market. Medical data of millions of people have been stolen and even state of emergency has been declared due to attacks on some cities <ns0:ref type='bibr'>[5]</ns0:ref>. The most important elements that ensure cyber security are integrity, confidentiality, authentication, authorization, nonrepudiation, and availability <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>.</ns0:p><ns0:p>With each passing day, work force is insufficient in fighting against cyber incidents and new solutions are sought. In the fight against cyber-attacks and crimes, solutions such as autonomous cyber defense systems <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, smart cyber security assistant architecture <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> and intrusion detection systems <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> are investigated. Researchers use machine-learning methods to detect power outages due to cyber-attacks <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>, prevent vulnerabilities of the Internet of things <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>, determine spam and network attacks <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>, detect the phishing attacks against the banking sector <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> and reduce sexual crimes on social media <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. These methods have been implemented in fields such as stock prediction <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, risk mapping by crimes <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>, cyber profiling <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>, predicting crime trend and pattern <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>, criminal identity detection <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> and crime prevention <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> as well.</ns0:p><ns0:p>The main objectives of our study are to use actual cyber-crime data as input to predict a cybercrime method and compare the accuracy results; to measure whether cyber-crime perpetrators can be predicted based on the available data and to understand the effect of victim profiles on cyberattacks.</ns0:p><ns0:p>In this paper, real cyber-crime data of 5-years in Elazığ / Turkey was used. By using machinelearning methods, the method of attack was predicted, and the perpetrator was detected based on the features such as age, gender, income, education, marital status, occupation, and the damage of the crime. 
By working on certain features such as age, gender, etc., we predicted the kind of attack methods and the victims of these attacks. These results will be used in cyber-crime modelling and attack detection by the police forces dealing with cybercrimes. The major contributions of the proposed approach are: (i) it provides advantages to the cybercrime department as it allows using actual data; (ii) it allows predicting which attacks victims may be exposed to; and (iii) it allows determining the optimum performance by comparing machine-learning algorithms.</ns0:p><ns0:p>In Section II, the literature is reviewed, and the current studies and their missing parts are identified. Section III explains the machine-learning methods used in this research. Section IV presents the predictions and accuracy rates obtained from the dataset and a comparison with previous studies. Section V includes results and future work.</ns0:p></ns0:div>
<ns0:div><ns0:head>II. RELATED WORKS</ns0:head><ns0:p>The importance of the fight against such cyber-attacks and cybercrimes and cyber security is highlighted in various studies. Cyber security is the protection of physical-digital data, networks, and technological systems through various processes, applications, and applied technologies from cyber-attacks, unauthorized accesses, disruptions, modifications, destructions, and damages <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. Cyber-attacks such as distributed denial of service attacks by sending malicious packets <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>, phishing attacks for banking and shopping sites that deceive the user <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>, and malicious attack software (virus, worms, trojans, spyware and ransomware) that are installed into the user's computer without any consent of the user <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> have increased significantly. Again, the most common of these attacks and one of the attacks that are most difficult to be prevented is the social engineering attacks based on technical skill, cunning and persuasion, made by taking advantage of the weakness of the victim. Kevin Mitnick, one of the world's famous hackers in social engineering attacks, penetrated most systems he attacked with this method <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>. In the work by Breda F et al., this attack is mentioned as one of the biggest security vulnerabilities in the system no matter how secure a technical system is <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. Likewise, attacks against IoT devices, which have increased rapidly in recent years, affect the society to a large extent and thus, attacks and threats to the IoT structure should be understood for security purposes <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>. Studies conducted to understand and combat cyber-attacks reveal the importance of crime prediction as discussed in this study.</ns0:p><ns0:p>The attacks described above are defined as prohibited criminal acts within the legal framework of many governments. The duty of fighting against crime and criminals is given to law enforcement departments. Researchers assist the institutions conducting the investigation with various analysis and prediction methods. For example, big data <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref> and machine-learning <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> methods have been used to analyze crimes in many studies. They have contributed to crime and crime fighting institutions with artificial intelligence models, such as determining the areas where the crime can be committed and its story <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>, predicting the crime using spatial, temporal and demographic data <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>, and analyzing crime with literacy, unemployment and development index data <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>. Time series of crime data in San Francisco, Chicago and Philadelphia were used for predicting crimes in the following years. Decision Tree classification model performed better than KNN and Naive Bayes <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref>. Crime data were passed through two different data processing procedures in Canada and a crime prediction was made with an accuracy between 39% and 44% using the KNN and Decision Trees <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref>. 
Data such as location, type, date, time, latitude, and longitude of the crimes that took place in the USA were used as input. As a result of the crime predictions made with KNN Classification, Logistic Regression, Decision Trees, Random Forest, Support Vector Machine, and Bayesian methods, it is seen that the KNN classification was the most successful with an accuracy of 78.9% <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>.</ns0:p><ns0:p>Thirty-nine different categories of crime data happened in the city of San Francisco were used. By using Gradient Boosted Trees and Support Vector Machines, a model dividing crimes into two classes, blue/white collar crime and violent/non-violent crime, was created. High accuracy was achieved in the classification of blue-& white-collar crimes but the study did not yield significant results in the classification of violent & non-violent crimes <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>. Data from a 10-year murder that occurred in Brazil were used. To measure the effect of non-Gaussian residuals and urban metrics on murders, predictions with 97% accuracy were made using the random forest method. The results of this study revealed that unemployment and ignorance were important variables in murder, and the order of importance in predicting the crime was determined <ns0:ref type='bibr' target='#b38'>[36]</ns0:ref>. The type, time and location of the crime data were used to predict crime in certain regions in India. The KNN prediction method was used. The method predicted when and where robbery, gambling, accident, violence, murder and kidnapping crimes would occur. It was found to be more successful than a similar study conducted previously <ns0:ref type='bibr' target='#b39'>[37]</ns0:ref>.</ns0:p><ns0:p>Big data and machine-learning framework were implemented by using crime data collected from social media platforms through Volunteered Geographic Information, web, and mobile crime reporting applications. Crime predictions were produced from these data collected using the Naive Bayes algorithm. These predictions aimed to determine the location of possible crimes and prevent them <ns0:ref type='bibr' target='#b40'>[38]</ns0:ref>. The demographic and geographical data of the events that took place in previous years were used to predict the terrorist incidents in India. This model predicted terrorist incidents using artificial intelligence algorithms and achieved results with relatively high accuracy <ns0:ref type='bibr' target='#b41'>[39]</ns0:ref>.</ns0:p><ns0:p>The cybercrime data analyzed were publicly available data from social media platforms, including Facebook and Twitter. Researcher compared the algorithms according to the F-measure value, which is the degree of accuracy and precision. With an accuracy of 80%, RF algorithm was found the best fit in the scenario. Threats were automatically determined through the model analyzing cybercrimes <ns0:ref type='bibr' target='#b42'>[40]</ns0:ref>. Real-time crime data that published in internet news were used through the screening program. SVM, Multinomial NB and RF classification methods were used. The data were separated as crime and non-crime. The most important part is that it currently provides analysis to the news <ns0:ref type='bibr' target='#b43'>[41]</ns0:ref>. Cybercrime incident data occurring in India were classified using machine-learning techniques. 
The model, which predicted crimes with 99% accuracy, reduced the amount of time spent on analysis and manual reporting <ns0:ref type='bibr' target='#b45'>[42]</ns0:ref>. A universally compared intrusion detection dataset, which was obtained from Kaggle, was used. DNN was used to develop an effective and efficient intrusion detection system in the IoMT environment to classify and predict unforeseen cyber-attacks. DNN outperformed machine-learning algorithms <ns0:ref type='bibr' target='#b46'>[43]</ns0:ref>. A summary of the related works is presented in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Based on the review of the literature, we can suggest that cyber-attacks and crimes are important to study as they cause substantial damage to individuals and states. Since we could observe the fields where machine-learning methods were used and found that these methods were successful in prediction, they were also used in our study. These studies made significant contributions, particularly to criminal units conducting investigations and to the literature. Although general crimes, cybercrimes and attacks have been used as datasets in previous work, actual datasets based on personal attributes have been studied to a lesser extent; therefore, we propose a machine learning-based model using the dataset of our study. Cyber-attack and perpetrator prediction are tackled due to the importance of the fields studied.</ns0:p></ns0:div>
<ns0:div><ns0:head>III. MATERIALS & METHODS</ns0:head><ns0:p>When people become victims of crime, they turn to the police units that deal with that particular type of crime. These data are recorded in detail in the database of this unit. Police units report these crimes by type, method, year, etc. They prepare statistics according to their characteristics, analyze them, and visualize them. When multiple cyber-attacks are made simultaneously against a victim, this is recorded in police records as a single attack. Therefore, it was necessary to look at the details of the event, rather than the statistical data, in order to understand whether multiple methods were used. Although a large number of crimes exist in the database, the focus is on cybercrimes, as in recent years cybercrimes have caused considerable material and moral damage and are still not prevented. Cybercrime was chosen as the subject matter since most studies in this field do not employ actual data. In the proposed model, the objective is to take preventive measures against future crimes based on the characteristics of the victim. In addition, it will provide advantages to the police department in predicting cyber-crimes and in profiling the crime, perpetrator, and victim, so that consequential suffering can be prevented. The outcome of the study will enable tailored measures to be taken and facilitate informing people of the crimes they may be exposed to. Our dataset consists of real cybercrime data from the province of Elazığ between 2015 and 2019. Accessing actual data and preparing these data for processing with machine-learning methods is a challenging process. When the dataset was obtained, all cybercrime details were examined, and redundant fields were removed using various data science methods. The numbers of crimes, damages, attacks and methods of attack in the dataset are shown in Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>. With these data, predictions were made using various libraries in Python 3.7. Its main libraries, such as NumPy, Pandas, and Matplotlib, were used, and the data were visualized through this program. The key advantages of using machine-learning methods in this paper are the ability to recognize multiple structured and unstructured data patterns, the high level of success in detecting crime tactics as they change, the extraction of relationships between complex data, and the ability to produce results that are unpredictable by humans.</ns0:p></ns0:div>
<ns0:div><ns0:head>A. Features</ns0:head><ns0:p>Feature selection is the process of selecting interrelated and useful features in the dataset. It saves time and memory during data training in machine learning. If the features are improperly selected, the time required for training may increase, making interpretation of the model difficult and increasing the error rate of the model. The attributes of the features in our dataset were determined. Each entry is related to a real crime, as shown in Table <ns0:ref type='table'>2</ns0:ref>. These features were used as inputs in the training data. In Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>, the median, maximum, and minimum values of the features in our dataset are shown.</ns0:p></ns0:div>
<ns0:div><ns0:head>B. Preprocessing</ns0:head><ns0:p>Standardization is the rescaling of features to follow a normal distribution. This needs to be completed before using machine-learning methods. Firstly, all fields identified as strings were made numerical with the label encoder method. After the data were made suitable, numbers from 1 to 10 were assigned according to the variety of data in the columns. In order to optimize the data to be used in the algorithms, the StandardScaler() of the Python library was used. The bidirectional relationship between type of crime and damages, attack and method of attack is shown in Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>. The data were divided into 80% training data and 20% test data. The cross-validation method was used to ensure that our data were divided properly and correctly every time. The method prevented underfitting and overfitting of the data, and a value of k=10 was used.</ns0:p><ns0:p>In the first model, we attempted to predict the method of attack using the features of crime, gender, age, job, income, marital status, education, attack, harm, and perpetrator.</ns0:p><ns0:p>In the second model, we attempted to predict the perpetrator of the crime using the features of crime, gender, age, income, job, marital status, education, attack, harm, and attack method.</ns0:p></ns0:div>
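A minimal sketch of the preprocessing steps described above, assuming scikit-learn and a pandas DataFrame; the file name and column names are hypothetical placeholders, not the actual dataset schema.

```python
# Sketch: label encoding, standardization, 80/20 split and 10-fold cross-validation.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

df = pd.read_csv('cybercrime.csv')             # hypothetical file with the listed features

for col in df.columns:                         # string fields -> numeric codes
    if df[col].dtype == object:
        df[col] = LabelEncoder().fit_transform(df[col])

# Model 1: target is the attack method; Model 2 would use 'perpetrator' as the target instead
X = df.drop(columns=['attack_method'])         # assumed column name
y = df['attack_method']
X = StandardScaler().fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X_train, y_train, cv=10)   # k = 10 as in the paper
print(scores.mean())
```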
<ns0:div><ns0:head>C. Logistic Regression (LR)</ns0:head><ns0:p>Logistic regression predicts a target value from the relationship between two or more variables. Its linear part combines the input features with learned weights; the coefficients are estimated by minimizing the error between the model's predictions and the observed labels (for logistic regression this is done by maximum likelihood rather than ordinary least squares), and the resulting score is passed through the logistic (sigmoid) function to obtain a class probability. The linear combination is given as:</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>y′ = b + w_1x_1 + w_2x_2 + … + w_nx_n</ns0:formula><ns0:p>Where y′ is the desired output, b is the bias value, x_i is an input feature, and w_i is the weight of that feature.</ns0:p></ns0:div>
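A short sketch of this step with scikit-learn; the variable names (X_train, y_train, X_test, y_test) follow the preprocessing sketch above and are assumptions, not the authors' code.

```python
# Sketch of the logistic-regression classifier; hyperparameters are defaults/assumptions.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

lr = LogisticRegression(max_iter=1000)   # learns the weights w_i and bias b of Eq. (1)
lr.fit(X_train, y_train)
print("LR accuracy:", accuracy_score(y_test, lr.predict(X_test)))
```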
<ns0:div><ns0:head>D. K-Nearest Neighbors (K-NN)</ns0:head><ns0:p>The KNN classifier is based on a distance function that measures the difference and similarity between two or more samples. The Euclidean distance d(x, y) between two samples is defined as:</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>d(x, y) = sqrt( Σ_{k=1}^{n} (x_k − y_k)^2 )</ns0:formula><ns0:p>Where x_k and y_k are the k-th elements of the two samples and n is the number of features in the dataset.</ns0:p></ns0:div>
<ns0:div><ns0:p>First, the parameter k is determined. The distance of a new observation to every existing observation in the dataset is then calculated, the k closest neighbours are found, and the new observation is assigned to the class that is most common among those k neighbours.</ns0:p></ns0:div>
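A minimal sketch of this classifier: a direct implementation of the Euclidean distance of Eq. (2) next to the scikit-learn k-NN actually used for prediction. The value k = 5 and the reused variables from the preprocessing sketch are assumptions; the paper does not report its k.

```python
# Sketch of k-NN (assumptions noted above).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def euclidean(x, y):
    """d(x, y) = sqrt(sum_k (x_k - y_k)^2), as in Eq. (2)."""
    return np.sqrt(np.sum((np.asarray(x, float) - np.asarray(y, float)) ** 2))

knn = KNeighborsClassifier(n_neighbors=5)   # k is an illustrative choice
knn.fit(X_train, y_train)
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```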
<ns0:div><ns0:head>E. Support Vector Machine (SVM)</ns0:head><ns0:p>This method includes support vector classification and support vector regression. SVM is based on the concept of decision boundaries and supports both binary and multi-class classification. Consider the training data D:</ns0:p><ns0:formula xml:id='formula_2'>(3) D = {(x_i, y_i) | x_i ∈ R^p, y_i ∈ {−1, 1}}, i = 1, …, n</ns0:formula><ns0:p>Here y_i is 1 or −1 and determines the class to which the point x_i belongs. Every x_i is a p-dimensional real vector. The support vectors are the points closest to the optimal separating hyperplane. In the classification process, input vectors that fall on one side of the separating hyperplane in feature space are assigned to one class, and those that fall on the other side are assigned to the other class.</ns0:p></ns0:div>
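A sketch of the two SVM variants compared later in the paper, SVML (linear kernel) and SVMK (non-linear kernel, assumed here to be RBF since the paper does not name it). Variable names again follow the preprocessing sketch.

```python
# Sketch of SVM with a linear and a non-linear kernel; hyperparameters are defaults.
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

svml = SVC(kernel="linear").fit(X_train, y_train)   # SVML
svmk = SVC(kernel="rbf").fit(X_train, y_train)      # SVMK (assumed RBF)

print("SVM (linear) accuracy:", accuracy_score(y_test, svml.predict(X_test)))
print("SVM (kernel) accuracy:", accuracy_score(y_test, svmk.predict(X_test)))
```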
<ns0:div><ns0:head>F. Naive Bayes (NB)</ns0:head><ns0:p>The naive Bayes classifier is a simple probability classifier based on applying Bayes' theorem with strong independence assumptions between features. A naive Bayes model for multi-label classification can be applied. Given a dataset with attributes {a_1, a_2, …, a_j} and labels {v_1, v_2, …, v_j}, the result is predicted using the following equation:</ns0:p><ns0:p>(4)</ns0:p><ns0:formula xml:id='formula_4'>v_NB = argmax_{v_j ∈ V} P(v_j) ∏_i P(a_i | v_j)</ns0:formula></ns0:div>
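A sketch of the naive Bayes step. GaussianNB is an assumption: the paper does not state which variant was used, and for purely categorical encodings CategoricalNB or MultinomialNB would be reasonable alternatives.

```python
# Sketch of the naive Bayes classifier (variant is an assumption).
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

nb = GaussianNB().fit(X_train, y_train)
print("NB accuracy:", accuracy_score(y_test, nb.predict(X_test)))
```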
<ns0:div><ns0:head>G. Decision Tree (DT)</ns0:head><ns0:p>Decision tree is a classification method that creates a tree-like model consisting of decision nodes and leaf nodes by classification, feature, and target. A decision tree or classification tree is a tree in which each internal node is labelled with an input property. Arcs from this tagged node are tagged with each of the possible values of the target attribute or lead to a sub-decision node in a different input attribute. A tree can learn by dividing the set of resources into subsets according to an attribute value test. This process is repeated in a recursive manner, called recursive partitioning, on each derived subset. The iteration is complete when the subset in a node has the full value of the target variable or the division no longer adds value to the predictions.</ns0:p><ns0:formula xml:id='formula_5'>(5) (x, Y) = (x_1, x_2, x_3, …, x_k, Y)</ns0:formula><ns0:p>The dependent variable Y is the target variable that we are trying to understand, classify or generalize. The vector x consists of the input variables x_1, x_2, x_3, … used for this task.</ns0:p></ns0:div>
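A short sketch of the decision-tree step; the depth limit is illustrative only, the paper does not report its tree settings.

```python
# Sketch of the decision-tree classifier (max_depth is an assumption).
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

dt = DecisionTreeClassifier(max_depth=10, random_state=42).fit(X_train, y_train)
print("DT accuracy:", accuracy_score(y_test, dt.predict(X_test)))
```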
<ns0:div><ns0:head>H. Random Forest (RF)</ns0:head><ns0:p>Random forest is an algorithm that builds an ensemble of classifiers on the training data and combines their outputs to make the best predictions on the test data. Randomness minimizes variance and avoids over-fitting to the training data.</ns0:p><ns0:formula>(6) y = argmax_{p ∈ {h(x_1), …, h(x_k)}} Σ_{j=1}^{k} I(h(x | θ_j) = p)</ns0:formula><ns0:p>Where h(x | θ_j) is a classification tree and k is the number of trees grown from random parameter vectors θ_j. If D(x, y) represents the training data, each classification tree in the ensemble is constructed using a different subset D_{θ_k}(x, y) ⊂ D(x, y) of the training data. Each tree then works like a normal decision tree: the data are segmented by a randomly selected value until they are fully partitioned or the maximum allowed depth is reached.</ns0:p></ns0:div>
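A sketch of the random-forest step; 100 trees is scikit-learn's default and an assumption here, since the paper does not report the ensemble size.

```python
# Sketch of the random-forest classifier (ensemble size is an assumption).
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("RF accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```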
<ns0:div><ns0:head>I. eXtreme Gradient Boosting (XGBoost)</ns0:head><ns0:p>The first step in XGBoost is to make an initial prediction (base score). This prediction can be any number, because the correct result is reached by converging over the operations performed in the subsequent steps. The default value is 0.5.</ns0:p><ns0:p>First, the loss function L(y_i, F(x)) is created, where y_i is the observed value and F(x) is the predicted value:</ns0:p><ns0:formula xml:id='formula_6'>(7) F_0(x) = argmin_γ Σ_{i=1}^{n} L(y_i, γ)</ns0:formula><ns0:p>Here the constant γ (gamma) is determined as the value that minimizes the loss function summed over all observations; it serves as the initial prediction. Next, for every observation the pseudo-residual is computed:</ns0:p><ns0:formula xml:id='formula_7'>(8) r_im = −[∂L(y_i, F(x_i)) / ∂F(x_i)]_{F(x) = F_{m−1}(x)}, i = 1, …, n</ns0:formula><ns0:p>Where r denotes the residual, i is the observation number, and m denotes the number of the tree being built.</ns0:p><ns0:p>The basic learner is a regression tree; the terminal-node values and the model update are formulated below:</ns0:p><ns0:formula xml:id='formula_8'>(9) γ_jm = argmin_γ Σ_{x_i ∈ R_jm} L(y_i, F_{m−1}(x_i) + γ), j = 1, …, J_m</ns0:formula><ns0:formula xml:id='formula_9'>(10) F_m(x) = F_{m−1}(x) + Σ_{j=1}^{J_m} γ_jm I(x ∈ R_jm)</ns0:formula></ns0:div>
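A sketch of the XGBoost step using the xgboost package's scikit-learn wrapper. base_score=0.5 mirrors the default initial prediction mentioned in the text; the remaining hyperparameters are illustrative assumptions.

```python
# Sketch of the XGBoost classifier (hyperparameters are assumptions).
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

xgb = XGBClassifier(n_estimators=100, learning_rate=0.1, base_score=0.5)
xgb.fit(X_train, y_train)        # y must be integer-encoded class labels
print("XGBoost accuracy:", accuracy_score(y_test, xgb.predict(X_test)))
```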
<ns0:div><ns0:head>J. Accuracy, Precision, Recall, F1-Score</ns0:head><ns0:p>The accuracy (Acc) score is used to evaluate the performance of the model by comparing the predictions made after running the algorithm with the test data. A value between 0 and 1 is produced according to the proportion of predicted values that match the real values. The accuracy of the prediction is determined as:</ns0:p><ns0:formula>Acc = (TP + TN) / (TP + TN + FP + FN)</ns0:formula><ns0:p>TP = prediction is positive (normal) and actual is positive (normal). FP = prediction is positive (normal) and actual is negative (abnormal). FN = prediction is negative (abnormal) and actual is positive (normal). TN = prediction is negative (abnormal) and actual is negative (abnormal).</ns0:p><ns0:p>The other evaluation metrics for the proposed model are precision, recall and F1-score, for which the standard formulas are used. Precision (P) is the ratio of correctly classified positive instances to the total number of instances classified as positive. Recall (R) shows how successfully positive instances are predicted. The F1-score (F1) is the harmonic mean of the precision and recall values.</ns0:p></ns0:div>
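A sketch that computes these metrics directly from the confusion matrix, mirroring the definitions above. It assumes y_test and a fitted classifier (here the SVM-linear model from the earlier sketch) are available; for a multi-class target the per-class values are reported.

```python
# Sketch of accuracy, precision, recall and F1 from the confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_pred = svml.predict(X_test)            # svml comes from the SVM sketch (assumption)
cm = confusion_matrix(y_test, y_pred)

tp = np.diag(cm)                         # per-class true positives
fp = cm.sum(axis=0) - tp                 # predicted as the class, actually another
fn = cm.sum(axis=1) - tp                 # belonging to the class, predicted as another

acc = np.trace(cm) / cm.sum()            # overall accuracy: correct / all predictions
precision = tp / np.maximum(tp + fp, 1)
recall = tp / np.maximum(tp + fn, 1)
f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)

print("accuracy:", acc, "== sklearn:", accuracy_score(y_test, y_pred))
print("per-class precision:", precision.round(3))
print("per-class recall:   ", recall.round(3))
print("per-class F1:       ", f1.round(3))
```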
<ns0:div><ns0:head>IV. RESULTS & DISCUSSION</ns0:head><ns0:p>The study aims to analyze the data collected about incidents correctly, to prevent crimes and to catch the perpetrators. The main goal of this paper is to draw conclusions from the analyzed data and to combat crimes based on the outcome. These results will shed light on the investigations carried out by law enforcement officers and on facts that would otherwise remain concealed. Based on which victim the cyber-crime was committed against, the method of the crime and whether the perpetrator was identified or not, machine-learning methods may be used to determine whether the same perpetrator carried out the cyber-attack. The damages suffered by the victims of cyber incidents in Elazığ province were incurred over the years through various methods. The total monetary damage suffered by the victims in the dataset was obtained by summing over the years. The decrease in such incidents observed especially after 2017 is thought to result from the deterrence secured by the laws and from awareness activities. The amount of economic loss caused by cyber-attacks in Elazığ is profoundly serious, as shown in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. The damage mentioned above is enough to show the importance of dealing with cyber security and attack methods.</ns0:p><ns0:p>In this section, the results obtained with the Support Vector Machine (Linear), Random Forest, Logistic Regression, eXtreme Gradient Boosting, Support Vector Machine (Kernel), Decision Tree, K-Nearest Neighbors and Naive Bayes algorithms are presented. The Pearson correlation coefficients between the features are shown in Figure 5. This correlation matrix shows that there are substantial correlations between practically all pairs of variables.</ns0:p><ns0:p>During the experiment, the dataset was first trained and tested with all algorithms, and the accuracy and other evaluation criteria were computed. Accuracy, precision, recall and F1-score values were obtained by comparing the predicted values with the test data.</ns0:p><ns0:p>The prediction accuracy, training accuracy, precision, recall and F1-score values of the algorithms in the first model, which predicts the method of attack, are shown in Table <ns0:ref type='table'>3</ns0:ref>. The comparison of accuracy is shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>. When the results were compared, SVML showed the best performance with an accuracy rate of 95.02% in terms of prediction accuracy. The SVML algorithm outperformed the RF, LR, XGBoost, SVMK, DT, KNN and NB algorithms by margins of 0.54%, 1.1%, 1.66%, 2.2%, 2.76%, 5.52% and 13.25%, respectively. NB showed the poorest performance with a rate of 81.7%. The performances of the algorithms other than NB were close to each other. The distribution of the actual values and the values predicted by the SVML algorithm is shown in Figure <ns0:ref type='figure' target='#fig_10'>7a</ns0:ref> and the error matrix is shown in Figure <ns0:ref type='figure' target='#fig_10'>7b</ns0:ref>.</ns0:p><ns0:p>As for the training accuracy of the algorithms, NB showed the poorest performance, whereas the training accuracy of the other algorithms was over 95%. Comparing the model in terms of precision, recall and F1-scores, the best result was also obtained with the SVML algorithm, albeit by a small margin. LR, SVMK, DT, RF and XGBoost gave results above 92%, and their performances were close to each other.
Out of these three scores, a difference of approximately 10% was found for the underperforming KNN and NB algorithms. In general, all algorithms produced sufficient results. These results showed that the machine-learning approach can be successfully applied to predict the cyber-attack method. In the model to be created, when the features of a person (Table <ns0:ref type='table'>2</ns0:ref>) are entered, it can be predicted which crime such a person is likely to be exposed to, and warning systems can be created for groups of people.</ns0:p><ns0:p>The prediction accuracy, training accuracy, precision, recall and F1-score values of the algorithms in the second model, which predicts the attacker, are given in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref>, and the comparison of accuracy is shown in Figure <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>. LR showed the best performance in this model with 65.42%, outperforming the SVML, KNN, SVMK, XGBoost, RF, DT and NB algorithms by margins of 0.78%, 0.85%, 1.33%, 1.39%, 1.88%, 2.44% and 3.34%, respectively. Even though NB showed the poorest performance, the performances of the algorithms were very close to each other. The distribution of the actual values and the predicted values obtained by the SVML algorithm is shown in Figure <ns0:ref type='figure' target='#fig_12'>9a</ns0:ref> and the error matrix is shown in Figure <ns0:ref type='figure' target='#fig_12'>9b</ns0:ref>.</ns0:p><ns0:p>As for the training data, NB achieved the lowest accuracy with 64.17% and RF achieved the highest learning accuracy with 83.75%, but the margins between the learning rates were small. In terms of precision, recall and F1-scores, the results of the algorithms varied between 55% and 65%. These results were not satisfactory. Based on the known/unknown attribute of the perpetrator, we aimed to find out whether the same perpetrator committed the crime after comparing the features of the attacker who carried out the incident. However, the results of the model indicated that a new model should be created by adding new attributes.</ns0:p><ns0:p>When the papers listed in the table in Section II are compared with our model, eight of those works use datasets of general street crimes, one works with terrorist crimes, and two work with cyber-crimes. Cyber-crime data are among the less studied crime types in the literature, which is why we use such data in our proposed study.</ns0:p><ns0:p>In the other prediction studies, data such as the type, time, place, region, attacker, and demographic and regional factors of the crime are mostly used as features. In our study, predictions are made according to the features of age, gender, income, occupation, harm, and attack method, because there are very few studies using these features.</ns0:p><ns0:p>Although most of the machine-learning methods have been studied separately in other works, we combine these eight different algorithms and use all of them simultaneously in our model. Although the datasets and parameters differ, when the studies conducted by Arora et al. and Ch et al. were compared with ours, it was found that some features were similar and all studies focused on cyber-crime. The accuracy rate was 80% in the study conducted by Arora et al. and 99.3% in the study conducted by Ch et al.</ns0:p><ns0:p>A limitation of our study is the size of the dataset, since it consists only of actual recorded cases.
The temporal data enables estimation of time series; however, these data are not available in our dataset. Likewise, if the technical details of the attacks were available in the police records, estimates that are more detailed could make it easier to catch the perpetrator.</ns0:p><ns0:p>Even though many studies have predicted where and when crimes will occur in the future, many of them have not touched upon the method by which crimes occur, how they can be prevented, and what the features of the perpetrator are. One of the key advantages of this study is using actual data and it is a preliminary step toward profiling for people who have similar features with the victims who were attacked. Another advantage of the proposed study is predicting what the cyberattack method will be and whether its perpetrator can be detected. Our results show that any exposure to cyber-crimes reduces as the level of education and income increases.</ns0:p></ns0:div>
<ns0:div><ns0:head>V. CONCLUSIONS & FUTURE WORK</ns0:head><ns0:p>This paper suggests a method that predicts and detects cyber-attacks by using both machinelearning algorithms and the data from previous cyber-crime cases. In the model, the characteristics of the people who may be attacked and which methods of attack they may be exposed to are predicted. It has been observed that machine-learning methods are successful enough, the Support Vector Machines linear method being the most successful of these methods. The success rate of predicting the attacker who will make a cyber-attack in the model is around 60%. Other artificial intelligence methods may be able to try to increase this ratio. In our approach, it is concluded that it is necessary to draw attention to especially malware and social engineering attacks, and the higher the levels of the victim's education and income are, the less the probability of cyber-attack is. The primary focus of this study is to lead law enforcement agencies in the fight against cybercrime and provide faster and more effective solutions in detecting crime and criminals. New training and warning systems can be created for people with similar characteristics by the evaluation of the characteristics of the attack victims emerged in our analysis study.</ns0:p><ns0:p>For future work: crime, criminal, victim profiling, and cyber-attacks can be predicted using deep learning algorithms and the results can be compared. Based on the talks with other authorized units having crime databases, cybercrime data of other provinces may also be obtained to use for comparison with this study. The data of other provinces can be compared to similar studies. Intelligent criminal-victim detection systems that can be useful to law enforcement agencies in the fight against crime and criminals can be created to reduce crime rates. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54129:1:0:CHECK 31 Jan 2021) Manuscript to be reviewed Computer Science of the feature space fall into one class, and locations fall into the class on the other side of the plane.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Crime, cyber-attack, and damage numbers in the dataset</ns0:figDesc><ns0:graphic coords='24,42.52,178.87,525.00,268.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Measure of features</ns0:figDesc><ns0:graphic coords='25,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Pairwise relationship between crime, harm, attack, and attack method</ns0:figDesc><ns0:graphic coords='26,42.52,178.87,525.00,374.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Damage caused by cyber-attacks in Elazığ province</ns0:figDesc><ns0:graphic coords='27,42.52,178.87,525.00,342.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Pearson's linear correlations matrix</ns0:figDesc><ns0:graphic coords='28,42.52,178.87,525.00,448.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Accuracy rate of algorithms applied in attack method prediction</ns0:figDesc><ns0:graphic coords='29,42.52,178.87,525.00,366.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. A) Model 1 comparison values (Results are colored redder as they approach the actual values and more purple as they move away). B) Confusion matrix of predicted values</ns0:figDesc><ns0:graphic coords='30,42.52,219.37,525.00,184.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Accuracy rate of algorithms applied in perpetrator prediction method</ns0:figDesc><ns0:graphic coords='31,42.52,178.87,525.00,364.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. A) Model 2 comparison values (Results are colored redder as they approach the actual values and more purple as they move away). B) Confusion matrix of predicted values</ns0:figDesc><ns0:graphic coords='32,42.52,219.37,525.00,184.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Performance of machine learning algorithms for model 2</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54129:1:0:CHECK 31 Jan 2021) Manuscript to be reviewed PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54129:1:0:CHECK 31 Jan 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "
Dear Prof. Haider Abbas and reviewers:
First, we would like to thank the editor and reviewers for their constructive comments and for taking the time to review our manuscript. We have edited the manuscript to address their concerns and have finished revising the paper entitled “Cyber-attack method and perpetrator prediction using machine learning algorithms”. Our responses to the reviewers' comments are listed below in detail. We believe the work has improved thanks to your comments and contributions, and we hope the revisions and explanations are acceptable. Thank you again for the notices from the journal and the comments from the reviewers.
Yours Sincerely
Abdulkadir Bilen
Reviewer 1
Basic reporting
The paper utilizes different Machine Learning techniques to predict the attack method and attacker. A dataset of attacks (2015 - 2019) is used.
The paper in its current form suffers from these shortcomings that the authors should consider.
The paper is well written and easy to follow.
The English language needs some minor revisions and edits.
The authors provide their code and data.
-We thank the reviewer for appreciating our work and raising some concerns.
-We appreciate the positive feedback from the reviewer
- We have revised the English language
1- The related work is missing context and insights. This section is a group of relevant papers, however, there is no link or discussion between them.
The authors report multiple studies that do similar work, however, no insights are reported. The authors are advised to give the reader some context to understand their approach afterwards.
- We highlighted the relationship and insight between related works and ours in detail. (Line 80-81, 150-157)
2- In the abstract, the authors should stress on the problem and what they are trying to solve, how they solve it, etc. The dataset should also be mentioned. In line 25, SVM with a linear kernel is said to be the best, however, the authors do not provide any discussion about this. What exactly is being predicted.
-In the abstract, we emphasized the problem and how we tried to solve it. We mentioned about the dataset. We discussed the prediction results between SVM and algorithms. (Line 22-33)
3- Table 2 should be moved to the related work section.
-We moved table 2 to the related work section. (page 18 Table1)
4- It is advised that the authors be more precise when reporting other people research. What dataset do they use? What are they predicting/detecting? etc.
-We revised the related works again and stated precisely which dataset each study used and what it predicted. (Line 116-119, 124-125, 130-131, 137-138)
5- Some references are old. For example in line 36, a recent supporting reference about UK and US figures is needed.
Lines 70-73 require a supporting reference. The paper claims that social engineering attacks are the most difficult. Is this based on their findings? Is it in one place or is this a global finding?
-We added new reference [2] in line 43.
-We added new reference on social engineering attack. It was emphasized that it was difficult to prevent this attack based on the findings of the work. (Line 90-96)
6- In Figure 1, are these insights based on the dataset? If so, the authors are advised to make this clear. Also, why is there a peak in 2017? Were measures put in place to reduce the damage in 2018 and 2019?
-Yes, it is based on the figure data set and we made it clear in the paper. (line 320-324) We also highlighted the details of the attack damage in the paper. (Figure 1 changed to figure 4.)
Experimental design
1- The aim and objective of this paper are vague. I recommend that the authors clarify the problem they are tackling. Also, who will this model benefit and how? Placing this paper in the wider research context is required.
-We highlighted the aim and objective. We detailed the people and institutions that the model will benefit from. We placed it in as wider research context as possible. (314-320)
2- The dataset is comparably small (~900 records). Is there any way the validate the model on a larger dataset? Also, I was expecting validating the model on a benchmark dataset.
- Our dataset was small as it was real data from police records. We verified with precision and training accuracy data according to scientific methods. (line 300-312, 335-337) The precision and recall results are added to table 3 and table 4. (Line 351-360)
3- The authors are advised to add a description of how the dataset is generated.
- We added a description of the details of the dataset. (line175-181, 189-192, 194-195) We added figure 1 - 2.
4- In the dataset, did the authors consider overlapping attacks? what about cases where multiple methods are used? How is this handled?
-We could not consider this because there were no overlapping attacks in the data. (Line 161-174)
5- The authors provide the background of each of the algorithms they use, however, no discussion/investigation of the results is provided. Not enough details of the experiments are mentioned in the paper.
-We thoroughly discussed the results of all the algorithms and added new tables (Table 3, Table 4). (Line 340-348, 362-369)
Validity of the findings
1- The paper barely discusses the results. The authors are advised to provide more results and comment on them. Also, the limitations of the work are not mentioned in the paper. I recommend adding a section for limitations and discussion.
-We provided more results in the paper and added our comments about them. (Line 341-349, Fig 7, Table 3) (Line 351-360, 362-369 Fig 8, Table 4) (Line 399-401, 402-403)
We also added a section on limitations. (Line 393-396)
2- The authors claim that they predict the perpetrator, however, the dataset comprises 'Known/Unknown' categories. how is this prediction evaluated?
-Thank you for pointing this detail. We detailed how to predict the perpetrator from the known/unknown data and emphasized what we want to tell. (Line 370-378)
3- Label encoder is good when encoding ordinal data, however, since some of the columns comprise nominal data, One-Hot encoding is better to use in this case.
The year should be dropped from Table 1, the authors do not use it.
-We only used label encoder in the first step, we used the standardscaler () parameter in the second step and provided in the paper. (Line 199-200, 202-204) We added figure 3.
We dropped the year from table 2. (Table 1 changed to table 2.) (Page 20-21)
4- In lines 269-282, how are the results comparable? The paper says that its performance is better than seven studies. However, the datasets are not the same and the parameters are different, how is the comparison conducted The authors are advised to recheck this and validate their findings.
- Thank you for pointing this detail. We checked and reorganized our comparison findings with other works. We just shared the results. (Line 389-392)
Comments for the author
1- In lines 59 - 62, the authors should refer to the sections and not chapters.
- We changed chapters to sections. (Line 75-78)
2- The authors do not discuss IoT attacks in section I.
- We added and discussed a new reference on IoT attacks. (line93-96)
3- In line 92, it should be 78.9\%.
-Done (Line 113)
4- In Table 2, it would be beneficial to add the year and the country. Also, adding features relations would add to this table.
-We added the year and the country. (Table2 Page 20-21)
Reviewer 2
Basic reporting
In this paper, the analyze cyber-crime in two different models with machine-learning methods and predict what effect the determined features have on the detection of cyber-attack method and the perpetrator. In their approach, it is seen that the Support Vector Machine Linear is the most successful in the cyber-attack method with 95% of accuracy. The Logistic Regression is the leading method in detecting the attacker with a rate of 65.4% of accuracy. The results reveal that the higher the level of victim’s education and income are, the less the probability of cyber-attack is.
Experimental design
The Support Vector Machine Linear is the most successful in the cyber-attack method with 95% of accuracy. The Logistic Regression is the leading method in detecting the attacker with a rate of 65.4% of accuracy. The results reveal that the higher the level of victim’s education and income are, the less the probability of cyber-attack is.
Validity of the findings
Good
-We thank the reviewer for appreciating our work and raising some concerns.
Comments for the author
1. The main objectives and contributions are to be highlighted in the paper.
-We highlighted the main objectives and contributions. (Line 60-63, 70-74)
2. The related works section should be summarized by the gaps identified in the existing literature and how the authors are going to address them in this paper (at least one of them).
-We summarized by the gaps in the literature and how we address them in related works. (Line 154-157)
3. Some of the recent and relevant works such as the following can be discussed in the paper:
a) An effective feature engineering for DNN using hybrid PCA-GWO for intrusion detection in IoMT architecture
b) A Novel PCA-Firefly based XGBoost classification model for Intrusion Detection in Networks using GPU
c) A Review on Cyber Crimes on the Internet of Things
- We added the recent works in a) and c) as new references and discussed. (Line93-95, 143-146)
4) The authors should justify why they chose the machine learning algorithms they have used in the paper.
- We justified why chose the machine learning algorithms. (Line 181-185)
" | Here is a paper. Please give your review comments after reading it. |
55 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cyber-attacks have become one of the biggest problems of the world. They cause serious financial damage to countries and people every day. The increase in cyber-attacks also brings along cyber-crime. The key factors in the fight against crime and criminals are identifying the perpetrators of cyber-crime and understanding the methods of attack.</ns0:p><ns0:p>Detecting and avoiding cyber-attacks are difficult tasks. However, researchers have recently been addressing these problems by developing security models and making predictions through artificial intelligence methods. Many methods of crime prediction are available in the literature; however, they fall short of predicting cyber-crime and cyber-attack methods. In this study, this problem is tackled by identifying an attack and its perpetrator using actual data. The data include the type of crime, the gender of the perpetrator, the damage and the methods of attack. The data were acquired from the applications filed with the forensic units by persons who were exposed to cyber-attacks. In this paper, we analyze cyber-crimes in two different models with machine-learning methods and predict the effect of the defined features on the detection of the cyber-attack method and the perpetrator. We used eight machine-learning methods in our approach and concluded that their accuracy ratios were close. The Support Vector Machine Linear was found to be the most successful in predicting the cyber-attack method, with an accuracy rate of 95.02%. In the first model, we could predict the types of attacks that the victims were likely to be exposed to with high accuracy. The Logistic Regression was the leading method in detecting attackers, with an accuracy rate of 65.42%. In the second model, we predicted whether the perpetrators could be identified by comparing their characteristics.</ns0:p><ns0:p>Our results reveal that the probability of a cyber-attack decreases as the education and income level of the victim increases. We believe that cyber-crime units will use the proposed model. It will also facilitate the detection of cyber-attacks and make the fight against these attacks easier and more effective.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>I. INTRODUCTION</ns0:head><ns0:p>Nowadays, it has become exceedingly difficult to ensure the security of our systems including both corporate and personal data. Major countries, such as the United States and the United Kingdom, struggle with cyber-attacks and crimes by producing various security strategies <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Countries are striving to ensure security in cyber space and adapt to this field <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. Protecting the critical infrastructures has a vital importance for countries. Chemical, financial, health, and energy sectors, even nuclear power plants in some countries can be counted among these <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. Due to millions of cyber-attacks, financial losses significantly increase day by day <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. In 2020, data stolen from the information system of Airbus Company were put on the dark web market. Medical data of millions of people have been stolen and even state of emergency has been declared due to attacks on some cities <ns0:ref type='bibr'>[5]</ns0:ref>. The most important elements ensuring cyber security are integrity, confidentiality, authentication, authorization, nonrepudiation, and availability <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>.</ns0:p><ns0:p>With each passing day, work force becomes insufficient in fighting against cyber incidents and new solutions are sought. Solutions such as autonomous cyber defense systems <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, smart cyber security assistant architecture <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> and intrusion detection systems <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> are investigated in the fight against cyber-attacks and crimes. Researchers use machine-learning methods to detect power outages due to cyber-attacks <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref> and to prevent vulnerabilities of the Internet of things <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>. Other areas of use are to determine spam and network attacks <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>, to detect the phishing attacks against the banking sector <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref> and to reduce sexual crimes on social media <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>. These methods have been implemented in fields as stock prediction <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, risk mapping by crimes <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> and cyber profiling <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>. Predicting crime trend and pattern <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>, criminal identity detection <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> and crime prevention <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> are also areas of implementation.</ns0:p><ns0:p>There are three main objectives in our study. First is to use actual cyber-crime data as input to predict a cyber-crime method and compare the accuracy results. Second is to measure whether cyber-crime perpetrators can be predicted based on the available data. The third objective is to understand the effect of victim profiles on cyber-attacks.</ns0:p><ns0:p>In this paper, real cyber-crime data of 5-years in Elazığ / Turkey were used. By using machinelearning methods, the method of attack was predicted, and the perpetrator was detected. 
The detection was based on features like age, gender, income, education, marital status, occupation, and the damage of the crime. By working on certain features such as age, gender, etc., we predicted the kind of attack methods and the victims of these attacks. These results will be used in cybercrime modelling and attack detection by the police forces dealing with cyber-crimes. Major contributions of the proposed approach are:  Providing advantages to cyber-crime department as it allows using actual data,  Enabling prediction of attacks that victims may be exposed to,  Allowing determination of the optimum performance by comparing machine-learning algorithms.</ns0:p><ns0:p>In Section II, the literature is reviewed, and the current studies and the missing parts are revealed. Section III explains the machine-learning methods to be used in research. Section IV presents the predictions and accuracy rate from the dataset and a comparison with previous studies. Section V includes results and future work.</ns0:p></ns0:div>
<ns0:div><ns0:head>II. RELATED WORKS</ns0:head><ns0:p>The importance of the fight against such cyber-attacks, cyber-crimes and cyber security is highlighted in various studies. Cyber security is the protection of physical-digital data, networks, and technological systems from cyber-attacks, unauthorized accesses, disruptions, modifications, destructions, and damages through various processes, applications, and applied technologies <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. Cyber-attacks such as distributed denial of service attacks by sending malicious packets <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>, phishing attacks to banking and shopping sites that deceive the user <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> have increased significantly. In addition, attackers have been using malicious attack software (virus, worms, trojans, spyware and ransomware) that is installed into the user's computer without any consent of the user <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> increasingly. Again, the most common of these attacks and one of the attacks that are most difficult to be prevented is the social engineering attacks. They are based on technical skill, cunning and persuasion, made by taking advantage of the weakness of the victim. Kevin Mitnick, one of the world's famous hackers in social engineering attacks, penetrated most systems he attacked with this method <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref>. In the work by Breda F et al., this attack is mentioned as one of the biggest security vulnerabilities in the system no matter how secure a technical system is <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. Likewise, attacks against IoT devices, which have increased rapidly in recent years, affect the society considerably. Thus, attacks and threats to the IoT structure should be understood for security purposes <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. Studies conducted to understand and combat cyber-attacks reveal the importance of crime prediction as discussed in this study.</ns0:p><ns0:p>The attacks described above are defined as prohibited criminal acts within the legal framework of many governments. The duty of fighting against crime and criminals is given to law enforcement departments. Researchers assist the institutions conducting the investigation with various analysis and prediction methods. For example, big data <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> and machine-learning <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> methods have been used to analyze crimes in many studies. They have contributed to crime and crime fighting institutions with artificial intelligence models. Among these are determining the areas where the crime can be committed and its story <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>, predicting the crime using spatial, temporal and demographic data <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>, and analyzing crime with literacy, unemployment and development index data <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>. Time series of crime data in San Francisco, Chicago and Philadelphia were used for predicting crimes in the following years. Decision Tree classification model performed better than KNN and Naive Bayes <ns0:ref type='bibr' target='#b34'>[32]</ns0:ref>. Crime data were passed through two different data processing procedures in Canada. 
A crime prediction was made with an accuracy between 39% and 44% using the KNN and Decision Trees <ns0:ref type='bibr' target='#b35'>[33]</ns0:ref>. Data such as location, type, date, time, latitude, and longitude of the crimes taking place in the USA were used as input. The result of the crime predictions made with KNN Classification, Logistic Regression, Decision Trees, Random Forest, Support Vector Machine, and Bayesian methods was that the KNN classification was the most successful with an accuracy of 78.9% <ns0:ref type='bibr' target='#b36'>[34]</ns0:ref>.</ns0:p><ns0:p>Thirty-nine different categories of crime data happened in the city of San Francisco were used. By using Gradient Boosted Trees and Support Vector Machines, a model dividing crimes into two classes, blue/white collar crime and violent/non-violent crime, was created. High accuracy was achieved in the classification of blue-& white-collar crimes. However, the study did not yield significant results in the classification of violent & non-violent crimes <ns0:ref type='bibr' target='#b37'>[35]</ns0:ref>. Data from a 10-year murder occurred in Brazil were used. Predictions with 97% accuracy were made using the random forest method in order to measure the effect of non-Gaussian residuals and urban metrics on murders. The results of this study revealed that unemployment and ignorance were important variables in murder. Also, the order of importance in predicting the crime was determined <ns0:ref type='bibr' target='#b38'>[36]</ns0:ref>. In another study, the type, time and location of the crime data were used to predict crime in certain regions in India. The KNN prediction method was used. The method predicted when and where robbery, gambling, accident, violence, murder, and kidnapping crimes would occur. It was found to be more successful than a similar study conducted previously <ns0:ref type='bibr' target='#b39'>[37]</ns0:ref>.</ns0:p><ns0:p>Big data and machine-learning framework were implemented by using crime data collected from social media platforms. The data were gathered through Volunteered Geographic Information, web, and mobile crime reporting applications. Crime predictions were produced from the collected data using the Naive Bayes algorithm. The purpose of these predictions is to determine the location of possible crimes and prevent them <ns0:ref type='bibr' target='#b40'>[38]</ns0:ref>. The demographic and geographical data of the events that took place in the previous years were used to predict the terrorist incidents in India. This model predicted terrorist incidents using artificial intelligence algorithms and achieved results with relatively high accuracy <ns0:ref type='bibr' target='#b42'>[39]</ns0:ref>.</ns0:p><ns0:p>The cyber-crime data analyzed were publicly available data from social media platforms, including Facebook and Twitter. Researcher compared the algorithms according to the F-measure value, which is the degree of accuracy and precision. With an accuracy of 80%, RF algorithm was found the best fit in the scenario. Threats were automatically determined through the model analyzing cyber-crimes <ns0:ref type='bibr' target='#b43'>[40]</ns0:ref>. Real-time crime data published in the internet news were used through the screening program. SVM, Multinomial NB and RF classification methods were used. The data were separated into two; crime and non-crime. The most important part is that it currently provides analysis to the news <ns0:ref type='bibr' target='#b44'>[41]</ns0:ref>. 
Cyber-crime incident data occurring in India were classified using machine-learning techniques. The model, which predicted crimes with 99% of accuracy, reduced amount of time spent in analysis and manual reporting <ns0:ref type='bibr' target='#b45'>[42]</ns0:ref>. A universally compared intrusion detection dataset obtained from Kaggle was used. DNN was used to develop an effective and efficient intrusion detection system in the IoMT environment to classify and predict unforeseen cyber-attacks. DNN outperformed machine-learning algorithms <ns0:ref type='bibr' target='#b46'>[43]</ns0:ref>. When the related works are examined, a summary is presented in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Based on the review of the literature, we can suggest that cyber-attacks and crimes are important to study since they cause substantial damages to individuals and states. As we could observe the fields where machine-learning methods were used and these methods were successful in predictions, they were used in our study. Studies made significant contributions to the literature and particularly to the criminal units conducting investigations. In these studies, general crimes, cyber-crimes and attacks are generally used as a dataset. The actual dataset based on personal attributes is studied to a lower extent and therefore a machine learning-based model using the dataset of our study is proposed. Cyber-attack and perpetrator estimation method is tackled due to the importance of fields that are studied.</ns0:p></ns0:div>
<ns0:div><ns0:head>III. MATERIALS & METHODS</ns0:head><ns0:p>When people become a victim of a crime, they resort to cops dealing with that particular type of crime. These data are recorded in detail in the database of this unit. Police units report these crimes by their type, method, year, etc. They prepare statistics according to their characteristics, analyze, and visualize them. When multiple cyber-attacks are made simultaneously against a victim, this is recorded in police records as a single attack. Therefore, it is necessary to look at the details of the event, rather than the statistical data, in order to understand whether multiple methods are used. Although a large number of crimes exist in the database, the focus is cyber-crimes as in recent years. Cyber-crimes have caused considerable material and moral damages and they have not been prevented yet. Cyber-crime is chosen as the subject matter since most studies on this field do not employ actual data. In the proposed model, the objective is to take preventive measures against future crimes based on the characteristics of the victim. In addition, it will provide advantages to the police department in predicting cyber-crimes, profiling these crimes, perpetrators and victims. In addition, thanks to the model, consequential suffering will be prevented. The outcome of the study will enable to take tailored measures and facilitate informing people of the crimes they may be exposed to. Our dataset was real cyber-crime data that occurred in the province of Elazığ between 2015 and 2019. Accessing to actual data and preparing these data for processing with machine-learning methods was a challenging process. When the dataset was obtained, all cybercrime details were examined. The redundant areas were removed using various data science methods. The number of crimes, damages, attacks and methods of attack in the dataset are shown in Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>. In addition, the details of these four features are divided into color groups. With these data, predictions were made using various libraries in the Python 3.7 program. Main libraries of this program such as Numpy, Pandas, Matplotlib were used and the data were visualized through this program. The key advantages of using machine-learning methods in the paper are; the possibility to recognize multiple structured and unstructured data patterns, high-level success in detecting crime tactics as they change, extraction of relationships between complex data and the ability to produce results unpredictable by humans.</ns0:p></ns0:div>
<ns0:div><ns0:head>A. Features</ns0:head><ns0:p>Feature selection is the process of selecting interrelated and useful features in the dataset. It saves time and memory during data training in machine learning. If the features are improperly selected, time required for training may increase. This makes interpretation of the model difficult and increases the error rate in the model. The attributes of the features in our dataset were determined. Each entry was related to a real crime shown in Table <ns0:ref type='table'>2</ns0:ref>. These features were used as inputs in training data. In Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>, the criteria of the features in our dataset are determined as median, maximum, and minimum.</ns0:p></ns0:div>
<ns0:div><ns0:head>B. Preprocessing</ns0:head><ns0:p>Standardization is the rescaling of features so that they follow a standard normal distribution, and it needs to be completed before machine-learning methods are applied. After the data were made suitable, codes from 1 to 10 were assigned according to the number of distinct values in each column. The StandardScaler() class from the Python scikit-learn library was used to prepare the data for the algorithms. The pairwise relationship between type of crime, damage, attack and method of attack is shown in Figure <ns0:ref type='figure' target='#fig_0'>3</ns0:ref>. The data were divided into 80% training data and 20% test data.</ns0:p><ns0:p>In the first model, the method of attack was predicted from the features crime, gender, age, job, income, marital status, education, attack, harm, and perpetrator.</ns0:p><ns0:p>In the second model, the perpetrator of the crime was predicted from the features crime, gender, age, income, job, marital status, education, attack, harm, and attack method.</ns0:p></ns0:div>
<ns0:div><ns0:head>C. Logistic Regression (LR)</ns0:head><ns0:p>Logistic regression predicts a target value from the relationship between two or more variables. Its linear part combines the input features with learned weights; the coefficients are estimated by minimizing the error between the model's predictions and the observed labels (for logistic regression this is done by maximum likelihood rather than ordinary least squares), and the resulting score is passed through the logistic (sigmoid) function to obtain a class probability. The linear combination is given as:</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>y′ = b + w_1x_1 + w_2x_2 + … + w_nx_n</ns0:formula><ns0:p>Where y′ is the desired output, b is the bias value, x_i is an input feature, and w_i is the weight of that feature.</ns0:p></ns0:div>
<ns0:div><ns0:head>D. K-Nearest Neighbors (K-NN)</ns0:head><ns0:p>The KNN classifier is based on a distance function that measures the difference and similarity between two or more samples. The Euclidean distance d(x, y) between two samples is defined as:</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>d(x, y) = sqrt( Σ_{k=1}^{n} (x_k − y_k)^2 )</ns0:formula><ns0:p>Where x_k and y_k are the k-th elements of the two samples and n is the number of features in the dataset.</ns0:p></ns0:div>
<ns0:div><ns0:p>First, the parameter k is determined. The distance of a new observation to every existing observation in the dataset is then calculated, the k closest neighbours are found, and the new observation is assigned to the class that is most common among those k neighbours.</ns0:p></ns0:div>
<ns0:div><ns0:head>E. Support Vector Machine (SVM)</ns0:head><ns0:p>This method includes support vector classification and support vector regression. SVM is based on the concept of decision boundaries and supports both binary and multi-class classification. Consider the training data D:</ns0:p><ns0:formula>(3) D = {(x_i, y_i) | x_i ∈ R^p, y_i ∈ {−1, 1}}, i = 1, …, n</ns0:formula></ns0:div>
<ns0:div><ns0:p>Here y_i is 1 or −1 and determines the class to which the point x_i belongs. Every x_i is a p-dimensional real vector. The support vectors are the points closest to the optimal separating hyperplane. In the classification process, input vectors that fall on one side of the separating hyperplane in feature space are assigned to one class, and those that fall on the other side are assigned to the other class.</ns0:p></ns0:div>
<ns0:div><ns0:head>F. Naive Bayes (NB)</ns0:head><ns0:p>The naive Bayes classifier is a simple probability classifier based on applying Bayes' theorem with strong independence assumptions between features. A naive Bayes model for multi-label classification can be applied. Given a dataset with attributes {a_1, a_2, …, a_j} and labels {v_1, v_2, …, v_j}, the result is predicted using the following equation:</ns0:p><ns0:p>(4)</ns0:p><ns0:formula xml:id='formula_3'>v_NB = argmax_{v_j ∈ V} P(v_j) ∏_i P(a_i | v_j)</ns0:formula></ns0:div>
<ns0:div><ns0:head>G. Decision Tree (DT)</ns0:head><ns0:p>Decision tree is a classification method that creates a tree-like model consisting of decision nodes and leaf nodes by classification, feature, and target. A decision tree or classification tree is a tree where each internal node is labelled with an input property. Arcs from this tagged node are tagged with each of the possible values of the target attribute or lead to a sub-decision node in a different input attribute. A tree can learn by dividing the set of resources into subsets according to an attribute value test. This process is repeated in a recursive manner, called recursive partitioning, on each derived subset. The iteration is complete when the subset in a node has the full value of the target variable or the division no longer adds value to the predictions.</ns0:p><ns0:formula xml:id='formula_4'>(5) (x, Y) = (x_1, x_2, x_3, …, x_k, Y)</ns0:formula><ns0:p>The dependent variable Y is the target variable that we are trying to understand, classify or generalize. The vector x consists of the input variables x_1, x_2, x_3, … used for this task.</ns0:p></ns0:div>
<ns0:div><ns0:head>H. Random Forest (RF)</ns0:head><ns0:p>Random forest is an algorithm that builds an ensemble of classifiers on the training data and combines their outputs to make the best predictions on the test data. Randomness minimizes variance and avoids over-fitting to the training data.</ns0:p><ns0:formula xml:id='formula_5'>(6) y = argmax_{p ∈ {h(x_1), …, h(x_k)}} Σ_{j=1}^{k} I(h(x | θ_j) = p)</ns0:formula><ns0:p>Where h(x | θ_j) is a classification tree and k is the number of trees grown from random parameter vectors θ_j.</ns0:p></ns0:div>
<ns0:div><ns0:head>I. eXtreme Gradient Boosting (XGBoost)</ns0:head><ns0:p>The first step in XGBoost is to make an initial prediction (base score). This prediction can be any number, because the correct result is reached by converging over the operations performed in the subsequent steps. The default value is 0.5.</ns0:p><ns0:p>First, the loss function L(y_i, F(x)) is created, where y_i is the observed value and F(x) is the predicted value:</ns0:p><ns0:formula xml:id='formula_6'>(7) F_0(x) = argmin_γ Σ_{i=1}^{n} L(y_i, γ)</ns0:formula><ns0:p>Here the constant γ (gamma) is determined as the value that minimizes the loss function summed over all observations; it serves as the initial prediction. Next, for every observation the pseudo-residual is computed:</ns0:p><ns0:formula xml:id='formula_7'>(8) r_im = −[∂L(y_i, F(x_i)) / ∂F(x_i)]_{F(x) = F_{m−1}(x)}, i = 1, …, n</ns0:formula><ns0:p>Where r denotes the residual, i is the observation number, and m denotes the number of the tree being built.</ns0:p><ns0:p>The basic learner is a regression tree; the terminal-node values and the model update are formulated below:</ns0:p><ns0:formula xml:id='formula_8'>(9) γ_jm = argmin_γ Σ_{x_i ∈ R_jm} L(y_i, F_{m−1}(x_i) + γ), j = 1, …, J_m (10) F_m(x) = F_{m−1}(x) + Σ_{j=1}^{J_m} γ_jm I(x ∈ R_jm)</ns0:formula></ns0:div>
<ns0:div><ns0:head>J. Accuracy, Precision, Recall, F1-Score</ns0:head><ns0:p>The accuracy (Acc) score evaluates the performance of the model by comparing the predictions produced by the algorithm with the test data. It yields a value between 0 and 1, corresponding to the fraction of predicted values that match the real values. The accuracy of the forecast is determined as follows:</ns0:p></ns0:div>
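These metrics can be computed directly from the predictions; a minimal illustrative sketch (scikit-learn assumed, weighted averaging used for the multi-class case) is shown below.

```python
# Minimal sketch of the evaluation metrics, assuming scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

y_pred = SVC(kernel="linear").fit(X_train, y_train).predict(X_test)

print("Accuracy :", accuracy_score(y_test, y_pred))                       # (TP + TN) / all
print("Precision:", precision_score(y_test, y_pred, average="weighted"))  # TP / (TP + FP)
print("Recall   :", recall_score(y_test, y_pred, average="weighted"))     # TP / (TP + FN)
print("F1-score :", f1_score(y_test, y_pred, average="weighted"))
print(confusion_matrix(y_test, y_pred))    # error (confusion) matrix, cf. Figures 7b and 9b
```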
<ns0:div><ns0:head>IV. RESULTS & DISCUSSION</ns0:head><ns0:p>The study aims to analyze the data collected about incidents correctly, in order to prevent crimes and to catch the perpetrators. The main purpose of this paper is to draw conclusions from the analyzed data and to combat crime based on these outcomes. Such results can shed light on the investigations carried out by law enforcement officers and reveal concealed facts. Based on the information about the victim, the method of the cyber-crime, and whether the perpetrator was identified or not, machine-learning methods may be used to determine whether the same perpetrator carried out the cyber-attack. The damages suffered by the victims of cyber incidents in Elazığ province were inflicted over the years through various methods; the total monetary damage per victim in the dataset was obtained by summing over the years. The decrease in such incidents, observed especially after 2017, is thought to result from the deterrence provided by the laws and by awareness activities. The amount of economic loss due to cyber-attacks in Elazığ is profoundly serious, as shown in Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>. This damage alone is enough to show the importance of dealing with cyber security and attack methods.</ns0:p><ns0:p>In this section, the results obtained with the Support Vector Machine (Linear), Random Forest, Logistic Regression, eXtreme Gradient Boosting, Support Vector Machine (Kernel), Decision Tree, K-Nearest Neighbors and Naive Bayes algorithms are presented. The Pearson correlation coefficients between the variables are shown in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>. This correlation matrix shows that there are substantial correlations between practically all pairs of variables.</ns0:p><ns0:p>During the experiment, each algorithm was first trained and then tested on the dataset. Accuracy, precision, recall and F1-score values were obtained by comparing the predicted values with the test data.</ns0:p><ns0:p>The prediction accuracy, precision, recall and F1-score values of the algorithms for the first model, which predicts the method of attack, are shown in Table <ns0:ref type='table'>3</ns0:ref>. The comparison of accuracy is shown in Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>. When the results were compared, SVML showed the best performance in terms of prediction accuracy, with a rate of 95.02%. The SVML algorithm outperformed the RF, LR, XGBoost, SVMK, DT, KNN and NB algorithms by margins of 0.54%, 1.1%, 1.66%, 2.2%, 2.76%, 5.52% and 13.25%, respectively. NB showed the poorest performance with a rate of 81.7%. The performances of the algorithms other than NB were close to each other. The distribution of the actual values and the values predicted by the SVML algorithm is shown in Figure <ns0:ref type='figure' target='#fig_11'>7a</ns0:ref>, and the error matrix is shown in Figure <ns0:ref type='figure' target='#fig_11'>7b</ns0:ref>.</ns0:p><ns0:p>Comparing the model in terms of precision, recall and F1-scores, the best result was also obtained with the SVML algorithm, albeit by a small margin. LR, SVMK, DT, RF and XGBoost gave results above 92%, and their performances were close to each other. For these three scores, a difference of approximately 10% was found for the underperforming KNN and NB. In general, all algorithms produced sufficient results.</ns0:p>
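For reproducibility, the comparison described above can be organized as a single evaluation loop over the eight classifiers; the sketch below is illustrative only (synthetic stand-in data instead of the encoded police records, with an 80/20 split as in the paper).

```python
# Illustrative comparison loop over the eight classifiers used in the paper.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "SVML": SVC(kernel="linear"),
    "SVMK": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "LR": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(n_estimators=100, max_depth=3),
    "DT": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
}

rows = []
for name, model in models.items():
    y_pred = model.fit(X_train, y_train).predict(X_test)
    rows.append({"model": name,
                 "accuracy": accuracy_score(y_test, y_pred),
                 "precision": precision_score(y_test, y_pred, average="weighted"),
                 "recall": recall_score(y_test, y_pred, average="weighted"),
                 "f1": f1_score(y_test, y_pred, average="weighted")})
print(pd.DataFrame(rows).sort_values("accuracy", ascending=False))

# The Pearson correlation matrix between variables (cf. Figure 5) can be obtained with
# pd.DataFrame(X).corr(method="pearson").
```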
<ns0:p>These results showed that the machine-learning approach can be successfully applied to predict the cyber-attack method. In the resulting model, when the features of a person (Table <ns0:ref type='table'>2</ns0:ref>) are entered, it can be predicted which crime such a person is likely to be exposed to. Warning systems can also be created for groups of persons.</ns0:p><ns0:p>The prediction accuracy, precision, recall and F1-score values of the algorithms for the second model, which predicts the attacker, are given in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref>. The comparison of accuracy is shown in Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref>. LR showed the best performance in this model with 65.42%, outperforming the SVML, KNN, SVMK, XGBoost, RF, DT and NB algorithms by margins of 0.78%, 0.85%, 1.33%, 1.39%, 1.88%, 2.44% and 3.34%, respectively. Even though NB showed the poorest performance, the performances of the algorithms were very close to each other. The distribution of the actual values and the values predicted by the SVML algorithm is shown in Figure <ns0:ref type='figure' target='#fig_13'>9a</ns0:ref>, and the error matrix is shown in Figure <ns0:ref type='figure' target='#fig_13'>9b</ns0:ref>.</ns0:p><ns0:p>In terms of precision, recall and F1-scores, the results of the algorithms varied between 55% and 65%, which is not satisfactory. Based on the known/unknown attribute of the perpetrator, we aimed to find out whether the same perpetrator committed the crime by comparing the features of the attacker who carried out the incident. However, the results of the model indicated that a new model should be created by adding new attributes.</ns0:p><ns0:p>When the papers listed in the table in section II are compared with our model, eight of the datasets in those works concern general street crimes, one concerns terrorist crimes, and two concern cyber-crimes. Cyber-crime data is one of the less studied crime types in the literature, which is why we use such data for the proposed study.</ns0:p><ns0:p>In the other prediction studies, data such as the type, time, place, region and attacker of the crime, as well as demographic and regional factors, are mostly used as features. In our study, predictions are made according to the features of age, gender, income, occupation, harm, and attack method, because there are very few studies using these features.</ns0:p><ns0:p>When the studies conducted by Arora et al. and Ch et al. were compared with our study, some features were found to be similar. All of these studies focused on cyber-crime, although the datasets and parameters differed from each other. The accuracy rate was 80% in the study conducted by Arora et al. and 99.3% in the study carried out by Ch et al.</ns0:p><ns0:p>The limitation of our study is the size of the dataset, since it consists of actual data. Temporal data would enable time-series estimation; however, such data are not available in our dataset. Likewise, if the technical details of the attacks were available in the police records, more detailed estimates could make it easier to catch the perpetrator.</ns0:p><ns0:p>Many studies have predicted where and when crimes will occur in the future. However, many of them have not touched upon the method by which crimes occur, how they can be prevented, and what the features of the perpetrator are.</ns0:p>
<ns0:p>One of the key advantages of this study is the use of actual data, and it is a preliminary step towards profiling people who share similar features with the attacked victims. Another advantage of the proposed study is the prediction of what the cyber-attack method will be and of whether its perpetrator can be detected. Our results show that exposure to cyber-crimes decreases as the level of education and income increases.</ns0:p></ns0:div>
<ns0:div><ns0:head>V. CONCLUSIONS & FUTURE WORK</ns0:head><ns0:p>This paper suggests a method that predicts and detects cyber-attacks by using machine-learning algorithms together with data from previous cyber-crime cases. The model predicts the characteristics of the people who may be attacked and which methods of attack they may be exposed to. The machine-learning methods were observed to be sufficiently successful, with the linear Support Vector Machine being the most successful of them. The success rate of predicting the attacker who will carry out a cyber-attack is around 60% in the model; other artificial intelligence methods may be able to increase this ratio. Our approach leads to the conclusion that particular attention must be paid to malware and social engineering attacks. It is also found that the higher the victim's level of education and income, the lower the probability of a cyber-attack. The primary aim of this study is to guide law enforcement agencies in the fight against cyber-crime and to provide faster and more effective solutions for detecting crime and criminals. New training and warning systems can be created for people with characteristics similar to those of the attack victims that emerged in our analysis.</ns0:p><ns0:p>In future work, crime, criminal and victim profiling, as well as cyber-attacks, can be predicted using deep learning algorithms and the results can be compared. Based on talks with other authorized units that hold crime databases, cyber-crime data of other provinces may also be obtained for comparison with this study, and the data of other provinces can be compared to similar studies. Intelligent criminal-victim detection systems that are useful to law enforcement agencies in the fight against crime and criminals can be created to reduce crime rates.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>( 3 )</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>D = \{(x_i, y_i) \mid x_i \in \mathbb{R}^p,\ y_i \in \{-1, 1\}\}_{i=1}^{n}</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>If D(x, y) represents the training data, each classification tree in the ensemble is constructed using a different subset D_{\theta_k}(x, y) \subset D(x, y) of the training data. Each tree then works like a normal decision tree: the data is segmented by a randomly selected value until it is fully partitioned or the maximum allowed depth is reached.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>TP = Prediction is positive (normal) and actual is positive (normal). FP = Prediction is positive (normal) and actual is negative (abnormal). FN = Prediction is negative (abnormal) and actual is positive (normal). TN = Prediction is negative (abnormal) and actual is negative (abnormal). The other evaluation metrics for the proposed model are precision, recall and F1-score. Precision (P) is the ratio of correctly classified positive instances to the total number of instances predicted as positive. Recall (R) shows how successfully positive instances are predicted. F1-Score (F1) is the weighted average of the Precision and Recall values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Acc = (TP + TN) / (TP + TN + FN + FP)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1.The number of crimes, damages, attacks and methods of attack in the dataset</ns0:figDesc><ns0:graphic coords='24,42.52,178.87,525.00,250.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Measure of features</ns0:figDesc><ns0:graphic coords='25,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Pairwise relationship between crime, harm, attack, and attack method</ns0:figDesc><ns0:graphic coords='26,42.52,178.87,525.00,374.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Damage caused by cyber-attacks in Elazığ province</ns0:figDesc><ns0:graphic coords='27,42.52,178.87,525.00,342.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Pearson's linear correlations matrix</ns0:figDesc><ns0:graphic coords='28,42.52,178.87,525.00,448.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Accuracy rate of algorithms applied in attack method prediction</ns0:figDesc><ns0:graphic coords='29,42.52,178.87,525.00,366.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. A) Model 1 comparison values (Results are colored redder as they approach the actual values and more purple as they move away). B) Confusion matrix of predicted values</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Accuracy rate of algorithms applied in perpetrator prediction method</ns0:figDesc><ns0:graphic coords='31,42.52,178.87,525.00,364.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. A) Model 2 comparison values (Results are colored redder as they approach the actual values and more purple as they move away). B) Confusion matrix of predicted values</ns0:figDesc><ns0:graphic coords='32,42.52,244.87,525.00,184.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Performance of machine learning algorithms for model 2</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Prof. Haider Abbas and reviewers:
We thank the editor and the reviewers who took the time to improve our article. Our responses to the reviewers' comments are below. We hope the revisions and clarifications are accepted.
Yours Sincerely
Abdulkadir Bilen
Reviewer 1
Basic reporting
- I thank the authors for their efforts in modifying the paper and replying to the concerns.
Thank you for your kind comment; we believe the work has become better with your contribution.
- The authors improved the language of the paper, however, I recommend using shorter sentences and having another round of language revision.
We shortened the sentences and revised the language again.
Experimental design
- I thank the authors for reporting other matrices which better demonstrate the performance of their proposed model.
Thank you
- In section III.B, the authors mention the use of cross-validation with k = 3 and a split of 80% for training and 20% for testing. This is inconsistent, please revise.
We removed the cross-validation phrase from the paper and revised the inconsistency. Lines 210-211.
- Reporting the training performance doesn't add much to the discussion. The authors should focus on validation and testing performance.
We removed the training performance from the discussion and the tables, and focused on the validation and testing data. Line 353 and line 372.
Validity of the findings
- It is unclear which fold is reported in Figure 7 and Figure 9.
We added an explanation about the folds to Figure 7 and Figure 9 separately. Page 31 and page 33.
- How are the models used simultaneously, the paper doesn't provide a hybrid model or an aggregation method for the eight models. The authors should review this claim.
This sentence has been removed from the paper. Line 385.
- The authors should not use label encoder for nominal data
You are right about this. We corrected it in the previous revision and removed it from the paper. Line 207.
Comments for the author
- There is an extra sentence in line 312 'formulas are used.'
This sentence has been removed. Line 315.
- I recommend improving the resolution of Figure 1. Also, is there any relevance to the colours/grouping? This should be added to the text.
We improved the resolution and added an explanation of the coloured grouping to the text. Line 186.
- The authors should add the labels (percentages) to Figure 6 and Figure 8 to improve readability.
We added the percentages to figures 6 and 8. Page 30 and page 32.
Reviewer 2
Basic reporting
The authors have addressed all my comments. The paper can be accepted for publication in its current form now.
Experimental design
Satisfactory
Validity of the findings
Satisfactory
Comments for the author
The authors have addressed all my comments. The paper can be accepted for publication in its current form now.
Thank you for your nice comments and for accepting my paper.
" | Here is a paper. Please give your review comments after reading it. |
56 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>14</ns0:head><ns0:p>Feature selection techniques are very useful approaches for dimensionality reduction in data analysis. They provide interpretable results by reducing the dimensions of the data to a subset of the original set of features. When the data lack annotations, unsupervised feature selectors are required for their analysis. Several algorithms for this aim exist in the literature, but despite their large applicability, they can be very inaccessible or cumbersome to use, mainly due to the need for tuning non-intuitive parameters and the high computational demands.</ns0:p><ns0:p>In this work, a publicly available ready-to-use unsupervised feature selector is proposed, with comparable results to the state-of-the-art at a much lower computational cost. The suggested approach belongs to the methods known as spectral feature selectors. These methods generally consist of two stages: manifold learning and subset selection. In the first stage, the underlying structures in the high-dimensional data are extracted, while in the second stage a subset of the features is selected to replicate these structures. This paper suggests two contributions to this field, related to each of the stages involved. In the manifold learning stage, the effect of non-linearities in the data is explored, making use of a radial basis function (RBF) kernel, for which an alternative solution for the estimation of the kernel parameter is presented for cases with high-dimensional data. Additionally, the use of a backwards greedy approach based on the least-squares utility metric for the subset selection stage is proposed.</ns0:p><ns0:p>The combination of these new ingredients results in the Utility metric for Unsupervised feature selection (U2FS) algorithm. The proposed U2FS algorithm succeeds in selecting the correct features in a simulation environment. In addition, the performance of the method on benchmark datasets is comparable to the state-of-the-art, while requiring less computational time. Moreover, unlike the state-of-the-art, U2FS does not require any tuning of parameters. 38 this often leads to problems related to the curse of dimensionality (Verleysen and Franc ¸ois, 2005) and 39 thus, to irrelevant and redundant data for machine learning algorithms (Maindonald, 2007). Therefore, it 40 is crucial to perform dimensionality reduction before analyzing the data. 41 There are two types of dimensionality reduction techniques. So-called feature selection techniques 42 directly select a subset of the original features. On the other hand, transformation techniques compute 43 a new (smaller) set of features, each of which are derived from all features of the original set. Some 44 examples of these are Principal Component Analysis (PCA) (Wold et al., 1987), Independent Component 45 Analysis (ICA) (Jiang et al., 2006) or the Extended Sammon Projection (ESP) (Ahmad et al., 2019).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Feature selection techniques are very useful approaches for dimensionality reduction in data analysis. They provide interpretable results by reducing the dimensions of the data to a subset of the original set of features. When the data lack annotations, unsupervised feature selectors are required for their analysis. Several algorithms for this aim exist in the literature, but despite their large applicability, they can be very inaccessible or cumbersome to use, mainly due to the need for tuning non-intuitive parameters and the high computational demands. In this work, a publicly available ready-to-use unsupervised feature selector is proposed, with comparable results to the state-of-the-art at a much lower computational cost. The suggested approach belongs to the methods known as spectral feature selectors. These methods generally consist of two stages: manifold learning and subset selection. In the first stage, the underlying structures in the highdimensional data are extracted, while in the second stage a subset of the features is selected to replicate these structures. This paper suggests two contributions to this field, related to each of the stages involved. In the manifold learning stage, the effect of nonlinearities in the data is explored, making use of a radial basis function (RBF) kernel, for which an alternative solution for the estimation of the kernel parameter is presented for cases with high-dimensional data. Additionally, the use of a backwards greedy approach based on the least-squares utility metric for the subset selection stage is proposed. The combination of these new ingredients results in the Utility metric for Unsupervised feature selection U2FS algorithm. The proposed U2FS algorithm succeeds in selecting the correct features in a simulation environment. In addition, the performance of the method on benchmark datasets is comparable to the state-of-the-art, while requiring less computational time. Moreover, unlike the state-of-the-art, U2FS does not require any tuning of parameters.</ns0:p></ns0:div>
<ns0:div><ns0:head>INTRODUCTION 36</ns0:head><ns0:p>Many applications of data science require the study of highly multi-dimensional data. A high number of 37 dimensions implies a high computational cost as well as a large amount of memory required. Furthermore, While these methods lead to a reduction in the number of dimensions, results are less interpretable, since their direct relationship with the original set of features is lost.</ns0:p><ns0:p>In this work, the focus is on unsupervised feature selectors. Since these methods do not rely on the availability of labels or annotations in the data, the information comes from the learning of the underlying structure of the data. Despite this challenge, the generalization capabilities of these methods are typically better than for supervised or semi-supervised methods <ns0:ref type='bibr' target='#b13'>(Guyon and Elisseeff, 2003)</ns0:ref>. Within unsupervised feature selectors, sparse learning based methods have gained attention in the last 20 years <ns0:ref type='bibr' target='#b19'>(Li et al., 2017)</ns0:ref>.</ns0:p><ns0:p>These methods rely on graph theory and manifold learning to learn the underlying structures of the data <ns0:ref type='bibr' target='#b21'>(Lunga et al., 2013)</ns0:ref>, and they apply sparsity inducing techniques to perform subset selection. However, to the best of our knowledge, none explores specifically the behavior of these methods with data presenting non-linear relationships between the features (i.e., dimensions). While the graph definition step can make use of kernels to tackle non-linearities, these can be heavily affected by the curse of dimensionality, since they are often based on a distance metric <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>.</ns0:p><ns0:p>After the manifold learning stage, sparse regression is applied to score the relevance of the features in the structures present in the graph. These formulations make use of sparsity-inducing regularization techniques to provide the final subset of features selected, and thus, they are highly computationally expensive. These methods are often referred to as structured sparsity-inducing feature selectors (SSFS), or sparse learning based methods <ns0:ref type='bibr' target='#b12'>(Gui et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b19'>(Li et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Despite the large amount of unsupervised SSFS algorithms described in the literature, these methods are cumbersome to use for a novice user. This is not only due to the codes not being publicly available, but also due to the algorithms requiring regularization parameters which are difficult to tune, in particular in unsupervised settings.</ns0:p><ns0:p>In this work, an efficient unsupervised feature selector based on the utility metric (U2FS) is proposed.</ns0:p><ns0:p>U2FS is a ready-to-use, publicly available unsupervised sparsity-inducing feature selector designed to be robust for data containing non-linearities. The code is available here: https://github.com/avillago/u2fs, where all functions and example codes are published. The main contributions of this work are:</ns0:p><ns0:p>• The definition of a new method to automatically approximate the radial-basis function (RBF) kernel parameter without the need for a user-defined tuning parameter. 
This method is used to tackle the curse of dimensionality when embedding the data taking non-linearities into account.</ns0:p><ns0:p>• The suggestion of a backwards greedy approach for the stage of subset selection, based on the utility metric for the least-squares problem. The utility metric was proposed in the framework of supervised learning <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>, and has been used for channel selection in applications such as electroencephalography (EEG) <ns0:ref type='bibr' target='#b23'>(Narayanan and Bertrand, 2020)</ns0:ref>, sensor networks <ns0:ref type='bibr' target='#b28'>(Szurley et al., 2014)</ns0:ref>, and microphone arrays <ns0:ref type='bibr' target='#b27'>(Szurley et al., 2012)</ns0:ref>. Nevertheless, this is the first work in which this type of approach is proposed for the sparsity-inducing stage of feature selection.</ns0:p><ns0:p>• Propose a non-parametric and efficient unsupervised SSFS algorithm. This work analyzes the proposed method U2FS in terms of its complexity, and of its performance on simulated and benchmark data. The goal is to reduce the computational cost while maintaining a comparable performance with respect to the state-of-the-art. In order to prove this, U2FS is compared to three related state-of-the-art algorithms in terms of accuracy of the features selected, and computational complexity of the algorithm.</ns0:p><ns0:p>The rest of the paper is structured as follows. In Related Work, previous algorithms on SSFS are summarized. In Methods, the proposed U2FS method is described: first the manifold learning stage, together with the algorithm proposed for the selection of the kernel parameter; and further on, the utility metric is discussed and adapted to feature selection. The experiments performed in simulations and benchmark databases, as well as the results obtained are described in the Results and Discussion sections.</ns0:p><ns0:p>Finally, the last section provides some conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Sparsity-inducing feature selection methods have become widely used in unsupervised learning applications for high-dimensional data. This is due to two reasons. On the one hand, the use of manifold learning guarantees the preservation of local structures present in the high-dimensional data. Additionally, its</ns0:p></ns0:div>
<ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:1:1:NEW 29 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science combination with feature selection techniques not only reduces the dimensionality of the data, but also guarantees interpretability.</ns0:p><ns0:p>Sparsity-inducing feature selectors learn the structures present in the data via connectivity graphs obtained in the high-dimensional space <ns0:ref type='bibr' target='#b36'>(Yan et al., 2006)</ns0:ref>. The combination of manifold learning and regularization techniques to impose sparsity, allows to select a subset of features from the original dataset that are able to describe these structures in a smaller dimensional space.</ns0:p><ns0:p>These algorithms make use of sparsity-inducing regularization approaches to stress those features that are more relevant for data separation. The sparsity of these approaches is controlled by different statistical norms (l r,p -norms), which contribute to the generalization capability of the methods, adapting them to binary or multi-class problems <ns0:ref type='bibr' target='#b12'>(Gui et al., 2016)</ns0:ref>. One drawback of these sparse regression techniques is that generally, they rely on optimization methods, which are computationally expensive.</ns0:p><ns0:p>The Laplacian Score <ns0:ref type='bibr' target='#b14'>(He et al., 2006)</ns0:ref> was the first method to perform spectral feature selection in an unsupervised way. Based on the Laplacian obtained from the spectral embedding of the data, it obtains a score based on locality preservation. SPEC <ns0:ref type='bibr' target='#b42'>(Zhao and Liu, 2007)</ns0:ref> is a framework that contains this previous approach, but it additionally allows for both supervised or unsupervised learning, including other similarity metrics, as well as other ranking functions. These approaches evaluate each feature independently, without considering feature interactions. These interactions are, however, taken into account in Multi-Cluster Feature Selection (MCFS) <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>, where a multi-cluster approach is defined based on the eigendecomposition of a similarity matrix. The subset selection is performed applying an l 1 -norm regularizer to approximate the eigenvectors obtained from the spectral embedding of the data inducing sparsity. In UDFS <ns0:ref type='bibr' target='#b37'>(Yang et al., 2011)</ns0:ref> the l 1 -norm regularizer is substituted by a l 2,1 -norm to apply sample and feature-wise constraints, and a discriminative analysis is added in the graph description. In NDFS <ns0:ref type='bibr' target='#b20'>(Li et al., 2012)</ns0:ref>, the use of the l 2,1 -norm is preserved, but a non-negative constraint is added to the spectral clustering stage. 
Additionally, this algorithm performs feature selection and spectral clustering simultaneously.</ns0:p><ns0:p>The aforementioned algorithms perform manifold learning and subset selection in a sequential way.</ns0:p><ns0:p>However, other methods tackle these simultaneously, in order to adaptively change the similarity metric or the selection criteria regarding the error obtained between the original data and the new representation.</ns0:p><ns0:p>Examples of these algorithms are JELSR <ns0:ref type='bibr' target='#b15'>(Hou et al., 2013)</ns0:ref>, SOGFS <ns0:ref type='bibr' target='#b25'>(Nie et al., 2019)</ns0:ref>, (R)JGSC <ns0:ref type='bibr' target='#b43'>(Zhu et al., 2016)</ns0:ref> and DSRMR <ns0:ref type='bibr' target='#b29'>(Tang et al., 2018)</ns0:ref>, and all make use of an l 2,1 -norm. Most recently, the SAMM-FS algorithm was proposed <ns0:ref type='bibr' target='#b40'>(Zhang et al., 2019)</ns0:ref>, where a combination of similarity measures is used to build the similarity graph, and the l 2,0 -norm is used for regression. This group of algorithms are currently the ones achieving the best results, at the cost of using complex optimization techniques to adaptively tune both stages of the feature selection process. While this can lead to good results, it comes with a high computation cost, which might hamper the tuning process, or might simply not be worthy for some applications. SAMM-FS and SOGFS are the ones that more specifically suggest new approaches to perform the embedding stage, by optimally creating the graph <ns0:ref type='bibr' target='#b25'>(Nie et al., 2019)</ns0:ref> or deriving it from a combination of different similarity metrics <ns0:ref type='bibr' target='#b40'>(Zhang et al., 2019)</ns0:ref>. Again, both approaches require computationally expensive optimization techniques to select a subset of features.</ns0:p><ns0:p>In summary, even if SSFS methods are getting more sophisticated and accurate, this results in algorithms becoming more complex in terms of computational time, and in the ease of use. The use of advanced numerical optimization techniques to improve results makes algorithms more complex, and requires regularization parameters which are not easy to tune. In this work, the combination of a new approach to estimate the graph connectivity based on the RBF kernel, together with the use of the utility metric for subset selection, results in an efficient SSFS algorithm, which is easy to use and with lower complexity than the state-of-the-art. This efficient implementation is competitive with state-of-the-art methods in terms of performance, while using a simpler strategy, which is faster to compute and easier to use.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>This section describes the proposed U2FS algorithm, which focuses on selecting the relevant features in an unsupervised way, at a relatively small computational cost. The method is divided in three parts.</ns0:p><ns0:p>Firstly, the suggested manifold learning approach is explained, where an embedding based on binary weighting and the RBF kernel are used. Then a method to select the kernel parameter of the RBF kernel</ns0:p></ns0:div>
<ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:1:1:NEW 29 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science is proposed, specially designed for high-dimensional data. Once the manifold learning stage is explained, the Utility metric is proposed as a new approach for subset selection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Manifold learning considering non-linearities</ns0:head><ns0:p>Given is a data matrix X ∈ R N×d , with X = [x 1 ; x 2 ; . . . ; x N ],</ns0:p><ns0:formula xml:id='formula_0'>x i = [x (1) i , x (2) i , . . . , x (d)</ns0:formula><ns0:p>i ], i = 1, . . . , N, N the number of data points, and d the number of features (i.e., dimensions) in the data. The aim is to learn the structure hidden in the d-dimensional data and approximate it with only a subset of the original features.</ns0:p><ns0:p>In this paper, this structure will be identified by means of clustering, where the dataset is assumed to be characterized by c clusters.</ns0:p><ns0:p>In spectral clustering, the clustering structure of this data can be obtained by studying the eigenvectors derived from a Laplacian built from the original data <ns0:ref type='bibr' target='#b34'>(Von Luxburg (2007)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>Biggs et al. (1993)</ns0:ref>). The data is represented using a graph G = (V , E ). V is the set of vertices v i , i = 1, . . . , N where v i = x i . E = {e i j } with i = 1, . . . , N j = 1, . . . , N is the set of edges between the vertices where {e i j } denotes the edge between vertices v i and v j . The weight of these edges is determined by the entries w i j ≥ 0 of a similarity matrix W. We define the graph as undirected. Therefore, the similarity matrix W, is symmetric (since w i j = w ji , with the diagonal set to w ii = 0). Typically, W is computed after coding the pairwise distances between all N data points. There are several ways of doing this, such as calculating the k-nearest neighbours (KNN) for each point, or choosing the ε-neighbors below a certain distance <ns0:ref type='bibr' target='#b3'>(Belkin and Niyogi, 2002)</ns0:ref>.</ns0:p><ns0:p>In this paper, two similarity matrices are adopted inspired by the work in <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>, namely a binary one and one based on an RBF kernel. The binary weighting is based on KNN, being w i j = 1 if and only if vertex i is within the K closest points to vertex j. Being a non-parametric approach, the binary embedding allows to simply characterize the connectivity of the data.</ns0:p><ns0:p>Additionally, the use of the RBF kernel is considered, which is well suited for non-linearities and allows to characterize complex and sparse structures <ns0:ref type='bibr' target='#b34'>(Von Luxburg, 2007)</ns0:ref>. The RBF kernel is defined as</ns0:p><ns0:formula xml:id='formula_1'>K(x i , x j ) = exp(−||x i − x j || 2 /2σ 2 ).</ns0:formula><ns0:p>The selection of the kernel parameter σ is a long-standing challenge in machine learning. For instance, in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>, σ 2 is defined as the mean of all the distances between the data points. Alternatively, a rule of thumb, uses the sum of the standard deviations of the data along each dimension <ns0:ref type='bibr' target='#b32'>(Varon et al., 2015)</ns0:ref>. However, the estimation of this parameter is highly influenced by the amount of features or dimensions in the data, making it less robust to noise and irrelevant features.</ns0:p><ns0:p>In the next section, a new and better informed method to approximate the kernel parameter is proposed.</ns0:p><ns0:p>The graph G, defined by the similarity matrix W, can be partitioned into multiple disjoint sets. 
Given the focus on multi-cluster data of our approach, the k-Way Normalized Cut (NCut) Relaxation is used, as proposed in <ns0:ref type='bibr' target='#b24'>Ng et al. (2002)</ns0:ref>. In order to obtain this partition, the degree matrix D of W must be calculated. D is a diagonal matrix for which each element on the diagonal is calculated as</ns0:p><ns0:formula xml:id='formula_2'>D ii = ∑ j W i, j .</ns0:formula><ns0:p>The normalized Laplacian L is then obtained as L = D −1/2 WD −1/2 , as suggested in Von <ns0:ref type='bibr' target='#b34'>Luxburg (2007)</ns0:ref>.</ns0:p><ns0:p>The vectors y embedding the data in L can be extracted from the eigenvalue problem <ns0:ref type='bibr' target='#b8'>(Chung and Graham, 1997)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_3'>Ly = λ y (1)</ns0:formula><ns0:p>Given the use of a normalized Laplacian for the data embedding, the vectors y must be adjusted using the degree matrix D:</ns0:p><ns0:formula xml:id='formula_4'>α = D 1/2 y,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>which means that α is the solution of the generalized eigenvalue problem of the pair W and D. These eigenvectors α are a new representation of the data, that gathers the most relevant information about the structures appearing in the high-dimensional space. The c eigenvectors, corresponding to the c highest eigenvalues (after excluding the largest one), can be used to characterize the data in a lower dimensional space <ns0:ref type='bibr' target='#b24'>(Ng et al., 2002)</ns0:ref>. Thus, the matrix E = [α 1 , α 2 , . . . , α c ] containing column-wise the c selected eigenvectors, will be the low-dimensional representation of the data to be mimicked using a subset of the original features, as suggested in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:1:1:NEW 29 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Kernel parameter approximation for high-dimensional data</ns0:head><ns0:p>One of the most used similarity functions is the RBF kernel, which allows to explore non-linearities in the data. Nevertheless, the kernel parameter σ 2 must be selected correctly, to avoid overfitting or the allocation of all data points to the same cluster. This work proposes a new approach to approximate this kernel parameter, which will be denoted by σ 2 when derived from our method. This method takes into account the curse of dimensionality and the potential irrelevant features or dimensions in the data.</ns0:p><ns0:p>As a rule of thumb, σ 2 is approximated as the sum of the standard deviation of the data along each dimension <ns0:ref type='bibr' target='#b32'>(Varon et al., 2015)</ns0:ref>. This approximation grows with the number of features (i.e. dimensions) of the data, and thus, it is not able to capture its underlying structures in high-dimensional spaces.</ns0:p><ns0:p>Nevertheless, this σ 2 is commonly used as an initialization value, around which a search is performed, considering some objective function <ns0:ref type='bibr' target='#b2'>(Alzate and Suykens, 2008;</ns0:ref><ns0:ref type='bibr' target='#b32'>Varon et al., 2015)</ns0:ref>.</ns0:p><ns0:p>The MCFS algorithm skips the search around an initialization of the σ 2 value by substituting the sum of the standard deviations by the mean of these <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>. By doing so, the value of σ 2 does not overly grow. This estimation of σ 2 suggested in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref> will be referred to as σ 2 0 . A drawback of this approximation in high-dimensional spaces is that it treats all dimensions as equally relevant for the final estimation of σ 2 0 , regardless of the amount of information that they actually contain.</ns0:p><ns0:p>The aim of the proposed approach is to provide a functional value of σ 2 that does not require any additional search, while being robust to high-dimensional data. Therefore, this work proposes an approximation technique based on two factors: the distances between the points, and the number of features or dimensions in the data.</ns0:p><ns0:p>The most commonly used distance metric is the euclidean distance. However, it is very sensitive to high-dimensional data, deriving unsubstantial distances when a high number of features is involved in the calculation <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>. In this work, the use of the Manhattan or taxicab distance <ns0:ref type='bibr' target='#b26'>(Reynolds, 1980)</ns0:ref> is proposed, given its robustness when applied to high-dimensional data <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>.</ns0:p><ns0:p>For each feature l, the Manhattan distance δ l is calculated as:</ns0:p><ns0:formula xml:id='formula_5'>δ l = 1 N N ∑ i, j=1 |x il − x jl |<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Additionally, in order to reduce the impact of irrelevant or redundant features, a system of weights is added to the approximation of σ 2 . The goal is to only take into account the distances associated to features that contain relevant information about the structure of the data. To calculate these weights, the probability density function (PDF) of each feature is compared with a Gaussian distribution. Higher weights are assigned to the features with less Gaussian behavior, i.e. those the PDF of which differs the most from a Gaussian distribution. 
By doing so, these will influence more the final σ 2 value, since they allow a better separation of the structures present in the data.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows a graphical representation of this estimation. The dataset in the example has 3 dimensions or features: f 1 , f 2 and f 3 . f 1 and f 2 contain the main clustering information, as it can be observed in Figure <ns0:ref type='figure'>1a</ns0:ref>, while f 3 is a noisy version of f 1 , derived as</ns0:p><ns0:formula xml:id='formula_6'>f 3 = f 1 + 1.5n</ns0:formula><ns0:p>, where n is drawn from a normal distribution N (0, 1). Figures <ns0:ref type='figure'>1b, 1c and 1d</ns0:ref> show in a continuous black line the PDFs derived from the data, and in a grey dash line their fitted Gaussian, in dimensions f 1 , f 2 and f 3 respectively. This fitted Gaussian was derived using the Curve Fitting toolbox of Matlab TM . As it can be observed, the matching of a Gaussian with an irrelevant feature is almost perfect, while those features that contain more information, like f 1 and f 2 , deviate much more from a normal distribution.</ns0:p><ns0:p>Making use of these differences, an error, denoted φ l , for each feature l, where l = 1, . . . , d, is calculated as:</ns0:p><ns0:formula xml:id='formula_7'>φ l = 1 H H ∑ i=1 (p i − g i ) 2 , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>)</ns0:formula><ns0:p>where H is the number of bins in which the range of the data is divided to estimate the PDF (p), and g</ns0:p><ns0:p>is the fitted Gaussian. The number of bins in this work is set to 100 for standardization purposes. Equation (4) corresponds to the mean-squared error (MSE) between the PDF of the data over feature l and its fitted Gaussian. From these φ l , the final weights b l are calculated as:</ns0:p></ns0:div>
<ns0:div><ns0:head>5/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:1:1:NEW 29 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:p>. Weight system for relevance estimation. In Figure <ns0:ref type='figure'>1A</ns0:ref>, f 1 and f 2 can be seen. 1B, 1C and 1D show in black the PDFs p i of f 1 , f 2 and f 3 respectively, and in grey dotted line their fitted Gaussian g i .</ns0:p><ns0:formula xml:id='formula_9'>b l = φ l ∑ d l=1 φ l<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Therefore, combining (3) and ( <ns0:ref type='formula' target='#formula_9'>5</ns0:ref>), the proposed approximation, denoted σ 2 , is derived as:</ns0:p><ns0:formula xml:id='formula_10'>σ 2 = d ∑ l=1 b l δ l ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>which gathers the distances present in the most relevant features, giving less importance to the dimensions that do not contribute to describe the structure of the data. The complete algorithm to calculate σ 2 is described in Algorithm 1.</ns0:p><ns0:p>Algorithm 1 Kernel parameter approximation for high-dimensional data.</ns0:p><ns0:p>Input: Data X ∈ R N×d .</ns0:p><ns0:p>Output: Sigma parameter σ 2 1: Calculate the Manhattan distances between the datapoints using Equation (3): vector of distances per feature δ l . 2: Obtain the weights for each of the features using Equations ( <ns0:ref type='formula' target='#formula_7'>4</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_9'>5</ns0:ref>): weights b l . 3: Calculate σ 2 using Equation (6).</ns0:p></ns0:div>
<ns0:div><ns0:head>Utility metric for feature subset selection</ns0:head><ns0:p>In the manifold learning stage, a new representation E of the data based on the eigenvectors was built, which described the main structures present in the original high-dimensional data. The goal is to select a subset of the features which best approximates the data in this new representation. In the literature, this feature selection problem is formulated using a graph-based loss function and a sparse regularizer of the coefficients is used to select a subset of features, as explained in <ns0:ref type='bibr' target='#b43'>Zhu et al. (2016)</ns0:ref>. The main idea of these approaches is to regress the data to its low dimensional embedding along with some sparse regularization. The use of such regularization techniques reduces overfitting and achieves dimensionality reduction. This regression is generally formulated as a least squares (LS) problem, and in many of these</ns0:p></ns0:div>
<ns0:div><ns0:head>6/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:1:1:NEW 29 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science cases, the metric that is used for feature selection is the magnitude of their corresponding weights in the least squares solution <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b12'>Gui et al., 2016)</ns0:ref>. However, the optimized weights do not necessarily reflect the importance of the corresponding feature as it is scaling dependent and it does not properly take interactions across features into account <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>. Instead, the importance of a feature can be quantified using the increase in least-squared error (LSE) if that feature was to be removed and the weights were re-optimized. This increase in LSE, called the 'utility' of the feature can be efficiently computed <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref> and can be used as an informative metric for a greedy backwards feature selection procedure <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Narayanan and Bertrand, 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Szurley et al., 2014)</ns0:ref>, as an alternative for (group-)LASSO based techniques. Under some technical conditions, a greedy selection based on this utility metric can even be shown to lead to the optimal subset <ns0:ref type='bibr' target='#b9'>(Couvreur and Bresler, 2000)</ns0:ref>.</ns0:p><ns0:p>After representing the dataset using the matrix E ∈ R N×c containing the c eigenvectors, the following LS optimization problem finds the weights p that best approximate the data X in the c-dimensional representation in E:</ns0:p><ns0:formula xml:id='formula_11'>J = min P 1 N ||Xp − E|| 2 F (<ns0:label>7</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>)</ns0:formula><ns0:p>where J is the cost or the LSE and ||.|| F denotes the Frobenius norm.</ns0:p><ns0:p>If X is a full rank matrix and if N > d, the LS solution p of ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) is</ns0:p><ns0:formula xml:id='formula_13'>p = R −1 XX R XE , (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>) with R XX = 1 N X T X and R XE = 1 N X T E.</ns0:formula><ns0:p>The goal of this feature selection method is to select the subset of s(< d) features that best represents E. This feature selection problem can be reduced to the selection of the best s(< d) columns of X which minimize (7). However, this is inherently a combinatorial problem and is computationally unfeasible to solve. Nevertheless, several greedy and approximative methods have been proposed <ns0:ref type='bibr' target='#b12'>(Gui et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b23'>Narayanan and Bertrand, 2020)</ns0:ref>. In the current work, the use of the utility metric for subset selection is proposed to select these best s columns.</ns0:p><ns0:p>The utility of a feature l of X, in an LS problem like (7), is defined as the increase in the LSE J when the column corresponding to the l-th feature in X is removed from the problem and the new optimal weight matrix, p−l , is re-computed similar to (8). 
Consider the new LSE after the removal of feature l and the re-computation of the weight matrix p−l to be J −l , defined as:</ns0:p><ns0:formula xml:id='formula_15'>J −l = 1 N ||X −l p−l − E|| 2 F (9)</ns0:formula><ns0:p>where X −l denotes the matrix X with the column corresponding to l-th feature removed. Then according to the definition, the utility of feature l, U l is:</ns0:p><ns0:formula xml:id='formula_16'>U l = J −l − J<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>A straightforward computation of U l would be computationally heavy due to the fact that the computation of pl requires a matrix inversion of X −l X T −l , which has to be repeated for each feature l.</ns0:p><ns0:p>However, it can be shown that the utility of the l-th feature of X in (10) can be computed efficiently without the explicit recomputation of p−l by using the following expression <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_17'>U l = 1 q l || pl || 2 , (<ns0:label>11</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>)</ns0:formula><ns0:p>where q l is the l-th diagonal element of R −1 XX and p l is the l-th row in p, corresponding to the l-th feature. The mathematical proof of (11) can be found in <ns0:ref type='bibr' target='#b4'>Bertrand (2018)</ns0:ref>. Note that R −1 XX is already known from the computation of p such that no additional matrix inversion is required.</ns0:p><ns0:p>However, since the data matrix X can contain redundant features or features that are linear combinations of each other in its columns, it cannot be guaranteed that the matrix X in ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) is full-rank. In this case, the removal of a redundant column from X will not lead to an increase in the LS cost of (7).</ns0:p></ns0:div>
<ns0:div><ns0:head>7/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:1:1:NEW 29 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Moreover, R −1 XX , used to find the solution of ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) in (8), will not exist in this case since the matrix X is rank deficient. A similar problem appears if N < d, which can happen in case of very high-dimensional data.</ns0:p><ns0:p>To overcome this problem, the definition of utility generalized to a minimum l 2 -norm selection <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref> is used in this work. This approach eliminates the feature yielding the smallest increase in the l 2 -norm of the weight matrix when the column corresponding to that feature were to be removed and the weight matrix would be re-optimized. Moreover, minimizing the l 2 -norm of the weights further reduces the risk of overfitting.</ns0:p><ns0:p>This generalization is achieved by first adding an l 2 -norm penalty β to the cost function that is minimized in (7):</ns0:p><ns0:formula xml:id='formula_19'>J = min p 1 2 ||Xp − E|| 2 F + β ||p|| 2 2 (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:p>where 0 < β µ with µ equal to the smallest non-zero eigenvalue of R XX in order to ensure that the bias added due to the penalty term in ( <ns0:ref type='formula' target='#formula_19'>12</ns0:ref>) is negligible. The minimizer of ( <ns0:ref type='formula' target='#formula_19'>12</ns0:ref>) is:</ns0:p><ns0:formula xml:id='formula_21'>p = R −1 XXβ R XE = (R XX + β I) −1 R XE<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>It is noted that (13) reduces to R † XX R XE when β → 0, where R † XX denotes the Moore-Penrose pseudoinverse. This solution corresponds to the minimum norm solution of (7) when X contains linearly dependent columns or rows. The utility U l of the l-th column in X based on ( <ns0:ref type='formula' target='#formula_19'>12</ns0:ref>) is <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_22'>U l = ||X −l p−l − E|| 2 2 − ||X p − E|| 2 2 + β || p−l || 2 2 − || p|| 2 2 = (J −l − J) + β || p−l || 2 2 − || p|| 2 2 (14)</ns0:formula><ns0:p>Note that if column l in X is linearly independent from the other columns, (14) closely approximates to the original utility definition in (10) as the first term dominates over the second. However, if column l is linearly dependent, the first term vanishes and the second term will dominate. In this case, the utility quantifies the increase in l 2 -norm after removing the l-th feature.</ns0:p><ns0:p>To select the best s features of X, a greedy selection based on the iterative elimination of the features with the least utility is carried out. After the elimination of each feature, a re-estimation of the weights p is carried out and the process of elimination is repeated, until s features remain.</ns0:p><ns0:p>Note that the value of β depends on the smallest non-zero eigenvalue of R XX . Since R XX has to be recomputed every time when a feature is removed, also its eigenvalues change along the way. In practice, the value of β is selected only once and fixed for the remainder of the algorithm, as smaller than the smallest non-zero eigenvalue of R XX before any of the features are eliminated <ns0:ref type='bibr' target='#b23'>(Narayanan and Bertrand, 2020)</ns0:ref>. 
This value of β will be smaller than all the non-zero eigenvalues of any principal submatrix of $R_{XX}$, by Cauchy's interlace theorem <ns0:ref type='bibr' target='#b16'>(Hwang, 2004)</ns0:ref>.</ns0:p><ns0:p>The utility-based subset selection is summarized in Algorithm 2, and Algorithm 3 outlines the complete U2FS algorithm proposed in this paper.</ns0:p><ns0:p>Algorithm 2 (Utility metric algorithm for subset selection). Input: data X, eigenvectors E, number of features s to select. Output: s selected features. 1: Calculate $R_{XX}$ and $R_{XE}$ as described in Equation (8). 2: Calculate β as the smallest non-zero eigenvalue of $R_{XX}$. 3: While the number of remaining features is greater than s, do: 4: compute $R_{XX\beta}^{-1}$ and $\hat{p}$ as described in (13); 5: calculate the utility of the remaining features using (11); 6: remove the feature $f_l$ with the lowest utility; 7: update $R_{XX\beta}$ and $R_{XE}$ by removing the rows and columns related to that feature $f_l$. 8: End while.</ns0:p><ns0:p>Algorithm 3 (Unsupervised feature selector based on the utility metric, U2FS). Input: data X, number of clusters c, number of features s to select. Output: s selected features. 1: Construct the similarity graph W as described above, selecting one of the weightings: binary, RBF kernel using σ₀², or RBF kernel using σ² based on Algorithm 1. 2: Calculate the normalized Laplacian L and the eigenvectors α derived from Equation (2), keeping the c eigenvectors corresponding to the highest eigenvalues, excluding the first one. 3: Apply the backward greedy utility selection of Algorithm 2. 4: Return the s features remaining after the backward greedy utility approach.</ns0:p></ns0:div><ns0:div><ns0:p>As it has been stated before, one of the most remarkable aspects of the U2FS algorithm is the use of a greedy technique to solve the subset selection problem. The use of this type of method reduces the computational cost of the algorithm. This can be confirmed by analyzing the computational complexity of U2FS, where the most demanding steps are the eigendecomposition of the Laplacian matrix (step 2 of Algorithm 3), which has a cost of O(N³) <ns0:ref type='bibr' target='#b31'>(Tsironis et al., 2013)</ns0:ref>, and the subset selection stage in step 3 of Algorithm 3. Contrary to the state-of-the-art, the complexity of U2FS, being a greedy method, depends on the number of features to select. The most computationally expensive step of the subset selection in U2FS is the calculation of the matrix $R_{XX}^{-1}$, which has a computational cost of O(d³). In addition, this matrix needs to be updated d − s times. This update can be done efficiently using a recursive updating equation from <ns0:ref type='bibr' target='#b4'>Bertrand (2018)</ns0:ref> with a cost of O(t²), with t the number of features remaining in the dataset, i.e., t = d − s. Since t < d, the cost for performing the d − s iterations will be O((d − s)d²), which depends on the number of features s to be selected. Note that the cost of computing the least-squares solution $\hat{p}_{-l}$ for each l in (<ns0:ref type='formula'>14</ns0:ref>) is eliminated by using the efficient equation (<ns0:ref type='formula' target='#formula_17'>11</ns0:ref>), bringing down the cost of computing the utility from O(t⁴) to O(t) in each iteration, which vanishes with respect to the O(d³) term (remember that t < d). Therefore, the total asymptotic complexity of U2FS is O(N³ + d³).</ns0:p></ns0:div>
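The backward greedy elimination of Algorithm 2 can be sketched as follows, under the same assumptions as the snippet above. For readability, the regularized inverse is recomputed at every iteration instead of using the recursive O(t²) update from Bertrand (2018), so this naive version is more expensive than the implementation described in the text; variable names are illustrative and not taken from the released code.

```python
import numpy as np

def utility_subset_selection(X, E, s):
    """Greedily remove features with the lowest utility until s remain (Algorithm 2 sketch)."""
    N, d = X.shape
    remaining = list(range(d))
    Rxx_full = (X.T @ X) / N
    ev = np.linalg.eigvalsh(Rxx_full)
    beta = ev[ev > 1e-12 * ev.max()].min()          # smallest non-zero eigenvalue of R_XX
    while len(remaining) > s:
        Xr = X[:, remaining]
        Rxx = (Xr.T @ Xr) / N + beta * np.eye(len(remaining))   # Eq. (13) regularization
        Rxe = (Xr.T @ E) / N
        Rinv = np.linalg.inv(Rxx)
        P = Rinv @ Rxe
        utility = np.sum(P**2, axis=1) / np.diag(Rinv)          # Eq. (11)
        remaining.pop(int(np.argmin(utility)))                  # drop the least useful feature
    return remaining
```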
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The aim of the following experiments is to evaluate the U2FS algorithm based on multiple criteria. Focusing on the newly proposed estimation of the embedding, the RBF kernel approach using the estimated σ² is compared to the σ₀² parameter proposed in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>, and to the binary KNN graph commonly used in <ns0:ref type='bibr' target='#b12'>Gui et al. (2016)</ns0:ref>. On the other hand, the utility metric for subset selection is compared to other sparsity-inducing techniques, based on $l_p$-norm regularizations; in these experiments, the $l_1$-norm is used. The different combinations considered in this work are summarized in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The last method, RBF σ² + Utility, is the one referred to as U2FS, combining the novelties suggested in this work.</ns0:p><ns0:p>These novelties are evaluated in two different scenarios, namely a simulation study and the application of the methods on benchmark datasets. In particular for the latter, the methods are not only evaluated in terms of accuracy, but also regarding computational cost. Additionally, U2FS is compared with 3 representative state-of-the-art algorithms from the general family of unsupervised sparsity-inducing feature selection algorithms:</ns0:p><ns0:p>• MCFS <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref> 1 . This algorithm served as inspiration to create U2FS, and therefore, it is added to the set of comparison algorithms as a baseline reference. MCFS performs spectral embedding and $l_1$-norm regularization sequentially.</ns0:p><ns0:p>• NDFS <ns0:ref type='bibr' target='#b20'>(Li et al., 2012)</ns0:ref> 2 , which performs nonnegative spectral analysis with $l_{2,1}$-norm regularization. This algorithm is added to the experiments since it is an improvement of MCFS, while being the first algorithm to simultaneously adapt both stages of manifold learning and subset selection. Therefore, NDFS represents the transition to these adaptive optimization-based feature selection algorithms.</ns0:p><ns0:p>• RJGSC <ns0:ref type='bibr' target='#b43'>(Zhu et al., 2016)</ns0:ref> optimally derives the embedding of the data by adapting the results with $l_{2,1}$-norm regularization. This algorithm is taken as a reference for the large class of adaptive sparsity-inducing feature selection algorithms, which are much more complex than U2FS, since they apply optimization to recursively adapt the embedding and feature selection stages of the methods. RJGSC was already compared to several feature selectors in <ns0:ref type='bibr' target='#b43'>Zhu et al. (2016)</ns0:ref>, and therefore, it is taken here as an upper-bound threshold in performance.</ns0:p><ns0:p>The toy examples used for the simulation study are shown in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>, and the final description of the datasets can be seen in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>.</ns0:p><ns0:p>All these problems are balanced, except for the last dataset, Cres-Moon, for which the data is divided 25% to 75% between the two clusters. Five extra features, in addition to the original f₁ and f₂, were added to each of the datasets in order to include redundant or irrelevant information:</ns0:p><ns0:p>• f′₁ and f′₂: random values drawn from two Pearson distributions characterized by the same higher-order statistics as f₁ and f₂, respectively.</ns0:p><ns0:p>• f′₃ and f′₄: the original f₁ and f₂ contaminated with Gaussian noise ν·N(0, 1), with ν = 1.5.</ns0:p><ns0:p>• f′₅: a constant feature of value 0.</ns0:p><ns0:p>The first step in the preprocessing of the features was to standardize the data using the z-score, to reduce the impact of differences in scaling and noise.</ns0:p>
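A sketch of how the five additional features described above could be generated for a two-dimensional toy dataset is given below. scipy's pearson3 only matches mean, variance and skewness, so it is a stand-in for the Pearson-system sampling used in the paper, and the z-score step leaves the constant column untouched to avoid a division by zero.

```python
import numpy as np
from scipy import stats

def contaminate_and_standardize(f1, f2, nu=1.5, seed=0):
    rng = np.random.default_rng(seed)
    def pearson_surrogate(f):
        # Moment-matched surrogate (mean, std, skewness only) for the Pearson-system draw.
        return stats.pearson3.rvs(stats.skew(f), loc=f.mean(), scale=f.std(),
                                  size=f.size, random_state=seed)
    fp1, fp2 = pearson_surrogate(f1), pearson_surrogate(f2)   # f'1, f'2
    fp3 = f1 + nu * rng.standard_normal(f1.size)              # f'3: noisy copy of f1
    fp4 = f2 + nu * rng.standard_normal(f2.size)              # f'4: noisy copy of f2
    fp5 = np.zeros_like(f1)                                   # f'5: constant feature of value 0
    X = np.column_stack([f1, f2, fp1, fp2, fp3, fp4, fp5])
    mu, sd = X.mean(axis=0), X.std(axis=0)
    sd[sd == 0] = 1.0                                         # keep the constant column finite
    return (X - mu) / sd                                      # z-score standardization
```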
<ns0:p>In order to confirm the robustness of the feature selection techniques, the methods were applied using 10-fold cross-validation on the standardized data. For each fold, a training set was selected using m-medoids, setting m to 2000 and using the centers of the clusters found as training samples. By doing so, the generalization ability of the methods can be guaranteed (<ns0:ref type='bibr' target='#b32'>Varon et al., 2015</ns0:ref>). On each of the 10 training sets, the features were selected by applying the 5 methods listed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. For each of the methods, the number of clusters c was introduced as the number of classes presented in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. Since these experiments aim to evaluate the correct selection of the features, and the original features f₁ and f₂ are known, the number of features s to be selected was set to 2.</ns0:p><ns0:p>Regarding the parameter settings within the embedding methods, the binary weighting was obtained by setting k in the kNN approach to 5. For the RBF kernel embedding, σ₀² was set to the mean of the standard deviation along each dimension, as done in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>. When using σ², its value was obtained by applying the method described in Algorithm 1.</ns0:p><ns0:p>In terms of subset selection approaches, the method based on the $l_1$-norm automatically sets the value of the regularization parameter required for the LARS implementation, as described in <ns0:ref type='bibr' target='#b10'>(Deng Cai, Chiyuan Zhang, 2020)</ns0:ref>. For the utility metric, β was automatically set to the smallest non-zero eigenvalue of the matrix $R_{XX}$, as described in Algorithm 2.</ns0:p><ns0:p>The performance of the algorithm is evaluated by comparing the original set of features f₁ and f₂ to those selected by the algorithm. In these experiments, the evaluation of the selection results is binary: either the selected feature set is correct or not, regardless of which of the additional features f′ᵢ, for i = 1, 2, ..., 5, are selected.</ns0:p><ns0:p>In Table <ns0:ref type='table'>3</ns0:ref>, the most common results obtained over the 10 folds are shown. The utility-based approaches always obtained the same results for all 10 folds of the experiments. On the contrary, the $l_1$-norm methods provided different results for different folds of the experiment. For these cases, Table <ns0:ref type='table'>3</ns0:ref> shows the most common feature pair for each experiment, occurring at least 3 times.</ns0:p></ns0:div>
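The binary evaluation criterion described above can be expressed as a short per-fold check. `select_features` below is a placeholder (hypothetical name) for any of the five methods of Table 1, returning the indices of the s selected columns, with the ground-truth features assumed to sit in columns 0 and 1.

```python
import numpy as np

def pair_is_correct(selected_indices):
    # Correct only if exactly the original pair {f1, f2} is selected.
    return set(selected_indices) == {0, 1}

def evaluate_folds(folds, select_features, s=2):
    hits = []
    for X_train, E_train in folds:                 # one (training data, embedding) per fold
        selected = select_features(X_train, E_train, s)
        hits.append(pair_is_correct(selected))
    return np.mean(hits)                           # fraction of the 10 folds that are correct
```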
<ns0:div><ns0:head>Table 3. Feature selection results for the toy examples</ns0:head></ns0:div>
<ns0:div><ns0:head>Method</ns0:head><ns0:table><ns0:row><ns0:cell>Embedding</ns0:cell><ns0:cell>Utility metric: KNN Bin</ns0:cell><ns0:cell>Utility metric: RBF σ₀²</ns0:cell><ns0:cell>Utility metric: RBF σ²</ns0:cell><ns0:cell>l₁-norm: KNN Bin</ns0:cell><ns0:cell>l₁-norm: RBF σ₀²</ns0:cell></ns0:row><ns0:row><ns0:cell>Clouds</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f′₁, f′₄</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f′₁, f′₂</ns0:cell><ns0:cell>f′₁, f′₂</ns0:cell></ns0:row><ns0:row><ns0:cell>Moons</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f′₃, f′₄</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f′₁, f′₃</ns0:cell><ns0:cell>f′₁, f′₃</ns0:cell></ns0:row><ns0:row><ns0:cell>Spirals</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₂, f′₂</ns0:cell><ns0:cell>f₂, f′₂</ns0:cell></ns0:row><ns0:row><ns0:cell>Corners</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f′₁, f′₂</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₂, f′₂</ns0:cell><ns0:cell>f₂, f′₂</ns0:cell></ns0:row><ns0:row><ns0:cell>Half-Kernel</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₂, f′₃</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₁, f′₃</ns0:cell><ns0:cell>f₁, f′₃</ns0:cell></ns0:row><ns0:row><ns0:cell>Cres-Moon</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₁, f′₄</ns0:cell><ns0:cell>f₁, f₂</ns0:cell><ns0:cell>f₂, f′₁</ns0:cell><ns0:cell>f₂, f′₂</ns0:cell></ns0:row></ns0:table><ns0:p>As shown in Table <ns0:ref type='table'>3</ns0:ref>, the methods that always obtain the adequate set of features are those based on utility, both with the binary weighting and with the RBF kernel and the suggested σ². Since these results were obtained for all 10 folds, they confirm both the robustness and the consistency of the U2FS algorithm.</ns0:p></ns0:div>
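For the l₁-norm based methods, which may return a different pair in each fold, the "most common feature pair" entries of Table 3 can be tallied with a small helper such as the one below (an illustrative snippet, not part of the released code).

```python
from collections import Counter

def most_common_pair(selected_per_fold, min_count=3):
    """Return the feature pair selected most often across folds, if it occurs >= min_count times."""
    counts = Counter(tuple(sorted(sel)) for sel in selected_per_fold)
    pair, count = counts.most_common(1)[0]
    return pair if count >= min_count else None
```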
<ns0:div><ns0:head>Benchmark datasets</ns0:head><ns0:p>Additionally, the proposed methods were evaluated using 6 well-known benchmark databases. The databases considered represent image (USPS, ORL, COIL20), audio (ISOLET) and text data (PCMAC, BASEHOCK) 4 , proposing examples with more samples than features, and vice versa. The description of these databases is detailed in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. All these datasets are balanced, except USPS.</ns0:p><ns0:p>In these datasets, the relevant features are unknown. Therefore, the common practice in the literature to evaluate feature selectors consists of applying the algorithms, taking from 10 to 80% of the original 4 All datasets downloaded from http://featureselection.asu.edu/datasets.php</ns0:p></ns0:div>
<ns0:div><ns0:p>set of features, and evaluating the accuracy of a classifier when trained and evaluated with the selected feature set <ns0:ref type='bibr' target='#b43'>(Zhu et al., 2016)</ns0:ref>. The classifier used for this aim in other papers is k-Nearest Neighbors (KNN), setting the number of neighbors to 5.</ns0:p><ns0:p>These accuracy results are computed using 10-fold cross-validation to confirm the generalization capabilities of the algorithm. By setting m to 90% of the number of samples available in each benchmark dataset, m-medoids is used to select the m centroids of the clusters and use them as the training set. Feature selection and the training of the KNN classifier are performed on these 9 folds of the standardized data, and the accuracy of the KNN is evaluated on the remaining 10% for testing. Exclusively for USPS, given the size of the dataset, 2000 samples were used for training and the remaining data was used for testing. These 2000 samples were also selected using m-medoids. Since PCMAC and BASEHOCK consist of binary data, these datasets were not standardized.</ns0:p><ns0:p>The parameters required for the binary and RBF embeddings, as well as β for the utility algorithm, are automatically set as detailed above.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows the median accuracy obtained for each of the 5 methods. The shadows along the lines correspond to the 25th and 75th percentiles over the 10 folds. As a reference, the accuracy of the classifier without using feature selection is shown in black for each of the datasets. Additionally, Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> shows the computation time for both the utility metric and the $l_1$-norm applied on a binary weighting embedding. In this manner, the subset selection techniques can be evaluated regardless of the code efficiency of the embedding stage. Similarly to Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>, the computation time plots show in bold the median running time for each of the subset selection techniques, and the 25th and 75th percentiles around it, obtained from the 10-fold cross-validation.</ns0:p><ns0:p>The difference in the trends of the $l_1$-norm and utility in terms of computation time is due to their formulation. Feature selection based on $l_1$-norm regularization, solved using the LARS algorithm in this case, requires the same computation time regardless of the number of features to be selected. All features are evaluated together and, later on, an MCFS score obtained from the regression problem is assigned to them <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>. The features with the highest scores are the ones selected. On the other hand, since the utility metric is applied in a backward greedy fashion, the computation times change with the number of features selected: the lower the number of features selected compared to the original set, the higher the computation time. This is aligned with the computational complexity of the algorithm, described above. In spite of this, it can be seen that even the highest computation time for utility is lower than the time taken using $l_1$-norm regularization.</ns0:p>
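A sketch of this benchmark protocol follows. The m-medoids selection of the training set is replaced here by a plain stratified split for brevity, and `select_features` is a placeholder (hypothetical name) for the feature selector, re-run for each percentage of features as in the experiments.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def accuracy_curve(X, y, select_features, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1,
                                              stratify=y, random_state=seed)
    accs = {}
    for frac in np.arange(0.1, 0.9, 0.1):                  # 10% to 80% of the features
        k = max(1, int(round(frac * X.shape[1])))
        idx = select_features(X_tr, k)                     # indices of the k selected features
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, idx], y_tr)
        accs[round(frac, 1)] = clf.score(X_te[:, idx], y_te)
    return accs
```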
<ns0:p>The experiments were performed with 2x Intel Xeon E5-2640 @ 2.5 GHz processors and 64 GB of working memory.</ns0:p><ns0:p>Finally, the experiments on benchmark databases are extended to compare U2FS to other key algorithms in the state-of-the-art. As mentioned at the beginning of this section, the selected algorithms are MCFS, NDFS, and RJGSC, which represent, respectively, the precursor of U2FS, an improved version of MCFS, and an example from the class of adaptive algorithms which recursively optimize the proposed objective function. NDFS and RJGSC require the tuning of their regularization parameters, for which the indications in their corresponding articles were followed. For NDFS, the value of γ was set to 10⁸, and α and β were selected from the values {10⁻⁶, 10⁻⁴, ..., 10⁶} by applying grid search. The matrix F was initialized with the results of spectral clustering using all the features. For RJGSC, the results described in <ns0:ref type='bibr' target='#b43'>Zhu et al. (2016)</ns0:ref> for the BASEHOCK and PCMAC datasets are taken as a reference. In MCFS, the embedding is done using KNN and binary weighting, and the $l_1$-norm is used for subset selection. U2FS, on the other hand, results from the combination of the RBF kernel with σ² and the utility metric. Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> summarizes the results by showing the KNN accuracy (ACC) for 10% of the features used, and the maximum ACC achieved among the percentages of features considered, for the BASEHOCK and PCMAC datasets.</ns0:p></ns0:div>
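The grid search used above for the NDFS regularization parameters can be written in a few lines; `run_ndfs` and `evaluate` are placeholders (hypothetical names). The snippet is included only to contrast this tuning effort with U2FS, whose single parameter β is fixed automatically.

```python
import itertools

GRID = [10.0**e for e in range(-6, 7, 2)]          # {1e-6, 1e-4, ..., 1e6}

def tune_ndfs(X, c, s, run_ndfs, evaluate):
    best_params, best_score = None, -float('inf')
    for alpha, beta in itertools.product(GRID, GRID):
        features = run_ndfs(X, n_clusters=c, n_selected=s, alpha=alpha, beta=beta)
        score = evaluate(features)                 # e.g. downstream KNN accuracy
        if score > best_score:
            best_params, best_score = (alpha, beta), score
    return best_params, best_score
```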
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The results obtained in the experiments suggest that the proposed U2FS algorithm obtains comparable results to the state-of-the-art in all the applications suggested, taking less computational time. Nevertheless, the performance of the utility metric for feature selection varies for the different experiments presented and requires a detailed analysis.</ns0:p><ns0:p>From Table <ns0:ref type='table'>3</ns0:ref>, in Section , it can be concluded that the utility metric is able to select the correct features in an artificially contaminated dataset. Both the binary embedding and the RBF kernel with σ 2 select the original set of features for the 10 folds of the experiment. The stability in the results also applies for the RBF embedding with σ 2 0 , which always selected the same feature pair for all 10 folds even though they are only correct for the spirals problem.</ns0:p><ns0:p>Therefore, considering the stability of the results, it can be concluded that the proposed approach is more robust in the selection of results than that based on the l 1 − norm.</ns0:p><ns0:p>On the other hand, when considering the suitability of the features selected, two observations can be made. First of all, it can be seen that the lack of consistency in the l 1 − norm approaches discards the selection of the correct set of features. Moreover, the wrong results obtained with both l 1 − norm and utility methods for the RBF embedding using σ 2 0 reveal the drawback of applying this approximation of σ 2 0 in presence of redundant or irrelevant features. Since this value is calculated as the mean of the standard deviation of all the dimensions in the data, this measure can be strongly affected by irrelevant data, that could be very noisy and enlarge this sigma, leading to the allocation of all the samples to a mega-cluster.</ns0:p><ns0:p>While the use of the proposed approximation for σ 2 achieves better results than σ 2 0 , these are comparable to the ones obtained with the KNN binary embedding when using the utility metric. The use of KNN to build graphs is a well-known practice, very robust for dense clusters, as it is the case in these examples. The definition of a specific field where each of the embeddings would be superior is beyond the scope of this paper. However, the excellence of both methods when combined with the proposed subset selection method only confirms the robustness of the utility metric, irrespective of the embedding considered.</ns0:p><ns0:p>For standardization purposes, the performance of the method was evaluated in benchmark databases.</ns0:p><ns0:p>As it can be observed, in terms of the accuracy obtained for each experiment, U2FS achieves comparable results to the l 1 − norm methods for most of the datasets considered, despite its condition of greedy method.</ns0:p><ns0:p>In spite of this, some differences in performance can be observed in the different datasets. The different ranking of the methods, as well as the accuracy obtained for each of the databases can be explained taking into account the type of data under study and the ratio between samples and dimensions.</ns0:p><ns0:p>With regard to the type of data represented by each test, it can be observed that for the ISOLET dataset, containing sound information, two groups of results are distinguishable. 
The group of the utility metric results outperforms those derived from the $l_1$-norm, which only reach comparable results when 60% of the features are selected. These two groups of results are caused by the subset selection method applied, and not by the embedding, among which the differences are not remarkable.</ns0:p><ns0:p>In a similar way, for the image datasets USPS, ORL and COIL20, the results derived from utility are slightly better than those coming from the $l_1$-norm. In these datasets, similarly to the performance observed in ISOLET, accuracy increases with the number of features selected.</ns0:p><ns0:p>Regarding the differences between the proposed embeddings, it can be observed that the results obtained are comparable for all of them. Nonetheless, Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows that there is a slight improvement in the aforementioned datasets for the RBF kernel with σ², but the results are still comparable to those obtained with other embeddings. Moreover, this similarity between the binary and RBF results holds for the $l_1$-norm methods, for which the accuracy results almost overlap in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. This can be explained by the relation between the features considered. Since for these datasets the samples correspond to pixels, and the features to the color codes, a simple neighboring method such as the binary weighting is able to code the connectivity of pixels of similar colors.</ns0:p><ns0:p>The text datasets, PCMAC and BASEHOCK, are the ones that show bigger differences between the results obtained with utility and those obtained with the $l_1$-norm. This can be explained by the amount of zeros present in the data, with which the utility metric is able to cope slightly better. The sparsity of the data leads to more error in the $l_1$-norm results, since more features end up having the same MCFS score, and among those, the order for selection comes at random. The results obtained with the utility metric are more stable, in particular for the BASEHOCK dataset. For this dataset, U2FS even outperforms the results without feature selection if at least 40% of the features are kept.</ns0:p><ns0:p>In all the datasets proposed, the results obtained with the $l_1$-norm show greater variability, i.e., larger percentiles. This is aligned with the results obtained in the simulations. The results for the $l_1$-norm are not necessarily reproducible in different runs, since the algorithm is too sensitive to the training set selected. The variability of the utility methods is greater for the approaches based on the RBF kernel. This is due to the selection of the σ² parameter, which also depends on the training set. The tuning of this parameter is still very sensitive to high-dimensional and large-scale data, posing a continuous challenge for the machine learning community <ns0:ref type='bibr' target='#b39'>(Yin and Yin, 2016;</ns0:ref><ns0:ref type='bibr' target='#b30'>Tharwat et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Despite being a greedy method, the utility metric proves to be applicable to feature selection approaches and to strongly outperform the $l_1$-norm in terms of computational time, without a significant reduction in accuracy. U2FS proves to be effective both in cases with more samples than features and vice versa.
The reduction in computation time is clear for all the benchmark databases described, and is particularly attractive for high-dimensional datasets. Altogether, our feature selection approach U2FS, based on the utility metric and combined with the binary weighting or the RBF kernel with σ², is recommended due to its fast performance and its interpretability.</ns0:p><ns0:p>Additionally, the performance of U2FS is comparable to the state-of-the-art, as shown in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. In this table, the performance of U2FS (RBF kernel and σ², with the utility metric) is compared to that of MCFS, NDFS and RJGSC. For MCFS, it can be seen that, as expected, U2FS appears as an improvement of this algorithm, achieving better results for both datasets. For NDFS, the results are slightly worse than for U2FS, most probably due to problems in the tuning of regularization parameters. Given the consistently good results of RJGSC for different datasets when compared against the state-of-the-art, and its condition of simultaneously adapting the spectral embedding and subset selection stages, this algorithm is taken as an example of the most complex SSFS algorithms (SAMM-FS, SOGFS or DSRMR). These algorithms perform manifold learning and feature selection simultaneously, iteratively adapting both steps to achieve optimal results.</ns0:p><ns0:p>It is clear that, in terms of accuracy, both for 10% of the features and for the maximal ACC achieved, U2FS obtains similar results to RJGSC, while at the same time having a much smaller computational complexity. Furthermore, while RJGSC requires the manual tuning of extra parameters, similarly to other algorithms in the state-of-the-art, U2FS tunes its parameters automatically. Hence, the application of the method is straightforward for the users. The stages of higher complexity in U2FS, previously defined as O(N³ + d³), are shared by most of the algorithms in the state-of-the-art. However, on top of these eigendecompositions and matrix inversions, the algorithms in the literature require a number of iterations in the optimization process that U2FS avoids. Additionally, U2FS is the only algorithm for which the computation time scales linearly with the amount of features selected.</ns0:p><ns0:p>The current state-of-the-art of unsupervised spectral feature selectors applies the stages of manifold learning and subset selection simultaneously, which can lead to optimal results. In a field that gets more and more complex and moves away from applicability, U2FS is presented as a quick solution for a sequential implementation of both stages of SSFS algorithms, yet achieving comparable results to the state-of-the-art.</ns0:p><ns0:p>Being a greedy method, the utility metric cannot be applied simultaneously to the manifold learning and subset selection stages. However, other sequential algorithms from the state-of-the-art could consider the use of utility for subset selection, instead of the current sparsity-inducing techniques. One of the most direct applications could be the substitution of group-LASSO by group-utility, in order to perform selections of groups of features as proposed by <ns0:ref type='bibr' target='#b4'>Bertrand (2018)</ns0:ref>.
This can be of interest in cases where the relations between features are known, such as in channel selection <ns0:ref type='bibr' target='#b23'>(Narayanan and Bertrand, 2020)</ns0:ref> or in multi-modal applications <ns0:ref type='bibr' target='#b41'>(Zhao et al., 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This work presents a new method for unsupervised feature selection based on manifold learning and sparse regression. The main contribution of this paper is the formulation of the utility metric in the field of spectral feature selection, substituting other sparse regression methods that require more computational resources. This method, being a backward greedy approach, has been proven to obtain comparable results to state-of-the-art methods with analogous embedding approaches, yet at a considerably reduced computational load. The method shows consistently good results in different applications, from images to text and sound data, and it is broadly applicable to problems of any size: with more features than samples or vice versa.</ns0:p><ns0:p>Furthermore, aiming to show the applicability of U2FS to data presenting non-linearities, the proposed approach has been evaluated on simulated data, considering both a binary and an RBF kernel embedding. Given the sensitivity of the RBF kernel to high-dimensional spaces, a new approximation of the RBF kernel parameter was proposed, which does not require further tuning around the value obtained. The proposed approximation outperforms the rule-of-thumb widely used in the literature in most of the scenarios presented. Nevertheless, in terms of feature selection, the utility metric is robust to the choice of embedding.</ns0:p><ns0:p>U2FS is proposed as a non-parametric, efficient algorithm, which does not require any manual tuning or special knowledge from the user. Its simplicity, robustness and accuracy open a new path for structured sparsity-inducing feature selection methods, which can benefit from this quick and efficient technique.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 2 (steps 7-8) and Algorithm 3 (U2FS); the full listings are given in the text.</ns0:figDesc></ns0:figure>
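To tie the stages together, the following end-to-end sketch follows the structure of Algorithm 3 under the assumptions used in the earlier snippets: an RBF similarity graph (with a simple median-distance placeholder instead of the σ² estimate of Algorithm 1), the standard normalized spectral embedding, and the greedy utility selection of Algorithm 2. It is an illustration only, not the released MATLAB implementation (u2fs.m).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def u2fs_sketch(X, c, s):
    # Step 1: RBF similarity graph; sigma^2 is a median-distance placeholder here.
    D2 = squareform(pdist(X, 'sqeuclidean'))
    sigma2 = np.median(D2[D2 > 0])
    W = np.exp(-D2 / sigma2)
    # Step 2: normalized affinity and its leading eigenvectors (trivial first one dropped).
    deg = W.sum(axis=1)
    A = W / np.sqrt(np.outer(deg, deg))
    eigvals, eigvecs = np.linalg.eigh(A)
    E = eigvecs[:, np.argsort(eigvals)[::-1]][:, 1:c + 1]
    # Steps 3-4: backward greedy utility selection (same loop as the Algorithm 2 sketch).
    N, d = X.shape
    remaining = list(range(d))
    ev = np.linalg.eigvalsh((X.T @ X) / N)
    beta = ev[ev > 1e-12 * ev.max()].min()
    while len(remaining) > s:
        Xr = X[:, remaining]
        Rinv = np.linalg.inv((Xr.T @ Xr) / N + beta * np.eye(len(remaining)))
        P = Rinv @ ((Xr.T @ E) / N)
        remaining.pop(int(np.argmin(np.sum(P**2, axis=1) / np.diag(Rinv))))
    return remaining
```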
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Toy examples used for simulations: Clouds (A), Moons (B), Spirals (C), Corners (D), Half-Kernel (E), Crescent Moon (F).</ns0:figDesc><ns0:graphic coords='11,224.49,257.79,247.90,137.97' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Accuracy results for the benchmark databases, for selecting from 10 to 80% of the original number of features. The thick lines represent the median accuracy of the 10-fold cross-validation, and the shadows, the 25 and 75 percentile. USPS (Figure 3A), Isolet (B), ORL (C), COIL20 (D), AR10 (E), PIE10 (F).</ns0:figDesc><ns0:graphic coords='14,141.81,86.73,413.12,235.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Computation time for extracting from 10 to 80% of the original number of features for each of the benchmark databases. USPS (Figure 4A), Isolet (B), ORL (C), COIL20 (D), AR10 (E), PIE10 (F).</ns0:figDesc><ns0:graphic coords='14,141.81,433.50,413.13,238.19' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 2 (utility metric algorithm for subset selection): inputs, outputs and steps 1-4; the full listing is given in the text.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Methods compared in the experiments</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Similarity measure</ns0:cell><ns0:cell>Subset selection</ns0:cell></ns0:row><ns0:row><ns0:cell>KNN Bin + l₁-norm</ns0:cell><ns0:cell>KNN + binary weighting</ns0:cell><ns0:cell>l₁-norm</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF σ₀² + l₁-norm</ns0:cell><ns0:cell>RBF kernel, σ₀²</ns0:cell><ns0:cell>l₁-norm</ns0:cell></ns0:row><ns0:row><ns0:cell>KNN Bin + Utility</ns0:cell><ns0:cell>KNN + binary weighting</ns0:cell><ns0:cell>Utility metric</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF σ₀² + Utility</ns0:cell><ns0:cell>RBF kernel, σ₀²</ns0:cell><ns0:cell>Utility metric</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF σ² + Utility</ns0:cell><ns0:cell>RBF kernel, σ²</ns0:cell><ns0:cell>Utility metric</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Description of the toy example datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'># samples # classes</ns0:cell></ns0:row><ns0:row><ns0:cell>Clouds</ns0:cell><ns0:cell>9000</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Moons</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Spirals</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Corners</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Half-Kernel</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Crescent-Moon 10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>(</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Description of the benchmark databases</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>Data Type Samples Features Classes</ns0:cell></ns0:row><ns0:row><ns0:cell>USPS</ns0:cell><ns0:cell>Images</ns0:cell><ns0:cell>9298</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Isolet</ns0:cell><ns0:cell>Audio</ns0:cell><ns0:cell>1560</ns0:cell><ns0:cell>617</ns0:cell><ns0:cell>26</ns0:cell></ns0:row><ns0:row><ns0:cell>ORL</ns0:cell><ns0:cell>Images</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>40</ns0:cell></ns0:row><ns0:row><ns0:cell>COIL20</ns0:cell><ns0:cell>Images</ns0:cell><ns0:cell>1440</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMAC</ns0:cell><ns0:cell>Text</ns0:cell><ns0:cell>1943</ns0:cell><ns0:cell>3289</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>BASEHOCK Text</ns0:cell><ns0:cell>1993</ns0:cell><ns0:cell>4862</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>KNN accuracy (ACC) at 10% of the features and maximum ACC over the percentages of features considered, for the BASEHOCK and PCMAC datasets.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Comparison of classification accuracy (ACC) with the state-of-the-art for PCMAC and BASEHOCK datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='4'>Method ACC at 10% features % features at Max ACC Max ACC</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMAC</ns0:cell><ns0:cell>U2FS</ns0:cell><ns0:cell>0.785</ns0:cell><ns0:cell>60%</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MCFS</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>0.697</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NDFS</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>40%</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RJGSC</ns0:cell><ns0:cell>0.805</ns0:cell><ns0:cell>60%</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>BASEHOCK U2FS</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>50%</ns0:cell><ns0:cell>0.925</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MCFS</ns0:cell><ns0:cell>0.815</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NDFS</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>0.794</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RJGSC</ns0:cell><ns0:cell>0.902</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>0.917</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Amalia Villa
Dynamical Systems, Signal Processing
and Data Analytics (STADIUS),
Leuven (Arenberg)
Kasteelpark Arenberg 10 - box 2446
3001 Leuven
Belgium
21st January 2021
Dear Dr. Fernández, dear reviewers,
We would like to thank you for providing us with interesting feedback on our manuscript 'Utility metric for unsupervised feature selection' (#54747). Please find attached a revised version of our manuscript. The comments provided by the reviewers helped us to strengthen the paper considerably, you can find each of our answers in the following pages, as well as marked in blue in the updated manuscript. We also applied the comments related to enrich the github repository.
The coming pages of this document address and answer all reviewer's comments and suggestions.
Amalia Villa
On behalf of all authors.
Reviewer 1
We would like to thank the reviewer for his/her feedback. Hereby we answer the reviewer’s comment and detail the applied changes.
1. The experiments lack the comparison with state-of-the-art unsupervised feature selection (dimensional reduction) methods, i.e., Graph-based methods, Extended Sammon projection and wavelet kernel extreme learning machine for gait-based legitimate user identification, Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, 1216-1219. Etc.
We agree with the comment: this paper could benefit from further comparison with state-of-the-art feature selection methods. We have added one extra graph-based algorithm in our benchmark study (see below).
We have also mentioned the suggested work using Extended Sammon Projection in the Introduction , together with PCA, highlighting their use as dimensionality reduction methods. However, we have not included the method in the experiments for comparison to our method, since U2FS focuses on the interpretability of feature selection to reduce dimensions (i.e. selecting a subset of the original set of features) instead of transforming the data into a lower dimensional space (as PCA or ESP). The corresponding change can be found In the Introduction:
(Line 42)
There are two types of dimensionality reduction techniques. So-called feature selection techniques directly select a subset of the original features. On the other hand, transformation techniques compute a new (smaller) set of features, each of which are derived from all features of the original set. Some examples of these are Principal Component Analysis (PCA) (Wold et al., 1987), Independent Component Analysis (ICA) (Jiang et al., 2006) or the Extended Sammon Projection (ESP) (Ahmad et al., 2019). While these methods lead to a reduction in the number of dimensions, results are less interpretable, since their direct relationship with the original set of features is lost.
Additionally, as a new method to compare our results, we have selected Unsupervised Feature Selection using Nonnegative Spectral Analysis (NDFS). Reviewer 2 suggested this algorithm, which, similarly to U2FS, belongs to the category of Sparse Learning feature selectors, Moreover, the method was the first to apply manifold learning and feature selection simultaneously. The additional results can be found in Table 5.
• We updated the Experiments and Discussion sections accordingly:
(Line 439)
Finally, the experiments in benchmark databases are extended to compare U2FS to other key algorithms in the state-of-the-art. As it was mentioned at the beginning of this section, the selected algorithms are MCFS, NDFS, and RJGSC, which represent, respectively, the precursor of U2FS, an improved version of MCFS, and an example of the class of adaptive algorithms which recursively optimize the objective function proposed. NDFS and RJGSC require the tuning of their regularization parameters, for which the indications in their corresponding articles were followed. For NDFS, the value of γ was set to 108, and α and β were selected from the values {10-6, 10-4,...,106} applying grid search. The matrix F was initialized with the results of spectral clustering using all the features. For RJGSC, the results described in (Zhu et al., 2016) for the BASEHOCK and PCMAC datasets are taken as a reference. In MCFS, the embedding is done using KNN and binary weighting, and the l1-norm is used for subset selection. U2FS, on the other hand, results from the combination of the RBF kernel with σ2 and the utility metric. Table 5 summarizes the results by showing the KNN accuracy (ACC) for 10% of the features used, and the maximum ACC achieved among the percentages of features considered, for the BASEHOCK and PCMAC datasets.
(Line 525)
Additionally, the performance of U2FS is comparable to the state-of-the-art, as shown in Table 5. In this table, the performance of U2FS (RBF kernel and σ2, with the utility metric) is compared to that of MCFS, NDFS and RJGSC. For MCFS, it can be seen that, as expected, U2FS appears as an improvement of this algorithm, achieving better results for both datasets. For NDFS, the results are slightly worse than for U2FS, most probably due to problems in the tuning of the regularization parameters. Given the consistent good results for different datasets of RJGSC when compared against the state-of-the-art, and its condition of simultaneously adapting the spectral embedding and subset selection stages, this algorithm is taken as example of the most complex SSFS algorithms (SAMM-FS, SOGFS or DSRMR). These algorithms perform manifold learning and feature selection simultaneously, iteratively adapting both steps to achieve optimal results.
It is clear that in terms of accuracy, both for 10% of the features and for the maximal value of achieved, U2FS obtains similar results to RJGSC, while at the same time having a much smaller computational complexity. Furthermore, while RJGSC requires the manual tuning of extra parameters, similarly to other algorithms in the state-of-the-art, U2FS tunes its parameters automatically. Hence, the application of the method is straightforward for the users. Nevertheless, U2FS does not present as a competitor in terms of accuracy for these methods, but in terms of simplicity. The stages of higher complexity in U2FS, previously defined as O(N3 + d3), are shared by most of the algorithms in the state-of-the-art. However, on top of these eigendecompositions and matrix inversions, the algorithms in the literature require a number of iterations in the optimization process that U2FS avoids. Additionally, U2FS is the only algorithm for which the computation time scales linearly with the amount of features selected.
Reviewer 2
We would like to thank the reviewer for his/her valuable feedback. Hereby we answer the reviewer’s comments and detail the applied changes:
1. The evaluation of the proposed U2FS method by comparing to other state-of-the-art methods in section Results (page 12 and Table 5) should be expended to more methods (of the same type and subcategory as U2FS, e.g. NDFS [Li Z, Yang Y, Liu J, Zhou X, Lu H. Unsupervised feature selection using nonnegative spectral analysis. In Twenty-Sixth AAAI Conference on Artificial Intelligence 2012 Jul 14.], DSRMR [Tang C, Liu X, Li M, Wang P, Chen J, Wang L, Li W. Robust unsupervised feature selection via dual self-representation and manifold regularization. Knowledge-Based Systems. 2018 Apr 1;145:109-20.]) beside the RJGSC and MCFS methods to compare U2FS with.
We agree with your comment that the paper would be strengthened by adding additional comparisons to the state-of-the-art. Furthermore, this aspect is also related to your last comment: we need to specify more clearly which algorithms we compare with and why.
We have considered adding both NDFS and DSRMR to Table 5, but finally we have opted to only include the first one. This algorithm has a similar structure as U2FS, and it belongs to the category of sparsity-inducing feature selectors, being the first one that introduced the idea of simultaneously adjusting the feature selection strategy while performing spectral clustering. Therefore, this algorithm fits well as a transition to recent sparsity-inducing feature selection algorithms, which are more sophisticated and much more complex than U2FS.
DSRMR also belongs to this latter type of methods, and we considered its addition to the experiments. However the method’s use of grid-search to tune parameters, on top of the complex optimization strategy both for the feature and sample self-representation process leads to a computationally expensive algorithm. In fact, we tested DSRMR on our benchmark data sets, and the method used almost 12 hours per run for the grid-search of the regularization parameters. Therefore, we concluded that DSRMR is outside the scope of comparison with our algorithm. Although the results might be potentially better, the time taken to obtain them removes the benefit.
By introducing NDFS, we have changed
• Table 5:
• We updated the Experiments and Discussion sections accordingly:
(Line 439)
Finally, the experiments in benchmark databases are extended to compare U2FS to other key algorithms in the state-of-the-art. As it was mentioned at the beginning of this section, the selected algorithms are MCFS, NDFS, and RJGSC, which represent, respectively, the precursor of U2FS, an improved version of MCFS, and an example of the class of adaptive algorithms which recursively optimize the objective function proposed. NDFS and RJGSC require the tuning of their regularization parameters, for which the indications in their corresponding articles were followed. For NDFS, the value of γ was set to 108, and α and β were selected from the values {10-6, 10-4,...,106} applying grid search. The matrix F was initialized with the results of spectral clustering using all the features. For RJGSC, the results described in (Zhu et al., 2016) for the BASEHOCK and PCMAC datasets are taken as a reference. In MCFS, the embedding is done using KNN and binary weighting, and the l1-norm is used for subset selection. U2FS, on the other hand, results from the combination of the RBF kernel with σ2 and the utility metric. Table 5 summarizes the results by showing the KNN accuracy (ACC) for 10% of the features used, and the maximum ACC achieved among the percentages of features considered, for the BASEHOCK and PCMAC datasets.
(Line 525)
Additionally, the performance of U2FS is comparable to the state-of-the-art, as shown in Table 5. In this table, the performance of U2FS (RBF kernel and σ2, with the utility metric) is compared to that of MCFS, NDFS and RJGSC. For MCFS, it can be seen that, as expected, U2FS appears as an improvement of this algorithm, achieving better results for both datasets. For NDFS, the results are slightly worse than for U2FS, most probably due to problems in the tuning of the regularization parameters. Given the consistent good results for different datasets of RJGSC when compared against the state-of-the-art, and its condition of simultaneously adapting the spectral embedding and subset selection stages, this algorithm is taken as example of the most complex SSFS algorithms (SAMM-FS, SOGFS or DSRMR). These algorithms perform manifold learning and feature selection simultaneously, iteratively adapting both steps to achieve optimal results.
It is clear that in terms of accuracy, both for 10% of the features and for the maximal value of achieved, U2FS obtains similar results to RJGSC, while at the same time having a much smaller computational complexity. Furthermore, while RJGSC requires the manual tuning of extra parameters, similarly to other algorithms in the state-of-the-art, U2FS tunes its parameters automatically. Hence, the application of the method is straightforward for the users. Nevertheless, U2FS does not present as a competitor in terms of accuracy for these methods, but in terms of simplicity. The stages of higher complexity in U2FS, previously defined as O(N3 + d3), are shared by most of the algorithms in the state-of-the-art. However, on top of these eigendecompositions and matrix inversions, the algorithms in the literature require a number of iterations in the optimization process that U2FS avoids. Additionally, U2FS is the only algorithm for which the computation time scales linearly with the amount of features selected.
Nevertheless, this suggestion made us realize that our choice of algorithms should be better justified, such that it is clear why we compare our method to the set of feature selectors mentioned in the paper. To this end, we have added some changes in the text.
• To clarify the type of algorithm the following changes were made. In the Introduction:
(Line 52)
Within unsupervised feature selectors, sparse learning based methods have gained attention in the last 20 years (Li et al., 2017). These methods rely on graph theory and manifold learning to learn the underlying structures of the data (Lunga et al., 2013) and they apply sparsity inducing techniques to perform subset selection.
(Line 70)
U2FS is a ready-to-use, publicly available unsupervised sparsity-inducing feature selector designed to be robust for data containing non-linearities.
(Line 85)
In order to prove this, U2FS is compared to three related state-of-the-art algorithms in terms of accuracy of the features selected, and computational complexity of the algorithm.
• To primarily place U2FS in the literature, in the section Related Work:
(Line 143)
This efficient implementation is competitive with state-of-the-art methods in terms of performance, while using a simpler strategy, which is faster to compute and easier to use.
• To explain the selection of state-of-the-art algorithms, in the Results section, line 343:
(Line 345)
Additionally, U2FS is compared with 3 representative state-of-the-art algorithms from the general family of unsupervised sparsity-inducing feature selection algorithms:
• MCFS (Cai et al., 2010). This algorithm served as inspiration to create U2FS, and therefore, it is added to the set of comparison algorithms as baseline reference. MCFS performs spectral embedding and l1-norm regularization sequentially, and served as inspiration to create U2FS.
• NDFS (Li et al., 2012), which performs nonnegative spectral analysis with l2,1-norm regularization. This algorithm is added to the experiments since it is an improvement of MCFS, while being the first algorithm simultaneously adapting both stages of manifold learning and subset selection. Therefore, NDFS represents the transition to these adaptive optimization-based feature selection algorithms.
• RJGSC (Zhu et al., 2016) optimally derives the embedding of the data by adapting the results with l2,1-norm regularization. This algorithm is taken as a reference for the large class of adaptive sparsity-inducing feature selection algorithms, which are much more complex than U2FS, since they apply optimization to recursively adapt the embedding and feature selection stages of the methods. RJGSC was already compared to several feature selectors in Zhu et al., 2016, and therefore, it is taken here as upper-bound threshold in performance.
2. In the section Results in the evaluation analysis, beside the classification accuracy, additional evaluation measure e.g. error rate could be used.
We use the accuracy metric as a way to uniformly report results throughout all benchmark databases, as provided in the literature when results are evaluated using supervised learning (see Zhu, X., Li, X., Zhang, S., Ju, C., & Wu, X. (2016). Robust joint graph sparse coding for unsupervised spectral feature selection. IEEE transactions on neural networks and learning systems, 28(6), 1263-1275).
Additionally, BASEHOCK and PCMAC, which are the datasets used to compare U2FS to the state-of-the-art algorithms, are binary databases with balanced classes, so the error rate would be equivalent to calculate 1 - Accuracy. Therefore, in order to keep the results consistent among all experiments, we provide only Accuracy results. Nevertheless, we clarified the condition of balanced datasets in the description of the experiments:
(Line 404)
All these datasets are balanced, except USPS.
3. The URL of the github method could be more explicit given within the manuscript text itself, because it is the valuable result of the work.
We fully agree with this comment. We have mentioned the code within the text itself.
(Line 71)
The code is available here: https://github.com/avillago/u2fs, where all functions and example codes are published.
4. The README file on the github could be more informative, e.g. basic description of the U2FS method, requirement list if any, example of basic use of the code, etc.
We agree that the README file could be more informative. Following your indications, we have extended the description and added an example of the use of the code. In the README file:
U2FS
Utility metric for Unsupervised Feature Selection - functions and code to generate simulated data.
U2FS is an unsupervised method to perform spectral feature selection. It consists of two stages: manifold learning and subset selection. In the first stage, the underlying structures in the high-dimensional data are extracted, while in the second stage a subset of the features is selected to replicate these structures.
The manifold learning stage can use 3 different embeddings:
• A KNN approach with binary weighting
• An RBF kernel, with sigma approximated as the mean of the distances between data points
• An RBF kernel, with sigma approximated with a robust method designed for high-dimensional data.
The stage of subset selection is based on the utility metric, a backwards greedy method that requires, as only input, the number of features to keep.
The details of the U2FS algorithm are described in 'Utility metric for unsupervised feature selection' by Amalia Villa, Abhijith Mundanad Narayanan, Sabine Van Huffel, Alexander Bertrand and Carolina Varon.
Contact: amalia.villagomez@kuleuven.be
The codes contain:
• u2fs.m : U2FS code of the full algorithm. Requires sigHighDim and utiSelect to run
◦ sigHighDim.m : Code to approximate the RBF kernel parameter, robust to high dimensional data.
◦ utiSelect.m : Utility metric for subset selection
• createDatasetsSimulation.m : code to generate simulation data. Clustering data presenting non-linearities, with extra irrelevant/redundant features. Uses the codes in the folder ./datasetsSimulation which contain:
◦ twospirals.m
◦ halfkernel.m
◦ corners.m
◦ crescentfullmoon.m
◦ clouds.m
◦ twomoons.m
• exampleCode.m : example script to run the code
5. The data i.e. Matlab code for generation for simulation samples for two sets of Moons and Clouds is missing from the github repository.
The clouds and moons codes were embedded in the dataGeneration code itself, and they did not require external functions. Nevertheless, in order to reduce confusion, we have added them as external functions.
6. In the section Results, in the ‘comparison with state-of-the-art methods’ part (page 12), it should be emphasized in the manuscript that analysis included only methods within the same category as investigated novel U2FS method.
We agree that the choice of comparison algorithms leads to confusion and suggests that U2FS could be compared to any algorithm in the literature. In order to specify this more clearly, we think that the changes in line 345 clarify this confusion (See the answer of comment 1).
" | Here is a paper. Please give your review comments after reading it. |
57 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>14</ns0:head><ns0:p>Feature selection techniques are very useful approaches for dimensionality reduction in data analysis. They provide interpretable results by reducing the dimensions of the data to a subset of the original set of features. When the data lack annotations, unsupervised feature selectors are required for their analysis. Several algorithms for this aim exist in the literature, but despite their large applicability, they can be very inaccessible or cumbersome to use, mainly due to the need for tuning non-intuitive parameters and the high computational demands.</ns0:p><ns0:p>In this work, a publicly available ready-to-use unsupervised feature selector is proposed, with comparable results to the state-of-the-art at a much lower computational cost. The suggested approach belongs to the methods known as spectral feature selectors. These methods generally consist of two stages: manifold learning and subset selection. In the first stage, the underlying structures in the high-dimensional data are extracted, while in the second stage a subset of the features is selected to replicate these structures. This paper suggests two contributions to this field, related to each of the stages involved. In the manifold learning stage, the effect of non-linearities in the data is explored, making use of a radial basis function (RBF) kernel, for which an alternative solution for the estimation of the kernel parameter is presented for cases with high-dimensional data. Additionally, the use of a backwards greedy approach based on the least-squares utility metric for the subset selection stage is proposed.</ns0:p><ns0:p>The combination of these new ingredients results in the Utility metric for Unsupervised feature selection (U2FS) algorithm. The proposed U2FS algorithm succeeds in selecting the correct features in a simulation environment. In addition, the performance of the method on benchmark datasets is comparable to the state-of-the-art, while requiring less computational time. Moreover, unlike the state-of-the-art, U2FS does not require any tuning of parameters. 38 this often leads to problems related to the curse of dimensionality (Verleysen and Franc ¸ois, 2005) and 39 thus, to irrelevant and redundant data for machine learning algorithms (Maindonald, 2007). Therefore, it 40 is crucial to perform dimensionality reduction before analyzing the data. 41 There are two types of dimensionality reduction techniques. So-called feature selection techniques 42 directly select a subset of the original features. On the other hand, transformation techniques compute 43 a new (smaller) set of features, each of which are derived from all features of the original set. Some 44 examples of these are Principal Component Analysis (PCA) (Wold et al., 1987), Independent Component 45 Analysis (ICA) (Jiang et al., 2006) or the Extended Sammon Projection (ESP) (Ahmad et al., 2019).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Many applications of data science require the study of highly multi-dimensional data. A high number of dimensions implies a high computational cost as well as a large amount of memory required. Furthermore, this often leads to problems related to the curse of dimensionality (Verleysen and François, 2005) and thus, to irrelevant and redundant data for machine learning algorithms (Maindonald, 2007). Therefore, it is crucial to perform dimensionality reduction before analyzing the data.</ns0:p><ns0:p>There are two types of dimensionality reduction techniques. So-called feature selection techniques directly select a subset of the original features. On the other hand, transformation techniques compute a new (smaller) set of features, each of which is derived from all features of the original set. Some examples of these are Principal Component Analysis (PCA) (Wold et al., 1987), Independent Component Analysis (ICA) (Jiang et al., 2006) or the Extended Sammon Projection (ESP) (Ahmad et al., 2019). While these methods lead to a reduction in the number of dimensions, results are less interpretable, since their direct relationship with the original set of features is lost.</ns0:p><ns0:p>In this work, the focus is on unsupervised feature selectors. Since these methods do not rely on the availability of labels or annotations in the data, the information comes from the learning of the underlying structure of the data. Despite this challenge, the generalization capabilities of these methods are typically better than for supervised or semi-supervised methods <ns0:ref type='bibr' target='#b13'>(Guyon and Elisseeff, 2003)</ns0:ref>. Within unsupervised feature selectors, sparse learning based methods have gained attention in the last 20 years <ns0:ref type='bibr' target='#b19'>(Li et al., 2017)</ns0:ref>.</ns0:p><ns0:p>These methods rely on graph theory and manifold learning to learn the underlying structures of the data <ns0:ref type='bibr' target='#b21'>(Lunga et al., 2013)</ns0:ref>, and they apply sparsity-inducing techniques to perform subset selection. However, to the best of our knowledge, none specifically explores the behavior of these methods with data presenting non-linear relationships between the features (i.e., dimensions). While the graph definition step can make use of kernels to tackle non-linearities, these can be heavily affected by the curse of dimensionality, since they are often based on a distance metric <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>.</ns0:p><ns0:p>After the manifold learning stage, sparse regression is applied to score the relevance of the features in the structures present in the graph. These formulations make use of sparsity-inducing regularization techniques to provide the final subset of features selected, and thus, they are highly computationally expensive. These methods are often referred to as structured sparsity-inducing feature selectors (SSFS), or sparse learning based methods <ns0:ref type='bibr' target='#b12'>(Gui et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b19'>(Li et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Despite the large number of unsupervised SSFS algorithms described in the literature, these methods are cumbersome to use for a novice user. This is not only due to the codes not being publicly available, but also due to the algorithms requiring regularization parameters which are difficult to tune, in particular in unsupervised settings.</ns0:p><ns0:p>In this work, an efficient unsupervised feature selector based on the utility metric (U2FS) is proposed. U2FS is a ready-to-use, publicly available unsupervised sparsity-inducing feature selector designed to be robust for data containing non-linearities. The code is available here: https://github.com/avillago/u2fs, where all functions and example codes are published. The main contributions of this work are:</ns0:p><ns0:p>• The definition of a new method to automatically approximate the radial-basis function (RBF) kernel parameter without the need for a user-defined tuning parameter.
This method is used to tackle the curse of dimensionality when embedding the data taking non-linearities into account.</ns0:p><ns0:p>• The suggestion of a backwards greedy approach for the stage of subset selection, based on the utility metric for the least-squares problem. The utility metric was proposed in the framework of supervised learning <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>, and has been used for channel selection in applications such as electroencephalography (EEG) <ns0:ref type='bibr' target='#b23'>(Narayanan and Bertrand, 2020)</ns0:ref>, sensor networks <ns0:ref type='bibr' target='#b28'>(Szurley et al., 2014)</ns0:ref>, and microphone arrays <ns0:ref type='bibr' target='#b27'>(Szurley et al., 2012)</ns0:ref>. Nevertheless, this is the first work in which this type of approach is proposed for the sparsity-inducing stage of feature selection.</ns0:p><ns0:p>• Propose a non-parametric and efficient unsupervised SSFS algorithm. This work analyzes the proposed method U2FS in terms of its complexity, and of its performance on simulated and benchmark data. The goal is to reduce the computational cost while maintaining a comparable performance with respect to the state-of-the-art. In order to prove this, U2FS is compared to three related state-of-the-art algorithms in terms of accuracy of the features selected, and computational complexity of the algorithm.</ns0:p><ns0:p>The rest of the paper is structured as follows. In Related Work, previous algorithms on SSFS are summarized. In Methods, the proposed U2FS method is described: first the manifold learning stage, together with the algorithm proposed for the selection of the kernel parameter; and further on, the utility metric is discussed and adapted to feature selection. The experiments performed in simulations and benchmark databases, as well as the results obtained are described in the Results and Discussion sections.</ns0:p><ns0:p>Finally, the last section provides some conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Sparsity-inducing feature selection methods have become widely used in unsupervised learning applications for high-dimensional data. This is due to two reasons. On the one hand, the use of manifold learning guarantees the preservation of local structures present in the high-dimensional data. Additionally, its Sparsity-inducing feature selectors learn the structures present in the data via connectivity graphs obtained in the high-dimensional space <ns0:ref type='bibr' target='#b36'>(Yan et al., 2006)</ns0:ref>. The combination of manifold learning and regularization techniques to impose sparsity, allows to select a subset of features from the original dataset that are able to describe these structures in a smaller dimensional space.</ns0:p><ns0:p>These algorithms make use of sparsity-inducing regularization approaches to stress those features that are more relevant for data separation. The sparsity of these approaches is controlled by different statistical norms (l r,p -norms), which contribute to the generalization capability of the methods, adapting them to binary or multi-class problems <ns0:ref type='bibr' target='#b12'>(Gui et al., 2016)</ns0:ref>. One drawback of these sparse regression techniques is that generally, they rely on optimization methods, which are computationally expensive.</ns0:p><ns0:p>The Laplacian Score <ns0:ref type='bibr' target='#b14'>(He et al., 2006)</ns0:ref> was the first method to perform spectral feature selection in an unsupervised way. Based on the Laplacian obtained from the spectral embedding of the data, it obtains a score based on locality preservation. SPEC <ns0:ref type='bibr' target='#b42'>(Zhao and Liu, 2007)</ns0:ref> is a framework that contains this previous approach, but it additionally allows for both supervised or unsupervised learning, including other similarity metrics, as well as other ranking functions. These approaches evaluate each feature independently, without considering feature interactions. These interactions are, however, taken into account in Multi-Cluster Feature Selection (MCFS) <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>, where a multi-cluster approach is defined based on the eigendecomposition of a similarity matrix. The subset selection is performed applying an l 1 -norm regularizer to approximate the eigenvectors obtained from the spectral embedding of the data inducing sparsity. In UDFS <ns0:ref type='bibr' target='#b37'>(Yang et al., 2011)</ns0:ref> the l 1 -norm regularizer is substituted by a l 2,1 -norm to apply sample and feature-wise constraints, and a discriminative analysis is added in the graph description. In NDFS <ns0:ref type='bibr' target='#b20'>(Li et al., 2012)</ns0:ref>, the use of the l 2,1 -norm is preserved, but a non-negative constraint is added to the spectral clustering stage. 
Additionally, this algorithm performs feature selection and spectral clustering simultaneously.</ns0:p><ns0:p>The aforementioned algorithms perform manifold learning and subset selection in a sequential way.</ns0:p><ns0:p>However, other methods tackle these simultaneously, in order to adaptively change the similarity metric or the selection criteria regarding the error obtained between the original data and the new representation.</ns0:p><ns0:p>Examples of these algorithms are JELSR <ns0:ref type='bibr' target='#b15'>(Hou et al., 2013)</ns0:ref>, SOGFS <ns0:ref type='bibr' target='#b25'>(Nie et al., 2019)</ns0:ref>, (R)JGSC <ns0:ref type='bibr' target='#b43'>(Zhu et al., 2016)</ns0:ref> and DSRMR <ns0:ref type='bibr' target='#b29'>(Tang et al., 2018)</ns0:ref>, and all make use of an l 2,1 -norm. Most recently, the SAMM-FS algorithm was proposed <ns0:ref type='bibr' target='#b40'>(Zhang et al., 2019)</ns0:ref>, where a combination of similarity measures is used to build the similarity graph, and the l 2,0 -norm is used for regression. This group of algorithms are currently the ones achieving the best results, at the cost of using complex optimization techniques to adaptively tune both stages of the feature selection process. While this can lead to good results, it comes with a high computation cost, which might hamper the tuning process, or might simply not be worthy for some applications. SAMM-FS and SOGFS are the ones that more specifically suggest new approaches to perform the embedding stage, by optimally creating the graph <ns0:ref type='bibr' target='#b25'>(Nie et al., 2019)</ns0:ref> or deriving it from a combination of different similarity metrics <ns0:ref type='bibr' target='#b40'>(Zhang et al., 2019)</ns0:ref>. Again, both approaches require computationally expensive optimization techniques to select a subset of features.</ns0:p><ns0:p>In summary, even if SSFS methods are getting more sophisticated and accurate, this results in algorithms becoming more complex in terms of computational time, and in the ease of use. The use of advanced numerical optimization techniques to improve results makes algorithms more complex, and requires regularization parameters which are not easy to tune. In this work, the combination of a new approach to estimate the graph connectivity based on the RBF kernel, together with the use of the utility metric for subset selection, results in an efficient SSFS algorithm, which is easy to use and with lower complexity than the state-of-the-art. This efficient implementation is competitive with state-of-the-art methods in terms of performance, while using a simpler strategy, which is faster to compute and easier to use.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>This section describes the proposed U2FS algorithm, which focuses on selecting the relevant features in an unsupervised way, at a relatively small computational cost. The method is divided into three parts.</ns0:p><ns0:p>Firstly, the suggested manifold learning approach is explained, where embeddings based on binary weighting and on the RBF kernel are used. Then, a method to select the kernel parameter of the RBF kernel is proposed, specially designed for high-dimensional data. Once the manifold learning stage is explained, the Utility metric is proposed as a new approach for subset selection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Manifold learning considering non-linearities</ns0:head><ns0:p>Given is a data matrix X ∈ R N×d , with X = [x 1 ; x 2 ; . . . ; x N ],</ns0:p><ns0:formula xml:id='formula_0'>x i = [x (1) i , x (2) i , . . . , x (d)</ns0:formula><ns0:p>i ], i = 1, . . . , N, N the number of data points, and d the number of features (i.e., dimensions) in the data. The aim is to learn the structure hidden in the d-dimensional data and approximate it with only a subset of the original features.</ns0:p><ns0:p>In this paper, this structure will be identified by means of clustering, where the dataset is assumed to be characterized by c clusters.</ns0:p><ns0:p>In spectral clustering, the clustering structure of this data can be obtained by studying the eigenvectors derived from a Laplacian built from the original data <ns0:ref type='bibr' target='#b34'>(Von Luxburg (2007)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>Biggs et al. (1993)</ns0:ref>). The data is represented using a graph G = (V , E ). V is the set of vertices v i , i = 1, . . . , N where v i = x i . E = {e i j } with i = 1, . . . , N j = 1, . . . , N is the set of edges between the vertices where {e i j } denotes the edge between vertices v i and v j . The weight of these edges is determined by the entries w i j ≥ 0 of a similarity matrix W. We define the graph as undirected. Therefore, the similarity matrix W, is symmetric (since w i j = w ji , with the diagonal set to w ii = 0). Typically, W is computed after coding the pairwise distances between all N data points. There are several ways of doing this, such as calculating the k-nearest neighbours (KNN) for each point, or choosing the ε-neighbors below a certain distance <ns0:ref type='bibr' target='#b3'>(Belkin and Niyogi, 2002)</ns0:ref>.</ns0:p><ns0:p>In this paper, two similarity matrices are adopted inspired by the work in <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>, namely a binary one and one based on an RBF kernel. The binary weighting is based on KNN, being w i j = 1 if and only if vertex i is within the K closest points to vertex j. Being a non-parametric approach, the binary embedding allows to simply characterize the connectivity of the data.</ns0:p><ns0:p>Additionally, the use of the RBF kernel is considered, which is well suited for non-linearities and allows to characterize complex and sparse structures <ns0:ref type='bibr' target='#b34'>(Von Luxburg, 2007)</ns0:ref>. The RBF kernel is defined as</ns0:p><ns0:formula xml:id='formula_1'>K(x i , x j ) = exp(−||x i − x j || 2 /2σ 2 ).</ns0:formula><ns0:p>The selection of the kernel parameter σ is a long-standing challenge in machine learning. For instance, in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>, σ 2 is defined as the mean of all the distances between the data points. Alternatively, a rule of thumb, uses the sum of the standard deviations of the data along each dimension <ns0:ref type='bibr' target='#b32'>(Varon et al., 2015)</ns0:ref>. However, the estimation of this parameter is highly influenced by the amount of features or dimensions in the data, making it less robust to noise and irrelevant features.</ns0:p><ns0:p>In the next section, a new and better informed method to approximate the kernel parameter is proposed.</ns0:p><ns0:p>The graph G, defined by the similarity matrix W, can be partitioned into multiple disjoint sets. 
Given the focus on multi-cluster data of our approach, the k-Way Normalized Cut (NCut) Relaxation is used, as proposed in <ns0:ref type='bibr' target='#b24'>Ng et al. (2002)</ns0:ref>. In order to obtain this partition, the degree matrix D of W must be calculated. D is a diagonal matrix for which each element on the diagonal is calculated as</ns0:p><ns0:formula xml:id='formula_2'>D ii = ∑ j W i, j .</ns0:formula><ns0:p>The normalized Laplacian L is then obtained as L = D −1/2 WD −1/2 , as suggested in Von <ns0:ref type='bibr' target='#b34'>Luxburg (2007)</ns0:ref>.</ns0:p><ns0:p>The vectors y embedding the data in L can be extracted from the eigenvalue problem <ns0:ref type='bibr' target='#b8'>(Chung and Graham, 1997)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_3'>Ly = λ y (1)</ns0:formula><ns0:p>Given the use of a normalized Laplacian for the data embedding, the vectors y must be adjusted using the degree matrix D:</ns0:p><ns0:formula xml:id='formula_4'>α = D 1/2 y,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>which means that α is the solution of the generalized eigenvalue problem of the pair W and D. These eigenvectors α are a new representation of the data, that gathers the most relevant information about the structures appearing in the high-dimensional space. The c eigenvectors, corresponding to the c highest eigenvalues (after excluding the largest one), can be used to characterize the data in a lower dimensional space <ns0:ref type='bibr' target='#b24'>(Ng et al., 2002)</ns0:ref>. Thus, the matrix E = [α 1 , α 2 , . . . , α c ] containing column-wise the c selected eigenvectors, will be the low-dimensional representation of the data to be mimicked using a subset of the original features, as suggested in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>.</ns0:p></ns0:div>
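<ns0:div><ns0:p>For illustration, the snippet below is a minimal Python sketch of the manifold learning stage described above: it builds the similarity matrix W (binary KNN weighting or RBF kernel), forms the normalized Laplacian L = D^{-1/2} W D^{-1/2}, and returns the c rescaled eigenvectors of Equation (2) as the embedding E. It is not the authors' released implementation (see the repository referenced in the Introduction); the function names, the default k = 5, and the fallback value for the kernel parameter are illustrative choices, and the kernel parameter can instead be supplied by the estimator sketched in the next subsection.</ns0:p><ns0:note type='code'>
# Illustrative Python sketch of the manifold learning stage (not the released U2FS code).
import numpy as np
from scipy.spatial.distance import pdist, squareform


def similarity_matrix(X, mode="rbf", k=5, sigma2=None):
    """Similarity matrix W: symmetric binary KNN graph or RBF kernel (with w_ii = 0)."""
    D2 = squareform(pdist(X, metric="sqeuclidean"))    # pairwise squared distances
    n = X.shape[0]
    if mode == "knn":
        W = np.zeros((n, n))
        nn = np.argsort(D2, axis=1)[:, 1:k + 1]        # k nearest neighbours, self excluded
        W[np.repeat(np.arange(n), k), nn.ravel()] = 1.0
        W = np.maximum(W, W.T)                         # undirected graph
    else:
        if sigma2 is None:
            # illustrative fallback: mean of the per-feature standard deviations (sigma^2_0 above)
            sigma2 = X.std(axis=0).mean()
        W = np.exp(-D2 / (2.0 * sigma2))
        np.fill_diagonal(W, 0.0)
    return W


def spectral_embedding(W, c):
    """Embedding E: the c eigenvectors of L = D^{-1/2} W D^{-1/2} with the largest
    eigenvalues (the very largest excluded), rescaled by D^{1/2} as in Eq. (2)."""
    d = np.maximum(W.sum(axis=1), 1e-12)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    L = 0.5 * (L + L.T)                                # enforce numerical symmetry
    _, vecs = np.linalg.eigh(L)                        # eigenvalues in ascending order
    y = vecs[:, -(c + 1):-1][:, ::-1]                  # c largest, excluding the first (largest)
    return np.sqrt(d)[:, None] * y                     # alpha = D^{1/2} y
</ns0:note></ns0:div>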
<ns0:div><ns0:head>Kernel parameter approximation for high-dimensional data</ns0:head><ns0:p>One of the most used similarity functions is the RBF kernel, which allows to explore non-linearities in the data. Nevertheless, the kernel parameter σ 2 must be selected correctly, to avoid overfitting or the allocation of all data points to the same cluster. This work proposes a new approach to approximate this kernel parameter, which will be denoted by σ 2 when derived from our method. This method takes into account the curse of dimensionality and the potential irrelevant features or dimensions in the data.</ns0:p><ns0:p>As a rule of thumb, σ 2 is approximated as the sum of the standard deviation of the data along each dimension <ns0:ref type='bibr' target='#b32'>(Varon et al., 2015)</ns0:ref>. This approximation grows with the number of features (i.e. dimensions) of the data, and thus, it is not able to capture its underlying structures in high-dimensional spaces.</ns0:p><ns0:p>Nevertheless, this σ 2 is commonly used as an initialization value, around which a search is performed, considering some objective function <ns0:ref type='bibr' target='#b2'>(Alzate and Suykens, 2008;</ns0:ref><ns0:ref type='bibr' target='#b32'>Varon et al., 2015)</ns0:ref>.</ns0:p><ns0:p>The MCFS algorithm skips the search around an initialization of the σ 2 value by substituting the sum of the standard deviations by the mean of these <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>. By doing so, the value of σ 2 does not overly grow. This estimation of σ 2 suggested in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref> will be referred to as σ 2 0 . A drawback of this approximation in high-dimensional spaces is that it treats all dimensions as equally relevant for the final estimation of σ 2 0 , regardless of the amount of information that they actually contain.</ns0:p><ns0:p>The aim of the proposed approach is to provide a functional value of σ 2 that does not require any additional search, while being robust to high-dimensional data. Therefore, this work proposes an approximation technique based on two factors: the distances between the points, and the number of features or dimensions in the data.</ns0:p><ns0:p>The most commonly used distance metric is the euclidean distance. However, it is very sensitive to high-dimensional data, deriving unsubstantial distances when a high number of features is involved in the calculation <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>. In this work, the use of the Manhattan or taxicab distance <ns0:ref type='bibr' target='#b26'>(Reynolds, 1980)</ns0:ref> is proposed, given its robustness when applied to high-dimensional data <ns0:ref type='bibr' target='#b0'>(Aggarwal et al., 2001)</ns0:ref>.</ns0:p><ns0:p>For each feature l, the Manhattan distance δ l is calculated as:</ns0:p><ns0:formula xml:id='formula_5'>δ l = 1 N N ∑ i, j=1 |x il − x jl |<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Additionally, in order to reduce the impact of irrelevant or redundant features, a system of weights is added to the approximation of σ 2 . The goal is to only take into account the distances associated to features that contain relevant information about the structure of the data. To calculate these weights, the probability density function (PDF) of each feature is compared with a Gaussian distribution. Higher weights are assigned to the features with less Gaussian behavior, i.e. those the PDF of which differs the most from a Gaussian distribution. 
By doing so, these features have more influence on the final σ² value, since they allow a better separation of the structures present in the data.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows a graphical representation of this estimation. The dataset in the example has 3 dimensions or features: f1, f2 and f3. f1 and f2 contain the main clustering information, as can be observed in Figure <ns0:ref type='figure'>1a</ns0:ref>, while f3 is a noisy version of f1, derived as</ns0:p><ns0:formula xml:id='formula_6'>f_3 = f_1 + 1.5\,n,</ns0:formula><ns0:p>where n is drawn from a normal distribution N(0, 1). Figures <ns0:ref type='figure'>1b, 1c and 1d</ns0:ref> show in a continuous black line the PDFs derived from the data, and in a grey dashed line their fitted Gaussian, in dimensions f1, f2 and f3 respectively. This fitted Gaussian was derived using the Curve Fitting toolbox of Matlab. As can be observed, the match between a Gaussian and an irrelevant feature is almost perfect, while features that contain more information, like f1 and f2, deviate much more from a normal distribution.</ns0:p><ns0:note type='other'>Figure 1. Weight system for relevance estimation. In Figure 1A, f1 and f2 can be seen. 1B, 1C and 1D show in black the PDFs p_i of f1, f2 and f3 respectively, and in a grey dotted line their fitted Gaussian g_i.</ns0:note><ns0:p>Making use of these differences, an error, denoted φ_l, for each feature l, where l = 1, ..., d, is calculated as:</ns0:p><ns0:formula xml:id='formula_7'>\phi_l = \frac{1}{H}\sum_{i=1}^{H}(p_i - g_i)^2 \qquad (4)</ns0:formula><ns0:p>where H is the number of bins in which the range of the data is divided to estimate the PDF (p), and g is the fitted Gaussian. The number of bins in this work is set to 100 for standardization purposes. Equation (4) corresponds to the mean-squared error (MSE) between the PDF of the data over feature l and its fitted Gaussian. From these φ_l, the final weights b_l are calculated as:</ns0:p><ns0:formula xml:id='formula_9'>b_l = \frac{\phi_l}{\sum_{l=1}^{d}\phi_l} \qquad (5)</ns0:formula><ns0:p>Therefore, combining (3) and (<ns0:ref type='formula' target='#formula_9'>5</ns0:ref>), the proposed approximation, denoted σ², is derived as:</ns0:p><ns0:formula xml:id='formula_10'>\sigma^2 = \sum_{l=1}^{d} b_l\,\delta_l \qquad (6)</ns0:formula><ns0:p>which gathers the distances present in the most relevant features, giving less importance to the dimensions that do not contribute to describing the structure of the data. The complete algorithm to calculate σ² is described in Algorithm 1.</ns0:p><ns0:p>Algorithm 1 Kernel parameter approximation for high-dimensional data. Input: Data X ∈ R^{N×d}. Output: Kernel parameter σ². 1: Calculate the Manhattan distances between the data points using Equation (3): vector of distances per feature δ_l. 2: Obtain the weights for each of the features using Equations (<ns0:ref type='formula' target='#formula_7'>4</ns0:ref>) and (<ns0:ref type='formula' target='#formula_9'>5</ns0:ref>): weights b_l. 3: Calculate σ² using Equation (6).</ns0:p></ns0:div>
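<ns0:div><ns0:p>The following Python sketch illustrates Algorithm 1. It follows Equations (3)–(6), but the Gaussian g is parameterized here by the sample mean and standard deviation of each feature instead of the least-squares curve fit mentioned above, so it is only an approximation of the described procedure; the function name and the 100-bin default are illustrative.</ns0:p><ns0:note type='code'>
# Illustrative Python sketch of Algorithm 1 (kernel parameter approximation).
import numpy as np
from scipy.spatial.distance import pdist


def estimate_sigma2(X, n_bins=100):
    """Weighted sum of per-feature Manhattan distances, Eqs. (3)-(6); features whose
    PDF deviates more from a Gaussian receive larger weights."""
    N, d = X.shape
    delta = np.empty(d)   # Eq. (3)
    phi = np.empty(d)     # Eq. (4)
    for l in range(d):
        x = X[:, l]
        # mean pairwise Manhattan distance of feature l (all ordered pairs, divided by N)
        delta[l] = 2.0 * pdist(x[:, None], metric="cityblock").sum() / N
        p, edges = np.histogram(x, bins=n_bins, density=True)   # empirical PDF
        centers = 0.5 * (edges[:-1] + edges[1:])
        mu, sd = x.mean(), x.std()
        if sd < 1e-12:
            g = p.copy()   # constant feature: treated as perfectly Gaussian-like (zero weight)
        else:
            # Gaussian with the feature's mean and std (a proxy for the curve fit above)
            g = np.exp(-((centers - mu) ** 2) / (2.0 * sd ** 2)) / (sd * np.sqrt(2.0 * np.pi))
        phi[l] = np.mean((p - g) ** 2)
    b = phi / phi.sum() if phi.sum() > 0 else np.full(d, 1.0 / d)   # Eq. (5)
    return float(b @ delta)                                          # Eq. (6)
</ns0:note></ns0:div>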
<ns0:div><ns0:head>Utility metric for feature subset selection</ns0:head><ns0:p>In the manifold learning stage, a new representation E of the data based on the eigenvectors was built, which described the main structures present in the original high-dimensional data. The goal is to select a subset of the features which best approximates the data in this new representation. In the literature, this feature selection problem is formulated using a graph-based loss function and a sparse regularizer of the coefficients is used to select a subset of features, as explained in <ns0:ref type='bibr' target='#b43'>Zhu et al. (2016)</ns0:ref>. The main idea of these approaches is to regress the data to its low dimensional embedding along with some sparse regularization. The use of such regularization techniques reduces overfitting and achieves dimensionality reduction. This regression is generally formulated as a least squares (LS) problem, and in many of these Manuscript to be reviewed Computer Science cases, the metric that is used for feature selection is the magnitude of their corresponding weights in the least squares solution <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b12'>Gui et al., 2016)</ns0:ref>. However, the optimized weights do not necessarily reflect the importance of the corresponding feature as it is scaling dependent and it does not properly take interactions across features into account <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>. Instead, the importance of a feature can be quantified using the increase in least-squared error (LSE) if that feature was to be removed and the weights were re-optimized. This increase in LSE, called the 'utility' of the feature can be efficiently computed <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref> and can be used as an informative metric for a greedy backwards feature selection procedure <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Narayanan and Bertrand, 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Szurley et al., 2014)</ns0:ref>, as an alternative for (group-)LASSO based techniques. Under some technical conditions, a greedy selection based on this utility metric can even be shown to lead to the optimal subset <ns0:ref type='bibr' target='#b9'>(Couvreur and Bresler, 2000)</ns0:ref>.</ns0:p><ns0:p>After representing the dataset using the matrix E ∈ R N×c containing the c eigenvectors, the following LS optimization problem finds the weights p that best approximate the data X in the c-dimensional representation in E:</ns0:p><ns0:formula xml:id='formula_11'>J = min P 1 N ||Xp − E|| 2 F (<ns0:label>7</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>)</ns0:formula><ns0:p>where J is the cost or the LSE and ||.|| F denotes the Frobenius norm.</ns0:p><ns0:p>If X is a full rank matrix and if N > d, the LS solution p of ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) is</ns0:p><ns0:formula xml:id='formula_13'>p = R −1 XX R XE , (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>) with R XX = 1 N X T X and R XE = 1 N X T E.</ns0:formula><ns0:p>The goal of this feature selection method is to select the subset of s(< d) features that best represents E. This feature selection problem can be reduced to the selection of the best s(< d) columns of X which minimize (7). However, this is inherently a combinatorial problem and is computationally unfeasible to solve. 
Nevertheless, several greedy and approximative methods have been proposed <ns0:ref type='bibr' target='#b12'>(Gui et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nie et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b23'>Narayanan and Bertrand, 2020)</ns0:ref>. In the current work, the use of the utility metric for subset selection is proposed to select these best s columns.</ns0:p><ns0:p>The utility of a feature l of X, in an LS problem like (7), is defined as the increase in the LSE J when the column corresponding to the l-th feature in X is removed from the problem and the new optimal weight matrix, p̂_{-l}, is re-computed similarly to (8). Consider the new LSE after the removal of feature l and the re-computation of the weight matrix p̂_{-l} to be J_{-l}, defined as:</ns0:p><ns0:formula xml:id='formula_15'>J_{-l} = \frac{1}{N}\,||X_{-l}\,\hat{p}_{-l} - E||_F^2 \qquad (9)</ns0:formula><ns0:p>where X_{-l} denotes the matrix X with the column corresponding to the l-th feature removed. Then, according to the definition, the utility of feature l, U_l, is:</ns0:p><ns0:formula xml:id='formula_16'>U_l = J_{-l} - J \qquad (10)</ns0:formula><ns0:p>A straightforward computation of U_l would be computationally heavy, since the computation of p̂_{-l} requires the inversion of the matrix X_{-l}^T X_{-l}, which has to be repeated for each feature l.</ns0:p><ns0:p>However, it can be shown that the utility of the l-th feature of X in (10) can be computed efficiently, without the explicit recomputation of p̂_{-l}, by using the following expression <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_17'>U_l = \frac{1}{q_l}\,||\hat{p}_l||^2 \qquad (11)</ns0:formula><ns0:p>where q_l is the l-th diagonal element of R_XX^{-1} and p̂_l is the l-th row in p̂, corresponding to the l-th feature. The mathematical proof of (11) can be found in <ns0:ref type='bibr' target='#b4'>Bertrand (2018)</ns0:ref>. Note that R_XX^{-1} is already known from the computation of p̂, such that no additional matrix inversion is required.</ns0:p><ns0:p>However, since the data matrix X can contain redundant features or features that are linear combinations of each other in its columns, it cannot be guaranteed that the matrix X in (<ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) is full-rank. In this case, the removal of a redundant column from X will not lead to an increase in the LS cost of (7).
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Moreover, R −1 XX , used to find the solution of ( <ns0:ref type='formula' target='#formula_11'>7</ns0:ref>) in (8), will not exist in this case since the matrix X is rank deficient. A similar problem appears if N < d, which can happen in case of very high-dimensional data.</ns0:p><ns0:p>To overcome this problem, the definition of utility generalized to a minimum l 2 -norm selection <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref> is used in this work. This approach eliminates the feature yielding the smallest increase in the l 2 -norm of the weight matrix when the column corresponding to that feature were to be removed and the weight matrix would be re-optimized. Moreover, minimizing the l 2 -norm of the weights further reduces the risk of overfitting.</ns0:p><ns0:p>This generalization is achieved by first adding an l 2 -norm penalty β to the cost function that is minimized in (7):</ns0:p><ns0:formula xml:id='formula_19'>J = min p 1 2 ||Xp − E|| 2 F + β ||p|| 2 2 (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_20'>)</ns0:formula><ns0:p>where 0 < β µ with µ equal to the smallest non-zero eigenvalue of R XX in order to ensure that the bias added due to the penalty term in ( <ns0:ref type='formula' target='#formula_19'>12</ns0:ref>) is negligible. The minimizer of ( <ns0:ref type='formula' target='#formula_19'>12</ns0:ref>) is:</ns0:p><ns0:formula xml:id='formula_21'>p = R −1 XXβ R XE = (R XX + β I) −1 R XE<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>It is noted that (13) reduces to R † XX R XE when β → 0, where R † XX denotes the Moore-Penrose pseudoinverse. This solution corresponds to the minimum norm solution of (7) when X contains linearly dependent columns or rows. The utility U l of the l-th column in X based on ( <ns0:ref type='formula' target='#formula_19'>12</ns0:ref>) is <ns0:ref type='bibr' target='#b4'>(Bertrand, 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_22'>U l = ||X −l p−l − E|| 2 2 − ||X p − E|| 2 2 + β || p−l || 2 2 − || p|| 2 2 = (J −l − J) + β || p−l || 2 2 − || p|| 2 2 (14)</ns0:formula><ns0:p>Note that if column l in X is linearly independent from the other columns, (14) closely approximates to the original utility definition in (10) as the first term dominates over the second. However, if column l is linearly dependent, the first term vanishes and the second term will dominate. In this case, the utility quantifies the increase in l 2 -norm after removing the l-th feature.</ns0:p><ns0:p>To select the best s features of X, a greedy selection based on the iterative elimination of the features with the least utility is carried out. After the elimination of each feature, a re-estimation of the weights p is carried out and the process of elimination is repeated, until s features remain.</ns0:p><ns0:p>Note that the value of β depends on the smallest non-zero eigenvalue of R XX . Since R XX has to be recomputed every time when a feature is removed, also its eigenvalues change along the way. In practice, the value of β is selected only once and fixed for the remainder of the algorithm, as smaller than the smallest non-zero eigenvalue of R XX before any of the features are eliminated <ns0:ref type='bibr' target='#b23'>(Narayanan and Bertrand, 2020)</ns0:ref>. 
This value of β will be smaller than all the non-zero eigenvalues of any principal submatrix of R_XX, by Cauchy's interlace theorem <ns0:ref type='bibr' target='#b16'>(Hwang, 2004)</ns0:ref>.</ns0:p><ns0:p>The summary of the utility subset selection is described in Algorithm 2. Algorithm 3 outlines the complete U2FS algorithm proposed in this paper.</ns0:p><ns0:p>Algorithm 2 Utility metric algorithm for subset selection. Input: Data X, embedding E, number of features to select s. Output: Subset of s selected features. 1: Compute R_XX and R_XE. 2: Fix β smaller than the smallest non-zero eigenvalue of R_XX. 3: While more than s features remain: 4: Compute R_XXβ^{-1} and p̂ as described in (13). 5: Calculate the utility of the remaining features using (11). 6: Remove the feature f_l with the lowest utility.</ns0:p><ns0:p>Algorithm 3 U2FS algorithm. Input: Data X, number of clusters c, number of features to select s. 1: Build the similarity matrix W using either: • binary weighting based on KNN, or • the RBF kernel, using σ² based on Algorithm 1. 2: Calculate the normalized Laplacian L and the eigenvectors α derived from Equation (2). Keep the c eigenvectors corresponding to the highest eigenvalues, excluding the first one. 3: Apply the backward greedy utility Algorithm 2. 4: Return the s features remaining from the backward greedy utility approach.</ns0:p><ns0:p>As stated before, one of the most remarkable aspects of the U2FS algorithm is the use of a greedy technique to solve the subset selection problem. The use of this type of method reduces the computational cost of the algorithm. This can be confirmed by analyzing the computational complexity of U2FS, where the most demanding steps are the eigendecomposition of the Laplacian matrix (step 2 of Algorithm 3), which has a cost of O(N^3) <ns0:ref type='bibr' target='#b31'>(Tsironis et al., 2013)</ns0:ref>, and the subset selection stage in step 3 of Algorithm 3. Contrary to the state-of-the-art, the complexity of U2FS, being a greedy method, depends on the number of features to select. The most computationally expensive step of the subset selection in U2FS is the calculation of the matrix R_XX^{-1}, which has a computational cost of O(d^3). In addition, this matrix needs to be updated d − s times. This update can be done efficiently using a recursive updating equation from <ns0:ref type='bibr' target='#b4'>Bertrand (2018)</ns0:ref> with a cost of O(t^2), with t the number of features remaining in the dataset, i.e. t = d − s. Since t < d, the cost for performing d − s iterations will be O((d − s)d^2), which depends on the number of features s to be selected. Note that the cost of computing the least squares solution p̂_{-l} for each l in (<ns0:ref type='formula'>14</ns0:ref>) is eliminated using the efficient equation (<ns0:ref type='formula' target='#formula_17'>11</ns0:ref>), bringing down the cost for computing the utility from O(t^4) to O(t) in each iteration. This vanishes with respect to the O(d^3) term (remember that t < d). Therefore, the total asymptotic complexity of U2FS is O(N^3 + d^3).</ns0:p></ns0:div>
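<ns0:div><ns0:p>The sketch below illustrates the backward greedy subset selection of Algorithm 2 in Python. It computes the weights of Equation (13) and the utilities of Equation (11), and removes the feature with the lowest utility until s features remain. For clarity it re-inverts the reduced matrix R_XX + βI at every iteration instead of using the O(t²) recursive update of Bertrand (2018), and the scaling factor used to fix β below the smallest non-zero eigenvalue is an arbitrary illustrative choice; this is a didactic sketch rather than the released U2FS code.</ns0:p><ns0:note type='code'>
# Illustrative Python sketch of Algorithm 2 (backward greedy selection with the utility metric).
import numpy as np


def utility_selection(X, E, s, beta_scale=1e-3):
    """Greedily remove the feature with the lowest utility until s features remain.
    X: (N, d) data matrix, E: (N, c) spectral embedding, s: number of features to keep."""
    N = X.shape[0]
    remaining = list(range(X.shape[1]))
    R_xx = X.T @ X / N
    R_xe = X.T @ E / N
    eigvals = np.linalg.eigvalsh(R_xx)
    mu = eigvals[eigvals > 1e-10].min()      # smallest non-zero eigenvalue of R_XX
    beta = beta_scale * mu                    # fixed once, 0 < beta << mu (scale is illustrative)
    while len(remaining) > s:
        t = len(remaining)
        Rinv = np.linalg.inv(R_xx + beta * np.eye(t))   # (R_XX + beta I)^{-1}, cf. Eq. (13)
        P = Rinv @ R_xe                                  # current weights p-hat
        U = np.sum(P ** 2, axis=1) / np.diag(Rinv)       # utilities, cf. Eq. (11)
        worst = int(np.argmin(U))
        keep = [i for i in range(t) if i != worst]
        R_xx = R_xx[np.ix_(keep, keep)]   # plain re-slicing; Bertrand (2018) gives an
        R_xe = R_xe[keep, :]              # O(t^2) recursive update of the inverse instead
        remaining.pop(worst)
    return remaining                       # indices of the s selected features
</ns0:note><ns0:p>Combined with the embedding sketch given earlier, a U2FS-style run (Algorithm 3) would then read, for example: E = spectral_embedding(similarity_matrix(X, 'rbf', sigma2=estimate_sigma2(X)), c) followed by utility_selection(X, E, s), where these function names refer to the illustrative sketches above, not to the published implementation.</ns0:p></ns0:div>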
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The aim of the following experiments is to evaluate the U2FS algorithm based on multiple criteria. With the focus on the new estimation of the embedding proposed, the proposed RBF kernel approach using the estimated σ 2 is compared to the σ 2 0 parameter proposed in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>, and to the binary KNN graph commonly used in <ns0:ref type='bibr' target='#b12'>Gui et al. (2016)</ns0:ref>. On the other hand, the utility metric for subset selection is compared to other sparsity-inducing techniques, based on l p − norm regularizations. In these experiments, this is evaluated using the l 1 − norm. The outline of the different combinations considered in this work summarized in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The last method, RBF σ 2 + Utility, would be the one referred to as U2FS, combining the novelties suggested in this work. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>• MCFS <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref> 1 . This algorithm served as inspiration to create U2FS, and therefore, it is added to the set of comparison algorithms as baseline reference. MCFS performs spectral embedding and l 1 -norm regularization sequentially, and which served as inspiration to create U2FS.</ns0:p><ns0:p>• NDFS <ns0:ref type='bibr' target='#b20'>(Li et al., 2012)</ns0:ref> 2 , which performs nonnegative spectral analysis with l 2,1 -norm regularization.</ns0:p><ns0:p>This algorithm is added to the experiments since it is an improvement of MCFS, while being the first algorithm simultaneously adapting both stages of manifold learning and subset selection. Therefore, NDFS represents the transition to these adaptive optimization-based feature selection algorithms.</ns0:p><ns0:p>• RJGSC <ns0:ref type='bibr' target='#b43'>(Zhu et al., 2016)</ns0:ref> optimally derives the embedding of the data by adapting the results with l 2 , 1-norm regularization. This algorithm is taken as a reference for the large class of adaptive sparsity-inducing feature selection algorithms, which are much more complex than U2FS, since they apply optimization to recursively adapt the embedding and feature selection stages of the methods. RJGSC was already compared to several feature selectors in <ns0:ref type='bibr' target='#b43'>Zhu et al. (2016)</ns0:ref>, and therefore, it is taken here as upper-bound threshold in performance. and the final description of the datasets can be seen in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>.</ns0:p><ns0:p>All these problems are balanced, except for the last dataset Cres-Moon, for which the data is divided 25% to 75% between the two clusters. Five extra features in addition to the original f 1 and f 2 were added to each of the datasets in order to include redundant or irrelevant information:</ns0:p><ns0:p>• f ′ 1 and f ′ 2 : random values extracted from two Pearson distributions characterized by the same higher-order statistics as f 1 and f 2 respectively.</ns0:p><ns0:p>• f ′ 3 and f ′ 4 : Original f 1 and f 2 contaminated with Gaussian noise (νN (0, 1)), with ν = 1.5.</ns0:p><ns0:p>• f ′ 5 : Constant feature of value 0.</ns0:p><ns0:p>The first step in the preprocessing of the features was to standardize the data using z-score to reduce the impact of differences in scaling and noise. 
In order to confirm the robustness of the feature selection techniques, the methods were applied using 10-fold cross-validation on the standardized data. For each fold a training set was selected using m-medoids, setting m to 2000 and using the centers of the clusters found as training samples. By doing so, the generalization ability of the methods can be guaranteed Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b32'>Varon et al., 2015)</ns0:ref>. On each of the 10 training sets, the features were selected applying the 5 methods mentioned in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. For each of the methods, the number of clusters c was introduced as the number of classes presented in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. Since these experiments aim to evaluate the correct selection of the features, and the original features f 1 and f 2 are known, the number of features s to be selected was set to 2.</ns0:p><ns0:p>Regarding the parameter settings within the embedding methods, the binary was obtained setting k in the kNN approach to 5. For the RBF kernel embedding, σ 2 0 was set to the mean of the standard deviation along each dimension, as done in <ns0:ref type='bibr' target='#b7'>Cai et al. (2010)</ns0:ref>. When using σ 2 , its value was obtained by applying the method described in Algorithm 1.</ns0:p><ns0:p>In terms of subset selection approaches, the method based on the l 1 − norm automatically sets the value of the regularization parameter required for the LARS implementation, as described in <ns0:ref type='bibr' target='#b10'>(Deng Cai, Chiyuan Zhang, 2020)</ns0:ref>. For the utility metric, β was automatically set to the smallest non-zero eigenvalue of the matrix R XX as described in Algorithm 2.</ns0:p><ns0:p>The performance of the algorithm is evaluated comparing the original set of features f 1 and f 2 to those selected by the algorithm. In these experiments, the evaluation of the selection results is binary: either the feature set selected is correct or not, regardless of the additional features f ′ i , for i = 1, 2, ..., 5, selected.</ns0:p><ns0:p>In Table <ns0:ref type='table'>3</ns0:ref> the most common results obtained in the 10 folds are shown. The utility-based approaches always obtained the same results for all 10 folds of the experiments. On the contrary, the l 1 − norm methods provided different results for different folds of the experiment. For these cases, Table <ns0:ref type='table'>3</ns0:ref> shows the most common feature pair for each experiment, occurring at least 3 times.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 3. Results feature selection for toy examples</ns0:head></ns0:div>
<ns0:div><ns0:p>Table 3 reports the feature pairs selected by each method. The columns are, from left to right: Utility metric with the KNN Bin, RBF σ²0 and RBF σ² embeddings, followed by l1-norm with the KNN Bin and RBF σ²0 embeddings.</ns0:p><ns0:p>Clouds: f1, f2 | f'1, f'4 | f1, f2 | f'1, f'2 | f'1, f'2</ns0:p><ns0:p>Moons: f1, f2 | f'3, f'4 | f1, f2 | f'1, f'3 | f'1, f'3</ns0:p><ns0:p>Spirals: f1, f2 | f1, f2 | f1, f2 | f2, f'2 | f2, f'2</ns0:p><ns0:p>Corners: f1, f2 | f'1, f'2 | f1, f2 | f2, f'2 | f2, f'2</ns0:p><ns0:p>Half-Kernel: f1, f2 | f2, f'3 | f1, f2 | f1, f'3 | f1, f'3</ns0:p><ns0:p>Cres-Moon: f1, f2 | f1, f'4 | f1, f2 | f2, f'1 | f2, f'2</ns0:p><ns0:p>As shown in Table <ns0:ref type='table'>3</ns0:ref>, the methods that always obtain the adequate set of features are based on utility, both with the binary weighting and with the RBF kernel and the suggested σ². Since these results were obtained for the 10 folds, they confirm both the robustness and the consistency of the U2FS algorithm.</ns0:p></ns0:div>
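<ns0:div><ns0:p>For reference, the simulation setup described above (the five added features f'1–f'5 and the z-score standardization) can be sketched in Python as follows. The toy cluster shapes themselves (clouds, moons, spirals, etc.) are assumed to be generated separately as two coordinate vectors f1 and f2. The Pearson surrogates are drawn here with scipy.stats.pearson3, which matches mean, standard deviation and skewness only, so this is an approximation of the Pearson-system sampling described above; the function name and seed handling are illustrative.</ns0:p><ns0:note type='code'>
# Illustrative Python sketch of the simulation setup (toy features f'1-f'5 and z-scoring).
import numpy as np
from scipy import stats


def add_noisy_features(f1, f2, nu=1.5, seed=0):
    """Return the standardized matrix [f1, f2, f'1, f'2, f'3, f'4, f'5]."""
    rng = np.random.default_rng(seed)
    n = len(f1)

    def pearson_surrogate(f):
        # matches mean, std and skewness only (an approximation of the Pearson sampling above)
        return stats.pearson3(stats.skew(f), loc=f.mean(), scale=f.std()).rvs(
            size=n, random_state=rng)

    cols = [f1, f2,
            pearson_surrogate(f1), pearson_surrogate(f2),   # f'1, f'2
            f1 + nu * rng.standard_normal(n),               # f'3
            f2 + nu * rng.standard_normal(n),               # f'4
            np.zeros(n)]                                    # f'5 (constant feature)
    X = np.column_stack(cols)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    sd[sd < 1e-12] = 1.0            # keep the constant feature finite after z-scoring
    return (X - mu) / sd
</ns0:note></ns0:div>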
<ns0:div><ns0:head>Benchmark datasets</ns0:head><ns0:p>Additionally, the proposed methods were evaluated using 6 well-known benchmark databases. The databases considered represent image (USPS, ORL, COIL20), audio (ISOLET) and text data (PCMAC, BASEHOCK) 4 , proposing examples with more samples than features, and vice versa. The description of these databases is detailed in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. All these datasets are balanced, except USPS.</ns0:p><ns0:p>In these datasets, the relevant features are unknown. Therefore, the common practice in the literature to evaluate feature selectors consists of applying the algorithms, taking from 10 to 80% of the original 4 All datasets downloaded from http://featureselection.asu.edu/datasets.php</ns0:p></ns0:div>
<ns0:div><ns0:head>11/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:2:0:NEW 25 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science set of features, and evaluating the accuracy of a classifier when trained and evaluated with the selected feature set <ns0:ref type='bibr' target='#b43'>(Zhu et al., 2016)</ns0:ref>. The classifier used for this aim in other papers is k-Nearest Neighbors (KNN), setting the number of neighbors to 5.</ns0:p><ns0:p>These accuracy results are computed using 10-fold cross-validation to confirm the generalization capabilities of the algorithm. By setting m to 90% of the number of samples available in each benchmark dataset, m-medoids is used to select the m centroids of the clusters and use them as training set. Feature selection and the training of the KNN classifier are performed in these 9 folds of the standardized data, and the accuracy of the KNN is evaluated in the remaining 10% for testing. Exclusively for USPS, given the size of the dataset, 2000 samples were used for training and the remaining data was used for testing.</ns0:p><ns0:p>These 2000 samples were also selected using m-medoids. Since PCMAC and BASEHOCK consist of binary data, these datasets were not standardized.</ns0:p><ns0:p>The parameters required for the binary and RBF embeddings, as well as β for the utility algorithm, are automatically set as detailed in section .</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the median accuracy obtained for each of the 5 methods. The shadows along the lines correspond to the 25 and 75 percentile of the 10 folds. As a reference, the accuracy of the classifier without using feature selection is shown in black for each of the datasets. Additionally, Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> shows the computation time for both the utility metric and the l 1 − norm applied on a binary weighting embedding.</ns0:p><ns0:p>In this manner, the subset selection techniques can be evaluated regardless of the code efficiency of the embedding stage. Similarly to Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>, the computation time plots show in bold the median running time for each of the subset selection techniques, and the 25 and 75 percentiles around it obtained from the 10-fold cross-validation.</ns0:p><ns0:p>The difference in the trends of the l 1 − norm and utility in terms of computation time is due to their formulation. Feature selection based on l 1 − norm regularization, solved using the LARS algorithm in this case, requires the same computation time regardless of the number of features aimed to select. All features are evaluated together, and later on, an MCFS score obtained from the regression problem is assigned to them <ns0:ref type='bibr' target='#b7'>(Cai et al., 2010)</ns0:ref>. The features with the higher scores are the ones selected. On the other hand, since the utility metric is applied in a backward greedy trend, the computation times change for different number of features selected. The lower the number of features selected compared to the original set, the higher the computation time. This is aligned with the computational complexity of the algorithm, described in Section . In spite of this, it can be seen that even the highest computation time for utility is lower than the time taken using l 1 − norm regularization. 
The experiments were performed with 2x Intel Xeon E5-2640 @ 2.5 GHz processors and 64GB of working memory.</ns0:p><ns0:p>Finally, the experiments in benchmark databases are extended to compare U2FS to other key algorithms in the state-of-the-art. As it was mentioned at the beginning of this section, the selected algorithms are MCFS, NDFS, and RJGSC, which represent, respectively, the precursor of U2FS, an improved version of MCFS, and an example from the class of adaptive algorithms which recursively optimize the objective function proposed. NDFS and RJGSC require the tuning of their regularization parameters, for which the indications in their corresponding articles were followed. For NDFS, the value of γ was set to 10^8, and α and β were selected from the values {10^-6, 10^-4, ..., 10^6} applying grid search. The matrix F was initialized with the results of spectral clustering using all the features. For RJGSC, the results described in <ns0:ref type='bibr' target='#b43'>Zhu et al. (2016)</ns0:ref> for the BASEHOCK and PCMAC datasets are taken as a reference. In MCFS, the embedding is done using KNN and binary weighting, and the l1-norm is used for subset selection.</ns0:p><ns0:p>U2FS, on the other hand, results from the combination of the RBF kernel with σ² and the utility metric.</ns0:p></ns0:div>
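<ns0:div><ns0:p>The benchmark evaluation protocol described above can be sketched in Python as follows. For brevity, the m-medoids construction of the training sets is replaced here by stratified 10-fold splits, and the labels are used only to score the 5-nearest-neighbour classifier, not for the (unsupervised) feature selection itself; select_features stands for any selector with the signature used in the earlier sketches, so this is an illustrative approximation of the original experimental scripts.</ns0:p><ns0:note type='code'>
# Illustrative Python sketch of the benchmark evaluation protocol (not the original scripts).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score


def evaluate_selector(X, y, select_features, fractions=(0.1, 0.2, 0.4, 0.6, 0.8)):
    """For each fraction of retained features: select on the training fold only
    (unsupervised), then score a 5-NN classifier on the held-out fold."""
    results = {}
    for frac in fractions:
        s = max(1, int(round(frac * X.shape[1])))
        accs = []
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        for tr, te in cv.split(X, y):
            idx = select_features(X[tr], s)          # e.g. a U2FS-style selector
            clf = KNeighborsClassifier(n_neighbors=5).fit(X[tr][:, idx], y[tr])
            accs.append(accuracy_score(y[te], clf.predict(X[te][:, idx])))
        results[frac] = float(np.median(accs))       # median accuracy over the 10 folds
    return results
</ns0:note></ns0:div>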
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The results obtained in the experiments suggest that the proposed U2FS algorithm obtains comparable results to the state-of-the-art in all the applications suggested, taking less computational time. Nevertheless, the performance of the utility metric for feature selection varies for the different experiments presented and requires a detailed analysis.</ns0:p><ns0:p>From Table <ns0:ref type='table'>3</ns0:ref>, in Section , it can be concluded that the utility metric is able to select the correct features in an artificially contaminated dataset. Both the binary embedding and the RBF kernel with σ 2 select the original set of features for the 10 folds of the experiment. The stability in the results also applies for the RBF embedding with σ 2 0 , which always selected the same feature pair for all 10 folds even though they are only correct for the spirals problem.</ns0:p><ns0:p>Therefore, considering the stability of the results, it can be concluded that the proposed approach is more robust in the selection of results than that based on the l 1 − norm.</ns0:p><ns0:p>On the other hand, when considering the suitability of the features selected, two observations can be made. First of all, it can be seen that the lack of consistency in the l 1 − norm approaches discards the selection of the correct set of features. Moreover, the wrong results obtained with both l 1 − norm and utility methods for the RBF embedding using σ 2 0 reveal the drawback of applying this approximation of σ 2 0 in presence of redundant or irrelevant features. Since this value is calculated as the mean of the standard deviation of all the dimensions in the data, this measure can be strongly affected by irrelevant data, that could be very noisy and enlarge this sigma, leading to the allocation of all the samples to a mega-cluster.</ns0:p><ns0:p>While the use of the proposed approximation for σ 2 achieves better results than σ 2 0 , these are comparable to the ones obtained with the KNN binary embedding when using the utility metric. The use of KNN to build graphs is a well-known practice, very robust for dense clusters, as it is the case in these examples. The definition of a specific field where each of the embeddings would be superior is beyond the scope of this paper. However, the excellence of both methods when combined with the proposed subset selection method only confirms the robustness of the utility metric, irrespective of the embedding considered.</ns0:p><ns0:p>For standardization purposes, the performance of the method was evaluated in benchmark databases.</ns0:p><ns0:p>As it can be observed, in terms of the accuracy obtained for each experiment, U2FS achieves comparable results to the l 1 − norm methods for most of the datasets considered, despite its condition of greedy method.</ns0:p><ns0:p>In spite of this, some differences in performance can be observed in the different datasets. The different ranking of the methods, as well as the accuracy obtained for each of the databases can be explained taking into account the type of data under study and the ratio between samples and dimensions.</ns0:p><ns0:p>With regard to the type of data represented by each test, it can be observed that for the ISOLET dataset, containing sound information, two groups of results are distinguishable. 
The group of the utility metric Manuscript to be reviewed</ns0:p><ns0:p>Computer Science results outperforms those derived from the l 1 − norm, which only reach comparable results for 60% of the features selected. These two groups of results are caused by the subset selection method applied, and not for the embedding, among which the differences are not remarkable.</ns0:p><ns0:p>In a similar way, for the case of the image datasets USPS, ORL and COIL20, the results derived from utility are slightly better than those coming from the l 1 − norm. In these datasets, similarly to the performance observed in ISOLET, accuracy increases with the number of features selected.</ns0:p><ns0:p>Regarding the differences between the proposed embeddings, it can be observed that the results obtained are comparable for all of them. Nonetheless, Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows that there is a slight improvement in the aforementioned datasets for the RBF kernel with σ 2 , but the results are still comparable to those obtained with other embeddings. Moreover, this similarity in the binary and RBF results holds for the l 1 − norm methods, for which the accuracy results almost overlap in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. This can be explained by the relation between the features considered. Since for these datasets the samples correspond to pixels, and the features to the color codes, a simple neighboring method such as the binary weighting is able to code the connectivity of pixels of similar colors.</ns0:p><ns0:p>The text datasets, PCMAC and BASEHOCK, are the ones that show bigger differences between the results obtained with utility and those obtained with the l 1 − norm. This can be explained by the amount of zeros present in the data, with which the utility metric is able to cope slightly better. The sparsity of the data leads to more error in the l 1 − norm results, since more features end up having the same MCFS score, and among those, the order for selection comes at random. The results obtained with the utility metric are more stable, in particular for the BASEHOCK dataset. For this dataset, U2FS even outperforms the results without feature selection if at least 40% of the features are kept.</ns0:p><ns0:p>In all the datasets proposed, the results obtained with the l 1 − norm show greater variability, i.e. larger percentiles. This is aligned with the results obtained in the simulations. The results for the l 1 − norm are not necessarily reproducible in different runs, since the algorithm is too sensitive to the training set selected. The variability of the utility methods is greater for the approaches based on the RBF kernel.</ns0:p><ns0:p>This is due to the selection of the σ 2 parameter, which also depends on the training set. The tuning of this parameter is still very sensitive to high-dimensional and large-scale data, posing a continuous challenge for the machine learning community <ns0:ref type='bibr' target='#b39'>(Yin and Yin, 2016;</ns0:ref><ns0:ref type='bibr' target='#b30'>Tharwat et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Despite it being a greedy method, the utility metric proves to be applicable to feature selection approaches and to strongly outperform the l 1 − norm in terms of computational time, without significant reduction in accuracy. U2FS proves to be effective both in cases with more samples than features and vice versa. 
The reduction in computation time is clear, for all the benchmark databases described, and is particularly attractive for high-dimensional datasets. Altogether, our feature selection approach U2FS, based on the utility metric, and with the binary or the RBF kernel with σ 2 is recommended due to its fast performance and its interpretability.</ns0:p><ns0:p>Additionally, the performance of U2FS is comparable to the state-of-the-art, as shown in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. In this table, the performance of U2FS (RBF kernel and σ 2 , with the utility metric) is compared to that of MCFS, NDFS and RJGSC. For MCFS, it can be seen that, as expected, U2FS appears as an improvement of this algorithm, achieving better results for both datasets. For NDFS, the results are slightly worse than for U2FS, most probably due to problems in the tuning of regularization parameters. Given the consistent good results for different datasets of RJGSC when compared against the state-of-the-art, and its condition of simultaneously adapting the spectral embedding and subset selection stages, this algorithm is taken as example of the most complex SSFS algorithms (SAMM-FS, SOGFS or DSRMR). These algorithms perform manifold learning and feature selection simultaneously, iteratively adapting both steps to achieve optimal results.</ns0:p><ns0:p>It is clear that in terms of accuracy, both for 10% of the features and for the maximal value of achieved, U2FS obtains similar results to RJGSC, while at the same time having a much smaller computational complexity. Furthermore, while RJGSC requires the manual tuning of extra parameters, similarly to other algorithms in the state-of-the-art, U2FS tunes its parameters automatically. Hence, the application of the method is straightforward for the users. The stages of higher complexity in U2FS, previously defined as O(N 3 + d 3 ), are shared by most of the algorithms in the state-of-the-art. However, on top of these eigendecompositions and matrix inversions, the algorithms in the literature require a number of iterations in the optimization process that U2FS avoids. Additionally, U2FS is the only algorithm for which the computation time scales linearly with the amount of features selected.</ns0:p><ns0:p>The current state-of-the-art of unsupervised spectral feature selectors applies the stages of manifold Manuscript to be reviewed</ns0:p><ns0:p>Computer Science learning and subset selection simultaneously, which can lead to optimal results. In a field that gets more and more complex and goes far from applicability, U2FS is presented as a quick solution for a sequential implementation of both stages of SSFS algorithms, yet achieving comparable results to the state-of-the-art.</ns0:p><ns0:p>Being a greedy method, the utility metric cannot be applied simultaneously to the manifold learning and subset selection stages. However, other sequential algorithms from the state-of-the-art could consider the use of utility for subset selection, instead of the current sparsity-inducing techniques. One of the most direct applications could be the substitution of group-LASSO for group-utility, in order to perform selections of groups of features as proposed by <ns0:ref type='bibr' target='#b4'>Bertrand (2018)</ns0:ref>. 
This can be of interest in cases where the relations between features are known, such as in channel selection <ns0:ref type='bibr' target='#b23'>(Narayanan and Bertrand, 2020)</ns0:ref> or in multi-modal applications <ns0:ref type='bibr' target='#b41'>(Zhao et al., 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This work presents a new method for unsupervised feature selection based on manifold learning and sparse regression. The main contribution of this paper is the formulation of the utility metric in the field of spectral feature selection, substituting other sparse regression methods that require more computational resources. This method, being a backward greedy approach, has been proven to obtain comparable results to the state-of-the-art methods with analogous embedding approaches, yet at considerably reduced computational load. The method shows consistently good results in different applications, from images to text and sound data; and it is broadly applicable to problems of any size: using more features than samples or vice versa.</ns0:p><ns0:p>Furthermore, aiming to show the applicability of U2FS to data presenting non-linearities, the proposed approach has been evaluated in simulated data, considering both a binary and an RBF kernel embedding.</ns0:p><ns0:p>Given the sensitivity of the RBF kernel to high-dimensional spaces, a new approximation of the RBF kernel parameter was proposed, which does not require further tuning around the value obtained. The proposed approximation outperforms the rule-of-thumb widely used in the literature in most of the scenarios presented. Nevertheless, in terms of feature selection, the utility metric is robust against the embedding.</ns0:p><ns0:p>U2FS is proposed as a non-parametric efficient algorithm, which does not require any manual tuning or special knowledge from the user. Its simplicity, robustness and accuracy open a new path for structure sparsity-inducing feature selection methods, which can benefit from this quick and efficient technique.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54747:2:0:NEW 25 Feb 2021) Manuscript to be reviewed Computer Science combination with feature selection techniques not only reduces the dimensionality of the data, but also guarantees interpretability.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>and R XE by removing the rows and columns related to that feature f l . 8: end while. Algorithm 3 Unsupervised feature selector based on the utility metric (U2FS). Input: Data X, Number of clusters c, Number of features s to select. Output: s features selected. 1: Construct the similarity graph W as described in Section , selecting one of the weightings: • Binary • RBF kernel, using σ 2 0</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Toy examples used for simulations: Clouds (A), Moons (B), Spirals (C), Corners (D), Half-Kernel (E), Crescent Moon (F).</ns0:figDesc><ns0:graphic coords='11,224.49,257.79,247.90,137.97' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Accuracy results for the benchmark databases, for selecting from 10 to 80% of the original number of features. The thick lines represent the median accuracy of the 10-fold cross-validation, and the shadows, the 25 and 75 percentile. USPS (Figure 3A), Isolet (B), ORL (C), COIL20 (D), PCMAC (E), BASEHOCK (F).</ns0:figDesc><ns0:graphic coords='14,141.81,83.74,413.12,235.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Computation time for extracting from 10 to 80% of the original number of features for each of the benchmark databases. USPS (Figure 4A), Isolet (B), ORL (C), COIL20 (D), PCMAC (E), BASEHOCK (F).</ns0:figDesc><ns0:graphic coords='14,141.81,424.54,413.13,238.19' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Eigenvectors E, Number of features s to select Output: s features selected 1: Calculate R XX and R XE as described in Equation (8).</ns0:figDesc><ns0:table /><ns0:note>2: Calculate β as the smallest non-zero eigenvalue of R XX 3: while Number of features remaining is > s do 4:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Methods compared in the experiments</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Similarity measure</ns0:cell><ns0:cell>Subset selection</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>KNN Bin + l 1 − norm KNN + binary weighting l 1 -norm</ns0:cell></ns0:row><ns0:row><ns0:cell>RBF σ 2 0</ns0:cell><ns0:cell>+ l 1 − norm</ns0:cell><ns0:cell>RBF kernel, σ 2 0</ns0:cell><ns0:cell>l 1 -norm</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>KNN Bin + Utility</ns0:cell><ns0:cell cols='2'>KNN + binary weighting Utility metric</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>RBF σ 2 0 RBF σ 2 + Utility + Utility</ns0:cell><ns0:cell>RBF kernel, σ 2 0 RBF kernel, σ 2</ns0:cell><ns0:cell>Utility metric Utility metric</ns0:cell></ns0:row></ns0:table><ns0:note>These novelties are evaluated in two different scenarios, namely a simulation study, and in the application of the methods on benchmark datasets. In particular for the latter, the methods are not only evaluated in terms of accuracy, but also regarding computational cost. Additionally, U2FS is compared with 3 representative state-of-the-art algorithms from the general family of unsupervised sparsity-inducing feature selection algorithms:9/18 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:2:0:NEW 25 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Description of the toy example datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'># samples # classes</ns0:cell></ns0:row><ns0:row><ns0:cell>Clouds</ns0:cell><ns0:cell>9000</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Moons</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Spirals</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Corners</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Half-Kernel</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Crescent-Moon 10000</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>(</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Description of the benchmark databases</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>Data Type Samples Features Classes</ns0:cell></ns0:row><ns0:row><ns0:cell>USPS</ns0:cell><ns0:cell>Images</ns0:cell><ns0:cell>9298</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Isolet</ns0:cell><ns0:cell>Audio</ns0:cell><ns0:cell>1560</ns0:cell><ns0:cell>617</ns0:cell><ns0:cell>26</ns0:cell></ns0:row><ns0:row><ns0:cell>ORL</ns0:cell><ns0:cell>Images</ns0:cell><ns0:cell>400</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>40</ns0:cell></ns0:row><ns0:row><ns0:cell>COIL20</ns0:cell><ns0:cell>Images</ns0:cell><ns0:cell>1440</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMAC</ns0:cell><ns0:cell>Text</ns0:cell><ns0:cell>1943</ns0:cell><ns0:cell>3289</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>BASEHOCK Text</ns0:cell><ns0:cell>1993</ns0:cell><ns0:cell>4862</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>summarizes the results by showing the KNN accuracy (ACC) for 10% of the features used, and the maximum ACC achieved among the percentages of features considered, for the BASEHOCK and</ns0:figDesc><ns0:table /><ns0:note>12/18PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54747:2:0:NEW 25 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Comparison of classification accuracy (ACC) with the state-of-the-art for PCMAC and BASEHOCK datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='4'>Method ACC at 10% features % features at Max ACC Max ACC</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMAC</ns0:cell><ns0:cell>U2FS</ns0:cell><ns0:cell>0.785</ns0:cell><ns0:cell>60%</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MCFS</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>0.697</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NDFS</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>40%</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RJGSC</ns0:cell><ns0:cell>0.805</ns0:cell><ns0:cell>60%</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>BASEHOCK U2FS</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>50%</ns0:cell><ns0:cell>0.925</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>MCFS</ns0:cell><ns0:cell>0.815</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>0.84</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NDFS</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>20%</ns0:cell><ns0:cell>0.794</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RJGSC</ns0:cell><ns0:cell>0.902</ns0:cell><ns0:cell>80%</ns0:cell><ns0:cell>0.917</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Amalia Villa
Dynamical Systems, Signal Processing
and Data Analytics (STADIUS),
Leuven (Arenberg)
Kasteelpark Arenberg 10 - box 2446
3001 Leuven
Belgium
25th February 2021
Dear Dr. Fernández, dear reviewers,
We would like to thank you for providing us with new and interesting feedback on our manuscript 'Utility metric for unsupervised feature selection' (#54747). Please find attached a revised version of our manuscript. The comments provided by the reviewers helped us to improve our submission; you can find each of our answers on the following page. We also updated the Github repository according to the remarks of the reviewer.
In the coming page of this document we address and answer all reviewer's comments and suggestions.
Amalia Villa
On behalf of all authors.
Reviewer 3
We would like to thank the reviewer for his/her feedback. Hereby we answer the reviewer’s comments.
1. The sample code in the manuscript code will encounter dimension errors. Please check the experiment code again.
Thank you very much for finding this error. The error is due to the code requiring the Curve Fitting toolbox from Matlab. We have updated the README in the Github repository, indicating the dependency of the U2FS on this toolbox.
2. The comparison method and the data set used can be increased, which will make the article more professional and persuasive.
Thank you very much for your comment. We consider the current Experiments section to be informative enough, given the use of simulation data, benchmark datasets with different characteristics, and the comparison of representative methods from the state-of-the-art. Hereunder we justify the selection of this experimental set-up.
Regarding the number of datasets considered, the use of 6 benchmark datasets to report results should be considered sufficient according to the literature. See:
Zhu, X., Li, X., Zhang, S., Ju, C., & Wu, X. (2016). Robust joint graph sparse coding for unsupervised spectral feature selection. IEEE transactions on neural networks and learning systems, 28(6), 1263-1275.,
Hou, C., Nie, F., Li, X., Yi, D., & Wu, Y. (2013). Joint embedding learning and sparse regression: A framework for unsupervised feature selection. IEEE Transactions on Cybernetics, 44(6), 793-804. ,
Yang, Y., Shen, H. T., Ma, Z., Huang, Z., & Zhou, X. (2011, December). ℓ 2, 1-norm regularized discriminative feature selection for unsupervised learning. In IJCAI international joint conference on artificial intelligence.
The current selection of datasets contains cases with more features than samples and vice-versa, as well as multi and bi-class data. Additionally, we suggest a new set of simulation experiments, where the correct selection of the relevant features can be evaluated. These diverse experiments confirm the versatility of U2FS in multiple situations, always providing successful results in a short computation time.
In relation to the comparison methods used, we consider the current methods a representative sample of the publicly available feature selectors. MCFS, NDFS and RJGSC represent, respectively, the precursor, a comparable method, and an adaptable algorithm much more complex than U2FS. Since these methods have already been compared to the state-of-the-art in the corresponding publications, they work as a summary of the available literature.
3. The logic in the manuscript is clear, the grammar is accurate, and the format is relatively standardized. The formula is best placed in the center to look more tidy.
Thank you for your comments. Unfortunately, the alignment of the equations to the left is part of the Peer J template. However, we will propose the Peer J editors to center the equations in the camera-ready version.
" | Here is a paper. Please give your review comments after reading it. |
58 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is a scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations -with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need to have standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field.</ns0:p><ns0:p>In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF.</ns0:p><ns0:p>The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>In recent years, the field of Machine Learning (ML) has experienced a surge in practical applications.</ns0:p><ns0:p>However, many ML models, spanning from random forests to deep neural networks, do not provide a human-understandable clarification of their internal decision processes: this issue is known as the black-box problem <ns0:ref type='bibr' target='#b8'>(Burrell, 2016)</ns0:ref>.</ns0:p><ns0:p>The lack of explanatory power hampers the deployment of ML in real-world scenarios for a variety of reasons. As a first limitation, black-box algorithms are poor decision support systems <ns0:ref type='bibr' target='#b13'>(Doran et al., 2017)</ns0:ref>: domain experts such as doctors or financial analysts would hardly take into account an algorithmicallygenerated second opinion without an argument to support it, or a measurable score of trust <ns0:ref type='bibr' target='#b31'>(Jiang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>From a legal viewpoint, in the European Union AI systems are regulated by law with the General Data Protection Regulation (GDPR) -which includes directives on algorithmic decision-making <ns0:ref type='bibr' target='#b24'>(Goodman and Flaxman, 2016)</ns0:ref>. For instance, GDPR states that: i) The decisions which produces legal effects concerning [a citizen] or of similar importance shall not be based on the data revealing sensitive information, for example about ethnic origins, political opinions, sexual orientation, . . .; and ii) The controller must ensure the right for individuals to obtain further information about the decision of any automated system. While the required explanations of a complex socio-technical system regards the decision process as a whole, clearly good explainability algorithms would help supporting the provisioning of detailed explanations for ML components <ns0:ref type='bibr' target='#b17'>(Edwards and Veale, 2017)</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55625:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In recent times, widely used ML models turned out to be biased against racial <ns0:ref type='bibr' target='#b42'>(Obermeyer et al., 2019)</ns0:ref> or gender <ns0:ref type='bibr' target='#b28'>(Hamilton, 2019)</ns0:ref> groups. While interpretable-by-design models should be preferred in high-stakes decisions <ns0:ref type='bibr' target='#b53'>(Rudin, 2019)</ns0:ref>, algorithmic fairness is of paramount importance, and its very first step is the explanation of black-box outcomes <ns0:ref type='bibr' target='#b44'>(Panigutti et al., 2020a)</ns0:ref>. Furthermore, an unexplained ML model might also be right for the wrong reasons, due to the algorithm learning spurious correlation in the data. Being capable of assessing the reasons why a ML component outputs a given classification is essential to establish trustable AI <ns0:ref type='bibr' target='#b30'>(Ignatiev, 2020)</ns0:ref>.</ns0:p><ns0:p>In this paper we focus on the class of explanation models named local linear explanations <ns0:ref type='bibr'>(LLE)</ns0:ref>. A taxonomy of other approaches can be found in <ns0:ref type='bibr' target='#b4'>(Belle and Papantonis, 2020)</ns0:ref>. Explainability methods that explain arbitrary black-box models <ns0:ref type='bibr' target='#b49'>(Ribeiro et al., 2016a)</ns0:ref> using LLEs provide an explanation in terms of the most relevant features for a data point classification. The XAI literature considers several evaluation criteria <ns0:ref type='bibr' target='#b7'>(Bohanec et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b27'>Guidotti et al., 2018)</ns0:ref>, but there is generally no consensus on a core of fundamental metrics, and in some cases these properties lack a precise definition <ns0:ref type='bibr' target='#b22'>(Gilpin et al., 2018)</ns0:ref>. Furthermore, the explanation process has to be contextualised with respect the audience for which explainability is sought <ns0:ref type='bibr' target='#b2'>(Arrieta et al., 2019)</ns0:ref> -for example, the same clinical black-box decision could be explained very differently to patients, doctor, and medical data analysts. Finally, a natural application of explanations consists in supporting a decision making task in order to perform the minimum change on an instance that modifies the classification label. Proactive use of explanations can be found for instance in the credit sector <ns0:ref type='bibr' target='#b25'>(Grath et al., 2018)</ns0:ref>, to guide an applicant in amending his/her position in order to get a loan approval, or in planning <ns0:ref type='bibr' target='#b20'>(Fox et al., 2017)</ns0:ref>. To the best of our knowledge, there is no agreement on how to quantify the prescriptive exploitation of local linear explanations.</ns0:p><ns0:p>The main contribution of this paper is a clear and unambiguous definition of a core set of metrics to compare and evaluate explainability methods that explain black-box models with LLEs. We propose to quantify three new LLE aspects: i) the local concordance of the white-box model w.r.t the black-box model for the instance to be explained, under the constraint of explanation conciseness; ii) the tendency of an explainability method to produce the same explanation on the same data point (named reiteration similarity); and iii) how good the explanation is when it is taken as a recipe to change a data point classification (prescriptivity). The core set of metrics also includes existing LLE metrics taken from literature (local fidelity). 
To encourage users to adopt a single, reproducible definition of these metrics for LLEs, we provide an implementation in the form of a Python framework, named LEAF (Local Explanation evAluation Framework). We apply LEAF to the two most used model-agnostic explainability algorithms for LLEs, LIME and SHAP, and show that they display unexpected behaviours that were not previously reported in the literature.</ns0:p><ns0:p>The paper is structured as follows: in Section 2 we provide an overview of the basic concepts of Local Linear Explainers, as well as the two techniques we will compare, LIME and SHAP. In Section 3 we introduce the LEAF framework and focus on the metrics it computes to evaluate LLE explanations. In Section 4 we apply LEAF to a range of datasets and black-box classifiers, evaluating the explanations provided by LIME and SHAP, and discussing the results. We end the paper with a general discussion (Section 5) and final considerations (Section 6).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In order to tackle the black-box problem, several XAI techniques have been proposed <ns0:ref type='bibr' target='#b23'>(Goebel et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Došilović et al., 2018)</ns0:ref>, exploiting different principles and processes, usually with supporting graphical representations <ns0:ref type='bibr' target='#b62'>(Tamagnini et al., 2017)</ns0:ref> and visual analytics <ns0:ref type='bibr' target='#b10'>(Choo and Liu, 2018)</ns0:ref>. In general the goal is to extract human-understandable knowledge from trained black-box models <ns0:ref type='bibr' target='#b38'>(Ming et al., 2018;</ns0:ref><ns0:ref type='bibr'>Ribeiro et al., 2016b,c)</ns0:ref>. However, despite strong consensus about the necessity of enriching ML models with explainability modules, there is no shared definition of what an explanation should be <ns0:ref type='bibr' target='#b35'>(Lipton, 2016;</ns0:ref><ns0:ref type='bibr' target='#b47'>Pedreschi et al., 2019)</ns0:ref>, nor quantitative comparisons of different methods <ns0:ref type='bibr' target='#b41'>(Murdoch et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Model-agnostic explanation models are typically based on decision trees, rules or feature importance <ns0:ref type='bibr' target='#b27'>(Guidotti et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Freitas, 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Craven and Shavlik, 1995)</ns0:ref>, because of the simplicity of such explanations. Several model-specific and data-specific explanation models have also been developed, like for deep neural networks <ns0:ref type='bibr' target='#b6'>(Binder et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b55'>Selvaraju et al., 2019)</ns0:ref>, deep relational machines <ns0:ref type='bibr' target='#b57'>(Srinivasan et al., 2019a)</ns0:ref>, time series <ns0:ref type='bibr' target='#b34'>(Karlsson et al., 2019)</ns0:ref>, multi-labelled and ontology-linked data <ns0:ref type='bibr'>(Panigutti et al.,</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:p>2020b) or logic problems <ns0:ref type='bibr' target='#b58'>(Srinivasan et al., 2019b)</ns0:ref>; software toolkits including the implementation of various XAI algorithms have also been introduced <ns0:ref type='bibr' target='#b5'>(Biecek, 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Arya et al., 2019)</ns0:ref>. A comprehensive survey of explainability methods can be found in <ns0:ref type='bibr' target='#b27'>Guidotti et al. (2018)</ns0:ref> and in <ns0:ref type='bibr' target='#b14'>Došilović et al. (2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Local Linear Explanation Methods</ns0:head><ns0:p>In this section we review the class of explanation models named local linear explanations (LLE), also called feature importance models <ns0:ref type='bibr' target='#b27'>(Guidotti et al., 2018)</ns0:ref>, additive feature attribution methods <ns0:ref type='bibr' target='#b36'>(Lundberg and Lee, 2017)</ns0:ref> or linear proxy models <ns0:ref type='bibr' target='#b22'>(Gilpin et al., 2018)</ns0:ref>. Let x be a data point of a dataset X, that is fed to a black-box model f . For the sake of generality, no assumption on the details of f are made. A XAI method explains the value f (x) with a LLE by building an interpretable, white-box classifier g that mimics f around x.</ns0:p><ns0:p>Formally, let F be the set of input features of X, and F = |F | be its cardinality. Let X ⊆ R F be the input dataset, with mean µ X and variance σ 2 X . For simplicity, we consider X to be a tabular dataset in an interpretable space, i.e. the same space where explanations are provided. Let f : R F → R be the original black-box model, and let x ∈ R F be the input to be explained. We consider white-box models in the form of local linear explanations (LLE). A LLE model g is a linear function of the input variables with form</ns0:p><ns0:formula xml:id='formula_0'>g(x) = w 0 + F ∑ i=1 w i • x i</ns0:formula><ns0:p>(1) g assigns to each feature i a weight w i , in order to approximate the behaviour of f in the local neighbourhood of x. Intuitively, the absolute value w i of each feature i gives its importance in the explanation. In some cases, only K ≤ F features have non-zero weights.</ns0:p><ns0:p>In the following subsection we will describe LIME and SHAP in detail. Several other explainability methods <ns0:ref type='bibr' target='#b27'>(Guidotti et al., 2018)</ns0:ref> for specific classes of black-box models (i.e. model-aware) exist. For instance, DeepLIFT <ns0:ref type='bibr' target='#b56'>(Shrikumar et al., 2017</ns0:ref>) is a recursive prediction explanation for deep learning models, which introduced the idea of background values also used by SHAP. Another approach, LACE <ns0:ref type='bibr' target='#b46'>(Pastor and Baralis, 2019)</ns0:ref>, combines the local perturbation concept of LIME together with the computation of Shapley-like used in SHAP. Other approaches tackle explainability from a different perspective, i.e. they do not provide explicit indications about the reasons why a decision has been taken but rather generate counterfactual explanations through optimisation-based methods <ns0:ref type='bibr' target='#b64'>(Ustun et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b33'>Karimi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Grath et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>LIME: Local Interpretable Model-agnostic Explanations</ns0:head><ns0:p>The LIME method <ns0:ref type='bibr' target='#b51'>(Ribeiro et al., 2016c)</ns0:ref> constructs LLE models g starting from a synthetic neighbourhood N(x) around the input to be explained x. A local neighbourhood N(x) of H points around an instance x is defined as</ns0:p><ns0:formula xml:id='formula_1'>N(x) = x j = x + p j , p j ∼ N (0, σ X ) j = 1 . . . H (2)</ns0:formula><ns0:p>where each vector p j represents a local perturbation, and N is the multivariate normal distribution. For simplicity, we do not consider how to randomly generate categorical features, which can be uniformly sampled from their frequency in the input dataset X. To find g, LIME fits a ridge regression model to N(x) using the linear least squares function</ns0:p><ns0:formula xml:id='formula_2'>L ( f , g, π x ) = ∑ z∈N(x) π x (z) f (z) − g(η(z)) 2</ns0:formula><ns0:p>where the default distance kernel π x (z) = exp −d(x, z) 2 /σ 2 is a weighted distance function for a kernel width σ = 3 4 √ F, and d(•, •) is the Euclidean distance. Explanations can be given in an explanation space X ′ which can differ from X, provided that a mapping function η : X → X ′ is given (η is the identity when</ns0:p><ns0:formula xml:id='formula_3'>X ≡ X ′ ).</ns0:formula><ns0:p>LIME is designed to select a subset of the F features for the LLE model g iteratively, by fitting multiple ridge regressors using a sequential feature selection (SFS) algorithm <ns0:ref type='bibr' target='#b48'>(Pudil et al., 1994)</ns0:ref>. Intuitively, the resulting model g is a local explanation for x because it is built to classify the behaviour of N(x), which is a synthetic dataset built around x. It is a linear explanation because it provides a single scalar weight for each feature of x.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55625:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A different distance kernel π x (z) can be used, altering the resulting explanation. LIME also supports the generation of explanation models g in a feature space X ′ different from X, provided that a mapping function X ′ → X is given.</ns0:p></ns0:div>
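<ns0:div><ns0:p>As an illustration of the procedure described above, the following minimal Python sketch builds the synthetic neighbourhood of Equation (2), weights it with the distance kernel π x , and fits a ridge regressor. It is a simplified sketch, not the actual LIME implementation (which additionally performs the sequential feature selection step); the function name local_linear_explanation and the assumption that f accepts a matrix of samples are our own.</ns0:p><ns0:p>
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(f, X, x, H=5000, rng=None):
    # Sketch of the neighbourhood-based fit: build N(x) as in Equation (2),
    # weight the samples with the exponential distance kernel pi_x, and fit a
    # ridge regressor g. f is assumed to return one numeric output per row,
    # e.g. the probability of the positive class.
    rng = rng if rng is not None else np.random.default_rng(0)
    F = X.shape[1]
    sigma_X = X.std(axis=0)                        # per-feature standard deviation of the dataset
    N = x + rng.normal(0.0, sigma_X, size=(H, F))  # synthetic neighbourhood N(x)
    y = f(N)                                       # black-box outputs on the neighbourhood
    width = 0.75 * np.sqrt(F)                      # default kernel width sigma = 3/4 * sqrt(F)
    d2 = ((N - x) ** 2).sum(axis=1)                # squared Euclidean distance to x
    pi = np.exp(-d2 / width ** 2)                  # distance kernel pi_x(z)
    g = Ridge(alpha=1.0).fit(N, y, sample_weight=pi)
    return g.intercept_, g.coef_                   # w_0 and the feature weights w_i
</ns0:p></ns0:div>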
<ns0:div><ns0:head n='2.3'>SHAP: SHapley Additive exPlanations</ns0:head><ns0:p>The SHAP method (Lundberg and Lee, 2017) (also called IME, Interactions-based Method for Explanation) derives local explanation models using the concept of Shapley values from cooperative game theory <ns0:ref type='bibr' target='#b40'>(Moretti and Patrone, 2008;</ns0:ref><ns0:ref type='bibr' target='#b39'>Moeyersoms et al., 2016)</ns0:ref>. In principle the Shapley theory applies to binary features only, but it can be extended to real values. Many variations of SHAP have been defined: in this paper we will refer to the KernelExplainer, which is the model-agnostic and most general version of the method. Other model-specific variations, like TreeExplainer for tree-based models <ns0:ref type='bibr' target='#b37'>(Lundberg et al., 2020)</ns0:ref>, have also been defined, but are not considered in this paper due to their model-aware nature.</ns0:p><ns0:p>A SHAP explanation is a vector φ = (φ 0 , φ 1 . . . φ F ) that assigns a feature importance φ i to each input feature. Intuitively, the input features of a classifier are akin to players cooperating to win a game (the model prediction). The more important a player i is to the cooperation, the higher its Shapley value φ (i). Features are grouped into coalitional sets, corresponding to the power set of the set of features F.</ns0:p><ns0:p>To evaluate the black-box model for a subset of features S ⊆ F (i.e. the coalition with only players from S), SHAP introduces the concept of background values B, to replace the missing features in the evaluation of f . Let f S (x S ) denote the evaluation of f on x where only the features in the subset S are kept, and the others are replaced with the background values from B. The set B is usually taken either as a single sample B = {E(X)}, i.e. the dataset average, or as a set of centroids of X.</ns0:p><ns0:p>For a feature i ∈ F , its Shapley value φ (i) is defined as follows</ns0:p><ns0:formula xml:id='formula_4'>φ (i) = ∑ S⊆F \{i} ( |S|! • (F − |S| − 1)! / F! ) • ( f S∪{i} (x S∪{i} ) − f S (x S ) )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Let φ 0 = f ∅ (x ∅ ) be the constant neutral value, where all feature values of x are ignored.</ns0:p><ns0:p>We can derive a LLE model g as a linear regressor using (1) with weights</ns0:p><ns0:formula xml:id='formula_5'>w 0 = φ 0 , w i = φ i / (x i − µ i ), 1 ≤ i ≤ F, µ i ∈ B {i}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Such a derivation is guaranteed to be locally accurate (Lundberg and Lee, 2017, page 4), i.e. f (x) = g(x).</ns0:p><ns0:p>Unlike LIME, SHAP does not build a local neighbourhood N(x) to generate the explanation, relying instead on the computation of the feature importances using (3).</ns0:p><ns0:p>An important observation is that SHAP is deterministic only in its theoretical formulation <ns0:ref type='bibr' target='#b36'>(Lundberg and Lee, 2017;</ns0:ref><ns0:ref type='bibr' target='#b59'>Štrumbelj and Kononenko, 2010)</ns0:ref>, but may significantly diverge from such promise in practice, due to implementation details. The exact computation of Shapley values requires evaluating (3) on all subsets of F, resulting in 2 F evaluations. However, SHAP limits the evaluation to a user-defined number of subsets (default 2F + 2 11 ).
Therefore, when explaining data points with more than 11 features, SHAP resorts to the Adaptive Sampling <ns0:ref type='bibr' target='#b60'>( Štrumbelj and Kononenko, 2014)</ns0:ref> heuristic, which approximates the Shapley values using a reduced set of subsets.</ns0:p></ns0:div>
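<ns0:div><ns0:p>For illustration, the following sketch computes the exact Shapley values of Equation (3) by brute-force enumeration of all feature subsets, replacing the missing features with the background values B. It is only feasible for small F and is not the sampling-based procedure used by the SHAP implementation; the function name shapley_values and the assumption of a scalar-output f are our own.</ns0:p><ns0:p>
import math
from itertools import combinations
import numpy as np

def shapley_values(f, x, background):
    # Exact Shapley values of Equation (3), enumerating every subset of the
    # features (2^F evaluations of f). Features outside the coalition S are
    # replaced by the background values B.
    x = np.asarray(x, dtype=float)
    F = len(x)
    def f_S(S):
        z = np.array(background, dtype=float)      # start from the background sample
        idx = list(S)
        z[idx] = x[idx]                            # keep only the features in S
        return f(z)
    phi = np.zeros(F)
    for i in range(F):
        rest = [j for j in range(F) if j != i]
        for k in range(F):                         # coalition sizes 0 .. F-1
            for S in combinations(rest, k):
                w = math.factorial(k) * math.factorial(F - k - 1) / math.factorial(F)
                phi[i] += w * (f_S(S + (i,)) - f_S(S))
    phi_0 = f_S(())                                # neutral value: all features at background
    return phi_0, phi
</ns0:p></ns0:div>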
<ns0:div><ns0:head n='2.4'>On Feature Importance</ns0:head><ns0:p>While both LIME and SHAP are based on the concept of feature importance, the definition of what constitutes an 'important' feature is not unique. A survey of the different interpretations of the importance concept can be found in <ns0:ref type='bibr' target='#b56'>Shrikumar et al. (2017)</ns0:ref>. LIME adopts the concept of local importance. A feature is locally important if its variation around x produces a big change in the output value of f . Therefore, a feature that is highly relevant for the classification of x, but whose perturbation in the close neighbourhood N(x) does not impact much on the resulting classification, can easily get a low importance. For SHAP importance is determined in comparison against the background values B. A feature is important if its deviation from the background value to x produces a large variation in the output. As a result, LIME and SHAP produce explanations that are not directly comparable and therefore it is of high importance to determine a set of evaluation metrics that are not biased by the different concepts of importance.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55625:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>LEAF is a Python framework designed to support several steps of a machine learning pipeline where local linear explanations are part of the final decision process. It is designed around two major perspectives reflecting the most common scenarios where explainability methods might be present, as sketched in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, i.e. final decision process and the model development:</ns0:p><ns0:p>• P1: Explaining single decisions. The black-box classifier f is given. The goal is to understand if an explanation g computed by an explainer on a single instance x can be trusted, and how complex g should be. LEAF supports this use case by providing metrics related to the quality of the explanation, given that the LLE method and a level of explanation complexity are defined. This allows an end-user to trust the white-box explanation, or to reject it if it does not meet the expected quality.</ns0:p><ns0:p>• Once the best model (that might also be an intrinsically black-box) is identified, it can be used to take decisions and classify previously unseen data points (black arrows). Here, post-hoc explanation techniques can be used to understand the black-box decisions by means of white-box explanations. The LEAF framework can induce two feedback loops in the whole pipeline, both to quantify the goodness and choose the more suitable explanation approach (P1, blue arrows), or to sort out which model is providing the most accurate post-hoc explanations, according to the explanation metrics (P2, red arrows).</ns0:p><ns0:p>In literature there is no consensus upon nor a unique definition of 'explanation quality'. However, different aspects of a LLE model g can be measured by different metrics. LEAF includes the most commonly used (conciseness, local fidelity), along with new metrics (local concordance, reiteration similarity, prescriptivity) described hereafter.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conciseness</ns0:head><ns0:p>We agree with <ns0:ref type='bibr' target='#b27'>Guidotti et al. (2018)</ns0:ref> that the general definition of interpretability as to which extent the model and/or the prediction are human-understandable is pivotal in the evaluation of an explanation; alas, interpretability is often only vaguely defined in literature as correlated to human comprehensibility <ns0:ref type='bibr' target='#b21'>(Freitas, 2014)</ns0:ref>. Due to our quantitative approach, we decided to focus on a specific measurable aspect of interpretability: for LLE models, we introduce the Conciseness metric as corresponding to the maximum number K of non-zero weights w i that are kept in the explanation presented to the user, while the other F − K features are treated as non-relevant and are excluded. For datasets with a large number of features, a compact explanation is clearly preferable, for a human reader, over one encompassing all features in the dataset.</ns0:p></ns0:div>
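<ns0:div><ns0:p>As a minimal sketch (with a helper name of our own choosing), a conciseness level K can be enforced on an LLE by keeping only the K weights with the largest magnitude and zeroing the others:</ns0:p><ns0:p>
import numpy as np

def truncate_explanation(w, K):
    # Enforce a conciseness level K: keep the K weights with the largest
    # absolute value and zero out the remaining F - K (non-relevant) features.
    w = np.asarray(w, dtype=float)
    keep = np.argsort(-np.abs(w))[:K]
    w_concise = np.zeros_like(w)
    w_concise[keep] = w[keep]
    return w_concise, keep
</ns0:p></ns0:div>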
<ns0:div><ns0:head>5/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55625:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>LIME and SHAP have different approaches to the conciseness of their explanations. In LIME the conciseness is given a priori, i.e. the number K is fixed by the user and the explanation is provided with exactly K features, with no guarantee if K is too high or too low. SHAP instead has no direct concept of conciseness, and will always produce a feature importance φ i for each of the F features. However, the user could apply an information criterion (like the Akaike or the Bayesian) to keep the most relevant features and drop the others.</ns0:p><ns0:p>Local Fidelity It measures how good the white-box g is at approximating the behaviour of the black-box f for the target sample x around its synthetic neighbourhood N(x). The Fidelity score can be defined either as a global fidelity (see <ns0:ref type='bibr' target='#b27'>(Guidotti et al., 2018)</ns0:ref>) or as a local fidelity (see <ns0:ref type='bibr' target='#b51'>Ribeiro et al. (2016c)</ns0:ref>); hereafter, we will refer to the latter. The agreement between f and g is usually measured using the F1 score. Being a local metric, each sample x will result in a different local fidelity score. By using N(x) instead of x, local fidelity gives an indication of how g behaves in the locality of x, but as a consequence local fidelity is highly dependent on how the N(x) points are sampled. While there is no canonical way to define such sampling <ns0:ref type='bibr' target='#b1'>(Andrews et al., 1995)</ns0:ref>, we adopt the one provided in Equation (2).</ns0:p><ns0:p>Local Concordance It measures the agreement between f (x) and g(x) on the single instance x to be explained, under the conciseness constraint, and is defined as</ns0:p><ns0:formula xml:id='formula_6'>ℓ(| f (x) − g(x)|), where ℓ(k) = max(0, 1 − k)</ns0:formula><ns0:p>is the hinge loss function <ns0:ref type='bibr' target='#b52'>(Rosasco et al., 2004)</ns0:ref>, so that the score ranges from 0 for total disagreement to 1 for a perfect match. In the literature there are several similar metrics with different names, like completeness <ns0:ref type='bibr' target='#b61'>(Sundararajan et al., 2017)</ns0:ref>, summation-to-delta <ns0:ref type='bibr' target='#b56'>(Shrikumar et al., 2017)</ns0:ref>, or the axiomatic definition of local accuracy defined in <ns0:ref type='bibr' target='#b36'>(Lundberg and Lee, 2017)</ns0:ref>. However, all these definitions overlook the relationship with the conciseness. An explanation is typically pruned to encompass a reduced set of K features. We clarify this ambiguity by evaluating the local concordance for white-box models with only the K requested features: local concordance is different from local fidelity because it also considers the conciseness constraint, i.e. only the K requested features are considered.</ns0:p><ns0:p>SHAP claims <ns0:ref type='bibr' target='#b36'>(Lundberg and Lee, 2017)</ns0:ref> the local concordance of g to be always 1. Unfortunately, this is true only in the trivial case when K = F, i.e. when all features belong to the explanation. When a conciseness constraint requires K to be less than F, even deterministic methods like SHAP may fail to provide a local concordance of 1. We shall provide a detailed analysis of this behaviour in Section 4.</ns0:p><ns0:p>LIME provides no guarantees about the local concordance of its LLE models. Since for both techniques f (x) and g(x) are not guaranteed to be exactly the same, the label of the explanation could be inconsistent with the label predicted by the black-box model.</ns0:p>
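<ns0:p>The two scores above can be sketched as follows; the helper names and the binarisation of probabilistic outputs at the 0.5 boundary are our own assumptions for binary classifiers.</ns0:p><ns0:p>
import numpy as np
from sklearn.metrics import f1_score

def local_concordance(fx, gx):
    # Hinge-style score l(|f(x) - g(x)|) with l(k) = max(0, 1 - k): 1 means the
    # concise LLE g reproduces the black-box output on the instance x exactly.
    return max(0.0, 1.0 - abs(fx - gx))

def local_fidelity(f, g, N):
    # F1 agreement between f and g over the synthetic neighbourhood N(x), with
    # both outputs binarised at the 0.5 classification boundary.
    yf = (np.asarray(f(N)) >= 0.5).astype(int)
    yg = (np.asarray(g(N)) >= 0.5).astype(int)
    return f1_score(yf, yg)
</ns0:p>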
<ns0:p>Reiteration Similarity We propose to measure the tendency of an explainability method to produce the same explanation for a single instance x across multiple reiterations of the explanation process. Given two explanations g, g ′ , let J(g, g ′ ) be the Jaccard similarity between the sets Φ(g) and Φ(g ′ ) of non-zero weight indices of g and g ′ .</ns0:p><ns0:p>Given a set of R explanations of x, namely G x = {g i | i = 1 . . . R}, we define the reiteration similarity metric as E[J(g, g ′ )], ∀ g, g ′ ∈ G x . Note that this definition only considers the selected features, not their weights in the LLE model. A fine-tuned definition that also includes the feature weights and/or the feature ranking <ns0:ref type='bibr' target='#b65'>(Webber et al., 2010)</ns0:ref> could be designed, but it is not considered in this paper.</ns0:p><ns0:p>Prescriptivity Most literature in the field deals with the subject of building explanations, without considering how to use such explanation models to take decisions. One intuitive piece of information that an</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure 2. Finding the prescriptive point x ′ on the LLE boundary D g (1/2).</ns0:p><ns0:p>explanation should provide is: 'since x is classified as class A, what are the minimum changes on x according to the explanation g that lead to a new instance x ′ that belongs to class B?' While there are several examples of such usage of explanations <ns0:ref type='bibr' target='#b25'>(Grath et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Joshi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b33'>Karimi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b64'>Ustun et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Fernandez et al., 2020)</ns0:ref>, explanators like LIME and SHAP are not designed to explicitly support this kind of use for the explanations they produce. Therefore it is of interest to understand if local linear explanations are applicable in a proactive scenario. We introduce the novel metric of prescriptivity to measure how effective an LLE is when taken as a recipe to change the class of x. Without loss of generality, we will consider the target classification boundary y ′ to be 1/2, but y ′ could assume any value.</ns0:p><ns0:p>Let D g (y ′ ) = {x ∈ R F | g(x) = y ′ } be the set of points of the domain of g that g maps to the constant value y ′ . The boundary of g is then the set D g (1/2). Given x and the LLE model g, let x ′ be the projection of x on the boundary of g, such that d(x, x ′ ) is minimal. The intuition is that x ′ is the closest point to x that switches its classification, according to the explanation model g. Let h = x ′ − x be the difference vector. Since g(x) = w 0 + ∑ F i=1 w i • x i , the point x ′ is the solution to the equation</ns0:p><ns0:formula xml:id='formula_8'>w 0 + ∑ F i=1 (x i + h i ) • w i = 1/2 ⇒ h i = (1/2 − g(x)) • w −1 i (5)</ns0:formula><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows how x ′ is found on the LLE model g that gives the local explanation of x, and it is loosely based on a similar picture in <ns0:ref type='bibr' target='#b51'>Ribeiro et al. (2016c)</ns0:ref>. The projection x ′ of x on the boundary D g (1/2) is the closest point to x where g(x ′ ) = 1/2. Of course, the value of f (x ′ ) could be different from 1/2, depending on how good g is in approximating the classification boundary.</ns0:p><ns0:p>We define the prescriptivity metric as</ns0:p><ns0:formula xml:id='formula_10'>ℓ( (1/C) • | f (x ′ ) − g(x ′ )| ), where ℓ(•) is the hinge loss function, and C = max(y ′ , 1 − y ′ )</ns0:formula><ns0:p>is a normalisation factor, so that 1 means that x ′ lies on the boundary, and 0 means x ′ is on the extreme side of the classification. Observe that by taking the absolute value, we measure both over-shoots and under-shoots of the boundary as a loss of prescriptivity.</ns0:p></ns0:div>
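<ns0:div><ns0:p>A minimal sketch of this prescriptive use, assuming an LLE with intercept w 0 and weights w and a black-box f that accepts a single sample, projects x onto the boundary D g (1/2) described above and then applies the normalised hinge score; the helper names are ours, and features with zero weight in the explanation are left untouched by construction.</ns0:p><ns0:p>
import numpy as np

def prescriptive_point(w0, w, x, target=0.5):
    # Project x onto the boundary of the LLE g(x) = w0 + w.x: the closest
    # point x' (in Euclidean distance) with g(x') = target.
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    gx = w0 + float(np.dot(w, x))
    h = (target - gx) * w / float(np.dot(w, w))    # minimum-norm shift reaching the boundary
    return x + h

def prescriptivity(f, w0, w, x, target=0.5):
    # By construction g(x') = target, so |f(x') - g(x')| = |f(x') - target|.
    # 1 means f(x') lands exactly on the boundary predicted by g; 0 means the
    # recipe over- or under-shoots to the extreme side of the classification.
    x_new = prescriptive_point(w0, w, x, target)
    C = max(target, 1.0 - target)                  # normalisation factor
    return max(0.0, 1.0 - abs(f(x_new) - target) / C)
</ns0:p></ns0:div>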
<ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>Many reasons might contribute to the popularity of LIME and SHAP: their ability to explain a wide range of black-boxes and different kinds of data (such as images and text), a user-friendly Python implementation that includes helpful visualisations, and their adoption in several commercial XAI solutions. As a matter of fact, they currently represent the standard approach to tackle explainability in machine learning pipelines. In both methods, however, complex data is always mapped into an interpretable representation in tabular form for the explanation. For instance, images could be either partitioned into a bit-vector of superpixel/pixel patches, or the pixels could be directly mapped into tabular columns; texts could be converted into a bit-vector of word occurrences. Explanations are then produced in terms of the features (i.e. the columns) of these tabular representations. Therefore, without loss of generality, our experiments focus on the explanation of tabular data, which is easier to analyse and understand, without any intermediate conversion. Moreover, the experiments cover binary classification tasks only, as it is common practice among LLE methods (including LIME and SHAP) to map n-ary classification using n one-vs-all binary classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:p>We start by considering the problem P1 defined in Section 3 and address it with the LEAF key metrics.</ns0:p><ns0:p>Given an instance x to be explained, for each explainability method considered, LEAF generates R explanations of x. It then shows a summary of the explanation metrics along with an explanation, to help the user understand if the local model g can be trusted.</ns0:p><ns0:p>As an example, we consider the drug consumption dataset <ns0:ref type='bibr' target='#b18'>(Fehrman et al., 2015)</ns0:ref>, where the classifier tries to predict whether a patient is inclined to be a frequent consumer of a specific drug (caffeine in this case) based on personality traits (F = 10 features). Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows how LEAF works on a data point x, using R = 50 explanations. The explained black-box classifier f is a random forest with 50 estimators and a maximum depth of 5. Both the LIME and SHAP methods are evaluated, using 5000 neighbourhood samples for LIME (the default), and 5000 samples to compute the Shapley values for SHAP. While the number of points used by the two tools refers to rather different internal details, we have chosen to take the same number as a basic fairness principle. Each box in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows the explainer outputs (top), followed by the summary of the explanation metrics. The summary shows the boxplot of the metric values, as well as their averages. Only the first explanation is shown, even if LIME/SHAP compute 50 explanations each to get the metrics. Asking for the K = 4 most relevant features, LIME and SHAP do not agree on which features are the most important, which is not unexpected given their different definition of feature importance (see</ns0:p></ns0:div>
<ns0:div><ns0:head>8/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55625:1:1:NEW 9 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Section 2.4). These two explanations have different local fidelity and prescriptivity scores. In addition, LIME shows some instability in terms of reiteration similarity, i.e. multiple invocations of the explainer will not result in the same set of relevant features. Moreover, LIME explanations shows an average local concordance of 0.78, which means that g(x) is not very close to the value of f (x). SHAP on the same instance is more consistent, reporting the same explanation over and over. As previously stated, SHAP guarantees a perfect local concordance only when considering all the F features in the LLE, not just the K < F most relevant. In this case the local concordance is 0.949 which is still high. However, the local fidelity of the LLE models of SHAP in the neighbourhood of x is much smaller than the one obtained by LIME explanations. SHAP also shows to have a better prescriptivity than LIME, i.e. manipulating the top four features identified by SHAP as explanations produces a new point x ′ with a higher chance of changing the classifier outcome. Even such a trivial example allows us to gain several insights about the trustability of explanations.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>First, LIME and SHAP focus on different sets of features. Second, LIME is not very stable, so the trustability of a single explanation is limited. Third, SHAP is perfectly stable and overall achieves better scores than LIME, even if the low local fidelity is something to be considered.</ns0:p><ns0:p>It is worth noting that the explanation local fidelity for this sample (in particular for the explanation provided by SHAP) is not very good (0.762 and 0.375). However, when we actually follow the indication of the explanation to reach the new boundary f (x ′ ), we see that it is quite close. This could happen because the local fidelity evaluates two different aspects at once: 1) it could be low because the white box model g is not a good model for the neighborhood N(x); 2) the neighborhood N(x) lies close to a non-linear classification boundary, which is not fully captured by g. This example shows that the widely-used local fidelity score does not capture the prescriptive use of an explanation, and it is limited in the local evaluation of the white-box model.</ns0:p><ns0:p>This example shows how the metrics provided by LEAF allow a domain expert to take decisions about the trustworthiness of an explainer for the problem under scrutiny.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Evaluate the Reiteration Similarity of XAI Methods</ns0:head><ns0:p>To be trusted, an explanation needs to be stable, i.e. the explainability method should not provide entirely different sets of relevant features Φ(g) if called multiple times to explain the same instance x. Reiteration similarity is therefore a precondition that needs to be verified. This is even more evident when considering the GDPR remark about the right for individuals to obtain information about the decision of any automated system: clearly, if an algorithmic decision-support system provides inconsistent explanations for the same data point, the single explanation provided to the user cannot be trusted.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> shows the reiteration similarity metric distribution measured on 100 instances, computed for both LIME and SHAP, for four datasets and six classifier categories <ns0:ref type='bibr'>(lin, log, rf, kn, mlp, svc)</ns0:ref>, also reported in the legend. Linear classifiers are transparent by design and do not need additional explanations -we have included one as a baseline. Each boxplot shows the reiteration similarity distribution of the explainer on a classifier f on a dataset, for a fixed value of K (4 and 8 on the left and right, respectively). Each of the 100 values is also shown as a small dot, and is again the result of R = 50 explanations for each instance, thus resulting in 100×50 explanations per boxplot. The datasets are: drug consumption, arrhythmia <ns0:ref type='bibr' target='#b15'>(Dua and Graff, 2017)</ns0:ref>, heartrisk <ns0:ref type='bibr' target='#b12'>(Doddamani, 2018)</ns0:ref>, and the breast cancer Wisconsin dataset <ns0:ref type='bibr' target='#b15'>(Dua and Graff, 2017)</ns0:ref>, with F = 10, 279, 15 and 30 features, respectively.</ns0:p><ns0:p>We used scikit-learn 1 classifiers to test the LEAF framework. The classifiers used in the tests are:</ns0:p><ns0:p>lin (simple linear classifier); log (logistic classifier with liblinear solver); rf (random forest classifier with 50 estimators, 5 levels max depth); kn (k-neighbors with 3 clusters); mlp (simple neural network with 1 hidden layer and 100 neurons); svc (support vector classifier with rbf kernel and γ = 2). Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> summarises the used datasets and the out-of-samples accuracy reached by the tested classifiers. The train/test ratio used to compute the accuracies is 80/20.</ns0:p><ns0:p>The data shows that the considered methods may produce unstable explanations on some pathological instances or for some classifier categories. The classifiers mlp and svc appear to be the hardest ones to explain. The difficulty of explaining ANN models was also noted in <ns0:ref type='bibr' target='#b59'>Štrumbelj and Kononenko (2010)</ns0:ref>, as well as for svc in <ns0:ref type='bibr' target='#b9'>Caragea et al. (2001)</ns0:ref>. Nevertheless this remains a bit surprising, since explainability methods have been promoted to explain ANNs and to be model-agnostic in general. SHAP appears to have slightly higher reiteration similarity values on average than LIME, but pathological cases are observed with both approaches.</ns0:p><ns0:p>To further evaluate the impact of non-linearity to the Reiteration Similarity, we consider in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref> the same metric on the heartrisk dataset for multiple neural network architectures. 
The plot shows the distribution of the Reiteration Similarity metric on 500 data points. Data points are split into correctly and incorrectly classified (i.e. f (x) = y or f (x) ≠ y), to further investigate whether the original data point classification is relevant for the observed instability. Reiteration Similarity seems to decline with the increasing non-linearity of the classifier. However, the relation between the non-linearity (both in terms of neurons and hidden layers) and the Reiteration Similarity score follows a non-trivial pattern.</ns0:p><ns0:p>In fact, even simpler models may experience instabilities in the explanations over multiple reiterations.</ns0:p><ns0:p>We have decided to separately investigate correctly and incorrectly classified data points with respect to their Reiteration Similarity; empirical observations show that in this case the correctness of the sample classification does not seem to be a major factor for the Reiteration Similarity scores.</ns0:p></ns0:div>
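As a concrete illustration of how the reiteration similarity experiments of this section can be set up, the sketch below repeatedly explains the same instance with LIME (through the standard lime.lime_tabular API) and scores the agreement of the top-K feature sets with the mean pairwise Jaccard similarity. This is one plausible reading of the metric, not the exact LEAF code; in particular, the aggregation over repetitions may differ.

```python
# Hedged sketch: reiteration similarity as the mean pairwise Jaccard similarity of
# the top-K feature sets returned by R repeated explanations of the same instance x.
from itertools import combinations

import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def top_k_features(explanation, label, k):
    """Set of the K most relevant feature ids of a LIME explanation for `label`."""
    weights = explanation.as_map()[label]              # list of (feature_id, weight)
    ranked = sorted(weights, key=lambda fw: abs(fw[1]), reverse=True)
    return {fid for fid, _ in ranked[:k]}

def reiteration_similarity(explainer, x, predict_proba, label=1, k=4, repeats=50):
    """Mean pairwise Jaccard similarity over `repeats` explanations of x."""
    feature_sets = []
    for _ in range(repeats):
        exp = explainer.explain_instance(x, predict_proba,
                                         labels=(label,), num_features=k)
        feature_sets.append(top_k_features(exp, label, k))
    jaccards = [len(a & b) / len(a | b) for a, b in combinations(feature_sets, 2)]
    return float(np.mean(jaccards))

# Typical usage (X_train, X_test, feature_names and a fitted classifier clf are assumed):
# explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
#                                  discretize_continuous=True)
# score = reiteration_similarity(explainer, X_test[0], clf.predict_proba)
```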
<ns0:div><ns0:head n='4.2'>Evaluate the Prescriptivity of Explanations</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref> shows an example of using an explanation (generated by SHAP) of an instance x to identify a new synthetic instance x ′ which is closer to the classification boundary than x. This example uses the heartrisk dataset <ns0:ref type='bibr' target='#b12'>(Doddamani, 2018)</ns0:ref>, which associates heart risks with personal health traits, and we use an mlp regressor for f with 100 neurons in the hidden layer and ReLU activation. The example focuses on a fictional patient: female, aged 49, smoker, and with the other health attributes reported in the feature table in Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>(B), column x. The SHAP explanation in Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>(A) shows the heart risk of x. LEAF uses the selected features of SHAP to generate x ′ using Equation (5), which is then explained again by SHAP (bottom of (B)). The initial at-risk classification ( f (x) = 0.81) is driven below the risk boundary to f (x ′ ) ≈ 0.49 by changing the K = 3 features (systolic blood pressure from 235 to 125.5, etc.). A prescriptivity close to 1 indicates that the boundary D g (1/2) identified by the LLE model g is a reliable indicator of the boundary position.</ns0:p><ns0:p>For this example, we have selected an instance with high prescriptivity, showing how the explanation can be trusted proactively. Since this is not always the case, it is important to quantify whether the explanation can be used in a prescriptive way.</ns0:p></ns0:div>
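The prescriptive step of this example can be sketched as follows. Given the local linear model g(z) = b + w·z and the K features selected by the explanation, the point x is shifted, only along those features, just enough to bring g to the decision threshold (1/2 here, but any threshold works). This closed-form update is a hedged reconstruction of the idea behind Equation (5); the exact rule implemented in LEAF may differ.

```python
# Hedged sketch of the prescriptive use of an LLE: move x along the K selected
# features so that the white-box model g reaches the decision threshold.
import numpy as np

def prescriptive_point(x, w, b, top_k_idx, threshold=0.5):
    """Minimal-norm change of x, restricted to top_k_idx, such that g(x') = threshold."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    w_k = np.zeros_like(w)
    w_k[top_k_idx] = w[top_k_idx]          # only the explained features may change
    denom = w_k @ w_k
    if denom == 0.0:                       # degenerate explanation: nothing to move
        return x.copy()
    gap = threshold - (b + w @ x)          # distance of g(x) from the boundary
    return x + gap * w_k / denom           # closed-form minimal L2 correction

# The black-box can then be queried on the new point to check how close it really is
# to the boundary, e.g. abs(clf.predict_proba([x_prime])[0, 1] - 0.5); a small gap
# suggests the explanation is a trustworthy prescriptive guide, as in Figure 6.
```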
<ns0:div><ns0:head n='4.3'>Select an Explainable Classifier</ns0:head><ns0:p>We now consider the problem P2 defined in Section 3. If a classifier may be chosen among different options, LEAF can be used to identify the one that will provide the best results with an XAI method, and the lowest complexity K.</ns0:p><ns0:p>As an example, we consider again the drug consumption dataset. Instead of evaluating a single instance for a fixed classifier/conciseness value K, in the P2 scenario we consider six classifiers and three values for K. The resulting plot (Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>) can be used to support several decision aspects of P2:</ns0:p><ns0:p>• What explainability method should be selected in order to have stable explanations? LIME is highly stable only for linear/logistic classifiers, and for lower conciseness levels (higher K values). SHAP instead shows excellent reiteration similarity even at low conciseness.</ns0:p><ns0:p>• What classifier should be trained to have high accuracy in the LLE models? LIME has low local concordance for some classifiers (kn, mlp, svc), even for high values of K. SHAP instead shows increasing concordance levels as K increases, for all classifiers. Therefore SHAP is a better choice for local concordance, unless the black-box classifier is a linear or a logistic one.</ns0:p><ns0:p>• What choices should be made to have explanations with high local fidelity? Surprisingly, high local fidelity explanations can only be achieved by using some classifier categories (linear, logistic, and random forest for LIME). Increasing the conciseness does not appear to increase the local fidelity significantly, at least in the tested range of K. Some classifier categories (mlp, svc) show very poor explanation fidelities, regardless of the explainability method used.</ns0:p><ns0:p>• What choices should be made to have prescriptive explanations? Again, explanations can be used in a prescriptive way only for some classifier categories (lin, log). SHAP appears to have high prescriptive power even for low values of K, but only for a few classifier categories (lin, log and moderately for rf). LIME requires higher values of K than SHAP to generate prescriptive explanations consistently. Other classifier categories (kn, mlp, svc) have poorly prescriptive LLE models, independently of the value of K and the method used.</ns0:p><ns0:p>To summarise the example in Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>, SHAP should be preferred because it shows a higher reiteration similarity and local concordance. It also shows good prescriptive power, at least for some classifiers. LIME should be preferred only if the user wants a higher local fidelity with low values of K.</ns0:p><ns0:p>Since the drug dataset used in Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref> has only F = 10 features, SHAP exhibits a deterministic behaviour and perfect reiteration similarity; local concordance and prescriptivity are also more favorable for SHAP. Figure <ns0:ref type='figure' target='#fig_7'>8</ns0:ref> shows the results for the same metrics on the other three considered datasets (with more features).</ns0:p></ns0:div>
<ns0:div><ns0:p>These observations lead us to conclude that SHAP is not more stable than LIME in the general case, and the reported advantage of Shapley values can only be exploited in practice for datasets with few features.</ns0:p></ns0:div>
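The P2 workflow of this section can be summarised in a short loop: train each candidate classifier, then score the quality of its explanations. The sketch below mirrors the scikit-learn settings listed in Section 4.1 and, for brevity, only tracks the stability of LIME explanations via the reiteration_similarity helper sketched at the end of Section 4.1 (the transparent linear baseline is omitted since it needs no explanation). The actual LEAF benchmark also records local concordance, local fidelity and prescriptivity, and repeats the procedure for SHAP and for several values of K.

```python
# Hedged sketch of the P2 (model development) loop: which classifier yields the
# most stable LIME explanations on a held-out set of instances?
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

CLASSIFIERS = {
    "log": LogisticRegression(solver="liblinear"),
    "rf":  RandomForestClassifier(n_estimators=50, max_depth=5),
    "kn":  KNeighborsClassifier(n_neighbors=3),
    "mlp": MLPClassifier(hidden_layer_sizes=(100,)),
    "svc": SVC(kernel="rbf", gamma=2, probability=True),  # probability=True enables predict_proba
}

def rank_classifiers(X_train, y_train, X_eval, explainer, k=4, repeats=50):
    """Mean reiteration similarity of LIME explanations for each candidate classifier."""
    scores = {}
    for name, clf in CLASSIFIERS.items():
        clf.fit(X_train, y_train)
        sims = [reiteration_similarity(explainer, x, clf.predict_proba,
                                       label=1, k=k, repeats=repeats)
                for x in X_eval]
        scores[name] = sum(sims) / len(sims)
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```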
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>Modern ML techniques are currently applied to an ever-growing number of domains, but often these ML models are intrinsically unable to provide explanations of their inner decision processes. XAI algorithms tackle this problem by enriching black-box labels with human-understandable explanations in the form of decision trees, rules or linear approximations. Many explainability methods have been proposed over the past years, and the necessity of comparing and evaluating the different explanations they produce has emerged. Several metrics have been proposed in the literature for this task; however, when focusing on local linear explanations, there are no quantitative and systematic ways to assess and compare LLEs. We argue that a toolkit for explanation assessment is fundamental for the adoption of XAI algorithms and, as a consequence, of black-box ML models.</ns0:p><ns0:p>In this paper we propose a general framework that we believe can be of great interest to most users, practitioners or researchers struggling with interpreting machine learning models via post-hoc explanations.</ns0:p><ns0:p>It is worth stressing that the results of our work are not intended to identify the best explainable approach that fits every scenario, but rather to provide a well-grounded framework for making such an assessment case by case. In this direction, we decided not to be constrained to a specific prediction task, but rather to test LEAF on different datasets and models, to fully describe the potential of a systematic approach to evaluate explanations. Our detailed experiments show that even widely adopted methods such as LIME and SHAP are prone to low reiteration similarity and low conciseness, and may even provide insufficient explanations for the correct label (i.e. the K selected features are not enough to categorise the datapoint). This supports our claim that explanations should always be paired with quality metrics to ensure they meet the requirements set by the user.</ns0:p><ns0:p>One of the major limitations of the present study is that we mainly focused on LIME and SHAP; however, we argue these are the two state-of-the-art model-agnostic XAI techniques for Local Linear Explanations.</ns0:p></ns0:div>
<ns0:div><ns0:p>However, the described approach can be extended to analyse other explainability methods as well, both model-aware and model-agnostic, as long as the provided explanations are LLEs. Moreover, we restricted our attention to tabular data only, since less structured data are typically converted into tabular form to build interpretable models.</ns0:p><ns0:p>We argue that further analyses of the introduced metrics would be an interesting direction for future work. For instance, the idea of reiteration similarity has been developed as a comparison between sets of features, without taking into account their relative importance; but if the rank of the explanatory features is considered particularly relevant for a specific domain, a weighted reiteration similarity can be defined.</ns0:p><ns0:p>Moreover, low reiteration similarity emerges as one of the major weaknesses of the analysed XAI methods. In particular, explanations of some classifier categories, like neural networks, appear to be particularly unstable. In our experiments, we observed that the LIME method is highly affected by unstable explanations and low local concordance, and SHAP's deterministic behaviour is in practice limited to simple datasets only. We believe that the lack of a reiteration similarity control in XAI tools is a major obstacle to their adoption. Future work should target the reiteration similarity metric to build novel ensemble approaches, e.g. providing bagged/boosted explanations, and/or to design explanation methods that provide some form of statistical accuracy/precision. Moreover, more complex refinements of this metric could be defined, as already mentioned in Section 3, to target additional aspects like feature rankings <ns0:ref type='bibr' target='#b65'>Webber et al. (2010)</ns0:ref> or feature weights.</ns0:p><ns0:p>Local Fidelity as defined in <ns0:ref type='bibr' target='#b51'>(Ribeiro et al., 2016c</ns0:ref>) also shows some limitations: the synthetic neighbourhood generates potentially out-of-sample data points which could lead to unreliable black-box model behaviours, influencing the overall explanation quality. The impact of a Local Fidelity score with a different sampling distribution is an important research direction that has not been fully addressed in this work.</ns0:p><ns0:p>We also believe that explanations interpreted as prescriptive control strategies could become an important source of information for practical action planning, as illustrated by the heart risk example. To the best of our knowledge, such usage of LLE models has not been fully exploited so far. Measuring the effectiveness of an LLE in reaching the decision boundary is a crucial part of any prescriptive strategy, and this is captured by the prescriptivity metric. Future work in this direction should involve the concept of actionable features, i.e. only consider changes to controllable features within a constrained range, as well as measuring some form of reiteration similarity on the boundary or considering a trajectory instead of a single boundary point, similarly to individual recourses <ns0:ref type='bibr' target='#b32'>(Joshi et al., 2019)</ns0:ref>.</ns0:p><ns0:p>It is worth noting that the actual prescriptivity definition could indicate an out-of-distribution x ′ , which would have limited to no practical value. Reducing the feature space to an actionable space could therefore constrain the problem to avoid generating inconsistent x ′ .
Moreover, we acknowledge that the blind interpretation of the prescribed sample x ′ should not be considered in a causal way, since the target only flips the black-box prediction. For instance, in Figure <ns0:ref type='figure'>2</ns0:ref> the prescribed change in the highlighted features does not automatically imply that the patient will heal from her underlying health conditions, but just that the modified behaviours and features flip the classification in the decision support system.</ns0:p><ns0:p>Another interesting direction that could potentially be explored by a supportive framework such as LEAF is a more 'interactive' feedback loop in the model development (P2) scenario. For instance, self-explaining models that also consider the explanation quality as a target in the training process are of clear interest.</ns0:p><ns0:p>Metrics like faithfulness (Alvarez-Melis and Jaakkola, 2018), which measure feature importance in this self-explaining loop, could be included. Moreover, the extension could also consider iterative (but computationally expensive) processes like RemOve And Retrain <ns0:ref type='bibr' target='#b29'>(Hooker et al., 2019)</ns0:ref>.</ns0:p><ns0:p>We have focused our attention on model-agnostic and data-agnostic metrics. If the problem setting instead targets a specific complex data type (like images, text, speech, etc.), other model-specific and data-specific LLE metrics could be considered. A non-exhaustive list includes the Area Over the Perturbation Curve <ns0:ref type='bibr' target='#b54'>(Samek et al., 2017)</ns0:ref> and saliency metrics <ns0:ref type='bibr' target='#b63'>(Tomsett et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>In this paper we focused on the metrics that can be applied to model-agnostic explainability methods generating Local Linear Explanations, reviewing existing metrics (interpretability, local fidelity), clearing ambiguities in the existing definitions (local concordance) and defining new ones that look at critical key points of the explanation process (reiteration similarity). Given the growing interest in using explanations to support decision systems, we also defined a general metric (prescriptivity) that evaluates how much a Local Linear Explanation can be used to revert the outcome of a black-box classifier.</ns0:p><ns0:p>Our Python library, named LEAF, implements the aforementioned evaluation metrics for LLEs. We described how LEAF can be helpful in tackling two common scenarios, namely the explanation of a single decision (P1) and the model development (P2) use cases. An extensive set of experiments, using LIME and SHAP on four datasets, shows how the proposed metrics are fundamental to assess the trustability of LLE explanations.</ns0:p><ns0:p>We propose LEAF as an open framework, where researchers are encouraged to include additional metrics and benchmarks to foster the rigorous inspection and assessment of local linear explanation methods for black-box ML models.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>P2: Model development. The black-box model f can be chosen to maximise both classification performances and model interpretability. In this case, it might be important to know which classifier should be used to provide, on average, the best explanations. LEAF supports this use case with a feedback loop allowing the comparison of different black-box models in terms of quality of the explanations that can be extracted.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. How LEAF can support the development of a supervised explainable machine learning pipeline. The standard steps for a supervised machine learning pipeline are: given a labeled data set, features are engineered and models are tested to provide the best prediction accuracy. Once the best model (that might also be an intrinsically black-box) is identified, it can be used to take decisions and classify previously unseen data points (black arrows). Here, post-hoc explanation techniques can be used to understand the black-box decisions by means of white-box explanations. The LEAF framework can induce two feedback loops in the whole pipeline, both to quantify the goodness and choose the more suitable explanation approach (P1, blue arrows), or to sort out which model is providing the most accurate post-hoc explanations, according to the explanation metrics (P2, red arrows).</ns0:figDesc><ns0:graphic coords='6,152.07,303.07,392.88,134.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. LEAF applied to evaluate the quality of the explanations provided by LIME and SHAP on the same data point x, using the same ML model f . The black-box prediction f (x) is 0.72 for the frequent class. (A) shows the output for LIME with K = 4, followed by the four LEAF metrics. (B) shows the SHAP output restricted to the K = 4 most relevant features, whose output values sums up to 0.67 instead of 0.72 due to truncation.</ns0:figDesc><ns0:graphic coords='9,203.77,63.78,289.52,381.58' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Reiteration similarity metric on four datasets and six classifier categories, for two conciseness levels K = 4 and 8.</ns0:figDesc><ns0:graphic coords='10,152.07,63.78,392.91,348.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Reiteration similarity for LIME (left) and SHAP (right) for multiple Neural Network architectures on the Heartrisk dataset, with K = 4. (A)-(B): single layer architecture, the number of neurons in the single hidden layer changes. Out-of-sample accuracies: 0.677, 0.687, 0.685, 0.745, 0.804 and 0.865. (C)-(D): multiple layers with 5 neurons each. Out-of-sample accuracies: 0.687, 0.695, 0.709, 0.705 and 0.720.</ns0:figDesc><ns0:graphic coords='12,152.07,63.78,392.90,260.53' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Prescriptive use of an explanation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. LEAF used to support the decision of the best explainable classifier (case P2).</ns0:figDesc><ns0:graphic coords='14,183.09,63.78,330.87,468.22' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Observed metrics values across different classification tasks.</ns0:figDesc><ns0:graphic coords='15,141.73,63.79,413.56,258.40' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,203.77,63.78,289.50,401.96' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b51'>Ribeiro et al. (2016c)</ns0:ref>, which is the same as Eq. (2). It is worth noting that this definition of local fidelity, albeit common, favours local importance and could therefore be biased toward LIME-like methods. This happens because LIME fits</ns0:figDesc><ns0:table /><ns0:note>the white-box model g minimizing the classification loss for a neighborhood N(x), thus local fidelity captures the optimization target of LIME.Local Concordance It measures how good g is in mimicking f for the sole instance x under the conciseness constraint. Local concordance is defined as:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Reiteration Similarity Most available explainability methods use some form of randomised/Monte Carlo algorithm to build their explanations. Therefore, if a method is applied several times to explain the same instance x, the resulting explanations may vary. The work in<ns0:ref type='bibr' target='#b35'>Lipton (2016)</ns0:ref> suggests that XAI methods could not converge to a unique solution, while the work in<ns0:ref type='bibr' target='#b66'>Zhang et al. (2019)</ns0:ref> studies the source of uncertainty of LIME explanations, without providing a general way to quantify LLE stability. As already mentioned in Section 2.3, SHAP defines a maximum number of evaluations of f . For small datasets SHAP behaves deterministically; conversely, if 2 F exceeds this threshold value, SHAP relies on Adaptive Sampling and therefore behaves stochastically. In this case, Shapley values are approximated and SHAP explanations are not perfectly stable.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the datasets used in the experimental section.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='1'>https://scikit-learn.org/</ns0:note>
</ns0:body>
" | "Dear Editor,
we would like to thank you and the three reviewers for their detailed and insightful comments and
constructive criticism. Our responses to their comments are presented below. We have revised our
paper and we feel that it is now substantially improved. Please find hereafter a point-by-point
response to reviewers’ comments marked in blue color.
Editor comments (Eibe Frank)
MAJOR REVISIONS
Thank you again for your high-quality submission to PeerJ CS. Although the reviewers'
suggestions, questions, and comments, taken individually, may not require a major change or a
large amount of additional work, their sheer number warrants a 'Major Revision'.
Reviewer 2 has a large number of questions and comments, and excellent suggestions for
improving the paper, particularly regarding the discussion of related work and an additional
experiment with neural networks of increasing size. All of these should be addressed.
Please also consider the other two reviewers' comments, particularly Reviewer 1's comment
regarding the definition of prescriptivity and Reviewer 3's comment regarding deeper networks.
It would also be useful to try to address Reviewer 3's request for more theory, but this is not a
requirement for acceptance from my point of view.
Reviewer 1 (Erico Tjoa)
Basic reporting
Generally, there is no basic issue. English language is appropriately used throughout.
Introduction, context and references are reasonably clear and straightforward. Structure and
formatting seem reasonable. Figures are well displayed and clear-cut enough. At least one of
the dataset is directly provided in the github link; the others are referenced.
Experimental design
The four metrics chosen appear to be well-motivated and within the scope.
Research question is well-defined. The paper simply provides a set of metrics for users to
compare explainability.
The definitions of the metrics are straightforward enough. There is no complex processing,
transformation or manipulation that will become a real impedance towards the rigor of the
investigation.
1. Fidelity is potentially overlapping with local concordance. However, by pointing out the
instability of fidelity due to sampling, the author has been able to demonstrate the difference.
We clarify in the text the difference between Local Concordance and Fidelity, i.e. the former
includes the interpretability constraint. Moreover, we now refer to Fidelity as Local Fidelity, to
avoid confusion with other definitions in the literature.
2. Possibly minor issue: 1/2 is used in the definition of prescriptivity. In many contexts,
thresholding is not strictly 1/2. The prescriptivity is described w.r.t. a model f that outputs a scalar
value rather than vector (hence so does the explanation model). This should not be a problem
since generalization is probably easy to do.
We have added in the paper that we consider ½ only for the presentation, but the formulas are
actually general for any threshold value.
Replication should be possible, given the link to the code.
Validity of the findings
The results appear valid, as in the graphs showing metrics are plotted as the metrics are
defined. No real complex processing to complicate the validity. The main results seem to be
direct comparison and observations of the resulting values of respective metrics.
Possibly important insights are present, especially the fact that different explanation methods
such as SHAP and LIME do not agree with each other.
Underlying data have been used in previous researches, and likely to be reliable.
Conclusion is concise and clear.
Reviewer 2 (Richard Tomsett)
Basic reporting
The paper is well written, easy to follow and understand, and appropriately structured. The
introduction and background sections provide suitable context for the work, and are generally
well-referenced, incorporating several review papers in the area in addition to further useful
contextual references and the key papers for the methods used. However, some important prior
work specifically on metrics for explanations has been missed (details below). The figures are
high quality and appropriate for illustrating the ideas and work presented, though some of the
captions could be expanded a little (details below). The paper is self-contained and presents a
suitable selection of results to demonstrate the utility of the explanation metrics.
Specific comments (line numbers refer to lines as labelled in the supplied paper draft):
- Line 30: the word “symbolic” is not necessary here – it could potentially add confusion for the
reader with specific preconceptions about symbolic AI. I would recommend removing it and
leaving “human understandable clarification” (and similarly where this term is used elsewhere).
We agree with the reviewer that this wording can be confusing and thus we removed the
‘symbolic’ term, throughout the paper, and replaced it with ‘human-understandable’.
- Line 76: there is a typo – “where” should be “were”
Thanks for spotting this. The typo was corrected.
- Line 115: N(x) has not been introduced at this point in the paper, so should either be replaced
with text describing it, or the definition of N(x) should be moved into this section
Agreed. We removed the mention of N(x) occurring before its definition.
- Line 120-121: in the equation for LIME’s loss, I think the summation should be over z,z’ in N(x)
(the neighbourhood of x), whereas the notation for the normal distribution has been used by
mistake. Additionally, z’ should be defined explicitly as the “interpretable representation” per the
original LIME paper – it’s a bit unclear what z’ means here and so what the pair z,z’ represents
for the unfamiliar reader
Thanks for noting the typo on N.
We have briefly clarified that LIME is designed to provide explanations in a feature space X’
different from the sample space X, given an appropriate mapping function. We have replaced
the z,z’ pair in the summation with an interpretable mapping function \eta, which is more clear.
- Lines 121-122: it is worth pointing out where this detail comes from as it isn’t, as far as I can
tell, in the main text of the original LIME paper. The specific kernel I believe is the default in the
LIME source code, but this kernel doesn’t have to be used (and indeed the choice of kernel can
significantly alter results).
The distance kernel is found in the LIME source code (lime/lime_tabular.py:245-251), and it is
slightly different from the one in the original paper. We have rewritten the formula to be as close
to the original definition as possible, clarifying the kernel width as it is computed in the
source code.
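For reference, the default kernel as it appears in recent LIME releases looks roughly like the following (paraphrased from lime/lime_tabular.py; exact line numbers and defaults may vary across versions):

```python
# Paraphrase of LIME's default distance kernel for tabular data. Note the square
# root around the exponential and the sqrt(F) * 0.75 default width, which differ
# slightly from the kernel reported in the original LIME paper.
import numpy as np

def default_kernel_width(training_data):
    return np.sqrt(training_data.shape[1]) * 0.75

def default_kernel(d, kernel_width):
    return np.sqrt(np.exp(-(d ** 2) / kernel_width ** 2))
```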
- Line 131: mu should be defined explicitly in the text
\mu is the background value used by SHAP, we have clarified it in the paper.
- Overall sections 2.2 and 2.3 would benefit from some additional verbal description to help the
reader interpret the equations. It is difficult to form an intuitive understanding of the equations’
meanings without some prior knowledge of these kinds of models, and a small amount of
additional text description would help the more general reader. Additionally, some reference to
the variants of the SHAP algorithm and mention of which variant was used in the current study
should be added, with some explanation about the possible implications for the results.
We have clarified that we are using the model-agnostic KernelExplainer of SHAP. Other
model-aware versions are now cited, but are not used since we wanted to make the comparison
on model-agnostic explainers. We have added a clarification for the interpretation of LIME, and
why the generated model is a local linear explanation for a sample x. The SHAP section is
entirely rephrased around the concept of coalition game, which we think would be more intuitive
to grasp for the reader.
- Line 149-151: I think this needs a bit of clarification – presumably it is the function f(x) that
does not vary much in the region around x (for a feature assigned low importance)? At the
moment it reads like x itself does not vary much in the neighbourhood of x, which doesn’t make
sense to me… This section is good, though, and makes an important point about the
interpretation of these different methods, despite them both being in the same family of
techniques!
We have clarified the sentence - it is the impact of the perturbation of a feature for the
classification f(z) in the close neighborhood of x. We have also clarified the comparison against
the background values for SHAP, which was not clear in the submitted version.
- Secton 3, starting line 177: I have some concerns about the terminology used to name the
metrics. The definition of interpretability as pertaining to human understanding (per Guidotti et
al. 2018) is fine, but I don’t think it’s appropriate to use “interpretability” as a name for a metric
that only measures one small element of this (the explanation length), especially when the
broader field of “interpretable machine learning” studies a whole range of different facets under
the term “interpretability.” I strongly suggest renaming this metric to something less broad and
more precise, and replacing the use of “interpretability” with a more suitable name throughout
the paper.
We have clarified that Interpretability is the general concept of human-understanding of an
explanation, while we are referring to a narrower definition based on feature extraction. Therefore
we have changed the name of our metric to conciseness, to capture the concept of
Interpretability as a per-sample feature selection.
Line 188: similarly, “fidelity” has been used to describe the general property of the correctness
of an explanation in other interpretable machine learning literature, rather than as the name of a
specific metric assessing one aspect of this (e.g. the “faithfulness” metric in
https://arxiv.org/abs/1806.07538 or Area Over Perturbation Curve, AOPC, in
https://ieeexplore.ieee.org/abstract/document/7552539 ). Perhaps a more suitable name might
be “local fidelity”?
We have changed Fidelity to Local Fidelity, following the convention used by Ribeiro & al (“Why
Should I Trust You?” Explaining the Predictions of Any Classifier).
Finally, “stability” has been used to refer to explanations that change very little in response to
small perturbations to the input – again in, for example, https://arxiv.org/abs/1806.07538 . This
is a bit different from the author’s proposed metric, so perhaps a different name is warranted?
Apologies if these requests seem a little pedantic – but I think it is important to try to maintain
some consistency in vocabulary within the field.
We understand that sometimes generic words like stability would lead to confusion, and
therefore we have to acknowledge that prior literature takes precedence. Stability has
therefore been renamed to Reiteration Similarity, to avoid the highlighted name clash and better
suggest that it is defined over multiple runs of the explanation process.
- Line 192: were these instances from the training set or test set? Do the metrics report different
distributions of values between training and test set instances?
Instances are from the test set. However, using the [Ribeiro et al. (2016c)] method for
generating the synthetic neighborhood, the variance of each feature is derived initially from the
training set, and is not changed afterward. We have rephrased that part to clarify how the
neighborhood is specified.
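As an illustration of this point, a synthetic neighbourhood of this kind can be generated with Gaussian perturbations scaled by the per-feature standard deviation of the training set, and a simple (unweighted) agreement score between f and g can then be computed over it. This is only a sketch of the idea; the paper follows the Ribeiro et al. (2016c) definition, whose sampling and weighting details may differ.

```python
# Hedged sketch: neighbourhood N(x) from training-set-scaled Gaussian noise, plus a
# naive local fidelity score as 1 - mean |f(z) - g(z)| over N(x), clipped to [0, 1].
# f and g are vectorised callables returning the probability of the explained class.
import numpy as np

def neighbourhood(x, X_train, n_samples=5000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = X_train.std(axis=0)
    noise = rng.normal(0.0, 1.0, size=(n_samples, x.shape[0]))
    return x + noise * sigma

def naive_local_fidelity(f, g, N_x):
    gap = np.abs(f(N_x) - g(N_x)).mean()
    return max(0.0, 1.0 - float(gap))
```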
- Paragraph starting after line 232, before equation 4: there is a superfluous “element of” symbol
Done
- Figure 1: the figure could use a little more explanation in the caption, especially regarding the
meaning of the arrows
As Figure 1 represents the motivation behind LEAF, we extended the caption and provided a
self-contained explanation to allow the reader to better understand the pipeline. The two LEAF
scenarios are separated in the picture (using different arrow styles) and briefly described in the
caption.
- Figure 3: make explicit in figure legend that this is for the *same* input data point and same
ML model – and add some explanation of why the output value is different (0.72 vs 0.68) for the
two explanations (presumably due to using K=4 rather than the full feature set?) This figure also
needs a little more description of the numbers in the visuals as, these are the default outputs of
the methods and differ a bit in their presentation. This could be added in the main text or in the
caption.
We have added a clearer explanation to Figure 3. In the explanation we highlight that the
instance is the same for both methods, as well as the model. As you point out, the
difference in SHAP from 0.72 to the output value of 0.67 is due to feature truncation, since only
the K=4 most relevant features are presented to the user. This is clarified in the caption. We
have added in the image a small box that represents the missing value (from 0.67 to 0.72) due
to feature truncation. Note that LIME is not affected by feature truncation, only SHAP.
- Line 242: the comment on the reason behind LIME and SHAP’s popularity seems like
speculation, so should be noted as such (other possibilities include that they both provided
user-friendly Python implementations, provide visualisations, and have been incorporated into
some companies’ XAI solutions).
We clarified that it is a speculation and we further added reviewer’s suggestion on why they are
popular.
- Line 245: Point of clarification - SHAP can be used on image classifiers without using
superpixels as the interpretable representation. The mapping to an “interpretable
representation” in this case is just the presence/absence of the original pixels, rather than of
superpixels.
We wanted to clarify that both LIME and SHAP will ultimately map any dataset into tabular form,
and therefore our experimental focus on tabular data is without loss of generality w.r.t. XAI
techniques. We have clarified that the mapping to super-pixel is just one of the many options in
the passage from complex data to the tabular representation.
- Line 315: according to figure 5, the “not at risk” boundary should be 0.49, not 0.51
We fixed the mistake.
- Line 381: (also in the abstract) “even explain the wrong label” – what does this mean? I don’t
think this is properly explained in the text (apologies if I’ve missed an explanation somewhere).
We have added an explanation of the insufficient/wrong explanation when only a limited number
of features is considered, i.e. when the user is looking for concise explanations instead of
having all feature importances at once.
Experimental design
The experiments chosen to illustrate the utility of the proposed metrics and toolbox are
appropriate and quite thorough. There’s always more that could be explored but I think the
authors have chosen a useful set of experiments to illustrate the key concepts. They have used
freely available datasets and tools so that their experiments can be reproduced, and no private
data has been used inappropriately. As far as I am aware, these datasets have been cleared for
open use and no ethical issues have been raised with any of them previously. I have a couple of
comments and requests for some clarifications:
- Line 241, Section 4: we need more details about the ML models: what was the train/test split?
What was the test performance of each model? How likely was each model to over-fit? Were the
100 instances presented in the figures all classified correctly? If not, was there a correlation
between incorrect classification and stability/any of the other metrics? Would also be good to
briefly mention the tooling used (scikit-learn?), with reference to the github repository)
Not all considered datapoints are classified correctly w.r.t. the original dataset. This does not have a
strong impact on our framework, since the explanation is built on top of the black-box model,
regardless of its accuracy. However, you are right that this could potentially be an indicator of
the bad behaviour of the explanation stability (now reiteration similarity). Therefore we have
considered the problem specifically in the new section of Figure 5. We have clarified that
scikit-learn is used, along with all hyper-parameters for each classifier, and the train/test ratio of
80/20 in a dedicated paragraph. We have added Table 1 to summarize the datasets and the
out-of-sample accuracies reached by the scikit-learn classifiers.
- Line 260: the same number of samples were used in SHAP and LIME – can authors elaborate
on the equivalence here? Given the methods work differently, does this make sense? It’s OK if
this is an arbitrary decision, but should be noted as such if the equivalence is not really
appropriate.
We understand that LIME and SHAP use different methods, and the number of samples is
therefore incomparable. However we used the same number as a basic fairness principle. We
have clarified the motivation behind the choice of the same number of samples for both
methods..
- Results showing very unstable explanations for NNs/SVMs: this is important and should be
expanded on. It seems likely that the models are fitting a highly nonlinear boundary and this is
affecting the stability results. However, it’s difficult to generalise the results given only a single
NN architecture (MLP with 100 hidden neurons) was tested. A simple experiment that would
help illustrate the behaviour of the stability metric with nonlinear models would be to measure it
for NNs with increasing numbers of hidden neurons (starting from just 1, also monitoring model
performance). I would request the authors perform this experiment for one or two datasets, as
this would significantly improve the discussion of the stability metric and aid reader’s intuitions
about its behaviour on more complex models.
We have added Figure 5 summarising this experiment. We have taken two different NN
architectures: single layer with increasing number of neurons, and multi-layer with increasing
number of layers. We have tested how the reiteration similarity performs in the two scenarios,
for the heart-risk dataset, also considering whether the data point classification is somehow
correlated with the overall explanation instability.
Validity of the findings
The findings are a demonstration of the behaviour of the proposed metrics using a suitable
variety of ML models and datasets. I have asked for a further experiment to expand on the
findings for NNs in the previous section of the review. The conclusions are sound and well
written. I have a few specific questions/comments:
- Line 279: results indicating prescriptivity > fidelity – are the scales of these metrics comparable
like this, so that you can be confident saying the behaviour of the local explanation models is
better at the boundary than locally to the data point? How does this change between points
classified with high confidence and points classified with low confidence?
The comment is spot on, as the two values are indeed not comparable. We have clarified that
low fidelity does not imply anything about the actual prescriptive use of the explanation: it is
perfectly possible to have poor local fidelity and at the same time have an explanation with good
prescriptive effect. We have clarified the text in order to explain the semantic difference between
the two metrics.
- Regarding the local neighbourhood of an input, N(x): the relevance of the local “fidelity” score
is going to depend strongly on whether the local sampling method samples data points that are
within the distribution of the original training data, or produces samples that are out of
distribution. The authors note that the sampling method will affect the fidelity metric (potentially
favouring LIME) but do not expand on this sufficiently in the discussion – especially the potential
impact of generated samples being out of distribution and what this means for interpreting this
metric.
For our local fidelity score we have elected to construct the local neighbourhood N(x) in the
same way as LIME. It is however true that this approach might sample outside of the distribution
of the original training data. We agree that this choice may have complex implications for highly
nonlinear models. It is however important to note that only the fidelity score is affected by that
choice.
We think that Local Fidelity remains a potential indicator of the quality of the explanation for
generic LLE methods, regardless of the fact that LIME targets explicitly this metric, while other
methods like SHAP do not do so directly. The impact of a Local Fidelity score with a different
sampling distribution is an interesting research question, but we have not considered it in this
work, since we have strictly adopted the definition of Ribeiro/Guestrin to avoid a new redefinition
of the Local Fidelity metric. We have added a comment on this open problem in the Discussion
sections.
- The authors note the limitation of their proposed stability metric – that it does not take into
account feature ranking – but I think this is more serious than implied: it seems a severe
limitation, especially for models with fewer features. It is possible that an explanation method
could produce explanations with entirely opposite feature importance rankings yet receive a
high stability score. This at least requires further discussion and should be highlighted more
prominently when the metric is introduced. However, the metric as-is does reveal some issues
with the explanation methods tested, so it is at least useful in its current form. A possible
improvement to this metric might be to use something like Rank-Biased Overlap:
http://dx.doi.org/10.1145/1852102.1852106 rather than Jaccard similarity – perhaps mention
this in the discussion?
The problem with using a rank-based metric is that it considers two effects at the same time: the
presence of a feature among the top K selected, and the actual position in the ranking.
The definition we adopted captures the fact that a feature appearing consistently in the
explanation is more important than its actual ranking. We agree that a refinement of this metric
could go in the direction of rank-bias, which however does not fully consider the actual feature
weight, only the ranking order.
We have therefore extended the discussion on the metric design choice, clarifying possible
fine-tuning in the direction of both ranking and feature weights.
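Purely as an illustration of the fine-tuning mentioned above (and not the metric implemented in the paper), a weighted variant could compare two explanations with a weighted Jaccard over their absolute feature weights:

```python
# Illustrative only: weighted Jaccard similarity between two explanations, each given
# as a mapping from feature id to importance weight. Not the metric used in the paper.
def weighted_jaccard(weights_a, weights_b):
    features = set(weights_a) | set(weights_b)
    num = sum(min(abs(weights_a.get(f, 0.0)), abs(weights_b.get(f, 0.0))) for f in features)
    den = sum(max(abs(weights_a.get(f, 0.0)), abs(weights_b.get(f, 0.0))) for f in features)
    return num / den if den > 0 else 1.0
```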
- The discussion of “prescriptivity” should include some mention of causality and the possibility
of mis-interpreting results with predictive rather than causal models. The equation to generate
the contrastive input also has the potential to create out-of-distribution data, and this should be
noted and the implications discussed.
We have extended the discussion for the prescriptivity in the causal direction, explaining the
complexity around the provided example in Figure 3. We have also added a clarification on
constraining x', which could potentially require a more complex method to avoid generating
out-of-distribution samples.
- Some metrics have already been proposed for XAI methods, for example Area Over the
Perturbation Curve (AOPC) and “faithfulness” mentioned above. Though not for LLE methods
specifically, these metrics should at least be mentioned as prior work. Even better would be to
indicate how these metrics differ, or are similar to, the metrics presented in this paper.
For AOPC: in the paper we have considered how to score LLE methods in a model-agnostic
and data-agnostic way. We acknowledge that there are several possible data-specific metrics
that could have been considered and added, but we have preferred to focus only on the
data-agnostic metrics.
For faithfulness: we agree that self-explaining models are an interesting topic. However LEAF
has been designed around the concept of post-hoc explanations. Therefore we have excluded
those metrics that have an explicit interplay with the training model’ loss function. We have
added a section on self-explaining models as an addition to the model development scenario in
the discussion.
On RBO: we have added a discussion for an improvement of the Reiteration Similarity including
a rank-based metric.
On ROAR: we have added iterative retraining processes as an extended scenario for the model
development in the Discussion section.
Comments for the Author
Overall this is a useful and well-written paper addressing an important and under-studied topic
in XAI. The provision of a Python toolbox implementing the proposed metrics is especially
welcome. One final point: there has been some work in the area of metrics previously that is not
mentioned by the authors, perhaps as it has focused on image classifiers/DNNs and saliency
map explanations, rather than LLEs specifically (see mentions of AOPC and “faithfulness”
previously; also the ROAR method: https://arxiv.org/abs/1806.10758 ). However, these papers
make transferable points about metrics so their inclusion in the discussion/prior work seems
important. I am aware that it is rather uncouth of me to suggest referencing my own work, but
our paper “Sanity Checks for Saliency Metrics” https://arxiv.org/abs/1912.01451v1 seems
particularly appropriate here as it proposes methods to assess the statistical validity of
explanation metrics. I would not expect the authors to implement these checks in the current
paper, but it should certainly be considered as further work (or, alternatively, an explanation of
why the proposed sanity checks would not be appropriate should be given).
On AOPC/ROAR/RBO/faithfulness: see above.
We have extended the discussion to consider also data-specific and model-specific metrics,
which are however outside the scope of the paper, since the initial assumptions was to develop
a set of model-agnostic and data-agnostic metrics for LLE explanations. We have added a
paragraph to consider data-specific metrics as a possible extension..
Reviewer 3 (Anonymous)
Basic reporting
This paper presents a set of metrics for the evaluation of local linear explanations. In principle,
evaluating the explainability of local linear models is an important problem. Unfortunately, it is
hard to figure out the novelty of the proposed method in the currently submitted form.
Experimental design
- The authors use medical datasets: drug consumption, arrhythmia, breast cancer and so on.
- The authors define problem 1 (P1) and problem 2 (P2), and answer the questions using the
proposed method
- The authors define four metrics to evaluate local linear explanation, but the definition looks
heuristic.
- The authors describe detailed information of four proposed metrics.
Validity of the findings
This paper suggests a new framework “LEAF”(Local Explanation evAluation Framework) to
evaluate the local linear explanations. LEAF consists of four metrics so that users can consider
both Explaining single decision and Model development. LEAF reveals that either LIME or
SHAP cannot ensure its robustness for every scenario, and suggests users apply such
frameworks case by case.
Comments for the Author
Here are comments to authors. It would be good to revise the draft to improve the paper
especially on weakness.
Strength
- The authors define key questions about XAI methods, as well as answer the questions using
the proposed method. It looks well organized.
- This paper shows overall good presentations and its figures are easy to understand.
- LEAF can evaluate such existing frameworks as LIME, SHAP. Also, LEAF seems to be flexible
and scalable for various models and datasets.
Weakness
- LEAF looks somewhat heuristic. Suggested four metrics (fidelity, local concordance, stability,
prescriptivity) are defined without theoretical proof. The authors only mentioned the reason why
the four metrics are reasonable. It would be better to add more theoretical approaches.
The proposed metrics were selected and designed to answer data-agnostic, model-agnostic
questions on LLE metrics. For each metric we have provided an intuitive motivation, an
operational definition, and experimental results to support its utility. The paper provides several
insights on these generic LLE metrics, showing several strong and weak points. We agree that
these observations could lead to a theoretical formulation of explanation evaluation, which does
not exist in literature, but is outside the scope of this paper.
- As the authors mentioned in the Discussion, LEAF is applicable to local linear explanation
methods. Though the authors raised universal questions about XAI, the suggested method
deals with only two local linear explanation methods.
We have formalised our framework in the specific context of local linear explanation (LLE)
methods, and we argue that LIME and SHAP represent the state of the art regarding LLEs and
XAI in general. Furthermore, as discussed by Lundberg in [1], SHAP is highly general and
subsumes several other LLE approaches.
We have added a description at the beginning of Section 4 to explain why the two LLE methods
LIME and SHAP are considered, due to their success.
- The authors did not use complicated deep learning models in their experiments. Deep learning
has been widely used in many areas, but they only used a simple perceptron structure.
We have added an additional experiment on the Reiteration Similarity, now in Figure 5, to
investigate the interplay between the non-linearity of the black-box model and the reiteration similarity, for single-layer and
deep neural networks. The experiment shows that the increase in the non-linearity of the model
is a relevant source of instability of the explanation.
It is interesting to highlight that explanations quickly lose stability (now named reiteration
similarity) even for simple non-linear models.
------------[1] Lundberg, Scott, and Su-In Lee. 'A unified approach to interpreting model predictions.' arXiv preprint
arXiv:1705.07874 (2017).
" | Here is a paper. Please give your review comments after reading it. |
59 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is a scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations -with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need to have standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field.</ns0:p><ns0:p>In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF.</ns0:p><ns0:p>The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>In recent years, the field of Machine Learning (ML) has experienced a surge in practical applications.</ns0:p><ns0:p>However, many ML models, spanning from random forests to deep neural networks, do not provide a human-understandable clarification of their internal decision processes: this issue is known as the black-box problem <ns0:ref type='bibr' target='#b8'>(Burrell, 2016)</ns0:ref>.</ns0:p><ns0:p>The lack of explanatory power hampers the deployment of ML in real-world scenarios for a variety of reasons. As a first limitation, black-box algorithms are poor decision support systems <ns0:ref type='bibr' target='#b13'>(Doran et al., 2017)</ns0:ref>: domain experts such as doctors or financial analysts would hardly take into account an algorithmicallygenerated second opinion without an argument to support it, or a measurable score of trust <ns0:ref type='bibr' target='#b32'>(Jiang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>From a legal viewpoint, in the European Union AI systems are regulated by law with the General Data Protection Regulation (GDPR) -which includes directives on algorithmic decision-making <ns0:ref type='bibr' target='#b25'>(Goodman and Flaxman, 2016)</ns0:ref>. For instance, GDPR states that: i) The decisions which produces legal effects concerning [a citizen] or of similar importance shall not be based on the data revealing sensitive information, for example about ethnic origins, political opinions, sexual orientation, . . .; and ii) The controller must ensure the right for individuals to obtain further information about the decision of any automated system. While the required explanations of a complex socio-technical system concerns the decision process as a whole, clearly good explainability algorithms would help supporting the provisioning of detailed explanations for ML components <ns0:ref type='bibr' target='#b17'>(Edwards and Veale, 2017)</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55625:2:0:NEW 8 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In recent times, widely used ML models turned out to be biased against racial <ns0:ref type='bibr' target='#b44'>(Obermeyer et al., 2019)</ns0:ref> or gender <ns0:ref type='bibr' target='#b29'>(Hamilton, 2019)</ns0:ref> groups. While interpretable-by-design models should be preferred in high-stakes decisions <ns0:ref type='bibr' target='#b54'>(Rudin, 2019)</ns0:ref>, algorithmic fairness is of paramount importance, and its very first step is the explanation of black-box outcomes <ns0:ref type='bibr' target='#b45'>(Panigutti et al., 2020a)</ns0:ref>. Furthermore, an unexplained ML model might also be right for the wrong reasons, due to the algorithm learning spurious correlation in the data. Being capable of assessing the reasons why a ML component outputs a given classification is essential to establish trustable AI <ns0:ref type='bibr' target='#b31'>(Ignatiev, 2020)</ns0:ref>.</ns0:p><ns0:p>In this paper we focus on the class of explanation models named local linear explanations <ns0:ref type='bibr'>(LLE)</ns0:ref>. A taxonomy of other approaches can be found in <ns0:ref type='bibr' target='#b4'>(Belle and Papantonis, 2020)</ns0:ref>. Explainability methods that explain arbitrary black-box models <ns0:ref type='bibr' target='#b50'>(Ribeiro et al., 2016a)</ns0:ref> using LLEs provide an explanation in terms of the most relevant features for a data point classification. The XAI literature considers several evaluation criteria <ns0:ref type='bibr' target='#b7'>(Bohanec et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b28'>Guidotti et al., 2018)</ns0:ref>, but there is generally no consensus on a core of fundamental metrics, and in some cases these properties lack a precise definition <ns0:ref type='bibr' target='#b23'>(Gilpin et al., 2018)</ns0:ref>. Furthermore, the explanation process has to be contextualised with respect the audience for which explainability is sought <ns0:ref type='bibr' target='#b2'>(Arrieta et al., 2019)</ns0:ref> -for example, the same clinical black-box decision could be explained very differently to patients, doctor, and medical data analysts. Finally, a natural application of explanations consists in supporting a decision making task in order to perform the minimum change on an instance that modifies the classification label. Proactive use of explanations can be found for instance in the credit sector <ns0:ref type='bibr' target='#b26'>(Grath et al., 2018)</ns0:ref>, to guide an applicant in amending his/her position in order to get a loan approval, or in planning <ns0:ref type='bibr' target='#b20'>(Fox et al., 2017)</ns0:ref>. To the best of our knowledge, there is no agreement on how to quantify the prescriptive exploitation of local linear explanations.</ns0:p><ns0:p>The main contribution of this paper is a clear and unambiguous definition of a core set of metrics to compare and evaluate explainability methods that explain black-box models with LLEs. We propose to quantify three new LLE aspects: i) the local concordance of the white-box model w.r.t the black-box model for the instance to be explained, under the constraint of explanation conciseness; ii) the tendency of an explainability method to produce the same explanation on the same data point (named reiteration similarity); and iii) how good the explanation is when it is taken as a recipe to change a data point classification (prescriptivity). The core set of metrics also includes existing LLE metrics taken from literature (local fidelity). 
To encourage users to adopt a single, reproducible definition of these metrics for LLEs, we provide an implementation in the form of a Python framework, named LEAF (Local Explanation evAluation Framework). We apply LEAF on the two most used model-agnostic explainability algorithms for LLEs, LIME and SHAP, and show that they display unexpected behaviours that were not previously reported in literature.</ns0:p><ns0:p>The paper is structured as follows: in Section 2 we provide an overview of basic concepts concerning Local Linear Explainers as well as the two techniques we will compare, LIME and SHAP. In Section 3 we introduce the LEAF framework and focus on the metrics it computes to evaluate LLE explanations. In Section 4 we apply LEAF on a range of datasets and black-box classifiers, evaluating the explanations provided by LIME and SHAP, and discussing the results. We end the paper with a general discussion (Section 5) and final considerations (Section 6).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In order to tackle the black-box problem, several XAI techniques have been proposed <ns0:ref type='bibr' target='#b24'>(Goebel et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Došilović et al., 2018)</ns0:ref>, exploiting different principles and processes, usually with supporting graphical representations <ns0:ref type='bibr' target='#b62'>(Tamagnini et al., 2017)</ns0:ref> and visual analytics <ns0:ref type='bibr' target='#b10'>(Choo and Liu, 2018)</ns0:ref>. In general the goal is to extract human-understandable knowledge from trained black-box models <ns0:ref type='bibr' target='#b39'>(Ming et al., 2018;</ns0:ref><ns0:ref type='bibr'>Ribeiro et al., 2016b,c)</ns0:ref>. However, despite strong consensus about the necessity of enriching ML models with explainability modules, there is no shared definition of what an explanation should be <ns0:ref type='bibr' target='#b36'>(Lipton, 2016;</ns0:ref><ns0:ref type='bibr' target='#b48'>Pedreschi et al., 2019)</ns0:ref>, nor quantitative comparisons of different methods <ns0:ref type='bibr' target='#b43'>(Murdoch et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Model-agnostic explanation models are typically based on decision trees, rules or feature importance <ns0:ref type='bibr' target='#b28'>(Guidotti et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Freitas, 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Craven and Shavlik, 1995)</ns0:ref>, because of the simplicity of such explanations. Several model-specific and data-specific explanation models have also been developed, e.g., for deep neural networks <ns0:ref type='bibr' target='#b6'>(Binder et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b56'>Selvaraju et al., 2019)</ns0:ref>, deep relational machines <ns0:ref type='bibr' target='#b58'>(Srinivasan et al., 2019)</ns0:ref>, time series <ns0:ref type='bibr' target='#b35'>(Karlsson et al., 2019)</ns0:ref>, multi-labelled and ontology-linked data <ns0:ref type='bibr'>(Panigutti et al.,</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:p>2020b) or logic problems <ns0:ref type='bibr' target='#b5'>(Biecek, 2018)</ns0:ref>; software toolkits including the implementation of various XAI algorithms have been also introduced <ns0:ref type='bibr' target='#b3'>(Arya et al., 2019)</ns0:ref>. A comprehensive survey of explainability methods can be found in <ns0:ref type='bibr' target='#b28'>Guidotti et al. (2018)</ns0:ref> and in <ns0:ref type='bibr' target='#b15'>Došilović et al. (2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Local Linear Explanation Methods</ns0:head><ns0:p>In this section we review the class of explanation models named local linear explanations (LLE), also called feature importance models <ns0:ref type='bibr' target='#b28'>(Guidotti et al., 2018)</ns0:ref>, additive feature attribution methods <ns0:ref type='bibr' target='#b37'>(Lundberg and Lee, 2017)</ns0:ref> or linear proxy models <ns0:ref type='bibr' target='#b23'>(Gilpin et al., 2018)</ns0:ref>. Let x be a data point of a dataset X, that is fed to a black-box model f . For the sake of generality, no assumption on the details of f are made. A XAI method explains the value f (x) with a LLE by building an interpretable, white-box classifier g that mimics f around x.</ns0:p><ns0:p>Formally, let F be the set of input features of X, and F = |F | be its cardinality. Let X ⊆ R F be the input dataset, with µ X and σ 2 X the vectors of feature means and variances, respectively. For simplicity, we consider X to be a tabular dataset in an interpretable space, i.e., the same space where explanations are provided. Let f : R F → R be the original black-box model, and let x ∈ R F be the input to be explained.</ns0:p><ns0:p>We consider white-box models in the form of local linear explanations (LLE).</ns0:p><ns0:p>A LLE model g is a linear function of the input variables of the form</ns0:p><ns0:formula xml:id='formula_0'>g(x) = w 0 + F ∑ i=1 w i • x i (1)</ns0:formula><ns0:p>g assigns to each feature i a weight w i , in order to approximate the behaviour of f in the local neighbourhood of x. Intuitively, the absolute value w i of each feature i gives its importance in the explanation. In some cases, only K ≤ F features have non-zero weights.</ns0:p><ns0:p>In the following subsection we will describe LIME and SHAP in detail. Several other explainability methods <ns0:ref type='bibr' target='#b28'>(Guidotti et al., 2018)</ns0:ref> for specific classes of black-box models (i.e. model-aware) exist. For instance, DeepLIFT <ns0:ref type='bibr' target='#b57'>(Shrikumar et al., 2017</ns0:ref>) is a recursive prediction explanation for deep learning models, which introduced the idea of background values also used by SHAP. Another approach, LACE <ns0:ref type='bibr' target='#b47'>(Pastor and Baralis, 2019)</ns0:ref>, combines the local perturbation concept of LIME together with the computation of Shapley-like values used in SHAP. Other approaches tackle explainability from a different perspective, i.e., they do not provide explicit indications about the reasons why a decision has been taken but rather generate counterfactual explanations through optimisation-based methods <ns0:ref type='bibr' target='#b64'>(Ustun et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b34'>Karimi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b26'>Grath et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
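To make the LLE form of Eq. (1) concrete, the following minimal Python sketch shows how such a model can be represented and evaluated. The function names (lle_predict, feature_importance) are illustrative only and do not belong to LIME, SHAP or LEAF.

    import numpy as np

    def lle_predict(x, w0, w):
        # Evaluate the LLE model of Eq. (1): g(x) = w0 + sum_i w_i * x_i
        return w0 + float(np.dot(np.asarray(w, dtype=float), np.asarray(x, dtype=float)))

    def feature_importance(w):
        # The importance of feature i is read off as the absolute weight |w_i|
        return np.abs(np.asarray(w, dtype=float))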
<ns0:div><ns0:head n='2.2'>LIME: Local Interpretable Model-agnostic Explanations</ns0:head><ns0:p>The LIME method <ns0:ref type='bibr' target='#b52'>(Ribeiro et al., 2016c)</ns0:ref> constructs LLE models g starting from a synthetic neighbourhood N(x) around the input to be explained x. A local neighbourhood N(x) of H points around an instance x is defined as</ns0:p><ns0:formula xml:id='formula_1'>N(x) = x j = x + p j , p j ∼ N (0, σ X ) j = 1 . . . H (2)</ns0:formula><ns0:p>where each vector p j represents a local perturbation, N is the multivariate normal distribution, and σ X is the vector of feature variances in the training dataset. For simplicity, we do not consider how to randomly generate categorical features, which can be uniformly sampled from their frequency in the input dataset X.</ns0:p><ns0:p>To find g, LIME fits a ridge regression model to N(x) using the linear least squares function</ns0:p><ns0:formula xml:id='formula_2'>L ( f , g, π x ) = ∑ z∈N(x) π x (z) f (z) − g(η(z)) 2</ns0:formula><ns0:p>where the default distance kernel π x (z) = exp −d(x, z) 2 /γ 2 is a weighted distance function for a kernel width γ = 3 4 √ F, and d(•, •) is the Euclidean distance. Explanations can be given in an explanation space X ′ which can differ from X, provided that a mapping function η : X → X ′ is given (η is the identity when</ns0:p><ns0:formula xml:id='formula_3'>X ≡ X ′ ).</ns0:formula><ns0:p>LIME is designed to select a subset of the F features for the LLE model g iteratively, by fitting multiple ridge regressors using a sequential feature selection (SFS) algorithm <ns0:ref type='bibr' target='#b49'>(Pudil et al., 1994)</ns0:ref>. Intuitively, the resulting model g is a local explanation for x because it is built to classify the behaviour of N(x), which is a synthetic dataset built around x. It is a linear explanation because it provides a single scalar weight for each feature of x. A different distance kernel π x (z) can be used, altering the resulting explanation.</ns0:p></ns0:div>
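As a hedged illustration of the procedure described above (a sketch, not the internals of the lime package), the snippet below samples the Gaussian neighbourhood of Eq. (2), weights it with the distance kernel π_x, and fits a ridge regressor. The sequential feature selection step and categorical features are omitted, and f is assumed to be a vectorised prediction function returning a score per row.

    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_like_lle(f, x, sigma_X, H=5000, gamma=None, rng=None):
        # Sketch of a LIME-style local surrogate: Gaussian neighbourhood (Eq. 2),
        # exponential distance kernel, weighted ridge regression.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x, dtype=float)
        F = x.shape[0]
        gamma = 0.75 * np.sqrt(F) if gamma is None else gamma   # default kernel width (3/4)*sqrt(F)
        N = x + rng.normal(0.0, sigma_X, size=(H, F))           # perturbed points around x
        d = np.linalg.norm(N - x, axis=1)
        pi = np.exp(-(d ** 2) / gamma ** 2)                     # sample weights pi_x(z)
        g = Ridge(alpha=1.0).fit(N, f(N), sample_weight=pi)
        return g.intercept_, g.coef_                            # w0 and the weights w_i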
<ns0:div><ns0:head n='2.3'>SHAP: SHapley Additive exPlanations</ns0:head><ns0:p>The SHAP method (Lundberg and Lee, 2017) (also called IME, Interactions-based Method for Explanation) derives local explanation models using the concept of Shapley values from cooperative game theory <ns0:ref type='bibr' target='#b42'>(Moretti and Patrone, 2008;</ns0:ref><ns0:ref type='bibr' target='#b40'>Moeyersoms et al., 2016)</ns0:ref>. In principle, the Shapley theory applies to binary features only, but it can be extended to real values. Many variations of SHAP have been defined: in this paper we will refer to the KernelExplainer, which is the model-agnostic and most general version of the method. Other model-specific variations, like TreeExplainer for tree-based models <ns0:ref type='bibr' target='#b38'>(Lundberg et al., 2020)</ns0:ref>, have also been defined, but are not considered in this paper due to their model-aware nature.</ns0:p><ns0:p>A SHAP explanation is a vector φ = (φ 0 , φ 1 . . . φ F ) that assigns a feature importance φ i to each input feature. Intuitively, the input features of a classifier are akin to players cooperating to win a game (the model prediction). The more important a player i is to the cooperation, the higher its Shapley value φ (i). Features are grouped into coalitional sets, corresponding to the power set of the set of features F.</ns0:p><ns0:p>To evaluate the black-box model for a subset of features S ⊆ F (i.e., the coalition with only players from S), SHAP introduces the concept of background values B, to replace the missing features in the evaluation of f . Let f S (x S ) denote the evaluation of f on x where only the features in the subset S are kept, and the others are replaced with the background values from B. The set B is usually taken as either a single sample B = {E(X)} being the dataset average, or as a set of centroids of X.</ns0:p><ns0:p>For a feature i ∈ F , its Shapley value φ (i) is defined as follows</ns0:p><ns0:formula xml:id='formula_4'>φ (i) = ∑ S⊆F \{i} |S|! • (F-|S| -1)! F! f S∪{i} (x S∪{i} ) − f S (x S )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Let φ 0 = f ∅ (x ∅ ) be the constant neutral value, where all feature values of x are ignored.</ns0:p><ns0:p>We can derive a LLE model g as a linear regressor using (1) with weights</ns0:p><ns0:formula xml:id='formula_5'>w 0 = φ 0 , w i = φ i x i − µ i , 1 ≤ i ≤ F, µ i ∈ B {i}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Such a derivation is guaranteed to be locally accurate (Lundberg and Lee, 2017, page 4), i.e., f (x) = g(x).</ns0:p><ns0:p>Unlike LIME, SHAP does not build a local neighbourhood N(x) to generate the explanation, relying instead on the computation of the feature importances using (3).</ns0:p><ns0:p>An important observation is that the KernelExplainer of SHAP is deterministic only in its theoretical formulation <ns0:ref type='bibr' target='#b37'>(Lundberg and Lee, 2017;</ns0:ref><ns0:ref type='bibr' target='#b59'>Štrumbelj and Kononenko, 2010)</ns0:ref>, but may significantly diverge from such promise in practice, due to implementation details. The exact computation of Shapley values requires evaluating (3) on all subsets of F, resulting in 2^F evaluations. However, the official SHAP implementation limits the evaluation to a user-defined number of subsets (default 2F + 2^11).
Therefore, when explaining data points with more than 11 features, SHAP resorts to the Adaptive Sampling <ns0:ref type='bibr' target='#b60'>( Štrumbelj and Kononenko, 2014)</ns0:ref> heuristic, which approximates the Shapley values using a reduced set of subsets.</ns0:p></ns0:div>
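For reference, the sketch below shows how Shapley-value explanations of this kind are typically obtained with the shap package's KernelExplainer. The names black_box, X_train and x are placeholders, and the exact arguments may vary slightly across shap versions.

    import numpy as np
    import shap  # assumes the shap package is installed

    # A single background sample B, here the feature-wise mean of the training data
    background = X_train.mean(axis=0).reshape(1, -1)

    # KernelExplainer is the model-agnostic variant discussed in the text
    explainer = shap.KernelExplainer(black_box.predict, background)

    # nsamples bounds the number of evaluated coalitions; with more than ~11
    # features the result is an adaptive-sampling approximation of Eq. (3)
    phi = explainer.shap_values(x, nsamples=2 * X_train.shape[1] + 2**11)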
<ns0:div><ns0:head n='2.4'>On Feature Importance</ns0:head><ns0:p>While both LIME and SHAP are based on the concept of feature importance, the definition of what constitutes an 'important' feature is not unique. A survey of the different interpretations of the importance concept can be found in <ns0:ref type='bibr' target='#b57'>Shrikumar et al. (2017)</ns0:ref>. LIME adopts the concept of local importance. A feature is locally important if its variation around x produces a big change in the output value of f . Therefore, a feature that is highly relevant for the classification of x, but whose perturbation in the close neighbourhood N(x) does not impact much on the resulting classification, can easily get a low importance. For SHAP, importance is determined in comparison against the background values B. A feature is important if its deviation from the background value to x produces a large variation in the output. As a result, LIME and SHAP produce explanations that are not directly comparable and therefore it is of high importance to determine a set of evaluation metrics that are not biased by the different concepts of importance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>LEAF is a Python framework designed to support several steps of a machine learning pipeline where local linear explanations are part of the final decision process. It is designed around two major perspectives reflecting the most common scenarios where explainability methods might be present, as sketched in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>:</ns0:p><ns0:p>• P1: Explaining single decisions. The black-box classifier f is given. The goal is to understand if an explanation g computed by an explainer on a single instance x can be trusted, and how complex g should be. LEAF supports this use case by providing metrics related to the quality of the explanation, given that the LLE method and a level of explanation complexity are defined. This allows an end-user to trust the white-box explanation, or to reject it if it does not meet the expected quality.</ns0:p><ns0:p>• P2: Model development. The black-box model f can be chosen to maximise both classification performances and model interpretability. In this case, it might be important to know which classifier should be used to provide, on average, the best explanations. LEAF supports this use case with a feedback loop allowing the comparison of different black-box models in terms of quality of the explanations that can be extracted.</ns0:p><ns0:p>In the literature there is no consensus on, nor a unique definition of, 'explanation quality'. However, different aspects of a LLE model g can be measured by different metrics. LEAF includes the most commonly used ones (conciseness, local fidelity), along with new metrics (local concordance, reiteration similarity, prescriptivity) described hereafter.</ns0:p><ns0:p>Conciseness We agree with <ns0:ref type='bibr' target='#b28'>Guidotti et al. (2018)</ns0:ref> that the general definition of interpretability as to which extent the model and/or the prediction are human-understandable is pivotal in the evaluation of an explanation; alas, interpretability is often only vaguely defined in the literature as correlated to human comprehensibility <ns0:ref type='bibr' target='#b21'>(Freitas, 2014)</ns0:ref>. Due to our quantitative approach, we decided to focus on a specific measurable aspect of interpretability: for LLE models, we introduce the conciseness metric as the maximum number K of non-zero weights w i that are kept in the explanation presented to the user, while the other F − K features are treated as non-relevant and are excluded. For datasets with a large number of features, a compact explanation is clearly preferable, for a human reader, over one encompassing all features in the dataset.</ns0:p><ns0:p>LIME and SHAP have different approaches to the conciseness of their explanations. In LIME, the conciseness is given a priori, i.e., the number K is fixed by the user and the explanation is provided with exactly K features, with no guarantee if K is too high or too low. SHAP instead has no direct concept of conciseness, and will always produce a feature importance φ i for each of the F features.
However, since the values of φ already represent a ranking of the feature importances, the user could keep the K-topmost features, and drop the others to obtain a concise explanation <ns0:ref type='bibr' target='#b22'>(Fryer et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
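The truncation just described, combined with the weight derivation of Eq. (4), can be sketched as follows. This is plain illustrative code (not the LEAF or shap API); note the guard against features that coincide with their background value, an assumption made here to keep the sketch runnable.

    import numpy as np

    def shap_to_concise_lle(phi0, phi, x, mu, k):
        # Derive LLE weights from Shapley values (Eq. 4) and keep only the
        # K features with the largest |phi_i|, zeroing out the others.
        phi = np.asarray(phi, dtype=float)
        delta = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
        w = np.divide(phi, delta, out=np.zeros_like(phi), where=delta != 0)  # w_i = phi_i / (x_i - mu_i)
        keep = np.argsort(-np.abs(phi))[:k]                                  # rank features by |phi_i|
        w_concise = np.zeros_like(w)
        w_concise[keep] = w[keep]
        return phi0, w_concise, keep                                         # w0 = phi_0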
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Local Fidelity It measures how good is the white-box g in approximating the behaviour of the black-box f for the target sample x around its synthetic neighbourhood N(x). The fidelity score can be defined either as a global fidelity (see <ns0:ref type='bibr' target='#b28'>(Guidotti et al., 2018)</ns0:ref>) or as a local fidelity (see <ns0:ref type='bibr' target='#b52'>Ribeiro et al. (2016c)</ns0:ref>);</ns0:p><ns0:p>hereafter, we will refer to the latter. The agreement between f and g is usually measured using the F1 score <ns0:ref type='bibr' target='#b28'>(Guidotti et al. (2018)</ns0:ref>, pp.92). Being a local metric, each sample x will result in a different local fidelity score. By using the neighbourhood N(x) instead of x, local fidelity gives an indication of how g behaves in the locality of x, but as a consequence local fidelity is highly dependent on how the N(x) points are sampled. While there is no canonical way to define such sampling <ns0:ref type='bibr' target='#b1'>(Andrews et al., 1995)</ns0:ref>, we adopt the one provided in <ns0:ref type='bibr' target='#b52'>Ribeiro et al. (2016c)</ns0:ref>, which is the same as Eq. ( <ns0:ref type='formula'>2</ns0:ref>). It is worth noting that this definition of local fidelity, albeit common, favours local importance and could therefore be biased toward LIME-like methods. This happens because LIME fits the white-box model g minimizing the classification loss for a neighborhood N(x), thus local fidelity captures the optimization target of LIME.</ns0:p><ns0:p>Local Concordance It measures how good g is in mimicking f for the sole instance x under the conciseness constraint. Local concordance is defined as:</ns0:p><ns0:formula xml:id='formula_6'>ℓ f (x) − g(x) , where ℓ(k) = max(0, 1 − k)</ns0:formula><ns0:p>is the hinge loss function <ns0:ref type='bibr' target='#b53'>(Rosasco et al., 2004)</ns0:ref>, so that the score ranges from 0 for total disagreement, to 1 for a perfect match. In literature there are several similar metrics with different names, like completeness <ns0:ref type='bibr' target='#b61'>(Sundararajan et al., 2017)</ns0:ref>, summation-to-delta <ns0:ref type='bibr' target='#b57'>(Shrikumar et al., 2017)</ns0:ref>, or the axiomatic definition of local accuracy defined in <ns0:ref type='bibr' target='#b37'>(Lundberg and Lee, 2017)</ns0:ref>. However, all these definitions overlook the relationship with conciseness. An explanation is typically pruned to encompass a reduced set K of features. We clarify this ambiguity by evaluating the local concordance for white-box models with only the K requested features. Local concordance is different from local fidelity because it also considers the conciseness constraint, i.e., only the K requested features are considered.</ns0:p><ns0:p>SHAP claims <ns0:ref type='bibr' target='#b37'>(Lundberg and Lee, 2017)</ns0:ref> the local concordance of g to be always 1. Unfortunately, this is true only in the trivial case when K = F, i.e., when all features belong to the explanation. When a conciseness constraint requires K to be less than F, deterministic methods like SHAP may also fail to provide a local concordance of 1. We shall provide a detailed analysis of this behaviour in Section 4.</ns0:p><ns0:p>LIME provides no guarantees about the local concordance of its LLE models. 
Since for both techniques f (x) and g(x) are not guaranteed to be exactly the same, the label of the explanation could be inconsistent with the label predicted by the black-box model. We propose to measure the similarity of a set of explanations of a single instance x as a measure of similarity across multiple reiterations of the explanation process. Given two explanations g, g ′ , let J(g, g ′ ) be the Jaccard similarity between the sets Φ(g) and Φ(g ′ ) of non-zero weight indices of g and g ′ .</ns0:p><ns0:p>Given a set of R explanations of x, namely G x = g i i = 1 . . . R , we define the reiteration similarity metric as E J(g, g ′ ) , ∀ g, g ′ ∈ G x . Note that this definition only considers the selected features, not their weight in the LLE model. A fine-tuned definition that includes also the feature weights and/or the feature rankings <ns0:ref type='bibr' target='#b65'>(Webber et al., 2010</ns0:ref>) could be designed, but it is not considered in this paper.</ns0:p><ns0:p>Prescriptivity Most literature in the field deals with the subject of building explanations, without considering how to use such explanation models to take decisions. One intuitive piece of information that an explanation should provide is: 'since x is classified as class A, what are the minimum changes on x according to the explanation g that lead to a new instance x ′ that belongs to class B?' While there are several examples of such usage of explanations <ns0:ref type='bibr' target='#b26'>(Grath et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b33'>Joshi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Karimi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b64'>Ustun et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Fernandez et al., 2020)</ns0:ref>, there is no agreed way to quantify how well an explanation supports this prescriptive use.</ns0:p><ns0:p>Figure 2. Finding the prescriptive point x ′ on the LLE boundary D g ( 1 2 ).</ns0:p><ns0:p>Without loss of generality, we will consider the target classification boundary y ′ to be 1 2 , but y ′ could assume any value.</ns0:p><ns0:p>Let D g (y ′ ) = x ∈ R F g(x) = y ′ be the set of points in the domain of g whose codomain is the constant y ′ . The boundary of g is then the set D g ( 1 2 ). Given x and the LLE model g, let x ′ be the projection of x on the boundary of g, such that d(x, x ′ ) is minimal. The intuition is that x ′ is the closest point to x that switches its classification, according to the explanation model g. Let h = x ′ − x be the difference vector. Since g(x) = w 0 + ∑ F i=1 w i • x i , the point x ′ is the solution to the equation</ns0:p><ns0:formula xml:id='formula_8'>w 0 + F ∑ i=1 (x i + h i ) • w i = 1 2 ⇒ h i = 1 2 − g(x) • w −1 i (5)</ns0:formula><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows how x ′ is found on the LLE model g that gives the local explanation of x, and it is loosely based on a similar picture in <ns0:ref type='bibr' target='#b52'>Ribeiro et al. (2016c)</ns0:ref>. The projection x ′ of x on the boundary D g ( 1 2 ) is the closest point to x where g(x ′ ) = 1 2 .
Of course, the value of f (x ′ ) could be different from 1 2 , depending on how good g is in approximating the classification boundary.</ns0:p><ns0:p>We define the prescriptivity metric as:</ns0:p><ns0:formula xml:id='formula_10'>ℓ 1 C • f (x ′ ) − g(x ′</ns0:formula><ns0:p>) , where ℓ(•) is the hinge loss function, and</ns0:p><ns0:formula xml:id='formula_11'>C = max(y ′ , 1 − y ′</ns0:formula><ns0:p>) is a normalisation factor, so that 1 means that x ′ lies on the boundary, and 0 means x ′ is at the highest distance from the boundary. Observe that by taking the absolute value, we measure both over-shoots and under-shoots of the boundary as a loss of prescriptivity.</ns0:p></ns0:div>
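The four scores discussed in this section can be sketched as plain Python functions as follows. This is illustrative code under stated assumptions, not the LEAF implementation: a binary task with outputs in [0, 1], a vectorised black-box f, and an LLE g given by (w0, w) already restricted to its K selected features. In particular, the prescriptive point is computed here as the standard closest-point projection of x onto the hyperplane g(x) = y′, which is what the minimal-distance definition of x′ implies.

    import numpy as np
    from itertools import combinations
    from sklearn.metrics import f1_score

    def local_concordance(f, g, x):
        # hinge(|f(x) - g(x)|): 1 for a perfect match, 0 for total disagreement
        return max(0.0, 1.0 - abs(f(x) - g(x)))

    def local_fidelity(f, g, neighbourhood):
        # F1 agreement between black-box and white-box labels on N(x),
        # thresholding both outputs at 0.5 for a binary task
        y_f = (np.asarray(f(neighbourhood)) >= 0.5).astype(int)
        y_g = (np.asarray(g(neighbourhood)) >= 0.5).astype(int)
        return f1_score(y_f, y_g)

    def reiteration_similarity(feature_sets):
        # Mean pairwise Jaccard similarity of the non-zero-weight index sets
        # Phi(g) over R explanations of the same instance x
        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if (a | b) else 1.0
        pairs = list(combinations(feature_sets, 2))
        return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

    def prescriptive_point(x, w0, w, y_target=0.5):
        # Closest point to x on the LLE boundary g(x') = y_target
        # (hyperplane projection; features with w_i = 0 do not move)
        x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
        g_x = w0 + np.dot(w, x)
        return x + (y_target - g_x) * w / np.dot(w, w)

    def prescriptivity(f, w0, w, x_prime, y_target=0.5):
        # hinge(|f(x') - g(x')| / C) with C = max(y', 1 - y')
        C = max(y_target, 1.0 - y_target)
        g_xp = w0 + np.dot(np.asarray(w, dtype=float), np.asarray(x_prime, dtype=float))
        return max(0.0, 1.0 - abs(f(x_prime) - g_xp) / C)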
<ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>Many reasons might contribute to the popularity of LIME and SHAP: their ability to explain a wide range of black-boxes and different kinds of data (such as images and text), a user-friendly Python implementation that includes helpful visualisations, and their adoption and inclusion into some companies' XAI solutions. As a matter of fact, they currently represent the standard approach to tackle explainability in machine learning pipelines. In both methods, however, complex data is always mapped into an interpretable representation in tabular form for the explanation. For instance, images could be either partitioned into a bit-vector of superpixel/pixel patches, or the pixels could be directly mapped into tabular columns; texts could be converted into a bit-vector of word occurrences. Explanations are then produced in terms of the features (i.e., the columns) of these tabular representations. Therefore, without loss of generality, our experiments focus on the explanations of tabular data, which is easier to analyse and understand, without any intermediate conversion. Moreover, experiments cover binary classification tasks only, as it is common practice among LLE methods (including LIME and SHAP) to map n-ary classification using n one-vs-all binary classifiers.</ns0:p><ns0:p>We start by considering the problem P1 defined in Section 3 and address it with the LEAF key metrics. Given an instance x to be explained, for each explainability method considered, LEAF generates R explanations of x. It then shows a summary of the explanation metrics along with an explanation, to help the user understand if the local model g can be trusted.</ns0:p><ns0:p>As an example, we consider the drug consumption dataset <ns0:ref type='bibr' target='#b18'>(Fehrman et al., 2015)</ns0:ref>, where the classifier tries to predict whether a patient is inclined to be a frequent consumer of a specific drug (caffeine in this case) based on personality traits (F = 10 features). Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> shows how LEAF works on a data point.</ns0:p><ns0:p>First, LIME and SHAP focus on different sets of features. Second, LIME is not very stable, so the trustability of a single explanation is limited. Third, SHAP is perfectly stable and overall achieves better scores than LIME, even if the low local fidelity is something to be considered.</ns0:p><ns0:p>It is worth noting that the explanation local fidelity for this sample (in particular for the explanation provided by SHAP) is not very good (it is 0.762 for LIME and only 0.375 for SHAP). However, when we actually follow the indication of the explanation to reach the new boundary f (x ′ ), we see that it is quite close. This could happen because the local fidelity evaluates two different aspects at once: 1) it could be low because the white box model g is not a good model for the neighborhood N(x); 2) the neighborhood N(x) lies close to a non-linear classification boundary, which is not fully captured by g.
This example shows that the widely-used local fidelity score does not capture the prescriptive use of an explanation, and it is limited in the local evaluation of the white-box model.</ns0:p><ns0:p>This example also shows how the metrics provided by LEAF allow a domain expert to take decisions about the trustworthiness of an explainer for the problem under scrutiny.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Evaluating the Reiteration Similarity of XAI Methods</ns0:head><ns0:p>To be trusted, an explanation needs to be stable, i.e., the explainability method should not provide entirely different sets of relevant features Φ(g) if called multiple times to explain the same instance x. Reiteration similarity is therefore a precondition that needs to be verified. This is even more evident when considering the GDPR remark about the right for individuals to obtain information about the decision of any automated system: clearly, if an algorithmic decision-support system provides inconsistent explanations for the same data point, the single explanation provided to the user cannot be trusted.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref> shows the reiteration similarity metric distribution measured on 100 instances, computed for both LIME and SHAP, for four datasets and six classifier categories <ns0:ref type='bibr'>(lin, log, rf, kn, mlp, svc)</ns0:ref>, also reported in the legend. Linear classifiers are transparent by design and do not need additional explanations -we have included one as a baseline. Each boxplot shows the reiteration similarity distribution of the explainer on a classifier f on a dataset, for a fixed value of K (4 and 8 on the left and right, respectively). Each of the 100 values is also shown as a small dot, and is again the result of R = 50 explanations for each instance, thus resulting in 100×50 explanations per boxplot. The datasets are: drug consumption, arrhythmia <ns0:ref type='bibr' target='#b16'>(Dua and Graff, 2017)</ns0:ref>, heartrisk <ns0:ref type='bibr' target='#b12'>(Doddamani, 2018)</ns0:ref>, and the breast cancer Wisconsin dataset <ns0:ref type='bibr' target='#b16'>(Dua and Graff, 2017)</ns0:ref>, with F = 10, 279, 15 and 30 features, respectively.</ns0:p><ns0:p>We used scikit-learn 1 classifiers to test the LEAF framework. The classifiers used in the tests are:</ns0:p><ns0:p>lin (simple linear classifier); log (logistic classifier with liblinear solver); rf (random forest classifier with 50 estimators, 5 levels max depth); kn (k-neighbors with 3 neighbors); mlp (simple neural network with 1 hidden layer and 100 neurons); svc (support vector classifier with rbf kernel and γ = 2). Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> summarises the used datasets and the out-of-samples accuracy reached by the tested classifiers. The train/test ratio used to compute the accuracies is 80/20.</ns0:p><ns0:p>The data shows that the considered methods may produce unstable explanations on some pathological instances or for some classifier categories. The classifiers mlp and svc appear to be the hardest ones to explain. The difficulty of explaining ANN models was also noted in <ns0:ref type='bibr' target='#b59'>Štrumbelj and Kononenko (2010)</ns0:ref>, as well as for svc in <ns0:ref type='bibr' target='#b9'>Caragea et al. (2001)</ns0:ref>. Nevertheless this remains a bit surprising, since explainability methods have been promoted to explain ANNs and to be model-agnostic in general. SHAP appears to have slightly higher reiteration similarity values on average than LIME, but pathological cases are observed with both approaches.</ns0:p><ns0:p>To further evaluate the impact of non-linearity to the reiteration similarity, we consider in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> the same metric on the heartrisk dataset for multiple neural network architectures. 
The plot shows the distribution of the reiteration similarity metric on 500 data points. Data points are split into correctly and incorrectly classified (i.e., f (x) = y or f (x) ≠ y), to further investigate whether the original data point classification is relevant for the observed instability. Reiteration similarity seems to decline with the increasing non-linearity of the classifier. However, the relation between the non-linearity (both in terms of neurons and hidden layers) and the reiteration similarity score follows a non-trivial pattern. In fact, even simpler models may experience instabilities in the explanations over multiple reiterations. We have decided to separately investigate correctly and incorrectly classified data points with respect to their reiteration similarity; empirical observations show that in this case the correctness of the sample classification does not seem to be a major factor for the reiteration similarity scores.</ns0:p></ns0:div>
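For reproducibility, the classifier configurations listed above can be assembled with scikit-learn roughly as follows. Hyper-parameters not stated in the text are left at their defaults, and the 'lin' entry is assumed here to be a ridge-based linear classifier; probability=True for the SVC is likewise an assumption, added so that class scores can be queried by an explainer.

    from sklearn.linear_model import RidgeClassifier, LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    classifiers = {
        "lin": RidgeClassifier(),                                    # simple linear classifier (assumed)
        "log": LogisticRegression(solver="liblinear"),
        "rf":  RandomForestClassifier(n_estimators=50, max_depth=5),
        "kn":  KNeighborsClassifier(n_neighbors=3),
        "mlp": MLPClassifier(hidden_layer_sizes=(100,)),             # 1 hidden layer, 100 neurons
        "svc": SVC(kernel="rbf", gamma=2, probability=True),
    }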
<ns0:div><ns0:head n='4.2'>Evaluating the Prescriptivity of Explanations</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> shows an example of using an explanation (generated by SHAP) of an instance x to identify a new synthetic instance x ′ which is closer to the classification boundary than x. This example uses the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science heartrisk dataset <ns0:ref type='bibr' target='#b12'>(Doddamani, 2018)</ns0:ref>, which associates heart risks with personal health traits, and we use a mlp regressor for f with 100 neurons in the hidden layer and ReLU activation. The example focuses on a fictional patient: female, aged 49, smoker and with the other health attributes reported in the feature table in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>(B), column x. The SHAP explanation on Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>(A) shows the heart risk of x. LEAF uses the selected features of SHAP to generate x ′ using Equation (5), which is then explained again by SHAP (bottom of (B)). The initial at risk classification ( f (x) = 0.81) is driven below the risk boundary with f (x ′ ) ≈ 0.49 by changing the K = 3 features (systolic blood pressure from 235 to 125.5, etc...). A prescriptivity close to 1 indicates that the boundary D g ( 1 2 ) identified by the LLE model g is a reliable indicator of the boundary position.</ns0:p><ns0:p>For this example, we have selected an instance with high prescriptivity, showing how the explanation can be trusted proactively. Since this is not always the case, it is important to quantify whether the explanation can be used in a prescriptive way.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Selecting an Explainable Classifier</ns0:head><ns0:p>We now consider the problem P2 defined in Section 3. If a classifier may be chosen among different options, LEAF can be used to identify the one that will provide the best results with a XAI method, and the lowest complexity K.</ns0:p><ns0:p>As an example, we consider again the drug consumption dataset. Instead of evaluating a single instance for a fixed classifier/conciseness value K, in the P2 scenario we consider six classifiers and three values for K.</ns0:p><ns0:p>• What explainability method should be selected in order to have stable explanations? LIME is highly stable only for linear/logistic classifiers, and for lower conciseness levels (higher K values). SHAP instead shows excellent reiteration similarity even at low conciseness.</ns0:p><ns0:p>• What classifier should be trained to have high accuracy in the LLE models? LIME has low local concordance for some classifiers (kn, mlp, svc), even for high values of K. SHAP instead shows increasing concordance levels at the increase of K, for all classifiers. Therefore SHAP is a better choice for local concordance, unless the black-box classifier is a linear or a logistic one.</ns0:p><ns0:p>• What choices should be made to have explanations with high local fidelity? Surprisingly, high local fidelity explanations can only be achieved by using some classifier categories (linear, logistic, and random forest for LIME). Increasing the conciseness does not appear to increase the local fidelity significantly, at least in the tested range of K. Some classifier categories (mlp, svc) show very poor explanation fidelities, regardless of the explainability method used.</ns0:p><ns0:p>• What choices should be made to have prescriptive explanations? Again, explanations can be used in a prescriptive way only for some classifier categories (lin, log). SHAP appears to have high prescriptive power even for low values of K, but only for a few classifier categories (lin, log and moderately for rf). LIME requires higher values of K than SHAP to generate prescriptive explanations consistently. Other classifier categories (kn, mlp, svc) have poorly prescriptive LLE models, independently of the value of K and the method used.</ns0:p><ns0:p>To summarise the example in Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>, SHAP should be preferred because it shows a higher reiteration similarity and local concordance. It also shows good prescriptive power, at least for some classifiers. LIME should be preferred only if the user wants a higher local fidelity with low values of K.</ns0:p><ns0:p>Since the drug dataset used in Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref> has only F = 10 features, SHAP exhibits a deterministic behaviour and perfect reiteration similarity; local concordance and prescriptivity are also more favorable for SHAP. Figure <ns0:ref type='figure' target='#fig_9'>8</ns0:ref> shows the results for the same metrics on the other three considered datasets (with F = 279, 15 and 30 respectively), computed using the same experimental settings of Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. For these datasets, SHAP does not use the deterministic algorithm, resulting in unstable explanations (B), with limited local concordance (D).</ns0:p><ns0:p>These observations lead us to conclude that SHAP is not more stable than LIME in the general case, </ns0:p></ns0:div>
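Schematically, the P2 comparison just discussed boils down to a loop of the following shape. This is a sketch rather than the actual LEAF interface: explain stands for any LLE method returning (w0, w) for a given conciseness K, and reiteration_similarity refers to the helper sketched earlier in the Methodology section.

    import numpy as np

    def compare_classifiers(classifiers, explain, X_train, y_train, X_eval, K_values, R=50):
        # For each classifier and conciseness level K, explain each evaluation
        # point R times and collect the reiteration similarity distribution.
        results = {}
        for name, clf in classifiers.items():
            clf.fit(X_train, y_train)
            for K in K_values:
                scores = []
                for x in X_eval:
                    feature_sets = []
                    for _ in range(R):
                        w0, w = explain(clf, x, K)              # one LLE of x with K features
                        feature_sets.append(tuple(np.flatnonzero(w)))
                    scores.append(reiteration_similarity(feature_sets))
                results[(name, K)] = scores
        return results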
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>Modern ML techniques are applied to an ever-growing number of domains, but often these ML models are intrinsically unable to provide explanations of their inner decision processes. XAI algorithms tackle this problem by enriching black-box labels with human-understandable explanations in the form of decision trees, rules or linear approximations. Many explainability methods have been proposed over the past years, and the necessity of comparing and evaluating the different explanations they produce has emerged.</ns0:p><ns0:p>Several metrics have been proposed in literature for this task; however, when focusing on local linear explanations, there are no quantitative and systematic ways to assess and compare LLEs. We argue that a toolkit for explanation assessment is fundamental for the adoption of XAI algorithms and, as a consequence, of black-box ML models.</ns0:p><ns0:p>In this paper, we propose a general framework that we believe can be of great interest to most users, practitioners or researchers struggling with interpreting machine learning models via post-hoc explanations. It is worth stressing that the results of our work are not intended to identify the best explainable approach that fits every scenario, but rather to provide a well-grounded framework to provide such assessment case by case. In this direction, we decided to not be constrained to a specific prediction task, but rather to test LEAF on different datasets and models, to fully describe the potential of a systematic approach to evaluate explanations. Our detailed experiments show that even widely adopted methods such as LIME and SHAP are prone to lack of reiteration similarity, low conciseness or even provide insufficient explanations for the correct label (i.e., the K selected features are not enough to categorise the datapoint).</ns0:p><ns0:p>This supports our claim that explanations should always be paired with quality metrics to ensure they meet the requirements set by the user.</ns0:p></ns0:div>
<ns0:div><ns0:p>One of the major limitations of the present study is that we mainly focused on LIME and SHAP; however, we argue these are the two state-of-the-art model-agnostic XAI techniques for Local Linear Explanations. Nevertheless, the described approach can be extended to analyse other explainability methods as well, both model-aware and model-agnostic ones, as long as the provided explanations are LLEs.</ns0:p><ns0:p>Moreover, we restricted our attention to tabular data only, since less structured data are typically converted into tabular form to build interpretable models.</ns0:p><ns0:p>We argue that further analyses of the introduced metrics would be an interesting direction for future work. For instance, the idea of reiteration similarity has been developed as a comparison between sets of features, without taking into account their relative importance, but if the rank of the explanatory features is considered particularly relevant for a specific domain, a weighted reiteration similarity can be defined.</ns0:p><ns0:p>Moreover, reiteration similarity is shown to be one of the major weaknesses of the analysed XAI methods. In particular, explanations of some classifier categories, like neural networks, seem to be plagued by instability. In our experiments, we observed that the LIME method is highly affected by unstable explanations and low local concordance, and SHAP deterministic behaviour is in practice limited to simple datasets only. We believe that the lack of a reiteration similarity control in XAI tools is a major obstacle in their adoption. Future work should target the reiteration similarity metric to build novel ensemble approaches, e.g. providing bagged/boosted explanations, and/or design explanation methods that provide some form of statistical accuracy/precision. Moreover, more complex refinements of this metric could be defined, as already mentioned in Section 3, to target additional aspects like feature rankings <ns0:ref type='bibr' target='#b65'>Webber et al. (2010)</ns0:ref> or feature weights.</ns0:p><ns0:p>Local fidelity as defined in <ns0:ref type='bibr' target='#b52'>(Ribeiro et al., 2016c</ns0:ref>) also shows some limitations: the synthetic neighbourhood generates potentially out-of-sample data points which could lead to unreliable black-box model behaviours, influencing the overall explanation quality. The impact of a local fidelity score with a different sampling distribution is an important research direction that has not been fully addressed in this work.</ns0:p><ns0:p>We also believe that explanations interpreted as prescriptive control strategies could become an important source of information for practical action planning, as illustrated by the heart risk example. To the best of our knowledge, such usage of LLE models has not been fully exploited so far. Measuring the effectiveness of a LLE to reach the decision boundary is a crucial part of any prescriptive strategy, and this is captured by the prescriptivity metric.
Future work in this direction should involve the concept of actionable features, i.e., only consider changes on controllable features inside a constrained range, as well as measuring some form of reiteration similarity on the boundary or considering a trajectory instead of a single boundary point, similarly to individual recourses <ns0:ref type='bibr' target='#b33'>(Joshi et al., 2019)</ns0:ref>.</ns0:p><ns0:p>It is worth noting that the actual prescriptivity definition could indicate out-of-distribution x ′ , which have limited to no practical value. Reducing the feature space to an actionable space could therefore constrain the problem to avoid generating inconsistent x ′ . Moreover, we acknowledge that the blind interpretation of the prescribed sample x ′ should not be considered in a causal way, since the target only flips the black-box prediction. For instance, in Figure <ns0:ref type='figure'>2</ns0:ref> the prescribed change in the highlighted features does not automatically imply that the patient will heal from her underlying health conditions, but just that the modified behaviours and features flip the classification in the decision support system.</ns0:p><ns0:p>A first, organic direction for future work is the inclusion in LEAF of other metrics for explanation evaluation that have been recently introduced in literature. For instance, in (Alvarez-Melis and Jaakkola, 2018) the faithfulness metric is introduced, with the aim of testing if the relevance scores are indicative of 'true' importance. A second candidate is the Area Over the Perturbation Curve <ns0:ref type='bibr' target='#b55'>(Samek et al., 2017)</ns0:ref>,</ns0:p><ns0:p>as it has been shown to be valuable for assessing explanation performances of saliency maps but could be generalized for dealing with tabular data. Moreover, it would be interesting to integrate LEAF with the RemOve And Retrain <ns0:ref type='bibr' target='#b30'>(Hooker et al., 2019)</ns0:ref> iterative process, as it provides an empirical (albeit computationally expensive) method to approximate the actual relevance of the features selected by an explanation method. In principle this could be used to improve the interactive feedback loop proposed for the (P2) scenario. Another direction for future work is to complement the metrics of LEAF framework with a set of sanity checks based on measures of statistical reliability, as suggested in <ns0:ref type='bibr' target='#b63'>(Tomsett et al., 2020)</ns0:ref> for saliency metrics. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1, i.e., final decision process and the model development:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. How LEAF can support the development of a supervised explainable machine learning pipeline. The standard steps for a supervised machine learning pipeline are: given a labeled data set, features are engineered and models are tested to provide the best prediction accuracy. Once the best model (that might also be an intrinsically black-box model) is identified, it can be used to take decisions and classify previously unseen data points (black arrows). Here, post-hoc explanation techniques can be used to understand the black-box decisions by means of white-box explanations. The LEAF framework can induce two feedback loops in the whole pipeline, both to quantify the goodness and choose the more suitable explanation approach (P1, blue arrows), or to sort out which model is providing the most accurate post-hoc explanations, according to the explanation metrics (P2, red arrows).</ns0:figDesc><ns0:graphic coords='6,152.07,229.59,392.88,134.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. LEAF applied to evaluate the quality of the explanations provided by LIME and SHAP on the same data point x, using the same ML model f . The black-box prediction f (x) is 0.72 for the frequent class. (A) shows the output for LIME with K = 4, followed by the four LEAF metrics. (B) shows the SHAP output restricted to the K = 4 most relevant features, whose output values sum up to 0.67 instead of 0.72 due to truncation.</ns0:figDesc><ns0:graphic coords='9,203.77,63.78,289.52,381.58' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Reiteration similarity metric on four datasets and six classifier categories, for two conciseness levels K = 4 and 8.</ns0:figDesc><ns0:graphic coords='10,152.07,63.78,392.91,348.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>1 https://scikit-learn.org/</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Reiteration similarity for LIME (left) and SHAP (right) for multiple Neural Network architectures on the Heartrisk dataset, with K = 4. (A)-(B): single layer architecture, the number of neurons in the single hidden layer changes. Out-of-sample accuracies: 0.677, 0.687, 0.685, 0.745, 0.804 and 0.865. (C)-(D): multiple layers with 5 neurons each. Out-of-sample accuracies: 0.687, 0.695, 0.709, 0.705 and 0.720.</ns0:figDesc><ns0:graphic coords='12,152.07,63.78,392.90,260.53' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Prescriptive use of an explanation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. LEAF used to support the decision of the best explainable classifier (case P2).</ns0:figDesc><ns0:graphic coords='14,183.09,63.78,330.87,468.22' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Observed metrics values across different classification tasks.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,203.77,63.78,289.50,401.96' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the datasets used in the experimental section.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "We are glad that the reviewers found that all their previous observations were properly
addressed and the paper “may contribute well towards the XAI community by providing
more motivations to define the scope of explainability”. Moreover, we want to thank the
editor for his valuable comments. We have fixed all the typos listed below, for which we
do not provide a specific answer in this letter (but their revision can be verified in the
latexdiff). For all remarks that required a more thoughtful discussion, we have written
specific responses below in order to explain how we have addressed them in the paper.
Editor comments (Eibe Frank)
What is the meaning of 'input dataset' in Section 2.1 and how does it affect the standard
deviation used in Equation 2? Surely, it is not the global standard deviation of a feature in the
training data, for example.
We have clarified that \sigma_X is the vector of feature variances, and the neighborhood
is built using the multivariate normal distribution with each feature variance computed
from the global training dataset, regardless of whether the datapoint x to be explained
belongs to the training data or is out-of-sample. This comes directly from LIME paper and
implementation, which uses exactly Equation (2) when sampling around the instance x to
be explained (using the default ‘gaussian’ method, see
https://github.com/marcotcr/lime/blob/master/lime/lime_tabular.py lines 511-541).
Moreover, we noticed that there was some overlap with the \sigma symbol for kernel
width: this has been changed with \gamma to avoid confusions.
'An important observation is that SHAP is deterministic only in its theoretical formulation' -Incorrect! TreeShap is deterministic!
Yes, we were considering the KernelExplainer of SHAP, and not SHAP in the general
case. We have clarified this in the paper.
'an information criterion (like the Akaike criterion or the Bayesian criterion) to get the most
relevant,' -- do you have a reference for how to do this?
We have reformulated the sentence, since conciseness is obtained by keeping the top K
features. We have added a reference to a paper that considers the same feature drop
using Shapley values, and that also considers the general case of feature selection.
'usually measured using the F1 score' -- do you have a reference for this?
We have added a reference to: (Guidotti et al. (2018), pp.92)
'is on the extreme side of the classification.' --- what does this mean?
We have replaced the sentence with: at the highest distance from the boundary.
'R = 50 explanations.' -- It would be good to clarify that this does not refer to the number of
trees in the forest in this context.
We have clarified that R=50 explanations is different from the E=50 estimators. These are
two independent parametric values.
'Only the first explanation is shown, even if LIME/SHAP compute 50 explanations each to get
the metrics.' -- '... even though LIME/SHAP are run 50 times to obtain 50 explanations so that
the metrics, such as reiteration similarity, can be computed.' (the wording here is quite
misleading!)
We have implemented the suggested change.
'kn (k-neighbors with 3 clusters);' -- really? Clusters, not neighbors?
Yes, this was a mistake. We have fixed it.
Figure 8 does not show local fidelity! Explain why.
We have added local fidelity to figure 8, and sorted the metrics as in Figure 7.
Reviewer 2 (Richard J Tomsett)
[...] The only minor point I would raise now concerns the new final paragraphs in Section 5
(Discussion), added to address my point about citing prior metric work in this area. I think the
metrics mentioned - 'faithfulness' (Alvarez-Melis and Jaakola 2018) and AOPC (Samek et al.
2017) - are not quite as specific to particular settings as the authors suggest. Faithfulness does
not have to be used in a self-explaining loop - see e.g. the IBM AIX360 toolbox for a generic
implementation: https://aix360.readthedocs.io/en/latest/metrics.html . Similarly AOPC does not
necessarily have to be used for images & saliency map explanations - its definition is generic,
though it has usually been applied to images/saliency maps. Finally, the reference Tomsett et al.
2020 is about the evaluation of the statistical reliability of explanation metrics, rather than
describing any metrics itself.
This section just needs a small edit to clarify these points and ensure the references are
incorporated correctly. The point about other metrics is not necessarily that they could be added
to the toolbox as further work, but that these represent previous efforts at creating explanation
metrics and so should be cited for completeness. The point in Tomsett et al. 2020 is that any
proposed metrics for explanations need assessing to evaluate their statistical reliability - though
this can certainly be left to future work.
In the previous version of the paragraph we mixed the concept of interactive feedback
loop for model development with a set of metrics to evaluate the model in terms of
explanation quality. The paragraph was probably unclear, and we have rephrased it with a
focus on possible future works.
" | Here is a paper. Please give your review comments after reading it. |
62 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The European Union funded project SAMS (Smart Apiculture Management Services) enhances international cooperation of ICT (Information and Communication Technologies) and sustainable agriculture between EU and developing countries in pursuit of the EU commitment to the UN Sustainable Development Goal 'End hunger, achieve food security and improved nutrition and promote sustainable agriculture'. The project consortium comprises four partners from Europe (two from Germany, Austria, and Latvia) and two partners each from Ethiopia and Indonesia. Beekeeping with small-scale operations provides perfect innovation labs for the demonstration and dissemination of cost-effective and easy-to-use open source ICT applications in developing countries. Within this frame SAMS allows active monitoring and remote sensing of bee colonies and beekeeping by developing ICT solutions supporting the management of bee health and bee productivity as well as a role model for effective international cooperation. By following the User Centred Design (UCD) approach, SAMS addresses the requirements of end-user communities on beekeeping in developing countries and includes these findings in its technological improvements and adaptation as well as in innovative services and business creation based on advanced ICT and remote sensing technologies. SAMS enhances the production of bee products, creates jobs (particularly youths/women), triggers</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Pollination through insects is basic to agricultural and horticultural plants. It has been estimated that 66% of the world's crop species are pollinated by a diverse spectrum of pollinators, including the polylectic honey bee <ns0:ref type='bibr' target='#b37'>(Kremen, Williams and Thorp, 2002;</ns0:ref><ns0:ref type='bibr' target='#b49'>Partap, 2011)</ns0:ref>. The symbiosis of pollinated species and pollinators is in a sensitive balance and the reduction and/or loss of either will affect the survival of both <ns0:ref type='bibr' target='#b1'>(Abrol, 2011;</ns0:ref><ns0:ref type='bibr' target='#b48'>Panday, 2015)</ns0:ref>. The pollination value was estimated to make up between 1 and 2 percent of the global GDP <ns0:ref type='bibr'>(Lippert, Feuerbacher, Narjes, 2020)</ns0:ref>. Thus, the conservation of honey bees and other pollinators is of great interest to maintain biodiversity, to provide the world's food security, and in a broader sense to ensure our existence <ns0:ref type='bibr' target='#b52'>(Potter et al., 2019)</ns0:ref>. The pollination process is crucial for the reproduction of cross-pollinated plant species, increases the yields and enhances their quality <ns0:ref type='bibr' target='#b23'>(Fichtl and Adi, 1994;</ns0:ref><ns0:ref type='bibr' target='#b20'>Eilers et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Admasu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b34'>Klatt et al., 2014)</ns0:ref>. Besides the important aspect of pollination, honey bees also produce a variety of bee products, including honey, beeswax, pollen, royal jelly or propolis which also leads to an economic benefit for the beekeeper (E <ns0:ref type='bibr' target='#b11'>Crane, 1990)</ns0:ref>. Therefore, honey bees do not only play a key role in preserving our ecosystems, but also contribute to a greater income <ns0:ref type='bibr' target='#b8'>(Bradbear, 2009)</ns0:ref>. During the last decade, honey bees got further into the center of the world's attention due to higher colony losses than usual <ns0:ref type='bibr' target='#b47'>(Oldroyd, 2007;</ns0:ref><ns0:ref type='bibr' target='#b63'>van der Zee et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b9'>Brodschneider et al., 2016;</ns0:ref><ns0:ref type='bibr'>2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Gray et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b70'>2020)</ns0:ref>. In 2007, the term colony collapse disorder (CCD) was coined for the depopulation of a honey bee colony <ns0:ref type='bibr' target='#b47'>(Oldroyd, 2007;</ns0:ref><ns0:ref type='bibr' target='#b22'>vanEngelsdorp et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b12'>Dainat, vanEngelsdorp and Neumann, 2012)</ns0:ref>. The reasons for this phenomenon are not yet well understood, but it is suggested that proper hive management lowers the risk of CCD and colony losses <ns0:ref type='bibr'>(Steinhauer, vanEngelsdorp, Saegermann, 2020)</ns0:ref>. Meanwhile, the role of bees for the world's economy and food security is undoubted and therefore not only scientists, but also farmers, ecologists, and policy makers join forces to make efforts in preserving them <ns0:ref type='bibr'>(EFSA, 2013)</ns0:ref>. 
Proper hive management and monitoring for pests, parasites, and diseases, as well as for colony strength, were identified to be crucial factors for honey bee health and productivity and therefore are regarded as vital elements of successful beekeeping <ns0:ref type='bibr'>(EFSA, 2013;</ns0:ref><ns0:ref type='bibr'>Steinhauer, vanEngelsdorp, Saegermann, 2020)</ns0:ref>. To assess those parameters, beekeepers must open the hive and visually inspect it regularly (van der <ns0:ref type='bibr' target='#b63'>Zee et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b14'>Delaplane, van der Steen and Guzman-Novoa, 2013)</ns0:ref>. However, manual monitoring of beehives is a time-consuming process for beekeepers and stressful to bee colonies. Time-consumption even increases with the beekeeping sites' distance to the homesteads, so every inspection also incurs travel costs to beekeepers <ns0:ref type='bibr' target='#b45'>(Meikle and Holst, 2015;</ns0:ref><ns0:ref type='bibr' target='#b69'>Zetterman, 2018)</ns0:ref>. Further, honey bee species and subspecies differ in their behavior <ns0:ref type='bibr' target='#b30'>(Gupta et al., 2014)</ns0:ref>. While the Asian honey bee Apis cerana is known for its gentle temperament and easy handling, African Apis mellifera subspecies are very aggressive, causing safety issues for the beekeepers during hive operation. To facilitate the hive management procedure, the implementation of smart apiary management services is believed to be the future <ns0:ref type='bibr' target='#b6'>(Bencsik et al., 2011;</ns0:ref><ns0:ref type='bibr'>Edwards-Murphy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b45'>Meikle and Holst, 2015;</ns0:ref><ns0:ref type='bibr' target='#b68'>Zacepins et al., 2016)</ns0:ref>. Differing from previous funded European Union projects which focused mainly on European countries SAMS (Smart Apiculture Management Services) received its funding under the specific purpose to target requirements of low and middle income countries in sub-Saharan Africa and ASEAN. In order to reach this goal, information and communication technology (ICT) tools based on remote sensing to monitor the bee colony's health and productivity are used <ns0:ref type='bibr' target='#b67'>(Zacepins et al., 2015)</ns0:ref>. So far, several multi-dimensional monitoring information systems have been developed and applied in 'Precision Beekeeping' <ns0:ref type='bibr'>(Kviesis et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b67'>Zacepins et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rodriguez et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Komasilovs et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Kontogiannis, 2019)</ns0:ref>, but only a few implemented solutions for honey bee data collection offer basic functionality for data analysis and decision making, and hence still need to be improved <ns0:ref type='bibr' target='#b40'>(Kviesis, Zacepins and Riders, 2015)</ns0:ref>. Precision beekeeping is increasingly implemented in Europe, but lags behind in Africa and Asia. The SAMS project focuses on beekeeping in Ethiopia <ns0:ref type='bibr' target='#b15'>(Demisew, 2016;</ns0:ref><ns0:ref type='bibr' target='#b46'>Negash and Greiling, 2017;</ns0:ref><ns0:ref type='bibr' target='#b64'>Wakjira and Alemayehu, 2019)</ns0:ref> and Indonesia <ns0:ref type='bibr' target='#b27'>(Gratzer et al., 2019)</ns0:ref> as in those countries a huge beekeeping potential is recognized but not unlocked yet. 
A combined biological, sociological, and technical approach is made within the SAMS project. It enhances international cooperation of ICT and sustainable agriculture between the EU and developing countries to pursue the EU commitment to the UN Sustainable Development Goal 'End hunger, achieve food security and improved nutrition and promote sustainable agriculture'. The main objectives of SAMS are to develop, refine, and implement an open source remote sensing technology for monitoring the health and productivity of bee colonies. SAMS also aim to foster the regional added benefit and gender equality in employment. Furthermore maintaining honey bees has a high potential to foster sustainable development also in other economic sectors, such as the beekeeping supply chain, forestry, agriculture or the beauty (cosmetics) sectors of developed and developing countries <ns0:ref type='bibr' target='#b8'>(Bradbear, 2009;</ns0:ref><ns0:ref type='bibr' target='#b30'>Gupta et al., 2014)</ns0:ref>. An important asset of this project is the co-creation of local systems to avoid falling into the same trap as other beekeeping programs in developing countries, like ignoring local skills and knowledge <ns0:ref type='bibr'>(Schouten and Lloyd, 2019)</ns0:ref>. Furthermore, SAMS supports cooperation at international and national levels to promote mutual learning and research on open source bee-keeping technology, and best practice bee management for Africa and Asia. This creates jobs, added value products, income and hence contributes to the global fight against hunger <ns0:ref type='bibr' target='#b48'>(Panday, 2015;</ns0:ref><ns0:ref type='bibr' target='#b54'>Roffet-Salque et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b50'>Patel et al., 2020)</ns0:ref>. The aim of this paper is to give an overview of the SAMS project and present ideas and concepts that have been developed considering the needs and requirements of beekeepers, business facilitators, researchers and other stakeholders. The conceptual goals of SAMS and its methodology, which are based on the principles of User Centered Design (UCD) are introduced first, followed by a description of the developed standardized SAMS beehive, and hive monitoring system, which meet the needs of beekeepers in Indonesia and Ethiopia. Complementary to the SAMS hive monitoring system, insights on the developed data warehouse model to facilitate decision support for beekeepers, and SAMS activities, which support the sustainable growth of beekeeping, apiary construction businesses and the bee product market in these countries, are provided.</ns0:p></ns0:div>
<ns0:div><ns0:head>Concept of the SAMS HIVE monitoring</ns0:head><ns0:p>Advanced ICT and remote sensing technologies enhance precision apiculture and help to increase the role of bees in pollination services as well as the production of hive products while maintaining a healthy environment. Precision apiculture is an apiary management strategy based on the monitoring of individual colonies without hive inspection to maximize the productivity of bees <ns0:ref type='bibr' target='#b67'>(Zacepins et al., 2015)</ns0:ref>. Driven and based on the User Centered Design approach, SAMS is an apiary management service based on three pillars:</ns0:p><ns0:p>1. Development of modern and modular hives, adapted to the local context, equipped with a remote measurement system for bee colony behavior, productivity and health status monitoring, 2. Development of a cloud-based Decision Support System (DSS) to implement a management Advisory Support Service (ASS) for the beekeepers, 3. Development of adapted bee management guidelines about seasonal changes, available forage plants, and an ICT-data driven model for needed beekeeping actions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Human Centered Design (HCD) within SAMS</ns0:head><ns0:p>The whole process within SAMS followed a human centered design approach (HCD), (Deutsche Norm, ISO /FDIS 9241-210:2019. Human centered design is a multi-step iterative process (see Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>) which requires defined steps and includes understanding and analysing the context of use, specifying the user requirements, producing design solutions, and evaluating them against those user requirements, if possible, with user participation. All actions and developments within the project were performed in close cooperation and collaboration with the end-users, especially with the focus user group: beekeepers. A thorough user research and context of use analysis has been conducted to understand the preconditions of the local environment as well as the potentials and challenges for a successful technology supported apiculture. In order to understand beekeepers as SAMS focus users better, empirical methods like contextual interviews, observations, surveys, workshops, focus group discussions, and field studies have been undertaken. Results have been documented in the form of personas (https://wiki.sams-project.eu/index.php/Personas, last accessed: 02.12.2020) and as-is scenarios (https://wiki.sams-project.eu/index.php/AS-is_Scenarios, last accessed: 02.12.2020) and presented to all SAMS team members and beekeepers for review and refinement. Based on the review, the SAMS team and beekeepers identified and described user requirements and started a collaborative design thinking process to produce conceptual design solutions and low-level prototypes for essential products around the decision support system and the advisory support service for beekeepers. Those design solutions were iteratively evaluated and refined. With the diverse contexts of implementation in Indonesia and Ethiopia, SAMS must meet the challenge of including culture specific variations in the prototyping process. These culture specific variations considered different beekeeping traditions, different bee types, and climate conditions as well as different languages, different social and political contexts. Multidisciplinary exchange of information and collaboration between local culture experts, beekeeping experts, hardware specialists, database architects, and software engineering specialists were essential. The collaboration was motivated by a common goal to develop technically robust, reliable, easy-to-use, easy to maintain under the specific conditions and affordable services that provided added economic value to the beekeepers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Development and standard of SAMS beehive</ns0:head><ns0:p>One aspect of SAMS is to develop and standardize beekeeping practices within Ethiopia and Indonesia, respectively. To achieve this, the SAMS team constructed and developed a standard SAMS beehive, which can be used in future beekeeping and enables sensor placement and information technology implementation. A modern beehive is an enclosed, man-made structure in which honey bee colonies of the genus Apis are kept for man's economic benefit <ns0:ref type='bibr' target='#b3'>(Atkins, Grout and Dadant & Sons., 1975;</ns0:ref><ns0:ref type='bibr' target='#b11'>Crane, 1990)</ns0:ref>. The design of such a hive should balance the requirements of the colony and convenience for the work of beekeepers. In traditional African hives, honey bees build their natural nest by constructing parallel combs vertically downwards from the roof of the nest cavity almost the same way as they do in wild nests. During comb construction, a space -called 'bee space' -is left between the combs. Bee space, and comb spacing (midrib to midrib distances), and lots of other striking features are found to vary from species to species and among the different subspecies of a species <ns0:ref type='bibr' target='#b58'>(Seeley, 1977;</ns0:ref><ns0:ref type='bibr'>Jensen, 2007)</ns0:ref>. To gain insight into details of the requirements of honey bees, preliminary studies on bee space measurements from different agro ecologies of Ethiopia and assessment of dimensions of different beehive components manufactured in different workshops have been conducted for A. mellifera colonies. For A. cerana requirements, different literatures were assessed and consulted, needs and requirements were analysed <ns0:ref type='bibr'>(Jensen, 2007;</ns0:ref><ns0:ref type='bibr'>Schouten, Lloyd, & Lloyd, 2019)</ns0:ref>. The results from these studies were used in determining the bee space, comb spacing, and other hive dimensions to develop standards and material specifications for new beehives according to the needs and nature of the two honey bee species targets by SAMS. In selecting the prototype to design and develop a standard beehive for SAMS, various available prototypes have been considered. Improved modern beehives such as Langstroth, Dadant, Foam, Zander, and modified Zander have been assessed for their advantage and ease of construction.</ns0:p><ns0:p>All of these prototypes were designed and optimized for A. mellifera and A. cerana. From the preliminary study and literature analysis, dimensions of different parts and procedures required for hive construction were carefully organized for the standard SAMS beehive so that a complete hive system can easily be produced locally and used in the beekeeping industry. For this purpose and the required criteria, Langstroth and its modified version, the Dadant model, were chosen for the standard SAMS beehive. The reasons for choosing these two prototypes were: 1) both hive systems have several hive boxes that can be stacked one above another to expand the hive volume, and have the possibility of confining the queen to the lowest chamber (brood box) by using a queen excluder; 2) familiarity of the hive systems in project countries and beyond. 
Almost all-commercial beekeeping operations throughout Europe, North America, Australia, and parts of South America and Asia and some African countries, operate based on the Langstroth and Dadant types <ns0:ref type='bibr' target='#b3'>(Atkins, Grout and Dadant & Sons., 1975;</ns0:ref><ns0:ref type='bibr' target='#b58'>Segeren and Mulder, 1997)</ns0:ref>. This universality can help to easy adopt the new SAMS beehive system among the beekeeping community, ensuring sustainability of the project; 3) these two beehive types can generate the highest honey yield, due to the option to add supers one above the other easily; 4) standardizing enables consistency of parts production across manufacturers in different workshops in different regions. This will bring hive parts prices down to reasonable levels and opens the opportunity to do business out of beehive production. Therefore, this can assure sustainability and create an impact on productivity and bee health, as this innovation can transform beekeeping activity into a full-scale industry. The proposed beehive system is sketched in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. The complete system consists of a loose bottom board, bottomless brood chamber, supers above brood chamber, inner cover, and outer cover. The bottom or lower chamber is used for the queen to lay eggs, and the supers serve as honey stores. The volume of each chamber is based on the assumption of 10 vertically hanging frames. Between the frames, other parts, and each frame, a bee space of 10 mm for A. mellifera and 9 mm for A. cerana, allows movement of individual workers for comb construction, brood rearing, and storing food. However, the major difference in this development compared to previous prototypes is that the bottom board and inner cover are designed to serve additional purposes. The top part of the bottom board is covered by a wire grid with a 3 x 3 mm mesh size. The mesh allows debris to fall out of the beehive. The mesh floor also allows air circulation in the hive. From the rear side of the bottom board, a slot for placing a mite floor is created for the diagnosis of small arthropod pests like varroa mite, small hive beetle, or sugar ants. The mite floor contains a piece of waterproof plywood of similar size to the bottom area of the brood chamber. For pest control, any glue harmless to bees and products is smeared on the mite floor's upper side. The sticky materials then trap any pests. Another modification in the SAMS beehive is to fit the hive with an inner cover primarily used to cover the uppermost super before the outer cover. The inner cover is designed to prevent death of worker bees during hive operation due to breaking of propolis seal if the only outer cover is used. In this beehive system, the inner cover is designed to additionally serve as a feeder to supply bees with sugar syrup or pollen patty during dearth periods. Proposed dimensions and detailed views of the beehive bottom board is described in the SAMS manual on beehive construction and operation (https://wiki.sams-project.eu/index.php/Bee_Hive_Manual, last accessed: 02.12.2020).</ns0:p></ns0:div>
<ns0:div><ns0:head>SAMS HIVE monitoring system</ns0:head><ns0:p>In modern beekeeping in Europe, precision beekeeping is well established with many commercial systems available for remote bee colony monitoring, mainly recording and transmitting weight measurements <ns0:ref type='bibr' target='#b41'>(Lecocq et al., 2015)</ns0:ref>. Some of these commercial solutions are expensive, and Ethiopian or Indonesian beekeepers cannot afford them. Some systems do not provide data transfer capabilities using mobile networks, and others do not work without a standard power supply. Thus, the SAMS HIVE monitoring system considers specifics of the two target countries and developing countries, based on the local beekeepers' needs. The system contains several functional groups:</ns0:p><ns0:p>1. A power supply with a router to run up to 10 monitoring units; 2. A central computer unit where the sensors are connected; 3. A sensor frame placed in the beehive, including temperature and humidity sensor as well as a microphone; 4. A scale unit positioned beneath the beehive with an outdoor temperature and humidity sensor optionally. The power supply for the monitoring units is provided by a photovoltaic system (power unit) via cables. It consists of the standard components: solar module, charging controller, and battery. The power unit also supplies a mobile GSM Wi-Fi router, which is used as a hotspot for the monitoring units to transfer data to a web server (SAMS data warehouse). The flow chart of the SAMS HIVE system is shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. The monitoring unit consists of a printed circuit board (PCB) with Raspberry Pi Zero W singleboard computer, a step-down converter to change the voltage of the power unit to 5V, and a 24bit analog-to-digital converter (ADC) that converts the Wheatstone bridge signals of the load cell to a digital format. The load cell measures the weight of the colony. The sensor frame with temperature and humidity sensor as well as a microphone is also connected to the computer. This module allows acoustic signals and colony parameters like temperature to be recorded. The acoustics are recorded over a certain timespan and uploaded as a Fast Fourier Transformed (FFT) spectrum and transferred to the SAMS data warehouse. It is recorded with 16 kHz sampling frequency, covering a frequency range from 0 kHz to 8 kHz. The FFT is made with 4096 points resulting in a frequency resolution of approximately 3.9 Hz. The computer can be extended with additional sensors. For example, it is possible to connect a small weather station to collect region-specific climate data or additional temperature sensors to be placed in different hive locations (top, bottom, in frames). A deep sleep mode can be used in between the measuring intervals utilizing a power control unit (WittyPi) in order to reduce energy consumption considerably. As soon as the computer receives power from the power unit, it starts the measuring routine. The measuring routine and the interval can be adjusted remotely via online configuration as required. After a successful recording, the data is transferred via Wi-Fi to the mobile GSM router and sent to the web server (Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>). If the real time upload is not possible, the data remains on the SD card until a successful upload or remote collection has been performed. In this case, a new upload attempt starts after 30 seconds. 
Each device has its ID so that it can be uniquely assigned to the web server. Individual sensors can also be added to users, locations, or groups on the web server. Successful recording, data storage, uploads or errors are logged and transferred to the web server. Events for troubleshooting can be viewed there by administrators. On the device, 2 LEDs indicate working or deep sleep mode. Plug connections ensure easy installation. The sensor frame is connected to the computer via flat cable and IDC connectors. As a power supply connection, a standard DC power plug was selected. In addition to the sensor frame, a case was designed to place the monitoring unit's components. Both cases are 3D printable models (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>). A software was developed to operate the Raspberry Pi and its components as a monitoring system. In order to ensure the simple and long-term availability of the code, a separate SAMS page was created on the GitHub developer platform. The code (sams-app 2.47) can be found open source at https://github.com/sams-project. The GitHub page contains the code to operate the monitoring system, a web application to calibrate the functions and the code to set up a data warehouse. Also, the files to print and build the PCB and cases are available there. The recommended installation is to use a sensor frame placed in a brood frame (Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>). The sensor frame is installed centrally in a brood frame so that the sensors are located in the middle of the brood nest. The price of the SAMS HIVE monitoring system (current version 2) is about 170 €. In addition, there are the expenses for power supply and GSM. The dimensioning of the photovoltaic system for power supply depends on the location, the number of monitoring units and the measuring intervals. The cost of the photovoltaic system is about 200 € and up to ten monitoring units can be powered by it. Modular electronic components were used to ensure the sustainability of the monitoring system. The components can be replaced independently and also be used for other purposes. A recycling plan should support this if necessary. In addition to its expandability, the system can also be set up for other academic and research applications and bee institutes to collect sensor data. Some adjustments and findings are discovered within the SAMS Hive monitoring system's implementation process and usability testing during the UCD process. These findings will significantly contribute to business potential mapping and development. Some of the main findings are: 3. Cheaper monitoring system that is simple and easy to augment to the existing modern beehive is preferable. Some aspects concerning the beekeeping ecosystem in target countries also need to be considered; for example, the Indonesian beekeeping ecosystem is not yet developed as the beekeeping ecosystem in Ethiopia or Europe. This immaturity of the ecosystem resulted in a lack of integrated support from beekeeping stakeholders. So simple technology is considered a better option first to improve the ecosystem.</ns0:p></ns0:div>
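To make the measuring routine described above more concrete, the following is a minimal, hypothetical Python sketch of one measurement cycle: audio capture, a 4096-point FFT of a 16 kHz signal (about 3.9 Hz resolution, 0-8 kHz), sensor readout, and an upload with a 30-second retry. It is not the actual sams-app code; the endpoint URL, device ID, and sensor helper functions are illustrative placeholders, and numpy/requests are assumed dependencies.

```python
# Hypothetical sketch of a SAMS-style measuring routine (not the actual sams-app code).
# Endpoint URL, device ID and sensor-reading helpers are illustrative placeholders.
import time
import numpy as np
import requests

DEVICE_ID = "hive-001"                           # assumed identifier; real IDs are assigned on the web server
DW_URL = "https://example.org/api/measurements"  # placeholder, not the real SAMS endpoint
FS = 16_000                                      # sampling frequency in Hz, as described in the paper
N_FFT = 4096                                     # FFT length -> ~3.9 Hz frequency resolution

def read_sensors():
    """Stand-in for the temperature/humidity sensor and load cell drivers."""
    return {"temperature_c": 34.5, "humidity_pct": 55.0, "weight_kg": 42.3}

def record_audio(seconds=1.0):
    """Stand-in for microphone capture; returns a dummy signal of the right length."""
    t = np.arange(int(FS * seconds)) / FS
    return np.sin(2 * np.pi * 250 * t)           # fake 250 Hz hum instead of real hive sound

def fft_spectrum(signal):
    """Magnitude spectrum of the first N_FFT samples, covering 0-8 kHz."""
    window = np.hanning(N_FFT)
    spectrum = np.abs(np.fft.rfft(signal[:N_FFT] * window))
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / FS)
    return freqs, spectrum

def upload(payload, retry_delay=30, attempts=3):
    """Send one measurement; retry after 30 s if the upload fails, as the routine describes."""
    for _ in range(attempts):
        try:
            requests.post(DW_URL, json=payload, timeout=10).raise_for_status()
            return True
        except requests.RequestException:
            time.sleep(retry_delay)               # in the real system the data would stay on the SD card
    return False

if __name__ == "__main__":
    freqs, spectrum = fft_spectrum(record_audio())
    payload = {"device_id": DEVICE_ID, "sensors": read_sensors(),
               "spectrum": spectrum.tolist(), "freq_resolution_hz": float(freqs[1])}
    upload(payload)
```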
<ns0:div><ns0:head>SAMS data warehouse and decision support system</ns0:head><ns0:p>All the measured data about the behavior of bee colonies, gathered from the HIVE monitoring system, can be stored for further analysis and decision support. For the data storage dedicated data warehouse is developed <ns0:ref type='bibr' target='#b35'>(Komasilovs et al., 2019)</ns0:ref>, which can be considered as an universal system and is able to operate with different data inputs and have flexible data processing algorithms <ns0:ref type='bibr' target='#b39'>(Kviesis et al., 2020)</ns0:ref>. Architecture of the developed DW is demonstrated in Figure <ns0:ref type='figure' target='#fig_9'>8</ns0:ref>. The DW is a fully operational solution, it is storing incoming data in real-time and is providing the infrastructure for the future data analysis, processing and visualisation. The SAMS data warehouse is accessible by the link: https://sams.science.itf.llu.lv/. It is an open source software and it can be used by others to further extend its functionality, develop different user interfaces and/or native mobile applications, and use in new business opportunities. Data warehouse source code is accessible in the GitHub repository: https://github.com/sams-project. For the data analysis several approaches can be used, within the SAMS project a Decision Support System was implemented. Aim of the DSS is to analyze data and compile it into useful knowledge understandable by end-users. For the beekeepers the raw sensory data must be analyzed, interpreted and translated into clear instructions that consider the operational ability and beekeeping knowledge of the users. The main aim of the DSS is to detect and recognize various bee colony states <ns0:ref type='bibr'>(Zacepins et al., 2015)</ns0:ref> and inform the beekeeper about them. Still it needs to be noted that beekeepers remain as the final decision makers and can choose appropriate action and when to take it. For the SAMS project each country context and environmental factors should be thoroughly analyzed to develop specific algorithms that allow safe interpretation. The SAMS DSS has a modular design, consisting of a comprehensive expert interface, which has been developed and adapted together with local beekeepers and which can be used by apiculture experts, e.g. in a service and advisory support centers, to analyze and monitor data. Also, easy to use and understandable applications on smartphones or SMS services are required to alert beekeepers about hives that need attention. The user centred design approach makes sure that technical layout and user interfaces are developed in parallel, based on shared research results. With such a structure of advisory support, local beekeeping experts can assist the beekeepers if needed. At this moment some of the models required for DSS are implemented into the SAMS data warehouse. Also, within the user centred design research a mockup of mobile application interface was created according to local user needs and is publicly available for further elaboration to all interested parties.</ns0:p></ns0:div>
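As a toy illustration of the data flow into the warehouse described above, and not of the actual SAMS DW schema or API, the sketch below persists incoming hive measurements and queries the most recent reading per device. The table and column names are invented for the example; the real data warehouse uses its own vault/report architecture.

```python
# Toy illustration of the data-warehouse idea: persist incoming hive measurements and
# query them for later analysis. This mimics the flow, not the actual SAMS DW schema.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE measurements (
                    device_id TEXT, ts REAL,
                    temperature_c REAL, humidity_pct REAL, weight_kg REAL)""")

def ingest(device_id, temperature_c, humidity_pct, weight_kg):
    """Store one incoming measurement as it arrives from a monitoring unit."""
    conn.execute("INSERT INTO measurements VALUES (?, ?, ?, ?, ?)",
                 (device_id, time.time(), temperature_c, humidity_pct, weight_kg))
    conn.commit()

def latest(device_id):
    """Return the most recent measurement for one hive, e.g. for a dashboard or the DSS."""
    return conn.execute("""SELECT ts, temperature_c, humidity_pct, weight_kg
                           FROM measurements WHERE device_id = ?
                           ORDER BY ts DESC LIMIT 1""", (device_id,)).fetchone()

ingest("hive-001", 34.8, 57.0, 42.6)
print(latest("hive-001"))
```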
<ns0:div><ns0:head>Api-management within SAMS</ns0:head><ns0:p>Api-management is central to the SAMS project, including the contextualizing of local systems focusing on the two target countries Ethiopia and Indonesia, the development of an open source and agile database and a honey bee health and management related capacity building strategy. Even though Europe's beekeeping sector is comparably strong, it relies on honey imports from third countries as its production is not sufficient enough to saturate the market <ns0:ref type='bibr' target='#b25'>(García, 2018)</ns0:ref>. While governmental involvement and subsidized national programs aim to strengthen the stagnated European bee product market, such programs lack completely in Indonesia <ns0:ref type='bibr' target='#b27'>(Gratzer et al, 2019)</ns0:ref>, and are not carried out sustainably enough to set the beekeeping sector of Ethiopia on a par with those of other global players. In Europe, beekeeping has a long tradition and knowledge is accessible by numerous books and journals. Bee health is affected by a diverse spectrum of organisms (protozoa, fungi, bacteria, insects, mites, etc.) <ns0:ref type='bibr' target='#b5'>(Bailey and Ball, 1991;</ns0:ref><ns0:ref type='bibr' target='#b26'>Genersch, 2010)</ns0:ref>, but the parasitic mite Varroa destructor, introduced to Europe, is the major threat to European honey bees <ns0:ref type='bibr' target='#b55'>(Rosenkranz, Aumeier and Ziegelmann, 2010)</ns0:ref>. The varroa mite seems to be no big issue for Ethiopian <ns0:ref type='bibr'>(Gebremedhn et al., 2019)</ns0:ref> nor for Indonesian honey bees but this is not well documented. However, several other organisms affect Ethiopia's bees, including protozoa, fungi, insects, birds and mammals, but with the exception of ants or wax moths, mostly no control methods are applied <ns0:ref type='bibr' target='#b21'>(Ellis and Munn, 2005;</ns0:ref><ns0:ref type='bibr'>Awraris Getachew Shenkute et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b62'>Tesfay, 2014;</ns0:ref><ns0:ref type='bibr' target='#b51'>Pirk et al., 2015)</ns0:ref>. In Ethiopia, beekeeping dates back ~5000 years <ns0:ref type='bibr' target='#b61'>(Tekle and Ababor, 2018)</ns0:ref>, and more than one million households maintain around six million honey bee (A. mellifera) colonies producing more than 50,000 tons of honey per year, making Ethiopia Africa's leading honey and beeswax producer (Degu & Megerssa, 2020). However, Ethiopia's honey sector is far behind its potential of 500,000 tons per year. The reasons include limited access to modern beekeeping practices and equipment, a shortage of trained people, the use of agriculture chemicals, the impact of droughts, absconding and the lack of infrastructure and market facilities <ns0:ref type='bibr' target='#b65'>(Yirga et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b42'>Legesse, 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Fikru and Gebresilassie, 2015;</ns0:ref><ns0:ref type='bibr' target='#b13'>Degu & Megerssa, 2020)</ns0:ref>. The vast majority of hive systems in Ethiopia are traditional, some are classified transitional (top bar hives), only few are classified as modern hives. Traditional hives are made from locally available, but often non-durable materials <ns0:ref type='bibr'>(clay, straw, bamboo, logs, etc.)</ns0:ref>. Even though this kind of hive system requires low starting costs and skills, honey harvesting is always accompanied by destroying large parts of the bees's nest. 
Furthermore, the productivity is considered to be low <ns0:ref type='bibr' target='#b66'>(Yirga and Teferi, 2010;</ns0:ref><ns0:ref type='bibr' target='#b7'>Beyene et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b13'>Degu and Megerssa, 2020)</ns0:ref>. Traditionally, beekeepers gain their knowledge from the family or village <ns0:ref type='bibr' target='#b23'>(Fichtl and Adi, 1994)</ns0:ref>. As training centres are rare in Ethiopia and beekeepers from rural regions often lack infrastructure, the intellectual access to modern beekeeping techniques is restricted. One of the largest bee research institutions in the country is a one hour drive away from the capital Addis Ababa. The Holeta bee research center is involved in educating beekeepers and connecting them by offering training and hard copies of training manuals for beginners and advanced beekeepers including now the SAMS manual for beekeeping equipment production. So far, classic beekeeping training centers do not exist in Indonesia. To be able to establish one, one must face political and social issues first as the awareness of the importance of bees for the ecosystem was reported to be low in the country. Furthermore, in relation to the large Indonesian population size, beekeeping is not widespread and beekeeping-related literature is not readily available <ns0:ref type='bibr' target='#b27'>(Gratzer et al., 2019)</ns0:ref>. Honey hunting has tradition in parts of the country, but managing honey bees in hives is a comparatively young activity in Indonesia. Most beekeepers keep the native Asian honey bee A. cerana, followed by the introduced A. mellifera which is mainly used for migratory beekeeping. While A. cerana is regarded less productive than A. mellifera, it is known for its easy handling and gentle behavior. One major problem identified, similar to Ethiopia, is the absconding behavior of bees. During unfavorable conditions, the colonies leave their hives, resulting in financial losses for beekeepers. Although many reasons for the underdeveloped beekeeping sector overlap with those of Ethiopia, others are specific to Indonesia, such as a lack of quality standards for bee products <ns0:ref type='bibr' target='#b11'>(Crane, 1990;</ns0:ref><ns0:ref type='bibr' target='#b43'>Masterpole et al., 2019)</ns0:ref>. Overall, there has been a sharp increase in beekeeping development publications over the past five years, but compared to Sub-Saharan Africa, the absolute number of publications for South Asia including Indonesia is rather low <ns0:ref type='bibr'>(Shouten, 2020)</ns0:ref>. Due to the limited access and availability of literature, little information is given on bee health issues, control methods or management of honey bees in Indonesia, and therefore more research and lobbying efforts are highly recommended <ns0:ref type='bibr' target='#b27'>(Gratzer et al., 2019)</ns0:ref>. As contextualizing is an ongoing process, an open source knowledge database was developed -the 'SAMSwiki' (https://wiki.sams-project.eu, last accessed 04.12.2020). During the set-up, the SAMSwiki was fed with more than 200 literature sources including a variety of beekeeping related topics like Indonesian and Ethiopian bee sector parameters, bee forage, management options, bee health, as well as funding opportunities for businesses and SAMS-system related content. With its wiki-like approach, the readers can easily become members and contributors and are able to share their expertise with the remaining community. 
Extension of this database to other countries is planned for the future.</ns0:p></ns0:div>
<ns0:div><ns0:head>Possibilities for smart bee management</ns0:head><ns0:p>Managed honey bee colonies need regular monitoring actions. Especially during the active foraging season, external and internal hive inspection is a necessary task for each beekeeper. Those actions are time-consuming, and regular opening of the beehive is a stress factor for the whole colony. With smart management, or precision beekeeping, those mandatory interferences are reduced to a minimum <ns0:ref type='bibr' target='#b6'>(Bencsik et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b45'>Meikle and Holst, 2015;</ns0:ref><ns0:ref type='bibr' target='#b67'>Zacepins et al., 2015)</ns0:ref>. Smart bee management possibilities can be manifold and some of them, including the most relevant ones for the SAMS project, are represented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. We elaborated what-if scenarios for the four most important events. For example, the start of a mass nectar flow indicates honey yield in the near future, and beekeepers estimate this event either by knowing the vegetation in the surroundings, by observing the flight entrance, or by checking the food stores inside the hive; a technical solution would make the beekeepers' work more efficient. Easy-to-understand illustrations have been developed for each important bee colony state, including basic recommendations for the beekeepers. One example can be seen in Figure <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>. The beekeeper gets informed as soon as the weight of the monitored beehive increases by a certain, previously defined percentage. On detection of this event, further actions can be planned without even being present at the apiary. A typical event occurring only in African or Asian colonies is absconding, which has not been studied before using a precision beekeeping approach.</ns0:p></ns0:div>
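As a minimal sketch of the kind of decision-support rule described above (weight-based detection of a mass nectar flow, plus a sharp-loss check that could hint at absconding or swarming), the following function flags day-to-day weight changes beyond chosen thresholds. The threshold values and function name are assumptions for illustration, not the SAMS DSS implementation.

```python
# Minimal sketch of one decision-support rule: alert the beekeeper when the hive weight
# increases by more than a chosen percentage within a day (possible mass nectar flow)
# or drops sharply (possible absconding/swarming). Thresholds are illustrative only.
def detect_weight_events(daily_weights_kg, gain_threshold=0.05, loss_threshold=0.10):
    """daily_weights_kg: list of one weight reading per day for a single hive."""
    events = []
    for day in range(1, len(daily_weights_kg)):
        prev, curr = daily_weights_kg[day - 1], daily_weights_kg[day]
        change = (curr - prev) / prev
        if change >= gain_threshold:
            events.append((day, "possible mass nectar flow", round(change * 100, 1)))
        elif change <= -loss_threshold:
            events.append((day, "check hive: sharp weight loss", round(change * 100, 1)))
    return events

# Example: a ~6% gain on day 2 and a ~15% loss on day 4 would both trigger alerts.
print(detect_weight_events([40.0, 40.5, 43.0, 43.2, 36.5]))
```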
<ns0:div><ns0:head>Business models within SAMS</ns0:head><ns0:p>In addition to the open source remote sensing technology for monitoring the health and productivity of bee colonies, SAMS fosters the regional added benefit by identifying business opportunities and challenges, supporting business model development and thus assisting job creation. Enabling the SAMS team to identify SAMS business models several methods such as co-creation, ideathlon and observation of existing businesses were used. Ethiopia with its great potential in the apiculture sector has a wider range of business compared to Indonesia, and mainly focuses on beekeeping management. There are only few businesses that offer derivative products, while Indonesia has only few businesses that could improve beekeeping management as well as technology-based business. One aspect became very clear during this project sequence -business development in the apiculture sector depends on the country readiness. Several factors indicate this country readiness, e.g. the maturity of the apiculture industry, government support, and age structure (children and young adolescents, the working-age population, and the elderly population). The more mature the apiculture sector in one country, the bigger the support given by the government, the more resources flow, the more flourishing the industry will be. The bigger the working-age population in one country, the more labor is available, the more industries are thriving. The working-age population factor is believed as one of the main factors that determine the growth of the creative industry. In 2018, the working-age population in Ethiopia was 55.26%, in Indonesia 67.59%, and in EU 64.69%. As one of the SAMS goals is to provide a platform for concepts and ideas for local business developments, in order to have a sustainable long-term impact, an overall concept of SAMS business model was created and main obstacles in Ethiopia and Indonesia were identified. The 54 identified SAMS business models are rated based on its correlation to SAMS objectives and are recognized as SAMS business models that contribute in giving added value to this project aims and impact. All SAMs business models remain freely available on the SAMSwiki (https://wiki.sams-project.eu/index.php/SAMS_-_Business_Models, last accessed on 09.12.2020) also after the project end to enable stakeholders around the world to take up SAMS PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52718:1:1:NEW 17 Dec 2020)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science ideas and business concepts and to create a greater position of the apiculture sector in their own countries. Figure <ns0:ref type='figure' target='#fig_11'>10</ns0:ref> illustrates the overall concept of the SAMS business model that involves various stakeholders in the process. In the context of SAMS, the ecosystem pattern can be developed and contextualized in the development of SAMS which is directed to have a wider impact on the development of honey bee businesses. Ecosystem is developed by involving various stakeholders who carry out their respective roles. The SAMS data, research & theory cloud represents all the knowledge acquired & collected during SAMS Project. SAMS technology produced from the research process aims to make beekeeping activities more effective and efficient. 
To implement this product to its beneficiary; beekeepers; the high cost of its production makes it difficult to promote it directly unless funding schemes from collaboration between government and business people and research institutions/universities are considered. The SAMS data, research & theory can be utilized by a wider community, it can be by the government, business people, researchers and universities themselves and bring impact to social value which is illustrated by 'raining impact'. The 'global wind', 'NGO cloud' and 'shared data cloud' represent opportunity and possibility in join research in the future since the SAMS data is available for free. SAMS data that utilized by the government (described as institution mountain) is useful for policies making in the fields of forestry, animal husbandry, agriculture, and the environment. The policy is then derived as an intake of community empowerment, leaders and other driving nodes. This concept is also expected to provide valuable benefits for the stakeholders involved. For beekeepers, bee colony management technology (SAMS) developed is obtained free of charge, as well as raising awareness in protecting the environment and government policies that support beekeepers and environmental communities. For governments, universities and businesses as funders, getting data from the technology applied to the colonies maintained by beekeepers for research and policy making. There are three main directions that support this concept:</ns0:p><ns0:p>1. Practice -The role that individuals play in driving institutional change is the key in building SAMS ecosystem. Much remains to be identified as a potential for development involving many stakeholders. therefore, it need to recognize the importance of key individuals in driving the SAMS ecosystem, and empowering them to further expand (and more importantly to facilitate others to expand). </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The SAMS project developed an open source information and communication technology that allows active monitoring and managing of bee colonies to ensure bee health and bee productivity. For the first time, the focus was on the special conditions of Africa and Asia, including thorough research on actual user needs. Continuous monitoring of variables associated with honey bee colonies, including weight changes, temperature, humidity, acoustics, and activity at the entrance for the detection of different bee colony states such as swarming or a broodless stage, becomes feasible for most practical applications. Established European or North American systems are not designed for the peculiarities that can be expected when monitoring colonies in Africa or Asia. Application of the SAMS design process allows the requirements of beekeeping in different countries and settings to be met, enhancing sustainable agriculture worldwide. To develop SAMS for local contexts, the project collected data from different user groups (individual beekeepers, beekeeping cooperatives, private and public input suppliers such as beehive producers, beekeeping experts and researchers, and others) within the UCD process, which enabled the team to adapt the system to specific requirements. At the end of the project, a greater awareness will be created in Indonesia and Ethiopia regarding beekeeping, its activities, and the opportunities for greater income. There will also be the possibility to use collected data from different regions to better understand the behavior of bees and the environmental aspects, and to support food production and bee farming activities. In addition, an international partnership network will ensure knowledge exchange and mutual learning. The main results of the SAMS project are: a) a manual for the SAMS monitoring beehive model, which is locally produced and adapted to local conditions, including integrated open source sensor and information transmission technology as well as an energy-supply solution; b) the SAMS data warehouse, which can be individually adapted; c) a decision support system interface that can combine the sensor-based data outputs with other information sources and predictive models to measure, analyze and describe different states of the bee colony, such as health, vitality and production; d) the SAMSwiki, which provides knowledge on beekeeping in Ethiopia and Indonesia but also for other regions; and e) 54 SAMS business models for greater income opportunities and related upscaling potential.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Ranking of smart management possibilities for bee colony state detection in Ethiopia and Indonesia. States are ranked based on their importance to the beekeepers in the target countries. Bold events/states were identified to be most relevant for the SAMS project. Asterisks (*) rank the importance, technical feasibility, grade of innovation (if a solution that could be used for specific state detection already exists) and predictability of each event or colony state.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1.</ns0:head><ns0:label /><ns0:figDesc>Beekeepers have a limited budget, and technology is not yet considered in beekeeping practices. 2. Local beekeepers found it valuable to monitor trap-hives (modern beehives used to trap a new bee colony) placed deep in the forest, so the power source became the main concern for such systems.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>2. Institutional - International partnerships were initiated to support the SAMS ecosystem on business development, bee colony data & knowledge exchange, and apiculture technology & services. Furthermore, SAMS Technology established a social innovation to engage more socially aspirational younger generations (i.e. their customers) to be more involved in the Honey & Bee Industry. 3. Systemic - Addressing the key social problems facing the SAMS Technology application in Indonesia. The market survey supported the research by mapping participant survey responses, including all respondent-identified potential in supporting the future business model of the SAMS application. Wealth was also identified in the interviews as a key determinant of all these other issues, including how to develop the SAMS business and maintain its sustainability, showing the interrelated nature of technological and social problems and reinforcing the need for a collaborative, multi-agency approach to solving the challenges in implementing the SAMS technology.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Human Centred Design Process applied in SAMS project for development of interactive systems. Significant is the user participation in this process, iterations of product design as well as iterations of context of use analysis are driven by user feedback. Interdependence of human-centred design activities [ISO /FDIS 9241-210:2019].</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2. A complete proposed SAMS beehive system sketch. Sketch describes all parts of the beehive -bottom board with bee entrance, brood chamber with frames, honey suppers and the top cover.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Flow chart of the SAMS HIVE system. Power unit, scale unit, sensor frame and data warehouse.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. SAMS HIVE device. Measurement device with ports and status LED.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. SAMS HIVE case. PCB and components placed in a 3D printable case.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. SAMS HIVE sensor frame. Sensors are installed in a 3D printable case placed in a regular brood frame and connected with flat cable to SAMS HIVE device.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Placement of SAMS HIVE system. Sketch of a common Dadant beehive with placement of: (1) Sensor frame in a brood frame, (2) HIVE case and (3) Scale unit.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure8. Architecture of the developed SAMS data warehouse. Main DW components are shown in frames (Core, WebApi, User interface). Cubes represent various processing units interacting with each other, cylinders represent persistent storage, pipes (horizontal cylinders) represent communication channels. Vaults and Reports in DW Core are independent processing units with dedicated storage<ns0:ref type='bibr' target='#b35'>(Komasilovs et al., 2019)</ns0:ref>.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Exemplary illustration of the nectar flow as one smart bee management possibility. Mass nectar flow is detected by the SAMS hive monitoring and decision support system, which triggers an alert on smartphones and recommendations for beekeepers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure10. Overall concept of the SAMS business model. Collaboration between government, university and business for achieving the specific goals is demonstrated in the concept.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,229.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,204.37,525.00,312.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,426.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,229.87,525.00,403.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,280.87,525.00,177.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,229.87,525.00,393.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "Rebuttal letter: Smart apiculture management services for developing countries - the case of SAMS project in Ethiopia and Indonesia
Dear Editor,
We thank you for the comments provided by the two reviewers and for the editor's review. We changed our article accordingly and also revised many other parts, including the language. We hope that the article is now much more readable. Some new references and two new figures were included, and several existing figures were improved or updated. The section 'SAMS service design' was deleted.
A point by point rebuttal can be found below.
On behalf of all authors,
Assoc. prof. dr.sc.ing. Aleksejs Zacepins (Latvia University of Life Sciences and Technologies)
Editor review:
It is my opinion as the Academic Editor for your article - Smart apiculture management services for developing countries - the case of SAMS project in Ethiopia and Indonesia - that it requires a number of Major Revisions.
Firstly, I must apologise for the delay in returning these reviewer comments to you. I was pleased to see that both reviewers have welcomed your manuscript, and R2 in particular offers a number of minor revision suggestions. I also found the manuscript very informative, and clearly indicates that a considerable amount of work has been accomplished by the SAMS project stakeholders.
Unfortunately, I have noted several areas where the text of the manuscript needs revision in order to improve both the paper's clarity and rigor.
1. Please take a look at the attached PDF where I have provided suggested rewordings, and noted sections that should be rewritten or restructured (particularly the final sections describing the SAMS Business Model and Conclusions sections). If possible, I recommend a proof reader with excellent written English is consulted prior to submitting your revised manuscript.
• We thank you very much for your 163 comments in the PDF. We incorporated almost all of these changes as requested and believe they greatly improve the clarity of the article. Some comments became obsolete because, following the major comments, we decided to rewrite the section completely.
2. Many of the figures lacked clear titles and legends describing their essential message. I have added notes explicitly to some, but please also refer to the instructions to authors in this regard https://peerj.com/about/author-instructions/cs#figures
• The captions of all Figures have been improved. They now follow the concept of a heading sentence at the beginning.
3. Please ensure all code, design documents and data files cited in the paper are properly versioned (e.g. tagged in a git repository and/or uploaded to Zenodo https://zenodo.org/, or given a versioned URI hosted on SAMS own servers). Where data or statistics have been obtained from other sources (e.g. FAOSTAT - http://www.fao.org/faostat/en/#home) please give date of access and a specific URL to allow others to view the data at its source. Data may also be downloaded and archived (e.g. via Figshare or Zenodo) if a specific URL cannot be obtained.
• FAO - We removed the link to FAO and replaced it with a new publication (Degu & Megerssa, 2020)
• Unspecified URLs have been removed or specified
• Now all SAMS source code is publicly available in a github repository: https://github.com/sams-project
◦ sams-dw-core
◦ sams-dw
◦ sams-app
4. Clearly and consistently indicate the progress of each part of the project - e.g. the Data warehouse is variably described in the text as 'proposed', and a smart phone app is mentioned but no further details are presented. Ongoing work and future plans are certainly relevant for a paper like this, particularly if there is an opportunity for others to contribute (e.g. through open source software development), so please consider highlighting such further work in the final section of the paper.
• The section 'Data warehouse' has been renamed to 'SAMS data warehouse and decision support system' and rewritten based on your comments and suggestions.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
[# PeerJ Staff Note: The Academic Editor has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]
• English language is improved in the manuscript by the authors.
Reviewer 1 (Anonymous)
Basic reporting
I really enjoyed this article. It is well done and clearly presented. The last paragraph of text in the conclusion section seemed to have some formatting issues, but otherwise did not see any issues.
• Thank you for the kind words, we fixed the formatting issues.
Experimental design
This was a case study, so there was not much in the way of experimental design. The project was presented and explained well and clearly, including acknowledging and addressing potential weaknesses.
Validity of the findings
Very nice as a case study. Hard to question the validity of their experience.
• As some time has passed since the original submission, more validated results were added to the revised article, for example for the HIVE monitoring system, where more specifications and two figures were added.
Comments for the Author
I very much enjoyed reading this article.
• Thank you.
Reviewer 2 (Anonymous)
Basic reporting
The paper presents a useful tool for developing beekeeping solutions.
The process of design/development was well described and defined.
Moreover, the authors also proposed a standard beehive, hopefully, useful for the two target countries. Other strengths of the work:
- Clear and unambiguous, professional English used throughout.
- Literature references, sufficient field background/context provided.
- Professional article structure, figures, tables.
Work weakness:
- Lack of field experimentation.
• The description of the field experimentation and validation of the SAMS monitoring system, including the first field results, is part of another publication, as including this material in this manuscript would greatly exceed the length limits of the article.
Experimental design
Strengths of the work related to experimental design:
- Original primary research within the aims and scope of the journal.
- Methods described with sufficient detail.
Work weakness
The authors must define a research question.
• The interdisciplinary approach used during this three-year project followed the human-centered design approach summarized in the respective section. We hesitate to add an explicit research question section, as the study did not test hypotheses and therefore does not contain a classic 'results' section either. The main goal, the development of hive monitoring systems for developing countries based on user needs, is thoroughly stated.
Validity of the findings
Conclusions are well stated, but not linked to the research question.
• See comment above about the research question.
Comments for the Author
All text - Highlight the names of bee species; preferably use italics.
• Done, all species names are now in Italics
Line 58 - Please verify the Eva Crane format citation.
• Done, format citation was verified.
Line 111 - Which other?
• Changed to “other stakeholders”.
line 113 - API management.
• Changed
Line 137 to 142 - please avoid the use of quotations.
• Changed
figure 1 - the 'design solution' shouldn't be before 'producing solution'?
• The figure has been changed.
line 300 - Is it 200 € for each monitoring unit?
• This has been clarified in the text
line 308 - Should be stored or was stored.
• Changed to “can be stored”
In figure 6 - what means the blue and red cubes in this figure? Please insert a subtitle for each element.
• The changes have been made to the description of the figure
line 355 - Around 95% of …
• The requested revision became obsolete, as we re-wrote this part.
lines 478 to 484 - a problem in the draft formation.
• corrected!
" | Here is a paper. Please give your review comments after reading it. |
63 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The European Union funded project SAMS (Smart Apiculture Management Services) enhances international cooperation on ICT (Information and Communication Technologies) and sustainable agriculture between the EU and developing countries, in pursuit of the EU commitment to the UN Sustainable Development Goal 'End hunger, achieve food security and improved nutrition and promote sustainable agriculture'. The project consortium comprises four partners from Europe (two from Germany and one each from Austria and Latvia) and two partners each from Ethiopia and Indonesia. Beekeeping with small-scale operations provides suitable innovation labs for the demonstration and dissemination of cost-effective and easy-to-use open source ICT applications in developing countries. SAMS enables active monitoring and remote sensing of bee colonies and beekeeping by developing ICT solutions that support the management of bee health and bee productivity, and it serves as a role model for effective international cooperation. By following the User Centred Design (UCD) approach, SAMS addresses the requirements of end-user beekeeping communities in developing countries and incorporates these findings into its technological improvements and adaptation, as well as into innovative services and business creation based on advanced ICT and remote sensing technologies. SAMS enhances the production of bee products, creates jobs (particularly for youths and women), triggers investments, and establishes knowledge</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Pollination through insects is basic to agricultural and horticultural plants. It has been estimated that 66% of the world's crop species are pollinated by a diverse spectrum of pollinators, including the polylectic honey bee <ns0:ref type='bibr' target='#b37'>(Kremen, Williams and Thorp, 2002;</ns0:ref><ns0:ref type='bibr' target='#b48'>Partap, 2011)</ns0:ref>. The symbiosis of pollinated species and pollinators is in a sensitive balance and the reduction and/or loss of either will affect the survival of both <ns0:ref type='bibr' target='#b0'>(Abrol, 2011;</ns0:ref><ns0:ref type='bibr' target='#b47'>Panday, 2015)</ns0:ref>. The pollination value was estimated to make up between 1 and 2 percent of the global GDP <ns0:ref type='bibr'>(Lippert, Feuerbacher, Narjes, 2020)</ns0:ref>. Thus, the conservation of honey bees and other pollinators is of great interest to maintain biodiversity, to provide the world's food security, and in a broader sense to ensure our existence <ns0:ref type='bibr' target='#b51'>(Potter et al., 2019)</ns0:ref>. The pollination process is crucial for the reproduction of cross-pollinated plant species, increases the yields and enhances their quality <ns0:ref type='bibr' target='#b22'>(Fichtl and Adi, 1994;</ns0:ref><ns0:ref type='bibr' target='#b19'>Eilers et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b1'>Admasu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b33'>Klatt et al., 2014)</ns0:ref>. Besides the important aspect of pollination, honey bees also produce a variety of bee products, including honey, beeswax, pollen, royal jelly or propolis which also leads to an economic benefit for the beekeeper <ns0:ref type='bibr' target='#b10'>(Crane, 1990)</ns0:ref>. Therefore, honey bees do not only play a key role in preserving our ecosystems, but also contribute to a greater income <ns0:ref type='bibr' target='#b7'>(Bradbear, 2009)</ns0:ref>. During the last decade, honey bees got further into the center of the world's attention due to higher colony losses than usual <ns0:ref type='bibr' target='#b46'>(Oldroyd, 2007;</ns0:ref><ns0:ref type='bibr' target='#b61'>van der Zee et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brodschneider et al., 2016;</ns0:ref><ns0:ref type='bibr'>2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Gray et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b69'>2020)</ns0:ref>. In 2007, the term colony collapse disorder (CCD) was coined for the depopulation of a honey bee colony <ns0:ref type='bibr' target='#b46'>(Oldroyd, 2007;</ns0:ref><ns0:ref type='bibr' target='#b21'>vanEngelsdorp et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Dainat, vanEngelsdorp and Neumann, 2012)</ns0:ref>. The reasons for this phenomenon are not yet well understood, but it is suggested that proper hive management lowers the risk of CCD and colony losses <ns0:ref type='bibr'>(Steinhauer, vanEngelsdorp, Saegermann, 2020)</ns0:ref>. Meanwhile, the role of bees for the world's economy and food security is undoubted and therefore not only scientists, but also farmers, ecologists, and policy makers join forces to make efforts in preserving them <ns0:ref type='bibr'>(EFSA, 2013)</ns0:ref>. 
Proper hive management and monitoring for pests, parasites, and diseases, as well as for colony strength, were identified to be crucial factors for honey bee health and productivity and therefore are regarded as vital elements of successful beekeeping <ns0:ref type='bibr'>(EFSA, 2013;</ns0:ref><ns0:ref type='bibr'>Steinhauer, vanEngelsdorp, Saegermann, 2020)</ns0:ref>. To assess those parameters, beekeepers must open the hive and visually inspect it regularly (van der <ns0:ref type='bibr' target='#b61'>Zee et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b13'>Delaplane, van der Steen and Guzman-Novoa, 2013)</ns0:ref>. However, manual monitoring of beehives is a time-consuming process for beekeepers and stressful to bee colonies. Time-consumption even increases with the beekeeping sites' distance to the homesteads, so every inspection also incurs travel costs to beekeepers <ns0:ref type='bibr' target='#b44'>(Meikle and Holst, 2015;</ns0:ref><ns0:ref type='bibr' target='#b68'>Zetterman, 2018)</ns0:ref>. Further, honey bee species and subspecies differ in their behavior <ns0:ref type='bibr' target='#b29'>(Gupta et al., 2014)</ns0:ref>. While the Asian honey bee Apis cerana is known for its gentle temperament and easy handling, African Apis mellifera subspecies are very aggressive, causing safety issues for the beekeepers during hive operation. To facilitate the hive management procedure, the implementation of smart apiary management services is believed to be the future <ns0:ref type='bibr' target='#b5'>(Bencsik et al., 2011;</ns0:ref><ns0:ref type='bibr'>Edwards-Murphy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b44'>Meikle and Holst, 2015;</ns0:ref><ns0:ref type='bibr' target='#b67'>Zacepins et al., 2016)</ns0:ref>. Differing from previous funded European Union projects which focused mainly on European countries, SAMS (Smart Apiculture Management Services) received its funding under the specific purpose to target requirements of low and middle income countries in sub-Saharan Africa and ASEAN. In order to reach this goal, information and communication technology (ICT) tools based on remote sensing to monitor the bee colony's health and productivity are used <ns0:ref type='bibr' target='#b66'>(Zacepins et al., 2015)</ns0:ref>. So far, several multi-dimensional monitoring information systems have been developed and applied in 'Precision Beekeeping' <ns0:ref type='bibr'>(Kviesis et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b66'>Zacepins et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b52'>Rodriguez et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Komasilovs et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Kontogiannis, 2019)</ns0:ref>, but only a few implemented solutions for honey bee data collection offer basic functionality for data analysis and decision making, and hence still need to be improved <ns0:ref type='bibr' target='#b40'>(Kviesis, Zacepins and Riders, 2015)</ns0:ref>. Precision beekeeping is increasingly implemented in Europe, but lags behind in Africa and Asia. The SAMS project focuses on beekeeping in Ethiopia <ns0:ref type='bibr' target='#b14'>(Demisew, 2016;</ns0:ref><ns0:ref type='bibr' target='#b45'>Negash and Greiling, 2017;</ns0:ref><ns0:ref type='bibr' target='#b62'>Wakjira and Alemayehu, 2019)</ns0:ref> and Indonesia <ns0:ref type='bibr' target='#b26'>(Gratzer et al., 2019)</ns0:ref> as in those countries a huge beekeeping potential is recognized but not unlocked yet. 
A combined biological, sociological, and technical approach is made within the SAMS project. It enhances international cooperation of ICT and sustainable agriculture between the EU and developing countries to pursue the EU commitment to the UN Sustainable Development Goal 'End hunger, achieve food security and improved nutrition and promote sustainable agriculture'. The main objectives of SAMS are to develop, refine, and implement an open source remote sensing technology for monitoring the health and productivity of bee colonies. SAMS also aim to foster the regional added benefit and gender equality in employment. Furthermore, maintaining honey bees has a high potential to foster sustainable development also in other economic sectors, such as the beekeeping supply chain, forestry, agriculture or the beauty (cosmetics) sectors of developed and developing countries <ns0:ref type='bibr' target='#b7'>(Bradbear, 2009;</ns0:ref><ns0:ref type='bibr' target='#b29'>Gupta et al., 2014)</ns0:ref>. An important asset of this project is the co-creation of local systems to avoid falling into the same trap as other beekeeping programs in developing countries, like ignoring local skills and knowledge <ns0:ref type='bibr'>(Schouten and Lloyd, 2019)</ns0:ref>. Furthermore, SAMS supports cooperation at international and national levels to promote mutual learning and research on open source bee-keeping technology, and best practice bee management for Africa and Asia. This creates jobs, adds value to products and income, and hence contributes to the global fight against hunger <ns0:ref type='bibr' target='#b47'>(Panday, 2015;</ns0:ref><ns0:ref type='bibr' target='#b53'>Roffet-Salque et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b49'>Patel et al., 2020)</ns0:ref>. The aim of this paper is to give an overview of the SAMS project and present ideas and concepts that have been developed considering the needs and requirements of beekeepers, business facilitators, researchers and other stakeholders. The conceptual goals of SAMS and its methodology, which are based on the principles of User Centered Design (UCD) are introduced first, followed by a description of the developed standardized SAMS beehive, and hive monitoring system, which meet the needs of beekeepers in Indonesia and Ethiopia. Complementary to the SAMS hive monitoring system, insights on the developed data warehouse model to facilitate decision support for beekeepers, and SAMS activities, which support the sustainable growth of beekeeping, apiary construction businesses and the bee product market in these countries, are provided.</ns0:p></ns0:div>
<ns0:div><ns0:head>Concept of the SAMS HIVE monitoring</ns0:head><ns0:p>Advanced ICT and remote sensing technologies enhance precision apiculture and help to increase the role of bees in pollination services as well as the production of hive products while maintaining a healthy environment. Precision apiculture is an apiary management strategy based on the monitoring of individual colonies without hive inspection to maximize the productivity of bees <ns0:ref type='bibr' target='#b66'>(Zacepins et al., 2015)</ns0:ref>. Driven and based on the User Centered Design approach, SAMS is an apiary management service based on three pillars:</ns0:p><ns0:p>1. Development of modern and modular hives, adapted to the local context, equipped with a remote measurement system for bee colony behavior, productivity and health status monitoring, 2. Development of a cloud-based Decision Support System (DSS) to implement a management Advisory Support Service (ASS) for the beekeepers, 3. Development of adapted bee management guidelines about seasonal changes, available forage plants, and an ICT-data driven model for needed beekeeping actions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Human Centered Design (HCD) within SAMS</ns0:head><ns0:p>The whole process within SAMS followed a human centered design approach (HCD), <ns0:ref type='bibr'>(Deutsche Norm, ISO /FDIS 9241-210:2019)</ns0:ref>. Human centered design is a multi-step iterative process (see Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>) which requires defined steps and includes understanding and analysing the context of use, specifying the user requirements, producing design solutions, and evaluating them against those user requirements, if possible, with user participation. All actions and developments within the project were performed in close cooperation and collaboration with the end-users, especially with the focus user group: beekeepers. A thorough user research and context of use analysis has been conducted to understand the preconditions of the local environment as well as the potentials and challenges for a successful technology supported apiculture. In order to understand beekeepers as SAMS focus users better, empirical methods like contextual interviews, observations, surveys, workshops, focus group discussions, and field studies have been undertaken. Results have been documented in the form of personas (https://wiki.sams-project.eu/index.php/Personas, last accessed: 18.02.2021) and as-is scenarios (https://wiki.sams-project.eu/index.php/AS-is_Scenarios, last accessed: 18.02.2021) and presented to all SAMS team members and beekeepers for review and refinement. Based on the review, the SAMS team and beekeepers identified and described user requirements and started a collaborative design thinking process to produce conceptual design solutions and low-level prototypes for essential products around the decision support system and the advisory support service for beekeepers. Those design solutions were iteratively evaluated and refined. With the diverse contexts of implementation in Indonesia and Ethiopia, SAMS must meet the challenge of including culture specific variations in the prototyping process. These culture specific variations considered different beekeeping traditions, different bee types, and climate conditions as well as different languages, different social and political contexts. Multidisciplinary exchange of information and collaboration between local culture experts, beekeeping experts, hardware specialists, database architects, and software engineering specialists were essential. The collaboration was motivated by a common goal to develop technically robust, reliable, easy-to-use, easy to maintain under the specific conditions and affordable services that provided added economic value to the beekeepers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Development and standard of SAMS beehive</ns0:head><ns0:p>One aspect of SAMS is to develop and standardize beekeeping practices within Ethiopia and Indonesia, respectively. To achieve this, the SAMS team constructed and developed a standard SAMS beehive, which can be used in future beekeeping and enables sensor placement and information technology implementation. A modern beehive is an enclosed, man-made structure in which honey bee colonies of the genus Apis are kept for man's economic benefit <ns0:ref type='bibr' target='#b2'>(Atkins, Grout and Dadant & Sons., 1975;</ns0:ref><ns0:ref type='bibr' target='#b10'>Crane, 1990)</ns0:ref>. The design of such a hive should balance the requirements of the colony and convenience for the work of beekeepers. In traditional African hives, honey bees build their natural nest by constructing parallel combs vertically downwards from the roof of the nest cavity almost the same way as they do in wild nests. During comb construction, a space -called 'bee space' -is left between the combs. Bee space, and comb spacing (midrib to midrib distances), and lots of other striking features are found to vary from species to species and among the different subspecies of a species <ns0:ref type='bibr' target='#b57'>(Seeley, 1977;</ns0:ref><ns0:ref type='bibr'>Jensen, 2007)</ns0:ref>. To gain insight into details of the requirements of honey bees, preliminary studies on bee space measurements from different agroecologies of Ethiopia and assessment of dimensions of different beehive components manufactured in different workshops have been conducted for A. mellifera colonies. For A. cerana requirements, different literatures were assessed and consulted, needs and requirements were analysed <ns0:ref type='bibr'>(Jensen, 2007;</ns0:ref><ns0:ref type='bibr'>Schouten, Lloyd, & Lloyd, 2019)</ns0:ref>. The results from these studies were used in determining the bee space, comb spacing, and other hive dimensions to develop standards and material specifications for new beehives according to the needs and nature of the two honey bee species targets by SAMS. In selecting the prototype to design and develop a standard beehive for SAMS, various available prototypes have been considered. Improved modern beehives such as Langstroth, Dadant, Foam, Zander, and modified Zander have been assessed for their advantage and ease of construction.</ns0:p><ns0:p>All of these prototypes were designed and optimized for A. mellifera and A. cerana. From the preliminary study and literature analysis, dimensions of different parts and procedures required for hive construction were carefully organized for the standard SAMS beehive so that a complete hive system can easily be produced locally and used in the beekeeping industry. For this purpose and the required criteria, Langstroth and its modified version, the Dadant model, were chosen for the standard SAMS beehive. The reasons for choosing these two prototypes were: 1) both hive systems have several hive boxes that can be stacked one above another to expand the hive volume, and have the possibility of confining the queen to the lowest chamber (brood box) by using a queen excluder; 2) familiarity of the hive systems in project countries and beyond. 
Almost all-commercial beekeeping operations throughout Europe, North America, Australia, and parts of South America and Asia and some African countries, operate based on the Langstroth and Dadant types <ns0:ref type='bibr' target='#b2'>(Atkins, Grout and Dadant & Sons., 1975;</ns0:ref><ns0:ref type='bibr' target='#b57'>Segeren and Mulder, 1997)</ns0:ref>. This universality can help to ease the adoption of the new SAMS beehive system among the beekeeping community, ensuring sustainability of the project; 3) these two beehive types can generate the highest honey yield, due to the option to add supers one above the other easily; 4) standardizing enables consistency of parts production across manufacturers in different workshops in different regions. This will bring hive parts prices down to reasonable levels and opens the opportunity to do business out of beehive production. Therefore, this can assure sustainability and create an impact on productivity and bee health, as this innovation can transform beekeeping activity into a full-scale industry. The proposed beehive system is sketched in Figure <ns0:ref type='figure'>2</ns0:ref>. The complete system consists of a loose bottom board, bottomless brood chamber, supers above brood chamber, inner cover, and outer cover. The bottom or lower chamber is used for the queen to lay eggs, and the supers serve as honey stores. The volume of each chamber is based on the assumption of 10 vertically hanging frames. Between the frames, other parts, and each frame, a bee space of 10 mm for A. mellifera and 9 mm for A. cerana, allows movement of individual workers for comb construction, brood rearing, and storing food. However, the major difference in this development compared to previous prototypes is that the bottom board and inner cover are designed to serve additional purposes. The top part of the bottom board is covered by a wire grid with a 3 x 3 mm mesh size. The mesh allows debris to fall out of the beehive. The mesh floor also allows air circulation in the hive. From the rear side of the bottom board, a slot for placing a mite floor is created for the diagnosis of small arthropod pests like varroa mite, small hive beetle, or sugar ants. The mite floor contains a piece of waterproof plywood of similar size to the bottom area of the brood chamber. For pest control, any glue harmless to bees and products is smeared on the mite floor's upper side. The sticky materials then trap any pests. Another modification in the SAMS beehive is to fit the hive with an inner cover primarily used to cover the uppermost super before the outer cover. The inner cover is designed to prevent death of worker bees during hive operation due to breaking of propolis seal if the only outer cover is used. In this beehive system, the inner cover is designed to additionally serve as a feeder to supply bees with sugar syrup or pollen patty during dearth periods. Proposed dimensions and detailed views of the beehive bottom board is described in the SAMS manual on beehive construction and operation (https://wiki.sams-project.eu/index.php/Bee_Hive_Manual, last accessed: 18.02.2021).</ns0:p></ns0:div>
<ns0:div><ns0:head>SAMS HIVE monitoring system</ns0:head><ns0:p>In modern beekeeping in Europe, precision beekeeping is well established with many commercial systems available for remote bee colony monitoring, mainly recording and transmitting weight measurements <ns0:ref type='bibr' target='#b41'>(Lecocq et al., 2015)</ns0:ref>. Some of these commercial solutions are expensive, and Ethiopian or Indonesian beekeepers cannot afford them. Some systems do not provide data transfer capabilities using mobile networks, and others do not work without a standard power supply. Thus, the SAMS HIVE monitoring system considers specifics of the two target countries and developing countries, based on the local beekeepers' needs. The system contains several functional groups:</ns0:p><ns0:p>1. A power supply with a router to run up to 10 monitoring units; 2. A central computer unit where the sensors are connected; 3. A sensor frame placed in the beehive, including temperature and humidity sensor as well as a microphone; 4. A scale unit positioned beneath the beehive with an optional sensor for outdoor temperature and humidity monitoring. The architecture diagram of the SAMS HIVE system is shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. The power supply for the monitoring units is provided by a photovoltaic system (referred to as power unit) via cables. It consists of the standard components: solar module, charging controller, and battery. The power unit also supplies a mobile GSM Wi-Fi router, which is used as a hotspot for the monitoring units to transfer data to a web server (SAMS data warehouse). The monitoring unit consists of a printed circuit board (PCB) with Raspberry Pi Zero W singleboard computer, a step-down converter to change the voltage of the power unit to 5V, and a 24bit analog-to-digital converter (ADC) that converts the Wheatstone bridge signals of the load cell to a digital format. The load cell measures the weight of the colony. The sensor frame with temperature and humidity sensor as well as a microphone is also connected to the computer. This module allows acoustic signals and colony parameters like temperature to be recorded. The acoustics are recorded over a certain timespan and uploaded as a Fast Fourier Transformed (FFT) spectrum and transferred to the SAMS data warehouse. It is recorded with 16 kHz sampling frequency, covering a frequency range from 0 kHz to 8 kHz. The FFT is made with 4096 points resulting in a frequency resolution of approximately 3.9 Hz. The computer can be extended with additional sensors. For example, it is possible to connect a small weather station to collect region-specific climate data or additional temperature sensors to be placed in different hive locations (top, bottom, in frames). A deep sleep mode can be used in between the measuring intervals utilizing a power control unit (WittyPi) in order to reduce energy consumption considerably. As soon as the computer receives power from the power unit, it starts the measuring routine. The measuring routine and the interval can be adjusted remotely via online configuration as required. After a successful recording, the data is transferred via Wi-Fi to the mobile GSM router and sent to the web server (Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>). If the real time upload is not possible, the data remains on the SD card until a successful upload or remote collection has been performed. In this case, a new upload attempt starts after 30 seconds. 
Each device has its ID so that it can be uniquely assigned to the web server. Individual sensors can also be added to users, locations, or groups on the web server. Successful recording, data storage, uploads or errors are logged and transferred to the web server. Events for troubleshooting can be viewed there by administrators. On the device, 2 LEDs indicate working or deep sleep mode. Plug connections ensure easy installation. The sensor frame is connected to the computer via flat cable and IDC connectors. As a power supply connection, a standard DC power plug was selected. In addition to the sensor frame, a case was designed to place the monitoring unit's components. Both cases are 3D printable models (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>). A software was developed to operate the Raspberry Pi and its components as a monitoring system. In order to ensure the simple and long-term availability of the code, a separate SAMS page was created on the GitHub developer platform. The code (sams-app 2.47) can be found open source at https://github.com/sams-project. The GitHub page contains the code to operate the monitoring system, a web application to calibrate the functions and the code to set up a data warehouse. Also, the files to print and build the PCB and cases are available there. The recommended installation is to use a sensor frame placed in a brood frame (Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>). The sensor frame is installed centrally in a brood frame so that the sensors are located in the middle of the brood nest. The price of the SAMS HIVE monitoring system (current version 2) is about 170 €. In addition, there are the expenses for power unit and GSM. The dimensioning of the photovoltaic system for the power unit depends on the location, the number of monitoring units and the measuring intervals. The cost of the photovoltaic system is about 200 € and up to ten monitoring units can be powered by it. Modular electronic components were used to ensure the sustainability of the monitoring system. The components can be replaced independently and also be used for other purposes. A recycling plan should support this if necessary. In addition to its expandability, the system can also be set up for other academic and research applications and bee institutes to collect sensor data. During UCD, implementation and testing of the SAMS Hive monitoring system, some observations were made. These observations (listed below) will significantly contribute to business potential mapping and development. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>3. A cheaper monitoring system that is simple and easy to add to an existing modern beehive is preferable. Some aspects of the beekeeping ecosystem in the target countries also need to be considered; for example, the Indonesian beekeeping ecosystem is not yet as developed as that of Ethiopia or Europe. This immaturity of the ecosystem has resulted in a lack of integrated support from beekeeping stakeholders, so simple technology is considered the better first option for improving the ecosystem.</ns0:p></ns0:div>
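To make the measuring routine described in this section more concrete, the sketch below shows one plausible measurement-and-upload cycle in Python. It is an illustrative sketch only: the endpoint URL, payload layout, device ID handling and the sensor-reading helpers are assumptions, not the actual sams-app implementation (the real code is available at https://github.com/sams-project).

```python
# Illustrative sketch of one measurement-and-upload cycle. The endpoint URL, payload
# layout, device ID handling and the sensor helpers are assumptions for illustration;
# the actual implementation is the sams-app code at https://github.com/sams-project.
import time
import numpy as np
import requests

DW_URL = "https://sams.science.itf.llu.lv/api/measurements"  # assumed endpoint
DEVICE_ID = "hive-001"                                       # each device reports its own ID
SAMPLE_RATE = 16_000                                         # 16 kHz audio sampling (0-8 kHz band)
FFT_POINTS = 4096                                            # ~3.9 Hz frequency resolution


def read_sensors():
    """Placeholder for temperature/humidity readings from the sensor frame and the scale unit."""
    return {"temperature_c": 34.8, "humidity_pct": 62.0, "weight_kg": 41.3}


def record_audio(seconds=4):
    """Placeholder: return `seconds` of 16 kHz samples from the in-hive microphone."""
    return np.random.randn(seconds * SAMPLE_RATE)


def audio_spectrum(samples):
    """Average 4096-point FFT magnitude spectrum over the recording."""
    frames = len(samples) // FFT_POINTS
    chunks = samples[: frames * FFT_POINTS].reshape(frames, FFT_POINTS)
    return np.abs(np.fft.rfft(chunks, axis=1)).mean(axis=0)


def upload(payload, retry_delay=30):
    """Send one measurement; on failure keep the data and retry after 30 seconds."""
    while True:
        try:
            response = requests.post(DW_URL, json=payload, timeout=10)
            response.raise_for_status()
            return
        except requests.RequestException:
            time.sleep(retry_delay)


measurement = {
    "device_id": DEVICE_ID,
    "timestamp": time.time(),
    **read_sensors(),
    "audio_fft": audio_spectrum(record_audio()).round(2).tolist(),
}
upload(measurement)
```

In the real system the deep sleep between measuring intervals is handled by the WittyPi power control unit rather than in software, so a routine like this would simply run once per wake-up and then let the device power down again.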
<ns0:div><ns0:head>SAMS data warehouse and decision support system</ns0:head><ns0:p>All the measured data about the behavior of bee colonies, gathered from the HIVE monitoring system, can be stored for further analysis and decision support. For data storage, a dedicated data warehouse is developed <ns0:ref type='bibr' target='#b35'>(Komasilovs et al., 2019)</ns0:ref>, which can be considered as a universal system and is able to operate with different data inputs and have flexible data processing algorithms <ns0:ref type='bibr' target='#b39'>(Kviesis et al., 2020)</ns0:ref>. Architecture of the developed DW is demonstrated in Figure <ns0:ref type='figure' target='#fig_9'>8</ns0:ref>. The DW is a fully operational solution, it is storing incoming data in real-time and is providing the infrastructure for the future data analysis, processing and visualization. The SAMS data warehouse is accessible by the link: https://sams.science.itf.llu.lv/. It is an open source software and it can be used by others to further extend its functionality, develop different user interfaces and/or native mobile applications, and use in new business opportunities. Data warehouse source code is accessible in the GitHub repository: https://github.com/sams-project. For the data analysis several approaches can be used, within the SAMS project a Decision Support System was implemented. For the beekeepers the raw sensory data must be analyzed, interpreted and translated into clear instructions that consider the operational ability and beekeeping knowledge of the users. The main aim of the DSS is to detect and recognize various bee colony states <ns0:ref type='bibr' target='#b66'>(Zacepins et al., 2015)</ns0:ref> and inform the beekeeper about them. Still it needs to be noted that beekeepers remain as the final decision makers and can choose appropriate action and when to take it. For the SAMS project each country context and environmental factors should be thoroughly analyzed to develop specific algorithms that allow safe interpretation. The SAMS DSS has a modular design, consisting of a comprehensive expert interface, which has been developed and adapted together with local beekeepers and which can be used by apiculture experts, e.g. in a service and advisory support centers, to analyze and monitor data. Also, an easy to use and understandable application on smartphones or SMS service is also required to alert beekeepers about hives that need attention. The user centred design approach allowed the technical layout and user interfaces to be developed in parallel, based on shared research results. Through the expert interface, local beekeeping experts can assist the beekeepers if needed. At this moment some of the models required for DSS are implemented into the SAMS data warehouse. A mockup of mobile application interface was created according to local user needs and is publicly available, allowing further development by interested parties PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52718:2:0:CHECK 19 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
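To illustrate how the stored measurements could be consumed from the data warehouse for analysis or visualization, the hedged sketch below pulls one hive's recent records and summarises them per day. The endpoint path, query parameters and JSON field names are assumptions for illustration; the actual API is defined by the open source DW code in the sams-project GitHub repository.

```python
# Illustrative client sketch only: endpoint path, parameters and JSON layout are
# assumptions; the real API is defined by the open source DW code at
# https://github.com/sams-project.
from collections import defaultdict
from datetime import datetime
import requests

DW_URL = "https://sams.science.itf.llu.lv/api"  # assumed base URL
HIVE_ID = "hive-001"                            # assumed hive identifier

resp = requests.get(f"{DW_URL}/hives/{HIVE_ID}/measurements", params={"days": 7}, timeout=10)
resp.raise_for_status()
rows = resp.json()  # assumed: list of {"timestamp": ..., "temperature_c": ..., "weight_kg": ...}

# Group the raw measurements by calendar day.
daily = defaultdict(list)
for row in rows:
    daily[datetime.fromtimestamp(row["timestamp"]).date()].append(row)

# Simple per-day summary of the kind a dashboard or advisory service might show.
for day in sorted(daily):
    temps = [r["temperature_c"] for r in daily[day]]
    weights = [r["weight_kg"] for r in daily[day]]
    print(f"{day}: mean brood temperature {sum(temps) / len(temps):.1f} °C, "
          f"daily weight change {weights[-1] - weights[0]:+.2f} kg")
```

Summaries of this kind are the raw input that the decision support system interprets before alerting beekeepers through the expert interface, the mobile application mockup or an SMS service.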
<ns0:div><ns0:head>Api-management within SAMS</ns0:head><ns0:p>Api-management is central to the SAMS project, including the contextualizing of local systems focusing on the two target countries Ethiopia and Indonesia, the development of an open source and agile database and a honey bee health and management related capacity building strategy. Even though Europe's beekeeping sector is comparably strong, it relies on honey imports from third world countries as its production is not sufficient enough to saturate the market <ns0:ref type='bibr' target='#b24'>(García, 2018)</ns0:ref>. While governmental involvement and subsidized national programs aim to strengthen the stagnated European bee product market, such programs are lacked completely in Indonesia <ns0:ref type='bibr' target='#b26'>(Gratzer et al, 2019)</ns0:ref>, and are not carried out sustainably enough to set the beekeeping sector of Ethiopia on a par with those of other global players. In Europe, beekeeping has a long tradition and knowledge is accessible by numerous books and journals. Bee health is affected by a diverse spectrum of organisms (protozoa, fungi, bacteria, insects, mites, etc.) <ns0:ref type='bibr' target='#b4'>(Bailey and Ball, 1991;</ns0:ref><ns0:ref type='bibr' target='#b25'>Genersch, 2010)</ns0:ref>, but the parasitic mite Varroa destructor, introduced to Europe, is the major threat to European honey bees <ns0:ref type='bibr' target='#b54'>(Rosenkranz, Aumeier and Ziegelmann, 2010)</ns0:ref>. The varroa mite seems to be no big issue for Ethiopian <ns0:ref type='bibr'>(Gebremedhn et al., 2019)</ns0:ref> nor for Indonesian honey bees but this is not well documented. However, several other organisms affect Ethiopia's bees, including protozoa, fungi, insects, birds and mammals, but with the exception of ants or wax moths, mostly no control methods are applied <ns0:ref type='bibr' target='#b20'>(Ellis and Munn, 2005;</ns0:ref><ns0:ref type='bibr'>Awraris Getachew Shenkute et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b60'>Tesfay, 2014;</ns0:ref><ns0:ref type='bibr' target='#b50'>Pirk et al., 2015)</ns0:ref>. In Ethiopia, beekeeping dates back ~5000 years <ns0:ref type='bibr' target='#b59'>(Tekle and Ababor, 2018)</ns0:ref>, and more than one million households maintain around six million honey bee (A. mellifera) colonies producing more than 50,000 tons of honey per year, making Ethiopia Africa's leading honey and beeswax producer <ns0:ref type='bibr' target='#b12'>(Degu & Megerssa, 2020)</ns0:ref>. However, Ethiopia's honey sector is far behind its potential of 500,000 tons per year. The reasons include limited access to modern beekeeping practices and equipment, a shortage of trained people, the use of agriculture chemicals, the impact of droughts, absconding and the lack of infrastructure and market facilities <ns0:ref type='bibr' target='#b64'>(Yirga et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b42'>Legesse, 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>Fikru and Gebresilassie, 2015;</ns0:ref><ns0:ref type='bibr' target='#b12'>Degu & Megerssa, 2020)</ns0:ref>. The vast majority of hive systems in Ethiopia are traditional, some are classified transitional (top bar hives), only few are classified as modern hives. Traditional hives are made from locally available, but often non-durable materials <ns0:ref type='bibr'>(clay, straw, bamboo, logs, etc.)</ns0:ref>. 
Even though this kind of hive system requires low starting costs and skills, honey harvesting is always accompanied by destroying large parts of the bees' nest. Furthermore, the productivity is considered to be low <ns0:ref type='bibr' target='#b65'>(Yirga and Teferi, 2010;</ns0:ref><ns0:ref type='bibr' target='#b6'>Beyene et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b12'>Degu and Megerssa, 2020)</ns0:ref>. Traditionally, beekeepers gain their knowledge from the family or village <ns0:ref type='bibr' target='#b22'>(Fichtl and Adi, 1994)</ns0:ref>. As training centers are rare in Ethiopia and beekeepers from rural regions often lack infrastructure, access to modern beekeeping knowledge and techniques is restricted. One of the largest bee research institutions in the country is a one hour drive away from the capital Addis Ababa. The Holeta bee research center is involved in educating beekeepers and connecting them by offering training and hard copies of training manuals for beginners and advanced beekeepers including now the SAMS manual for beekeeping equipment production. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>So far, classic beekeeping training centers do not exist in Indonesia. To be able to establish one, one must face political and social issues first as the awareness of the importance of bees for the ecosystem was reported to be low in the country. Furthermore, in relation to the large Indonesian population size, beekeeping is not widespread and beekeeping-related literature is not readily available <ns0:ref type='bibr' target='#b26'>(Gratzer et al., 2019)</ns0:ref>. Honey hunting has tradition in parts of the country, but managing honey bees in hives is a comparatively young activity in Indonesia. Most beekeepers keep the native Asian honey bee A. cerana, followed by the introduced A. mellifera which is mainly used for migratory beekeeping. While A. cerana is regarded less productive than A. mellifera, it is known for its easy handling and gentle behavior. One major problem identified, similar to Ethiopia, is the absconding behavior of bees. During unfavorable conditions, the colonies leave their hives, resulting in financial losses for beekeepers. Although many reasons for the underdeveloped beekeeping sector overlap with those of Ethiopia, others are specific to Indonesia, such as a lack of quality standards for bee products <ns0:ref type='bibr' target='#b10'>(Crane, 1990;</ns0:ref><ns0:ref type='bibr' target='#b43'>Masterpole et al., 2019)</ns0:ref>. Overall, there has been a sharp increase in beekeeping development publications over the past five years, but compared to Sub-Saharan Africa, the absolute number of publications for South Asia including Indonesia is rather low <ns0:ref type='bibr'>(Shouten, 2020)</ns0:ref>. Due to the limited access and availability of literature, little information is given on bee health issues, control methods or management of honey bees in Indonesia, and therefore more research and lobbying efforts are highly recommended <ns0:ref type='bibr' target='#b26'>(Gratzer et al., 2019)</ns0:ref>. As contextualizing is an ongoing process, an open source knowledge database was developed -the 'SAMSwiki' (https://wiki.sams-project.eu, last accessed 18.02.2021). During the set-up, the SAMSwiki was fed with more than 200 literature sources including a variety of beekeeping related topics like Indonesian and Ethiopian bee sector parameters, bee forage, management options, bee health, as well as funding opportunities for businesses and SAMS-system related content. With its wiki-like approach, the readers can easily become members and contributors and are able to share their expertise with the remaining community. Extension of this database to other countries is planned for the future.</ns0:p></ns0:div>
<ns0:div><ns0:head>Possibilities for smart bee management</ns0:head><ns0:p>Managed honey bee colonies need regular monitoring actions. Especially during the active foraging season, external and internal hive inspection is a necessary task for each beekeeper. Those actions are time-consuming and regular opening of the beehive is a stress factor for the whole colony. With smart management, or precision beekeeping, those mandatory interferences are reduced to a minimum <ns0:ref type='bibr' target='#b5'>(Bencsik et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b44'>Meikle and Holst, 2015;</ns0:ref><ns0:ref type='bibr' target='#b66'>Zacepins et al., 2015)</ns0:ref>. Smart bee management possibilities can be manifold and some of them, including the most relevant ones for the SAMS-project, are represented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. We elaborated what-if scenarios for the four most important events. For example, the start of a mass nectar flow indicates honey yield in the near future and beekeepers estimate this event either by knowing the vegetation in the surroundings by observing the flight entrance or by checking the food stores inside the hive; but a technical solution would make the beekeepers' work more efficient. Easy to understand illustrations have been developed for each important bee colony state, including basic recommendations for the beekeepers. One example can be seen in Figure <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>. The beekeeper gets informed as soon as an increase in weight of the monitored beehive by a certain, prior defined, percentage-value occurs. On detection of this event, further actions can be planned without even being present at the apiary. A typical event occurring only in African or Asian colonies is absconding, which has not been studied before using precision beekeeping approaches.</ns0:p></ns0:div>
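As a sketch of how such what-if rules could be expressed on top of the weight time series, the following hedged Python example flags a mass nectar flow (relative weight gain above a user-defined percentage) and a sudden weight drop that may indicate absconding or swarming. The threshold values are illustrative placeholders, not values prescribed by the SAMS decision support system.

```python
# Minimal rule sketch for two colony states from Table 1, operating on a list of
# (timestamp, weight_kg) samples ordered in time. Threshold values are illustrative
# placeholders, not values prescribed by the SAMS decision support system.
def detect_weight_events(samples, nectar_gain_pct=2.0, abscond_drop_kg=1.5):
    """Flag a mass nectar flow (relative weight gain over the period) and any sudden
    weight drop between consecutive samples that may indicate absconding or swarming."""
    alerts = []
    (_, first_w), (last_ts, last_w) = samples[0], samples[-1]
    # Mass nectar flow: total weight gain exceeds the user-defined percentage.
    if first_w > 0 and (last_w - first_w) / first_w * 100 >= nectar_gain_pct:
        alerts.append(("mass_nectar_flow", last_ts))
    # Possible absconding/swarming: abrupt drop between two consecutive measurements.
    for (_, w0), (t1, w1) in zip(samples, samples[1:]):
        if w0 - w1 >= abscond_drop_kg:
            alerts.append(("sudden_weight_loss", t1))
    return alerts


# Example: a colony gaining ~1 kg on a 40 kg hive over a day (2.5 %) triggers the alert.
one_day = [(hour, 40.0 + 0.1 * hour) for hour in range(10)]
print(detect_weight_events(one_day))  # [('mass_nectar_flow', 9)]
```

In practice such thresholds would be tuned to the country context and the individual colony, as stressed above, and the beekeeper remains the final decision maker on what action to take.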
<ns0:div><ns0:head>Business models within SAMS</ns0:head><ns0:p>In addition to the open source remote sensing technology for monitoring the health and productivity of bee colonies, SAMS fosters the regional added benefit by identifying business opportunities and challenges, supporting business model development and thus assisting job creation. Enabling the SAMS team to identify SAMS business models several methods such as co-creation, ideation and observation of existing businesses were used. Ethiopia with its great potential in the apiculture sector has a wider range of business compared to Indonesia, and mainly focuses on beekeeping management. There are only a few businesses that offer derivative products, while Indonesia has only a few businesses that could improve beekeeping management as well as technology-based business. One aspect became very clear during this project sequence -business development in the apiculture sector depends on the country readiness. Several factors indicate this country readiness, e.g. the maturity of the apiculture industry, government support, and age structure (children and young adolescents, the working-age population, and the elderly population). The more mature the apiculture sector in one country, the bigger the support given by the government, the more resources flow, the more flourishing the industry will be. The bigger the working-age population in one country, the more labor is available, the more industries are thriving. The working-age population factor is believed as one of the main factors that determine the growth of the creative industry. In 2018, the working-age population in Ethiopia was 55.26%, in Indonesia 67.59%, and in EU 64.69%. As one of the SAMS goals is to provide a platform for concepts and ideas for local business developments, in order to have a sustainable long-term impact, an overall concept of SAMS business models was created and main obstacles in Ethiopia and Indonesia were identified. The 54 identified SAMS business models are rated based on its correlation to SAMS objectives and are recognized as SAMS business models that contribute in giving added value to the project aims and impact. All SAMS business models remain freely available on the SAMSwiki (https://wiki.sams-project.eu/index.php/SAMS_-_Business_Models, last accessed on 18.02.2021) also after the project end to enable stakeholders around the world to take up SAMS ideas and business concepts and to better position the apiculture sector in their own countries. Manuscript to be reviewed Computer Science research and theory cloud represents all the knowledge acquired and collected during the SAMS project. SAMS technology produced from the research process aims to make beekeeping activities more effective and efficient. To implement this product to its beneficiary, namely beekeepers, the high costs of its production makes it difficult to promote it directly, unless funding schemes from collaborations between government and business people and research institutions/universities are considered. SAMS data that is utilized by the government (described as institution mountain), is useful for policy making in the fields of forestry, animal husbandry, agriculture, and the environment. The policy is then derived as an intake of community empowerment, leaders and other driving nodes. This concept is also expected to provide valuable benefits for the stakeholders involved. 
For beekeepers, bee colony management technology (SAMS) developed is obtained free of charge, as well as raising awareness in protecting the environment and government policies that support beekeepers and environmental communities. For governments, universities and businesses as funders, getting data from the technology applied to the colonies maintained by beekeepers for research and policy making. Three main directions have strong impact on the SAMS ecosystem:</ns0:p><ns0:p>1. Practice -Individuals play a key role in driving institutional changes and therefore were identified as important for the SAMS ecosystem. Therefore, it is of major importance to recognize key individuals amongst a larger group of potentials and further empower them.</ns0:p><ns0:p>2. Institutional -International partnerships were initiated to support the SAMS ecosystem on business development, bee colony data and knowledge exchange, apiculture technology and services. Furthermore, the SAMS technology enables social innovation to engage more socially aspirational younger generations (i.e. their customers) to be more involved in the honey and bee industry.</ns0:p><ns0:p>3. Systemic -Social issues have an impact on the SAMS technology application in Indonesia. The market survey supported the research by mapping participant survey responses including all respondent-identified potentials in supporting the future business model of SAMS application. Wealth was also identified in the interviews as a key determinant of all identified issues. How to develop SAMS businesses and maintain their sustainability showing the interrelated nature of technology and also social problems, reinforcing the need for a collaborative, multi-agency approach to overcome the challenges in implementing the SAMS technology.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The SAMS project developed an open source information and communication technology that allows active monitoring and managing of bee colonies to ensure bee health and bee productivity. For the first time, focus was given to special conditions of Africa and Asia, PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52718:2:0:CHECK 19 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>including thorough research on actual user needs. Continuous monitoring of variables associated with honey bee colonies, including weight changes, temperature, humidity, acoustics, activity at entrance for detection of different bee colony states like swarming, broodless stage, and others becomes feasible for most practical applications. Established European or North American systems are not designed for the peculiarities that can be expected when monitoring colonies in Africa or Asia. Application of the SAMS design process allows the requirements of beekeeping in different countries and settings to be met, enhancing sustainable agriculture worldwide. To develop SAMS for local contexts, the project collected data from different user groups (individual beekeepers, beekeeping cooperatives, private and public input suppliers like beehive producers, beekeeping experts and researchers and others) within the UCD process and enabled the team to adapt the system to specific requirements. At the end of the project, a greater awareness will be created in Indonesia and Ethiopia in regard to beekeeping and its activities and opportunities for greater income. There will also be the possibility to use collected data from different regions to better understand the behavior of bees and the environmental aspect and to ensure food production and bee farming activities. In addition, an international partnership network will ensure knowledge exchange and mutual learning. Main results of the SAMS project are: a) a manual for the SAMS monitoring beehive model, that is locally produced and adapted to local conditions, including integrated open source sensor and information transition technology, as well as an energy-supply solution; b) the SAMS data warehouse which can be individually adapted; c) a decision support system interface that can combine the sensor-based data-outputs with other information sources and predictive models to measure, analyze and describe different states of the bee colony such as health, vitality and production, d) the SAMSwiki which provides knowledge on beekeeping in Ethiopia and Indonesia but also for other regions and e) 54 SAMS business models for greater income opportunities and related upscaling potential.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Ranking of smart management possibilities for bee colony state detection in Ethiopia and Indonesia. States are ranked based on the importance to the beekeepers in target countries. Bold events/states were identified to be most relevant for the SAMS project. Asterisks (*) rank the importance, technical feasibility, grade of innovation (if a solution that could be used for specific state detection already exists) and predictability of each event or colony state. Ranking of smart management possibilities for bee colony state detection in Ethiopia and Indonesia</ns0:p><ns0:p>States are ranked based on the importance to the beekeepers in target countries. Bold events/states were identified to be most relevant for the SAMS project. Asterisks (*) rank the importance, technical feasibility, grade of innovation (if a solution that could be used for specific state detection already exists) and predictability of each event or colony state.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52718:2:0:CHECK 19 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table 1: Ranking of smart management possibilities for bee colony state detection in Ethiopia and Indonesia. Bold events/states were identified to be most relevant for the SAMS project. Asterisks (*) rank the importance, technical feasibility, grade of innovation and predictability of each event or colony state. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1.</ns0:head><ns0:label /><ns0:figDesc>Beekeepers have a limited budget, and technology is not yet considered in beekeeping practices. 2. Local beekeepers found it valuable to monitor trap-hives (modern beehives used to trap new bee colony), placed deep in the forest, so power source became the main concern for such systems. PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52718:2:0:CHECK 19 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 10 illustrates the overall concept of the SAMS business models that involves various stakeholders in the process. SAMS can have a wider impact on the development of honey bee businesses by involving various stakeholders during the SAMS development and contextualization. The SAMS data, PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52718:2:0:CHECK 19 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Human Centred Design Process applied in SAMS project for development of interactive systems. User participation in the HCD process is very important, since the iteration steps of product design and context of use analysis are based on user feedback</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Architecture diagram of the SAMS HIVE system. Power unit, scale unit, sensor frame and data warehouse.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. SAMS HIVE device. Measurement device with ports and status LED.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. SAMS HIVE case. PCB and components placed in a 3D printable case.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. SAMS HIVE sensor frame. Sensors are installed in a 3D printable case placed in a regular brood frame and connected with flat cable to SAMS HIVE device.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Placement of SAMS HIVE system. Sketch of a common Dadant beehive with placement of: (1) Sensor frame in a brood frame, (2) HIVE case and (3) Scale unit.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure8. Architecture of the developed SAMS data warehouse. Main DW components are shown in frames (Core, WebApi, User interface). Cubes represent various processing units interacting with each other, cylinders represent persistent storage, pipes (horizontal cylinders) represent communication channels. Vaults and Reports in DW Core are independent processing units with dedicated storage<ns0:ref type='bibr' target='#b35'>(Komasilovs et al., 2019)</ns0:ref>.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Example illustration showing the use of the smart bee management system. Mass nectar flow is detected by the SAMS hive monitoring and decision support system, which triggers an alert on smartphones along with recommendations for beekeepers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure10. Overall concept of the SAMS business model. The SAMS data, research and theory can be utilized by a wider community and can be achieved by the collaboration between government, business people, researchers and universities, and impacts social value (illustrated</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 10 Overall</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,275.62,525.00,355.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,229.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,204.37,525.00,312.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,453.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,204.37,525.00,426.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,229.87,525.00,403.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,280.87,525.00,177.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,229.87,525.00,393.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "2 Round_Rebuttal letter: Smart apiculture management services for developing countries - the case of SAMS project in Ethiopia and Indonesia
Dear Editor,
We thank you once again for the comments you provided. We changed our article accordingly and also revised the section “Business models within SAMS”, including its language. We hope that this section is now much more readable.
A point by point rebuttal can be found below.
On behalf of all authors,
Asoc.prof. dr.sc.ing. Aleksejs Zacepins (Latvia University of Life Sciences and Technologies)
Editor comments:
Done: line 146: missing closing bracket for the HCD reference.
Done: line 181: italicise 'Apis'
Done: line 212: suggest: 'universality can help easy adoption of the..'
Done: line 257-258: relocate optional to say: 'with an 'optional' outdoor temperature..'
Done: line 259-266 and Figure 3. Suggest you relocate the reference to Figure 3 to the beginning of this section, and add 'referred to as' before 'power unit'. Please also be consistent later in its use (e.g. in line 303 you mention additional expenses for the 'power supply' - this should be 'power unit' instead).
e.g.:
The flow chart of the SAMS HIVE system is shown in Figure 3. The power supply for the monitoring units is provided by a photovoltaic system (referred to as power unit) via cables. It consists of the standard components: solar module, charging controller, and battery. The power unit also supplies a mobile GSM Wi-Fi router, which is used as a hotspot for the monitoring units to transfer data to a web server (SAMS data warehouse).
Done: NB. Figure 3 is not really a 'Flow Chart' - suggest instead you call it an 'Architecture Diagram'.
Done: line 311-314. change 'Findings' to 'Observations' and Fix grammar. Suggest revision to:
'During UCD, implementation and testing of the SAMS Hive monitoring system, some observations were made. These observations (listed below) will significantly contribute to business potential mapping and development:'
Done: line 323: insert 'as' so it reads '..is not yet *as* developed as..'
Done: line 330: remove 'the', add a comma and 'a': 'For data storage, a dedicated'
Done: line 342-347. You seem to repeat the aims of the DSS twice here. Suggest omitting line 342-345 since wording in line 345-347 is more concise.
Done: line 349-line 360 is quite garbled. Suggest revising like:
'The SAMS DSS has a modular design, consisting of a comprehensive expert interface, which has been developed and adapted together with local beekeepers and which can be used by apiculture experts, e.g. in a service and advisory support centers, to analyze and monitor data. Also, an easy to use and understandable applications on smartphones or SMS services is also required to alert beekeepers about hives that need attention. The user centred design approach allowed the technical layout and user interfaces to be developed in parallel, based on shared research results. Through the expert interface, local beekeeping experts can assist the beekeepers if needed. At this moment some of the models required for DSS are implemented into the SAMS data warehouse. A mockup of mobile application interface was created according to local user needs and is publicly available, allowing further development by interested parties.'
Done: line 367: insert 'world': 'third world countries'
Done: line 369: 'such programs are lacked completely in Indonesia' - or 'are completely lacking in Indonesia'
Done: line 396-397: suggest '.. often lack infrastructure, access to modern beekeeping knowledge and techniques is restricted.'
We would not agree with this, as the approach is precision beekeeping, and it is only one: - line 446: 'which has not been studied before using precision beekeeping approach.' - suggest 'approaches'
Done: line 453: 'ideathlon' is not a word - suggest it was 'ideation'
Done: line 456: insert 'a' - ie 'only a few' in 'only few businesses that'
Done: line 476: ' to create a greater position of the apiculture sector' - suggest 'to better position the apiculture sector'
Done: Lines 478-503: After starting out well, the 'Business models within SAMS' section becomes very disconnected and hard to follow. I can suggest some rewordings, e.g.
line: 480-483: this paragraph doesn't make sense: 'In the context of SAMS, the ecosystem pattern can be developed and contextualized in the development of SAMS which is directed to have a wider impact on the development of honey bee businesses. Ecosystem is developed by involving various stakeholders who carry out their respective roles.' What is the 'ecosystem' here ? it might be best to put this more simply, e.g. 'SAMS can have a wider impact on the development of honey bee businesses by involving various stakeholders during SAMS development and contextualization.'
Done: Line 484-503 seem to be a mixture of brief sentences and descriptive text. Some of that text that could instead be located in Figure 10's legend (e.g. line 491-493: 'raining impact'), others need to be revised and made more coherent. Line 504-523 are quite clear, but I am not sure what 'concept' these three points are supposed to be supporting.
Done: Line 538: 'the UCD processed' should be 'the UCD process'.
Done: Line 547: insert 'an' before energy: 'as well as [an] energy supply solution'
Yes, there were 54 business models provided: line 552: 'e) 54 SAMS business models' - it looks like '54' is a typo, or do you really mean that there are 54 business models provided by SAMSwiki?
Done: Figure 1: 'Significant is the user participation in this process, iterations of product design as well as iterations of context of use analysis are driven by user feedback. Interdependence of human- centred design activities [ISO /FDIS 9241-210:2019].'
- this legend needs to be reworded to be grammatically correct and easy to read.
Done: Figure 9: 'Exemplary illustration of the nectar flow as one smart bee management possibility
Mass nectar flow is detected by the SAMS hive monitoring and decision support system, which triggers an alert on smartphones and recommendations for beekeepers.' suggest slight revision to 'Example illustration showing the use of the smart bee management system. Mass nectar flow is detected by the SAMS hive monitoring and decision support system, which triggers an alert on smartphones along with recommendations for beekeepers.'
Done: Figure 10. You should relocate the descriptive text in lines 484-503 to here, in order to help the reader interpret this illustration. Please also address typos:
- 'intitutional mountain' should be 'institutional mountain'.
" | Here is a paper. Please give your review comments after reading it. |
64 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Calibration of vision systems is essential for performing measurements in real-world coordinates. For stereo vision, one performs stereo calibration, the results of which are used for 3D reconstruction of points imaged in the two cameras. A common and flexible technique for such calibration is based on collection and processing of pairs of images of a planar chessboard calibration pattern. The inherent weakness of this approach lies in its reliance on the random nature of data collection, which might lead to better or worse calibration results, depending on the collected set of image pairs. In this paper, a subset-based approach to camera and stereo calibration, along with its implementation based on OpenCV, is presented. It utilizes a series of calibration runs based on randomly chosen subsets from the global set of image pairs, with subsequent evaluation of metrics based on triangulating the features in each image pair. The proposed method is evaluated on a collected set of chessboard image pairs obtained with two identical industrial cameras. To highlight the capabilities of the method to select the best-performing calibration parameters, a principal component analysis and clustering of the transformed data were performed, based on the set of metric measurements per each calibration run.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Vision systems based on stereo imaging setups are widely used in numerous application contexts where contact-less measurement of 3D space is required. To perform 3D reconstruction, the imaging setup has to be calibrated. This includes calibration of each individual camera, combined with calibration of the whole stereo rig. In practice, a very widely used family of methods for camera and stereo calibration is based on a planar object with a known pattern (such as a chessboard). The core principle of planar-based calibration is minimization of the reprojection error in each image, as described in the original article by <ns0:ref type='bibr' target='#b15'>Zhang (2000)</ns0:ref>. A widely used implementation of this approach is available as a part of the open source computer vision library OpenCV.</ns0:p><ns0:p>Because stereo vision systems are pervasive in many optical measurement applications, the topics of stereo calibration and improvement thereof have been widely presented in the research literature. Some approaches use the standard calibration approach directly. For example, the standard routine as implemented with OpenCV is applied by <ns0:ref type='bibr' target='#b17'>Zhong and Dong (2015)</ns0:ref> to guide a REINOVO robot using stereo vision. <ns0:ref type='bibr' target='#b11'>Singh et al. (2020)</ns0:ref> combine the standard calibration approach with preliminary knowledge on the ranges of dimensions to be measured, namely the height of a person. In other cases, some parts of the standard calibration pipeline are used in combination with additional supporting elements of the physical setup. Stereo calibration is tackled in the context of a custom 3D scanner that includes a stereo camera rig <ns0:ref type='bibr' target='#b5'>(Lin, 2020)</ns0:ref>. The solution makes use of extrinsic transformations with respect to the chessboard calibration object. Each camera is calibrated separately, with the relative transformation between the cameras computed from the known extrinsics. <ns0:ref type='bibr' target='#b6'>Möller et al. (2016)</ns0:ref> combine a stereo camera system with a laser tracker to increase absolute positional accuracy of a milling robot. The system relies on retroreflective markers on the robot tool to facilitate pose estimation. Finally, there exist approaches that greatly differ from the standard ones due to precise management of a custom geometric setup. An example is a custom calibration method based on the geometric constraints of the imaging setup, such as parallelism of the image planes and the calibration/measurement plane <ns0:ref type='bibr' target='#b7'>(Ramírez-Hernández et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>When using the traditional planar calibration method, given enough images of the calibration target in different orientation, one can obtain sufficient calibration results. However, these results are slightly different depending on the actual input image set. Such property is known to planar-based calibration methods, and can be a ributed to degenerate configuration due to many planes with similar orientation <ns0:ref type='bibr' target='#b15'>(Zhang, 2000)</ns0:ref>. To make the calibration results be er, a number of heuristics exist, such as the following: the view of the planar calibration target shall not be parallel in two or more images; for be er estimation of the camera distortion, the calibration target shall appear in all four corners of the image and cover as much exterior orientations as possible; the best results would be obtained providing more than 10 calibration object views and orientation angle near 45 • <ns0:ref type='bibr' target='#b15'>(Zhang, 2000;</ns0:ref><ns0:ref type='bibr' target='#b12'>Steger et al., 2007)</ns0:ref>.</ns0:p><ns0:p>Since the plane-based calibration is usually performed manually by rotating a calibration target in front of one or more cameras, it may be tricky to precisely account for all the abovementioned heuristics. A sensible alternative can be to gather a relatively large set of images (or image pairs in the case of a stereo system) and evaluate how different subsets of the images work as an input to the standard calibration procedures. is approach has been investigated in a previous work by the author <ns0:ref type='bibr' target='#b8'>(Semeniuta, 2016)</ns0:ref>, with an observation that the calibration parameters in a general form can be described by Gaussian distributions if numerous calibration runs with different subsets are invoked. However, it is challenging to select an optimal sample from such distribution if only one camera is used. It is worth noting that the original paper describing the plane-based method for calibrating a single camera <ns0:ref type='bibr' target='#b15'>(Zhang, 2000)</ns0:ref> also applied a similar approach, namely selecting subsets of 4 images out of 5, when analyzing variability of calibration results. e general approach in geometric computer vision is to formulate an estimation problem as a linear model, which can be solved in a closed form, and then apply iterative optimization to further refine the sought parameters given a nonlinear cost function that is closer to the reality <ns0:ref type='bibr' target='#b2'>(Hartley and Zisserman, 2004)</ns0:ref>. Such cost function has to be differentiable to apply gradient-based optimization methods. In the case of camera and stereo calibration, this cost function models the reprojection error: given the known real-world coordinates, how close their transformed values will be to the known coordinates of the corresponding image features? is cost function is well-suited for estimation of camera intrinsic parameters and relative pose of two cameras in a stereo system, however, it operates on a totally different domain than what one is interested during stereo reconstruction (image plane vs metric R 3 ). 
This paper proposes a subset-based stereo calibration method that performs a number of calibration runs on randomly selected subsets from the global set of image pairs and evaluates the quality of a calibration run by performing stereo reconstruction given all the image pairs and analyzing a cascade of metrics based on the reconstructed point clouds. The proposed approach does not aim to replace the well-known planar calibration method, but to conduct the latter in an automated manner according to the set of the proposed rules. As such, a more optimal calibration result can be obtained by automatic analysis of the set of individual calibration runs. This paper applies the proposed method to a set of experimentally collected pairs of images acquired with a stereo rig. The analysis shows that in approximately half of the calibration runs, the performance is unsatisfactory. For the rest of the runs, one can rank them in the order of how well they perform, and hence, select the best-performing one. In addition to the analysis of the raw metric data obtained based on the triangulation results, principal component analysis (PCA) is performed on the vectors of metrics per each calibration run to transform the data into two components. PCA, along with k-means clustering, highlights the clear separation of the satisfactory and unsatisfactory classes and demonstrates the efficacy of the ranking. The paper is organized as follows. First, a detailed overview of the mathematical preliminaries of camera and stereo calibration is presented, followed by the specifics of OpenCV functionality and how it was harnessed in the presented work. Further, the proposed method is formulated, followed by its experimental validation and an outlook for further research activities.</ns0:p></ns0:div>
<ns0:div><ns0:head>CAMERA AND STEREO CALIBRATION</ns0:head><ns0:p>Geometric computer vision deals with estimating geometric properties of the scene being imaged, parameters of camera models, multiple-view geometry, and related aspects of computer vision <ns0:ref type='bibr' target='#b2'>(Hartley and Zisserman, 2004)</ns0:ref>. Models for these tasks are normally expressed in linear form by operating with homogeneous vectors on the projective space. The latter constitutes an extension of the Euclidean space with points at infinity. Taking an example of Euclidean 2-space (R 2 ), a point (x, y) T ∈ R 2 is represented in projective 2-space P 2 as an equivalence class of coordinate triples (kx, ky, k) T , where k ≠ 0. Such a triple is referred to as a homogeneous vector. A set of homogeneous vectors {(x, y, 0) T } corresponds to points at infinity, each characterized by the ratio x : y. A homogeneous vector (0, 0, 0) T is not an element of P 2 .</ns0:p><ns0:p>For camera calibration and 3D reconstruction, the pinhole camera model, the distortion model, and the epipolar geometry are used as a basis. The pinhole camera model deals with two coordinate frames (see Figure <ns0:ref type='figure'>1</ns0:ref>): the camera coordinate frame {C} with axes (x c , y c , z c ), located in the optical center, with z c perpendicular to the image plane, and the image (sensor) coordinate frame {S} with axes (x s , y s ), located in the top left corner of the image plane, with x s and y s representing pixel columns and rows respectively.</ns0:p></ns0:div>
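To make the projective-space conventions above concrete, the following minimal sketch (not part of the original implementation) converts Euclidean points to homogeneous form and applies a pinhole perspective projection; the camera matrix and the point coordinates are assumed values chosen purely for illustration.

```python
import numpy as np

def to_homogeneous(points):
    """Append a unit coordinate: (N, d) Euclidean points -> (N, d+1) homogeneous."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def from_homogeneous(points_h):
    """Divide by the last coordinate to return to Euclidean coordinates."""
    return points_h[:, :-1] / points_h[:, -1:]

# Assumed intrinsic parameters (fx, fy, cx, cy) -- illustrative only
K = np.array([[1000.0, 0.0, 680.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])

# A 3D point expressed in the camera frame {C}, in millimeters
p_c = np.array([[50.0, -20.0, 500.0]])

# Perspective projection of a point given in the camera frame
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
p_s = from_homogeneous((P @ to_homogeneous(p_c).T).T)
print(p_s)  # pixel coordinates of the projected point
```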
<ns0:div><ns0:head>Principal ray</ns0:head></ns0:div>
<ns0:div><ns0:head>Projection center</ns0:head><ns0:p>Image plane</ns0:p><ns0:formula xml:id='formula_0'>𝑥 ! 𝑧 ! 𝑦 ! 𝑥 ' 𝑦 ' {𝐶} Pinhole aperture {𝑆} {𝑆} 𝑥 ' 𝑦 ' # 𝒑 $ 𝒑 $ 𝒑 {𝑊} 𝑥 % 𝑦 % 𝑧 % Figure 1.</ns0:formula><ns0:p>Coordinate frames in the pinhole camera model e process of projection of points in the world coordinate frame {W } to image pixels in {S} using the pinhole model (expressed in the projective spaces P 3 and P 2 respectively) is referred to as a perspective projection model. e la er is a linear mapping (x w , y w , z w , 1) T → (λx s , λy s , λ) T that can be expressed in the matrix form as follows:</ns0:p><ns0:formula xml:id='formula_1'>  x s • λ y s • λ λ   =   f x 0 c x 0 f y c y 0 0 1     1 0 0 0 0 1 0 0 0 0 1 0       t x r x r y r z t y t z 0 0 0 1         x w y w z w 1    <ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Or, for short:</ns0:p><ns0:formula xml:id='formula_2'>λ   x s y s 1   = K[I 3×3 |0] c T w     x w y w z w 1     (2)</ns0:formula><ns0:p>where K is a camera matrix, which embodies such intrinsic parameters as the camera center (c x , x y ) and the focal lengths (f x , f y ), and c T w ∈ SE(3) corresponds to a homogeneous transformation expressing coordinate frame {W } in terms of coordinate frame {C}.</ns0:p><ns0:p>e distortion model L : R 2 → R 2 describes a non-linear mapping from an 'as-if perfectly imaged' image coordinate (x, y) T to the displaced image coordinate (x * , y * ) T distorted by lens. One is normally interested in the inverse function L −1 modeling the undistortion process, which is is characterized by radial distortion coefficients (k 1 , k 2 , k 3 ) and tangential distortion coefficients (p 1 , p 2 ):</ns0:p><ns0:formula xml:id='formula_3'>x y = L −1 ( x * y * ) = x * (1 + k 1 r 2 + k 2 r 4 + k 3 r 6 ) + (2p 1 y + p 2 (r 2 + 2x 2 )) y * (1 + k 1 r 2 + k 2 r 4 + k 3 r 6 ) + (p 1 (r 2 + 2y 2 ) + 2p 1 x) (3)</ns0:formula><ns0:p>where r is the radial distance of point (x * , y * ) T .</ns0:p><ns0:p>Let Cam p,ξ : P 3 → R 2 be a full camera model that embodies both perspective projection and nonlinear distortion effect. Cam p,ξ is parameterized by intrinsic parameters p = (f x , f y , c x , c y , k 1 , k 2 , k 3 , p 1 , p 2 ) T and extrinsic parameters describing the pose ξ of {W } with respect to {C}, typically described with a Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>homogeneous transformation c T w . Cam p,ξ first performs linear transformation described in eq. ( <ns0:ref type='formula'>2</ns0:ref>), followed by lens-induced distortion of the projected point (x s , y s ):</ns0:p><ns0:formula xml:id='formula_4'>x * s y * s = L( x s y s )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>e goal of camera calibration is to estimate camera intrinsic parameters -(f x , f y , c x , c y ) embodied in the camera matrix K (2), as well as parameters (k 1 , k 2 , k 3 , p 1 , p 2 ) of the non-linear distortion model ( <ns0:ref type='formula'>3</ns0:ref>).</ns0:p><ns0:p>e general principle of camera calibration lies in finding the correspondence between a sufficiently large number of known 3D points and their projections in the image <ns0:ref type='bibr' target='#b12'>(Steger et al., 2007)</ns0:ref>. e known points are provided by the calibration object containing features that have known coordinates and are easily identifiable by image processing algorithms. As described by <ns0:ref type='bibr' target='#b16'>Zhang (2004)</ns0:ref>, a calibration object may be different depending on one of the the respective calibration techniques: (1) a precisely manufactured 3D object (typically, consisting of three perpendicular planes) <ns0:ref type='bibr' target='#b14'>(Tsai, 1987;</ns0:ref><ns0:ref type='bibr' target='#b3'>Heikkilä, 2000)</ns0:ref>,</ns0:p><ns0:p>(2) a planar object <ns0:ref type='bibr' target='#b15'>(Zhang, 2000;</ns0:ref><ns0:ref type='bibr' target='#b13'>Sturm and Maybank, 1999)</ns0:ref>, or (3) three or more collinear points (e.g.</ns0:p><ns0:p>string of balls) <ns0:ref type='bibr' target='#b16'>(Zhang, 2004)</ns0:ref>. Because manufacturing of a custom 3D object is costly, planar objects make calibration process more flexible, and are widely used in practice.</ns0:p><ns0:p>e camera takes k images of the planar calibration object from different views. For each view i, S x i = (x i , y i ) T . A known 3D point x i is reprojected to the image as follows:</ns0:p><ns0:formula xml:id='formula_5'>a</ns0:formula><ns0:formula xml:id='formula_6'>x (rp) i y (rp) i = Cam p,ξi (x i )<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>e misfit function f rp maps to R 2m , as for each feature point, two difference values are obtained:</ns0:p><ns0:formula xml:id='formula_7'>f rp (p) =       x 1 y 1 − Cam p,ξ1 (x 1 ) ... x m y m − Cam p,ξm (x m )      <ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Given f rp , the reprojection error constitutes a root mean square cost function:</ns0:p><ns0:formula xml:id='formula_8'>C rp (p) = 1 2m f rp (p) T f rp (p)<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>In the OpenCV implementation, distortion coefficients are computed using the method proposed by <ns0:ref type='bibr' target='#b1'>Brown (1971)</ns0:ref>, and, based on them, K is reestimated <ns0:ref type='bibr' target='#b0'>(Bradski and Kaehler, 2008)</ns0:ref>.</ns0:p></ns0:div>
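As a rough illustration of how the reprojection-error-based estimation described above maps onto the OpenCV Python API, the sketch below wraps cv2.calibrateCamera and recomputes a reprojection error with cv2.projectPoints. The lists of object and image points are assumed to be prepared beforehand; the helper names are chosen here for illustration and are not part of the paper's code.

```python
import numpy as np
import cv2

def calibrate_single_camera(object_points, image_points, image_size):
    """Estimate K and distortion coefficients by minimizing the reprojection error.

    object_points: list of (n_points, 3) float32 arrays (corners in the object frame), one per view.
    image_points:  list of (n_points, 1, 2) float32 arrays of detected corners.
    image_size:    (width, height) of the images.
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist, rvecs, tvecs

def mean_reprojection_error(object_points, image_points, K, dist, rvecs, tvecs):
    """Root-mean-square reprojection error over all views (closely mirrors the RMS reported by OpenCV)."""
    total_sq, total_n = 0.0, 0
    for objp, imgp, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = imgp.reshape(-1, 2) - proj.reshape(-1, 2)
        total_sq += float((diff ** 2).sum())
        total_n += len(objp)
    return np.sqrt(total_sq / total_n)
```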
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Calibration of a stereo vision system is based on epipolar geometry (see Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>), where a 3D point is imaged in two cameras, with the corresponding projections obeying a set of geometric constraints. The latter can be expressed in terms of the essential matrix E and the fundamental matrix F. The fundamental matrix constrains two corresponding points in the image coordinates x 1 and x 2 by the following relation: </ns0:p><ns0:formula xml:id='formula_9'>x_1^T F x_2 = 0<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>\hat{x}_1^T E \hat{x}_2 = 0<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>The essential matrix encodes the relative rotation Ω and translation t between the cameras:</ns0:p><ns0:formula xml:id='formula_11'>E = skew(t)\,\Omega = \begin{bmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{bmatrix} \begin{bmatrix} r_x & r_y & r_z \end{bmatrix}<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Hence, the rotation and translation can be decomposed from E once it is estimated from the point correspondences. In the complete stereo calibration pipeline, the fundamental matrix F is first estimated given the original image points. Further, the essential matrix is obtained given the previously estimated camera matrices:</ns0:p><ns0:formula xml:id='formula_12'>E = K_2^T F K_1<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>The relative rotation Ω and translation t of the second camera expressed in terms of the first one, decomposed from E, are used as a basis, together with the intrinsic parameters of the cameras, to triangulate pairs of corresponding image points.</ns0:p></ns0:div>
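The relations (8)-(11) can be exercised with OpenCV roughly as sketched below, assuming matched image points and previously estimated camera matrices are available; this is an illustrative fragment rather than the exact pipeline used in the paper.

```python
import numpy as np
import cv2

def relative_pose_from_correspondences(pts1, pts2, K1, K2):
    """Estimate F, derive E = K2^T F K1, and decompose E into rotation/translation candidates.

    pts1, pts2: (N, 2) float arrays of corresponding image points (e.g. chessboard corners).
    K1, K2:     3x3 camera matrices of the two cameras.
    """
    # Fundamental matrix from the raw point correspondences
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

    # Essential matrix from the previously estimated camera matrices
    E = K2.T @ F @ K1

    # Decomposition yields two candidate rotations and a translation direction;
    # a cheirality check (e.g. cv2.recoverPose) selects the physically valid pose.
    R1, R2, t = cv2.decomposeEssentialMat(E)
    return E, (R1, R2, t)
```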
<ns0:div><ns0:head>IMPLEMENTATION SPECIFICS</ns0:head><ns0:p>All the work underlying this paper has been implemented on top of the basic calibration routines in the OpenCV library using the Python API. As such, the implementation specifics presented in this section refer to specific OpenCV functions as defined in the API. The goal of this section is to present the workflow of the chessboard corners preparation, camera and stereo calibration, and stereo triangulation in as transparent a way as possible.</ns0:p><ns0:p>In OpenCV, the stereo calibration pipeline starts with the camera calibration process for the two cameras, followed by the stereo calibration process, the results of which are fed to the computation of the rectification transform. The latter step is important, since both dense and sparse stereo require that the input image points are undistorted and rectified.</ns0:p><ns0:p>The core parameters estimated during camera and stereo calibration are the following:</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>1. Camera matrix K 1 and a vector of distortion coefficients d 1 for the first camera.</ns0:p><ns0:p>2. Camera matrix K 2 and a vector of distortion coefficients d 2 for the second camera.</ns0:p><ns0:p>3. Relative rotation Ω and translation t of the second camera expressed in terms of the first camera.</ns0:p><ns0:p>4. Rotation matrices R 1 and R 2 describing the rectification transforms of the two cameras.</ns0:p><ns0:p>5. Projection matrices P 1 and P 2 in the rectified coordinate systems.</ns0:p><ns0:p>All the listed key parameters are used to perform undistortion and rectification of two sets of image points. A er this, P 1 and P 2 are used to perform triangulation.</ns0:p><ns0:p>Below, the logic of corners preparation and stereo calibration is modeled in terms of EPypes (Semeniuta and Falkman, 2019) directed acyclic graphs. e image processing logic is arranged as an executable direct acyclic bipartite graph G:</ns0:p><ns0:formula xml:id='formula_14'>G = (F, T, E)</ns0:formula><ns0:p>where </ns0:p><ns0:formula xml:id='formula_15'>F</ns0:formula></ns0:div>
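A minimal sketch of how the listed parameters can be combined for sparse triangulation with OpenCV is given below; it assumes the calibration outputs (K1, d1, K2, d2, R, T) are already available and is written for illustration, not as a verbatim excerpt of the described EPypes-based implementation.

```python
import numpy as np
import cv2

def triangulate_corner_points(pts1, pts2, K1, d1, K2, d2, R, T, image_size):
    """Undistort, rectify, and triangulate matched corner points.

    pts1, pts2: (N, 1, 2) float arrays of corresponding image points.
    K1, d1, K2, d2: intrinsics and distortion coefficients of the two cameras.
    R, T: relative rotation and translation from stereo calibration.
    Returns an (N, 3) array of triangulated points.
    """
    # Rectification transforms R1, R2 and projection matrices P1, P2
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)

    # Undistort and rectify the image points into the rectified coordinate systems
    u1 = cv2.undistortPoints(pts1, K1, d1, R=R1, P=P1)
    u2 = cv2.undistortPoints(pts2, K2, d2, R=R2, P=P2)

    # Triangulate to homogeneous 3D points and convert to Euclidean coordinates
    X_h = cv2.triangulatePoints(P1, P2, u1.reshape(-1, 2).T, u2.reshape(-1, 2).T)
    X = (X_h[:3] / X_h[3]).T
    return X
```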
<ns0:div><ns0:head>PROPOSED METHOD</ns0:head><ns0:p>For the total set of all available image pairs, identify corners in them and store the corners for further processing. Additionally, store indices of the image pairs that resulted in successful corners detection in both images (see Figure <ns0:ref type='figure'>4</ns0:ref>). Given calibration run i and image pair j, triangulate the coordinates of the corner points. As a result of triangulation, a point cloud in R 3 is obtained. All the point clouds are stored in a tensor of shape (m × n × n points × 3). e overall workflow of point clouds' preparation is presented in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>.</ns0:p><ns0:p>In this work, the point cloud represents the chessboard corners, and hence, the triangulated points have to be as close to the original chessboard geometry as possible. is includes the distances between the neighboring points and how much the triangulated points resemble a plane. Given a point cloud as n points points in R 3 , a point cloud metric function maps these point to a single value in R. Given the properties imposed by the chessboard geometry, namely coplanarity and equal distances between the neighboring points, two such metrics are defined:</ns0:p><ns0:p>1. Function J M DIR measures the mean distances of neighboring points in rows. It should ideally be close to the nominal square size.</ns0:p><ns0:p>2. Function J P DRM S measures plane difference RMS error: given the points in R 3 , fit a plane to them, and calculate the RMS error of the points with respect to the plane. In the ideal case, the overall error should be close to zero, showing the evidence of a good plane reconstruction.</ns0:p><ns0:p>A calibration run i ∈ {1, ..., m} is associated with n point clouds obtained with triangulation of the respective image pairs. For each of these point clouds, the metric functions J M DIR and J P DRM S acceptable triangulation result, namely the one that is not an outlier:</ns0:p><ns0:formula xml:id='formula_16'>|d ij − d| < δ<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>Let p ij denote the value of J P DRM S for image pair j given calibration i and p max denote the maximal value of J P DRM S that separates inliers and outliers.</ns0:p><ns0:p>For each combination of a set of calibration parameters (i) and image pair (j), an indicator variable g ij ∈ {0, 1} is defined as follows:</ns0:p><ns0:formula xml:id='formula_17'>g ij = 1, if (|d ij − d| < δ) ∧ (p ij < p max ) 0, otherwise.<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>e number of acceptable triangulation results a i per calibration run is thus defined as follows:</ns0:p><ns0:formula xml:id='formula_18'>a i = n j=1 g ij (14)</ns0:formula><ns0:p>Based of the values of g ij , which serve a role of a mask for the cases with acceptable triangulation, vectors d i and p i are defined. ey contain the values of the triangulation metrics J M DIR and J P DRM S respectively only for those (i, j) that are characterized by g ij = 1.</ns0:p><ns0:p>To assess the calibration performance based on triangulation, a number of metrics is proposed (for a calibration run i), which are defined below.</ns0:p><ns0:p>1. e number of image pairs that resulted in an acceptable triangulation result, namely a i .</ns0:p><ns0:p>2. 
Mean and standard deviation of the J M DIR scalar triangulation assessment metric for such (i, j)</ns0:p><ns0:p>that are masked by the condition of g ij = 1: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_19'>µ i = di = 1 a i n j=1 (d ij if g ij = 1)<ns0:label>(15</ns0:label></ns0:formula><ns0:p>Computer Science 4. Maximal absolute error from the nominal scalar triangulation assessment metric:</ns0:p><ns0:formula xml:id='formula_20'>σ i = 1 a i − 1 n j=1 ((d ij − di ) 2 if g ij = 1)<ns0:label>(16</ns0:label></ns0:formula><ns0:formula xml:id='formula_21'>ǫ i = max(e)<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>Furthermore, all calibration runs are ranked from the best-performing to the worst-performing based on the associated values of the metrics defined above. e ranking logic proposed in this paper is as follows: sort the calibration runs first by the values of h</ns0:p><ns0:formula xml:id='formula_22'>(i)</ns0:formula><ns0:p>0 (in decreasing order) and then by the values of ǫ i (in increasing order). en, pick the calibration results associated with the calibration run of rank 1 as the final results.</ns0:p><ns0:p>e overall workflow of metrics computation is presented in Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_23'>{𝑷𝑪 !' } Apply point cloud metric functions {𝑝 !' } {𝑑 !' } Identify inliers {𝑔 !' } Compute 𝜇 and 𝜎 of acceptable cases of 𝑑 !' Compute 𝐽 #$%& deviations {𝜇 ! , 𝜎 ! } {𝒅 ! } Analyze 𝐽 #$%& deviations {ℎ ' , ℎ ( , … , ℎ (' } (!) {𝜀 ! }</ns0:formula></ns0:div>
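A possible NumPy realisation of the two point cloud metric functions J_MDIR and J_PDRMS defined above is sketched below, assuming the triangulated corners are ordered row by row for a 9 × 7 pattern; the function names and the row-length argument are illustrative assumptions rather than part of the original code.

```python
import numpy as np

def j_mdir(points, n_cols=9):
    """Mean distance between neighboring points in rows of the chessboard grid.

    points: (n_points, 3) array of triangulated corners, ordered row by row.
    """
    grid = points.reshape(-1, n_cols, 3)
    steps = np.linalg.norm(np.diff(grid, axis=1), axis=2)
    return float(steps.mean())

def j_pdrms(points):
    """RMS distance of the points to their least-squares plane (plane fitted via SVD)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]             # direction of smallest variance = plane normal
    dist = centered @ normal    # signed point-to-plane distances
    return float(np.sqrt(np.mean(dist ** 2)))
```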
<ns0:div><ns0:head>EXPERIMENTAL VALIDATION Experimental setup and data collection</ns0:head><ns0:p>A stereo rig of two identical Prosilica GC1350 cameras with TAMRON 25-HB/12 lenses is used to collect pairs of images with a chessboard calibration pattern. Allied Vision Vimba is used as an image acquisition driver, orchestrated by FxIS <ns0:ref type='bibr' target='#b9'>(Semeniuta and Falkman, 2018)</ns0:ref> to achieve synchronization across the two cameras.</ns0:p><ns0:p>The data collection component continuously receives streams of images from the two cameras. At certain moments, a pair of images is obtained whose acquisition timestamps most closely correspond. They further undergo chessboard corner detection. If the corners are successfully identified in both images, the latter are saved to disk.</ns0:p><ns0:p>In total, n = 264 image pairs are collected. Their dimensions, along with the details on the chessboard calibration object, are presented in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Dimensions of the images and the calibration pattern</ns0:p></ns0:div>
<ns0:div><ns0:head>Parameter / Value</ns0:head><ns0:p>Image dimensions: 1360 × 1024 pixels; Pattern size: 9 × 7; Square size: 20 mm</ns0:p></ns0:div>
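Using the dimensions from Table 1, corner detection for a single image can be sketched with OpenCV as follows; the sub-pixel refinement parameters are common defaults and not necessarily those used in the experiments.

```python
import numpy as np
import cv2

PATTERN_SIZE = (9, 7)   # inner corners per row and column (Table 1)
SQUARE_SIZE = 20.0      # chessboard square size in millimeters (Table 1)

# Nominal corner coordinates in the calibration object frame (z = 0 plane)
objp = np.zeros((PATTERN_SIZE[0] * PATTERN_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN_SIZE[0], 0:PATTERN_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

def find_corners(image_gray):
    """Detect and refine chessboard corners; returns None if detection fails."""
    found, corners = cv2.findChessboardCorners(image_gray, PATTERN_SIZE)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    return cv2.cornerSubPix(image_gray, corners, (11, 11), (-1, -1), criteria)
```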
<ns0:div><ns0:head>Experimental results</ns0:head><ns0:p>Given the collected set of n = 264 image pairs and the method described above, m = 200 calibration runs were invoked with the size of the randomly selected input subset varying between s min = 15 and s max = 30 image pairs. The resulting calibration parameters were used for triangulation using all images, and the results were collected in a tensor of dimension 200 × 264 × 63 × 3. The dimensions of the latter two components correspond to the number of chessboard points in R 3 (n points = 9 × 7 = 63).</ns0:p><ns0:p>Given the triangulation results, outliers were excluded. An outlier is defined as a calibration run/image pair combination for which either of the two following conditions holds true: |J M DIR − d| ≥ 5 mm (where d is the nominal square size dimension), J P DRM S ≥ 10 mm (see the definition in eq. ( <ns0:ref type='formula' target='#formula_17'>13</ns0:ref>)).</ns0:p><ns0:p>The result of marking inliers and outliers is shown as a binary mask in Figure <ns0:ref type='figure'>7</ns0:ref>. One may observe that certain calibration runs perform consistently well on most of the image pairs. Similarly, some image pairs result in better triangulation results than others. Based on the masked combinations of calibration runs and image pairs (as shown in Figure <ns0:ref type='figure'>7</ns0:ref>) and the values of J M DIR and J P DRM S , a set of metrics is computed for each calibration run: mean and standard deviation of the J M DIR values, µ i , σ i , number of acceptable image pairs per calibration run, a i , values of the histogram bins based on the absolute differences of J M DIR with the nominal value, {h 0 , h 1 , ..., h 10 } (i) , number of image pairs used for stereo calibration, s i , and the mean value of J P DRM S , p i . Further in this section, the index i is omitted when referring to the metrics.</ns0:p><ns0:p>The calibration runs are further ranked first by the values of h 0 (in decreasing order) and then by the values of ǫ (in increasing order). The values of all metrics for the 10 best calibration runs are shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. The first row of the table corresponds to the best-performing calibration run (rank 1).</ns0:p><ns0:p>To better motivate the rankings of the calibration runs as shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>, it is worth investigating the values of the histogram bins. The first histogram bin h 0 describes the number of image pairs in which the mean value of e i is closest to zero. The last histogram bin h 10 describes the number of image pairs in which the mean value of e i is beyond the specified top threshold ∆. In the presented analysis, ∆ = 1 mm, which corresponds to the range of [0, 0.1) mm for h 0 and [1 mm, ∞) for h 10 . Figure <ns0:ref type='figure'>10</ns0:ref> shows the histograms for the calibration runs of rank 1 (the best), rank 25, rank 50, rank 75, rank 100, and rank 125. It is clear that the selected ranking sorts the alternative calibration runs from the best performing to the worst performing one.</ns0:p></ns0:div>
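The ranking rule described above (sort by h0 in decreasing order, then by ε in increasing order) can be expressed compactly with NumPy, as in the following sketch; the example metric values are made up for illustration only.

```python
import numpy as np

def rank_calibration_runs(h0, eps):
    """Rank calibration runs: first by h0 (descending), then by epsilon (ascending).

    h0:  (m,) array with the number of image pairs in the first histogram bin per run.
    eps: (m,) array with the maximal absolute error from the nominal value per run.
    Returns run indices ordered from rank 1 (best) to rank m (worst).
    """
    # np.lexsort sorts by the last key first, so epsilon acts as the tie-breaker
    return np.lexsort((eps, -h0))

# Example with assumed metric values for five calibration runs
h0 = np.array([158, 142, 142, 120, 60])
eps = np.array([4.31, 4.49, 4.21, 4.14, 7.90])
print(rank_calibration_runs(h0, eps))  # [0 2 1 3 4]
```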
<ns0:div><ns0:head>Accuracy assessment</ns0:head><ns0:p>To reason about the accuracy of the proposed method, the following approach is applied. A single image pair is chosen as the benchmark. A suitable candidate for this is the image pair that results in acceptable triangulation results based on most calibration runs. Further, parameters from the first n_first ranked calibration runs are used to triangulate the points from the chosen image pair. Distances between neighboring points in rows (J M DIR ) are measured and collected in an array. Let this array be denoted as a.</ns0:p><ns0:p>For each calibration run, the deviation of the mean of the array a from the nominal value (d = 20 mm) is computed, along with the standard deviation of the values of a. Both of these metrics should ideally be close to zero. When they are plotted together (in Figure <ns0:ref type='figure' target='#fig_11'>14</ns0:ref>) </ns0:p></ns0:div>
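The two accuracy indicators used here can be computed as sketched below, assuming the array a of row-wise corner distances for the benchmark image pair has already been collected; the numbers in the usage example are illustrative.

```python
import numpy as np

NOMINAL_SQUARE = 20.0  # nominal chessboard square size, mm

def accuracy_metrics(row_distances):
    """Deviation of the mean from the nominal square size and spread of the distances.

    row_distances: array 'a' of distances between neighboring corners in rows,
    measured on the benchmark image pair with one set of calibration parameters.
    """
    a = np.asarray(row_distances, dtype=float)
    mean_deviation = abs(a.mean() - NOMINAL_SQUARE)
    spread = a.std(ddof=1)
    return mean_deviation, spread

# Assumed example: distances triangulated with the rank-1 calibration parameters
a = np.array([19.93, 20.05, 20.11, 19.97, 20.02])
print(accuracy_metrics(a))
```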
<ns0:div><ns0:head>Principal component analysis</ns0:head><ns0:p>To take into account all the metrics per calibration run, principal component analysis is performed. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>It takes the dataset as shown in</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to the degenerate configurations) is associated with uniformly low values of the h 0 /h 10 ratio, while class 1 (configurations with acceptable calibration results) varies in the range of the defined ratio, with some of them clearly better than the others. </ns0:p></ns0:div>
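A possible way to reproduce the PCA and k-means step on the per-run metric vectors is sketched below using scikit-learn, which is assumed to be available; standardising the metrics before PCA is an assumption of this sketch rather than a detail stated in the text.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pca_and_cluster(metric_vectors, n_components=2, n_clusters=2, seed=0):
    """Project per-run metric vectors onto two principal components and cluster them.

    metric_vectors: (m, k) array with one row of metrics per calibration run
    (e.g. mu, sigma, a, epsilon, h0..h10, s, p as in Table 2).
    """
    X = StandardScaler().fit_transform(metric_vectors)
    X2 = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X2)
    return X2, labels
```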
<ns0:div><ns0:head>CONCLUSION AND FURTHER WORK</ns0:head><ns0:p>This paper has presented an approach for calibration of a stereo vision setup that utilizes the standard technique for plane-based calibration, along with its OpenCV-based implementation, but with a novel technique for systematic selection of the optimal subset of image pairs leading to the best triangulation performance. The proposed approach is based on two computation stages, the first being the triangulation of points in the image pairs, and the second being the evaluation of the point cloud metrics for each calibration run by preparation and analysis of metric vectors. The merit of the proposed method is that it goes beyond the traditional performance characteristic of reprojection error, and instead evaluates the performance of stereo triangulation given the estimated intrinsic, extrinsic and stereo rig parameters. This allows for devising metrics that directly evaluate the intended application of a stereo vision system. At the same time, the approach is self-contained in the sense that no additional data collection is made, and the existing pool of image pairs is used as a basis of performance evaluation. This aspect also allows for performing the calibration process in a highly automated manner, while preserving the transparency of the underlying processing steps and the associated computation results. The proposed method is validated on a dataset comprised of image pairs of a chessboard calibration pattern gathered with two identical Prosilica GC1350 cameras. The computed metrics and the associated ranking of calibration runs allowed selection of the best-performing result of stereo calibration. Furthermore, because the latter stage of the analysis pipeline results in a dataset similar to those dealt with in unsupervised machine learning, namely a set of feature vectors per calibration run (see Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>), principal component analysis with subsequent k-means clustering was performed to further highlight the nature of the distribution of the stereo reconstruction performance.</ns0:p><ns0:p>For further work, it is beneficial to evaluate the proposed method with a concrete stereo vision application use case. In addition to a broader validation of the method, this will allow for uncovering additional application-specific metrics that may be useful in achieving more optimal system calibration.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>homography matrix H i is computed based on two sets of points: (1) real-world coordinates of the target points in the world coordinate frame, and (2) their projected pixel values determined with the appropriate feature detection technique. A system of homogeneous equations Ax = 0 is formed from the homography matrices {H i } and the constraints imposed by the orthonormality of the rotation vectors, and solved in closed form to determine the initial values for the the parameters embodied in the camera matrix K. e la er is derived from b, the extrinsic parameters for each view c T wi are computed having H i and K.Given the initial values of K and the set of poses for each view { c T wi }, the values of all intrinsic parameters p = (f x , f y , c x , c y , k 1 , k 2 , k 3 , p 1 , p 2 ) T are refined by non-linear estimation minimizing the reprojection error. e reprojection error is defined as follows. Let m be the number of features used for calibration. Set W = {x i ∈ P 3 , i = 1, ...m} contains known coordinates of the points in the coordinate frame of the calibration object. Set S = { S x i ∈ R 2 , i = 1, ...m} contains the corresponding coordinates of image features. S x i , has 2 degrees of freedom, which are row and column coordinate of a point in the image:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Epipolar geometry A similar relation holds for points expressed in normalized coordinates x1 and x2 , i.e. assuming camera matrices K 1 and K 2 are unit matrices:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Computational graph for the process of stereo calibration</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Given a set of all image pairs, randomly select a series of m integers S = {s 1 , ..., s m }, within the predefined range [s min , s max ]. The value of s i corresponds to the size of the image subset used for calibration. For each i ∈ {1, ..., m}, randomly select s i image pairs and perform stereo calibration (here it is presumed that all image pairs have been validated to retain only those with successfully identified corners in both images, totaling n pairs). Store the result of each calibration run i. The general idea of the proposed method is to utilize the set of all image pairs that capture the calibration object in different orientations as a basis for assessment of stereo triangulation quality. Let i ∈ {1, ..., m} denote the index of a calibration set, and let j ∈ {1, ..., n} denote the index of an image pair. There exist mn such combinations of calibration results and image pairs.</ns0:figDesc></ns0:figure>
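The random subset selection described in this procedure can be sketched as follows; calibrate_stereo_rig is a hypothetical helper (not defined in the paper) standing in for the per-subset camera and stereo calibration calls.

```python
import numpy as np

def select_calibration_subsets(n_pairs, m_runs, s_min=15, s_max=30, seed=0):
    """Randomly choose, for each calibration run, a subset of image pair indices.

    n_pairs: total number of validated image pairs (n).
    m_runs:  number of calibration runs (m).
    Returns a list of index arrays, one per run.
    """
    rng = np.random.default_rng(seed)
    sizes = rng.integers(s_min, s_max + 1, size=m_runs)
    return [rng.choice(n_pairs, size=int(s), replace=False) for s in sizes]

# Usage sketch (calibrate_stereo_rig is an assumed, user-provided function):
# subsets = select_calibration_subsets(n_pairs=264, m_runs=200)
# results = [calibrate_stereo_rig(corner_pairs, idx) for idx in subsets]
```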
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Workflow of point clouds' preparation</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>into the first histogram bin (the one closest to zero) of the distribution of errors from the nominal scalar triangulation assessment metric. e histogram is constructed as follows. A vector e of absolute errors is computed given the vector d i :e i = |d i − d| (17) A value ∆ ∈ R + is defined,and the range [0, ∆] is split into 10 bins, with the first bin corresponding to the range of [0, ∆/10). e values in e i are distributed across the defined 10 bins, with those ǫ ij ≥ ∆ falling into the eleventh bin. As such, if for example ∆ = 1 mm, the number of image pairs that fall into the first bin indicate how many of those resulted in the value of 0 ≤ |d ij − d| ≤ 0.1 mm. If this number is bigger for a calibration i, it is indicative of a more optimal triangulation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Metrics computation workflow</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .FrequencyFigure 8 .FrequencyFigure 9 .</ns0:head><ns0:label>789</ns0:label><ns0:figDesc>Figure 7. Calibration run/image pair combinations that are characterized with acceptable values of the scalar triangulation metrics (yellow) e distributions of the two metric functions operating on the triangulated point clouds are shown</ns0:figDesc><ns0:graphic coords='11,259.57,487.48,205.96,155.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .Figure 11 .Figure 12 .Figure 13 .</ns0:head><ns0:label>10111213</ns0:label><ns0:figDesc>Figure 10. Histograms for calibration runs of different ranks</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Accuracy assessment of the first 100 ranked calibration runs e overall trend can be further validated by comparing the two deviation metrics against groups of 5 consecutive ranked calibration runs. It can be observed in Figures 15 and 16 that the deviations on average increase with decreasing the rank of the calibration runs (moving away from rank 1).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 15 .Figure 16 .</ns0:head><ns0:label>1516</ns0:label><ns0:figDesc>Figure 15. Mean value of the deviation of J M DIR from the nominal value of the chessboard square of 20 mm, calculated for the groups of consecutive calibration runs, and the associated linear trend</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. Data transformed into two principal components</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Values of the calibration run metrics for five best calibration runs h 1 h 2 h 3 h 4 h 5 h 6 h 7 h 8 h 9 h s p</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>i h 0 195 20.090 0.951 257 4.310 158 25 µ σ a ǫ</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>24 0.495</ns0:cell></ns0:row><ns0:row><ns0:cell>92</ns0:cell><ns0:cell cols='3'>20.074 0.969 257 4.492 142 19 17 13</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>25 0.501</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>154 20.066 0.948 257 4.206 140 23 22</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>16 0.541</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>193 20.157 0.986 257 4.366 134 22 20 14</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>17 0.578</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>103 20.162 0.981 257 4.671 133 47</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>27 0.509</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>163 20.064 0.962 257 4.335 131 24 20</ns0:cell><ns0:cell cols='2'>9 12</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>27 0.492</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>159 20.061 0.934 257 4.061 128 36 20</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>23 0.501</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>122 20.075 0.964 257 4.369 124 45 14</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>15 0.516</ns0:cell></ns0:row><ns0:row><ns0:cell>51</ns0:cell><ns0:cell cols='4'>20.044 0.926 257 4.143 120 32 25 10 13</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>17 0.504</ns0:cell></ns0:row><ns0:row><ns0:cell>33</ns0:cell><ns0:cell cols='5'>20.046 0.976 258 4.999 112 34 16 17 13 10</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>17 0.540</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>h0 h1 h2 h3 h4 h5 h6 h7 h8 h9 h10</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Dr Boongoen and the reviewers,
First of all, I would like to express my gratitude for your efforts reviewing my paper, “Subsetbased stereo calibration method optimizing triangulation accuracy”. Your comments and
suggestions have helped me to improve the quality of the paper and make my idea clearer
to a reader.
In this letter, I would like to provide my responses to your comments. I have highlighted the
original comments with yellow background, with my responses provided on the white
background.
The main points of improvement in the revised manuscripts are the following:
•
•
•
•
•
•
Clearer motivation of the proposed method
Clearer explanation of how the proposed method relates to the traditional planar
calibration and why the proposed approach is a viable solution for automating
planar stereo calibration
Analysis of the geometric aspects of the experiment, including the coverage of the
image planes by points
More detailed explanation of accuracy assessment
Fixed and clarified mathematical expressions
Improved references to the figures
Best regards,
Oleksandr Semeniuta
Editor
The authors must revise this manuscript carefully based on all the comments provided by
three reviewers. Then, it can be reviewed again for possible publication.
All the reviewer‘s comments are addressed in the paper and in the responses below.
Reviewer 1
The paper has been well organized. All literature, figures, and tables support the article.
Only a typo as inline 333 (in fig Figure12...) is found.
Thank you for the positive feedback. The typo has been fixed.
The research has been designed with many dimensions. The experiment covered the
hypothesis.
Thank you for the positive feedback.
This paper proposed the method to perform the optimized calibration of the stereo vision
system.
Thank you for the positive feedback.
Reviewer 2
This article is very interested that try to propose the optimization for stereo-camera imaging
calibration method along with the OpenCV implementation. The explanation is clear
through the article. However, in the literature review, the stereo imaging setups are widely
used in 3D reconstruction. Is there any application else that required the stereo imaging
setups? If any, please explain to give more reason why we need to develop the new stereo
calibration optimization method.
The motivation for the method, along with its relation to the state-of-the-art method have
been highlighted more explicitly in the Introduction. In a nutshell, the proposed technique
does not aim to replace the well-known planar calibration method, but to conduct the latter
in an automated manner according to the set of the proposed rules.
This article gives a well-defined research question that try to optimize the setup of the
stereo imaging calibration. The experimental design and arrangement is well defined.
Thank you for the positive feedback.
The suggest is in figure 4 to give this image gain more understanding, please include the 3
steps: 1) The total set of image pairs, 2) identify corners, and 3) store indices of image pairs
that mentioned in line 212-214 along with computational graph for the process of image
points preparation.
Thank you for the suggestion. The proposed reference to Figure 4 has been integrated in the
text.
In the part of 'Principle component analysis'. Why this article apply only one clustering
algorithm that is K-Means? Can we use another clustering algorithm?
For the analysis presented in this paper, clustering is used as a tool to automatically label
two classes of points that are very clearly distinct. As such, there is no reason to compare
different clustering methods.
For the experimental result. There is only the result of the proposed method. Is possible to
show the result of the comparison of traditional method v.s. the proposed method of stereo
calibration optimization.
Because the proposed approach is not a totally new calibration method, but rather an
automated realization of the calibration procedure using the traditional method, it is hard if
not impossible to devise a method comparing the two. It is worth noting that, as mentioned
in the Introduction, for the traditional method to be more accurate, a number of heuristic
rules have to be applied while collecting the calibration images. However, no one has
attempted to describe an automated method aimed at optimizing the calibration results by
based on the input set of images or image pairs. As such, the presented paper has a
potential to lay the ground for such methods by providing one possible implementation.
In any case, the subsection “Accuracy assessment” presents the efficacy of the proposed
method with respect to accuracy of stereo triangulation. This section has been improved in
the revised version of the paper to be clearer and more readable.
The figure 1, figure 2, figure 13, and figure 14 are not mentioned in the contexts, please
verify.
Thank you very much for pointing this out. All the respective references have been added to
the manuscript.
Reviewer 3
The author presents an approach to select the best subset of images to calibrate a stereo rig
using the well-known library OpenCV for image processing. The author has implemented a
routine to try different sets of calibration images and evaluated them in terms of several
metrics. By using these metrics the author selects the best subset of images and final
calibration.
The article is well written and structured and the literature review is concise and well linked
to the publication.
Thank you for the positive feedback.
The presented figures are of exceptional quality and help the reader understand the
contents fully.
Thank you for the positive feedback.
The author validates the approach with a single dataset of 264 stereo images (I expect them
to be time-synchronised) and presents several calibration runs together with some metric
values in particular cases of the runs.
You are right. The image pairs are time-synchronized. This is highlighted in the beginning of
the “Experimental setup and data collection” section.
Please note line 292 Jmdir is said to be an outlier if Jmdir > 25 mm, but figure 8 also suggests
an outlier is any run with Jmdir < 15 mm
Thank you very much for pointing this up. The formulation in the text has been fixed.
Looking at Table 2 results, given that the standard deviation of the plane is approx. 0.95
there is no statistical difference between any of the runs. I would suggest the author to
validate the results using more precise methods.
The standard deviation is computed only for the cases of image pairs that produced
acceptable triangulation results. The first rows shown on the table show the same high
number of such acceptable cases (a = 257), so they are certainly comparable. However, not
all calibration results in the group without degenerate configurations result in the same
value of a. As such, the proposed method aims at optimizing accuracy across many different
image pairs.
In my opinion, there is a lack of information in terms of the geometry used in the
experimental design. What is the expected calibration resolution? What is the intended
working distance? At which distance was the calibration pattern captured in the images?
This is a valid question. Overall, the proposed method is aimed at multitude of different
applications, and the respective geometries will largely be depended on the lenses used on
the stereo setup. In any case, for the studied dataset, a histogram is generated (see Figure
13) that shows how the z-translation of the calibration object with respect to the left
camera is distributed. One can see that the operating range of the studied stereo rig is
between 0.5 and 1 meters.
There is also another important detail: how does your method ensure that all the image are
is covered at some point by a calibration point and avoid skewed or biased distortion
models?
Are 'good candidates' the points at the centre of the image (less distorted) or are they
normally distributed across the image?
This is a very good idea to analyze the coverage of the image space by calibration points. It
is not directly applied in the presented method, but can be a great addition to an improved
version of the method. I have added a new subsection to the paper (“Experiment
assessment”), which includes the analysis of the points’ coverage of both the left and the
right image plane. Figure 12 includes visualization of the coverage, along with showing how
the points of the highest-ranked calibration run are distributed.
Figure 12 mentions a deviation from the nominal. What is the nominal value here? Has the
standard deviation also been deviated from the nominal? That is not stated.
Thank you for pointing this up. The details of the calculation are clarified.
I personally do not see the value of the PCA data transformation. Could the author validate
that with the reprojection error at some point?
In this paper, PCA in used as an additional visualization tool rather than allows to look the
measured metrics in two-dimensional space. The proposed idea to couple PCA with
visualization of reprojection error is good. However, as mentioned in the paper,
reprojection error is measures with respect to only one camera, and the proposed method
tries to assess triangulation rather than reprojection (the latter is already optimized during
calibration of each camera in the individual runs).
I had a look at the images and they do not look sharp. I would encourage the author to use a
harder support material for the checkerboard pattern. Cardboard can easily bend and
provide a > 1 mm deviation inaccuracy.
The presented results are not statistically sound and controlled. I believe further work needs
to be done from data gathering to better analysing it.
This is a very good point, and I totally agree that the calibration object should be as flat as
possible with the best possible printing quality. At the same time, as Figure 8 shows, even
with the present quality of the calibration object and the captured images, the triangulation
accuracy is rather high in most cases. At the present time, unfortunately, there is a limited
access to the laboratory due to the COVID-19 situation, so re-running the experiment with a
better calibration object is challenging.
" | Here is a paper. Please give your review comments after reading it. |
65 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Mobile devices now rival desktop computers as the most popular devices for web surfing and E-commerce. As the screen sizes of mobile devices continue to grow, operating smartphones with a single hand becomes increasingly difficult. Automatic detection of the operating hand would enable E-commerce applications to adapt their interfaces to better suit the interaction requirements of their users' handedness. This paper addresses the problem of identifying the operating hand while avoiding the use of mobile sensors, which may pose problems in terms of battery consumption or distortion due to different calibrations, and improves the accuracy of user categorization through an evaluation of different classification strategies. A supervised classifier based on machine learning was constructed to label the operating hand as left or right. The classifier uses features extracted from touch traces, such as scrolls and button clicks, gathered from a data set of 174 users.</ns0:p><ns0:p>The approach proposed by this paper is not platform-specific and does not rely on access to gyroscopes or accelerometers, widening its applicability to any device with a touchscreen.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>This paper proposes a classification device to determine the user's operating hand (handedness) in order to help web developers to customize user interfaces dynamically, thus improving the usability of their designs.</ns0:p><ns0:p>The proposed model, founded upon the use of machine learning algorithms, is based exclusively on data gathered by agents embedded inside E-commerce web applications, observing users while they spontaneously interact with their browser, as they normally would do in their own computational context. This allows the detection of the user's handedness after very few interaction actions with the system (such as scrolls or point-and-clicks). Once the users are classified as left-handed or right-handed, the user interface can be dynamically adapted to the specific interaction requirements of the users' handedness.</ns0:p><ns0:p>This topic is important because, as of 2019, there were about 5.112 billion unique mobile users in the world out of a total population of 7.676 billion <ns0:ref type='bibr'>(Social and Hootsuite, 2019)</ns0:ref>. The penetration rate of mobile technology in 2018 was 68%, and the number of smartphone users increased by 100 million in the period from January 2018 to January 2019 <ns0:ref type='bibr'>(Social and Hootsuite, 2019)</ns0:ref>.</ns0:p><ns0:p>However, this high penetration rate of mobile technologies is not usually supported by high levels of ease of use and/or accessibility. Mobile app usability-related problems are the fifth most frequent user complaint <ns0:ref type='bibr' target='#b16'>(Khalid et al., 2015)</ns0:ref>, while about 52% of mobile users experienced usability problems relevant enough to impact their loyalty and/or trust in E-commerce sites <ns0:ref type='bibr' target='#b3'>(Dynamics, 2015)</ns0:ref>. These usability problems not only damage the company's brand but also increase the chances of customers looking for similar services from competitors.</ns0:p><ns0:p>Usability and accessibility on mobile computing are usually affected by context-specific issues. Among other requirements, usability engineering must deal with a large variety of screen sizes and device shapes.</ns0:p><ns0:p>The display layout created to hold the application's interactive objects (buttons, menu items, text, etc.) must take into consideration the hand posture required to operate the device, which is heavily influenced by the so-called user's handedness <ns0:ref type='bibr' target='#b0'>(Azenkot and Zhai, 2012)</ns0:ref>.</ns0:p><ns0:p>The progressive increase in the display size of such devices has introduced novel usability problems when users try to operate them with only one hand <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Löchtefeld et al., 2015)</ns0:ref>. These usability issues prove to be even more relevant if we consider that 49% of users like to use their mobile devices with their thumb <ns0:ref type='bibr' target='#b9'>(Hoober, 2013)</ns0:ref>. Operating a mobile device with only one hand may be difficult for some users. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the difficulties experienced by users of large displays when they try to access certain areas with their thumb.</ns0:p><ns0:p>The access to the top of the display and/or to the side opposite to the operating hand are some examples of difficult or annoying interactions.
Usability problems arise when the application screen layout forces interaction on those areas since users are implicitly requested to modify their posture in an uncomfortable way. Even worse, this may also represent an accessibility problem for motor-disabled users. People with a limited range of movement in their hand or fingers may find hard (or even impossible) to reach some areas of the display, preventing them to use specific features of their mobile applications. This paper is focused on how to classify users implicitly and dynamically according to their operational hand. Some applications provide enhanced user experience (UX) through customization. Customized interactive dialogues are created on demand to satisfy the interaction requirements of users running applications in a specific interaction context <ns0:ref type='bibr' target='#b7'>(Gonzalez-Rodriguez et al., 2009)</ns0:ref>. The changes and adaptations on the layout of the user interface are implicitly available to the user only when the application is able to infer or detect the interaction context. However, most applications are not explicitly aware of changes in the context of interaction, so these must be explicitly reported by the user to manually activate the customization process <ns0:ref type='bibr' target='#b19'>(Löchtefeld et al., 2015)</ns0:ref>. The explicit selection of the interaction context (eg. changing from portrait to landscape display modes in a mobile device) may result in an annoying process to certain users.</ns0:p><ns0:p>The customization of the user interface enhances the user experience when it is adapted to the specific interaction requirements of the user's operational hand <ns0:ref type='bibr' target='#b11'>(Hussain et al., 2018)</ns0:ref>. The detection o user features can be done implicitly and/or dynamically. Implicitly means that users do not need to activate any option or preference in the interface in order to select their operational hand. They may not even know that such an option exists. Dynamically means that the system continuously monitors the interaction context looking for variations that force users to change their operational hand (eg. when users have to carry a heavy item with their operational hand, forcing them to use their mobile devices with the other hand). Such kind of system would help developers to detect the user's operational hand (left-handedness or right-handedness) at any moment, thus they would be able to apply the corresponding customized style to the display layout. To the extent we know, customization of mobile web interfaces to accommodate user handedness has been barely explored. Discussion about how this customization process can be done and its implications in terms of user interaction is beyond the scope of this research.</ns0:p><ns0:p>This paper proposes a model based on the use of machine learning techniques to classify users implicitly and dynamically according to their operational hand, reaching an accuracy of 99.92%. Unlike other approaches, where users are forced to execute specific predefined operations to facilitate the classification, the proposed solution encourages spontaneous interactions without limiting or guiding the kind of actions to be done. 
The ultimate goal is to avoid possible biases caused by non-spontaneous behaviour <ns0:ref type='bibr' target='#b15'>(Kaikkonen et al., 2005)</ns0:ref>.</ns0:p><ns0:p>Unlike other approaches, the proposal does not require reading data from the internal sensors of the mobile phone (accelerometers, gyroscopes, etc.), so there is no additional battery consumption. Nor does it require the installation of platform-specific software <ns0:ref type='bibr'>(Android, iOS, Windows Mobile, etc.)</ns0:ref>. Thus, it can be used in any touchscreen-based mobile device, provided that it is able to run a web browser. This approach also avoids the bias and reading noise specific to each device, as the performance of gyroscopes and accelerometers varies significantly between different devices <ns0:ref type='bibr' target='#b18'>(Kos et al., 2016)</ns0:ref>.</ns0:p><ns0:p>The paper is organized as follows. Section 1 describes the so-called 'thumb zones' and their relevance for human-computer interaction studies. Section 2 presents the research background with a review of the related studies on interface adaptation and algorithms designed to detect the user's operational hand.</ns0:p><ns0:p>Section 3 describes the experimental design and the data gathering and depicts the sample distribution, while Section 4 describes the analyzed variables. Section 5 presents the statistical methods and machine learning strategies adopted to select the best-performing algorithm. Section 6 evaluates the candidate algorithms and discusses their performance and the results obtained, Section 7 points out the limitations of the research, and Section 8 presents the conclusions and future work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>THUMB-BASED INTERACTION PATTERNS</ns0:head><ns0:p>Human Computer Interaction defines the so-called 'Thumb Zones' as areas of mobile displays that have the same easiness of access for the thumb <ns0:ref type='bibr' target='#b10'>(Hoober and Berkman, 2011)</ns0:ref>. They are defined for both the left and right thumbs and are applicable independently of the user's laterality or handedness.</ns0:p><ns0:p>In accordance with the easiness of access, the display is divided into three areas (see Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). The easiest area to access for one-handed operation, and therefore the most comfortable area, is known as the 'natural' area. It is the closest area to the user's thumb. The second area, the so-called 'extent' area, entails some difficulties for the user to access it, but it is still usable. The last area, the so-called 'hard' area, requires the user to modify the wrist position to enlarge the thumb operational swipe area. As a result, the access to elements located in that area is uncomfortable for most of the users and even painful or inaccessible for some others, specially for those users with motor disabilities and/or who interact with large displays <ns0:ref type='bibr' target='#b26'>(Scott Hurff, 2014)</ns0:ref>.</ns0:p><ns0:p>An example on how thumb zones affect usability is shown in Fig 3 <ns0:ref type='figure'>.</ns0:ref> Overlays of these areas have been applied to the user interface of Facebook to show the degree of easiness for reaching relevant interactive objects. In the example, relevant menu options like 'Home' or 'Events' are hard to reach by left-handed users but they are easily accessible to right-handed users, even though these elements are not located in the right-handed users' comfort (natural) zone.</ns0:p><ns0:p>Since the thumb zones for left and right-handed users are different, the difficulty level required to access the same elements (buttons and menu items) will be different, thus their user experience and satisfaction will be different too.</ns0:p></ns0:div>
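As a rough illustration of how these zones could be operationalised in software, the sketch below maps a normalised touch position to one of the three zones for a given handedness. The pivot point and the two radii are assumptions chosen only for illustration; they are not measurements from this study.

```python
# Illustrative sketch: approximate thumb-zone labelling for a normalised touch position.
# The pivot near the bottom corner and the 0.55 / 0.85 radii are assumed values.
def thumb_zone(x: float, y: float, right_handed: bool = True) -> str:
    """x, y in [0, 1], with the origin at the top-left corner of the screen."""
    if not right_handed:
        x = 1.0 - x                      # mirror the layout for left-handed users
    dx, dy = x - 1.0, y - 1.0            # offset from an assumed thumb pivot (bottom-right)
    r = (dx * dx + dy * dy) ** 0.5
    if r < 0.55:
        return "natural"                 # comfortable reach
    if r < 0.85:
        return "extent"                  # reachable, but needs stretching
    return "hard"                        # requires changing the grip

print(thumb_zone(0.8, 0.9))              # 'natural' for a right-handed user
print(thumb_zone(0.1, 0.1))              # 'hard' (top corner opposite the thumb)
```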
<ns0:div><ns0:head /><ns0:p>Scrolling operations are another challenge to ease of use related to handedness-based interaction on touchscreen displays. Most users start their scrolling actions by placing the thumb near the center of the screen, swiping to create an arc-shaped trajectory that points to the side where the hand is holding the device (to the left in the case of left-handed users, or to the right for right-handed users) <ns0:ref type='bibr' target='#b6'>(Goel et al., 2012</ns0:ref>). In the cited example, the location of the 'like' button makes it easy to unwillingly click on it while scrolling through the display. Since clicking on this kind of button is a single-step action that does not require an extra confirmation dialogue, the erroneous action may be executed implicitly, which can be annoying to some users and diminishes the user experience. In an analogous way, right-handed users may be prone to unintentionally click on the 'share' button. Fig <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> shows a possible adaptation of the user interface, designed to comply with the interaction requirements of both left-handed and right-handed users.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>PRIOR RESEARCH</ns0:head><ns0:p>Adaptation of user interfaces to the user handedness on mobile devices is mostly focused on improving the performance in touchscreen operations with one hand only. The process explores different locations and size to locate interactive objects. Studies on user handedness in Human Computer Interaction have been mostly focused on human performance. <ns0:ref type='bibr' target='#b17'>Khan and Rizvi (Khan and Rizvi, 2009)</ns0:ref> studied the influence of handedness on data entry performance in typing tasks, while Shen et al. <ns0:ref type='bibr' target='#b28'>(Shen et al., 2017)</ns0:ref> measured the performance differences in keystroke tasks to attempt handedness recognition in computer forensics analysis. <ns0:ref type='bibr' target='#b23'>Parhi et al. (Parhi et al., 2006)</ns0:ref> determined the optimal target size area when using a mobile application with thumb only. Along the tests, 20 right-handed volunteers were asked to tap the screen on different locations using their right-hand thumb only. The study took into account the limited range of movement of the thumb on the screen and the different locations it can reach on small touchscreen devices. Researchers concluded that a 9.2 mm size is large enough to reach single targets on mobile device apps. <ns0:ref type='bibr' target='#b8'>Guo et al. (Guo et al., 2016)</ns0:ref> explored the right matching between object location and user handedness, asking volunteers to tap on 'like' and 'dislike' buttons that were located on either side of display. For a right-handed user, the 'like' button was placed on the right side of the screen, making it easily accessible.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. The thumb-zones described in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> are overlaid over the user interface of the Facebook app for Android. Relevant interaction objects like the 'share' or 'home' buttons are hard-to-reach for left-handed users. On the other hand, elements like the 'photos' or the 'like' buttons are hard-to-reach for right-handed users. Source: own elaboration.</ns0:p><ns0:p>The 'dislike' button was located further to the left of the screen, making it more difficult to access.</ns0:p><ns0:p>Researchers recognized that there are several handedness dependant elements in the user interface of mobile devices that must be configured in a way different than in their desktop counterparts.</ns0:p><ns0:p>A similar study by Perry and Hourcade <ns0:ref type='bibr' target='#b24'>(Perry and Hourcade, 2008)</ns0:ref> quantifies the performance change produced when the users deal with interfaces for mobile devices not designed to match the interaction requirements of their handedness. In the study, half of the volunteers were asked to execute several tasks using their non-preferred hand (left or right). As expected, the study showed that users operating the device with their non-preferred hand were slower and less accurate. This effect has more impact when the target is located on the side opposite to the operating hand.</ns0:p><ns0:p>These studies also showed that the bigger the target, the easier, more accurate and quicker it was for the users to reach it. Perry and Hourcade <ns0:ref type='bibr' target='#b24'>(Perry and Hourcade, 2008)</ns0:ref> reported that this phenomenon happened even whenever the non-preferred hand was used. The effect is supported by Fitts's Law <ns0:ref type='bibr' target='#b4'>(Fitts, 1954)</ns0:ref>, which states that the time required to point to a target by a hand or a finger can be estimated as a logarithmic function of the distance and the size of the target. The bigger the target is or the closer it is to the thumb, the faster it will be to reach it. This reasoning can also be applied to the accuracy required to reach the target. Fitts's Law shows the need to decrease the distance between the thumb and the interactive targets in order to achieve a comfortable one-handed interaction experience.</ns0:p><ns0:p>If handedness can be properly determined, it would be possible to mitigate the associated usability and accessibility issues through proper user interface customization strategies.</ns0:p><ns0:p>Determination of the operating hand and posture has been studied by several researchers who proposed different algorithms. These algorithms are mostly based on the analysis of the areas where the users tap on the screen. Separate studies by <ns0:ref type='bibr' target='#b6'>Goel et al. (Goel et al., 2012)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Löchtefeld et al. (Löchtefeld et al., 2015)</ns0:ref> combine the detection of changes in the size of the touching area with the screen location where touching is produced to infer the operating hand. The algorithms assume that a more frequent contact is done on the screen side further away from the thumb (the easier to reach zone).</ns0:p><ns0:p>Using a similar approach, Seipp and Devlin <ns0:ref type='bibr' target='#b27'>(Seipp and Devlin, 2015)</ns0:ref> determined that the size of the</ns0:p></ns0:div>
<ns0:div><ns0:head /><ns0:p>touch zone depends on the finger used to operate the device. This area is much larger when using the thumb than when using the index finger. They concluded that a horizontal touch offset over the center of a button was a strong indicator of the operating hand. Goel et al. <ns0:ref type='bibr' target='#b6'>(Goel et al., 2012</ns0:ref>) also included the study of touch trace analysis as a relevant factor to detect the operating finger. Their heuristic-based prototype assumes that thumb-based traces consistently create an arc, in contrast to the index finger, where this consistency is not found. The algorithm analyses the x-displacement of the recorded traces, biasing towards a thumb-based interaction whenever the measure is greater than 5% of the screen resolution.</ns0:p></ns0:div>
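To make the kind of heuristic described above concrete, the sketch below applies the 5%-of-screen-width threshold to the horizontal displacement of a recorded trace. It is an illustrative paraphrase, not the implementation used by Goel et al.; the function name and data layout are assumptions.

```python
# Illustrative sketch of an x-displacement heuristic: a trace whose horizontal spread
# exceeds 5% of the screen width is taken as evidence of thumb-based interaction.
def looks_like_thumb(trace_x, screen_width_px, threshold=0.05):
    """trace_x: sequence of x coordinates (in pixels) recorded along one touch trace."""
    x_displacement = max(trace_x) - min(trace_x)
    return x_displacement > threshold * screen_width_px

print(looks_like_thumb([200, 212, 230, 241], screen_width_px=411))  # True (~10% of width)
```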
<ns0:div><ns0:head /><ns0:p>Guo et al. <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref> designed an Android-based prototype that determines the user's operating hand and the hand-changing process, combining touchscreen traces with data provided by the device's accelerometers and gyroscopes. The evaluated trace data include speed, X and Y displacements, curvature, convex orientation and the total trace length, obtaining an accuracy of 95.6%. The study included only 14 volunteers who participated under the supervised conditions of a usability lab. They were asked to swipe the operating finger in each one of four possible directions (left, right, up and down) while their actions were recorded, so spontaneous user interaction behaviour in a free context was ignored.</ns0:p><ns0:p>Another study by <ns0:ref type='bibr' target='#b19'>Löchtefeld et al. (Löchtefeld et al., 2015)</ns0:ref> bases the user's operating hand detection algorithm on a PIN and password phone-unlocking system. Researchers discovered that when right-handed users tried to unlock their phones with their (right) thumb, they showed a tendency to swipe from the center to the right. Left-handed users, however, tended to swipe their thumbs to the left area of the display.</ns0:p><ns0:p>Although the researchers achieved high rates of accuracy in their study, the number of users observed was only 12 and all of them were right-handed. A similar study based on data gathered from a PIN/password phone-unlocking process was designed by <ns0:ref type='bibr' target='#b2'>Buriro et al. (Buriro et al., 2016)</ns0:ref>. They combined data about touching zones with information provided by accelerometers and gyroscopes. They managed to determine the user's operating hand with a high level of accuracy and, at the same time, inferred information about the gender and age of the users. Unfortunately, both algorithms are based on a heavily domain-dependent task (phone unlocking), so it may be hard to extrapolate these results to other domains. Neither can these algorithms be used without installing platform-dependent software on the target mobile devices.</ns0:p><ns0:p>They also force users to execute specific phone-unlocking tasks to update information about the user's operating hand, and those PIN-based phone-unlocking tasks have been largely superseded by fingerprint or face recognition in modern mobile phones.</ns0:p><ns0:p>The research methodology employed in most of these studies is based on a similar approach. Researchers ask volunteers to tap or swipe over specific (restricted) areas of the screen in order to gather relevant data to be used by the algorithms. Information about hot areas is frequently complemented with accelerometer and device orientation readings coming from the gyroscopes installed on the mobile devices. This involves a strong device dependency, since specific hardware (accelerometer and gyroscope sensors) is required. In this regard, a platform-dependent development <ns0:ref type='bibr'>(Android, iOS, etc.)</ns0:ref> is required to access the information provided by the sensors, as this data is crucial to infer the operating hand. The main drawback of this approach is that mobile web-based applications cannot access this information right from the web browser, as they need the explicit user permission. In addition, not all mobile browsers offer this functionality <ns0:ref type='bibr' target='#b21'>(Mozilla, 2019)</ns0:ref>.</ns0:p><ns0:p>Although several of the mentioned studies succeed in determining the user's handedness, obtaining moderate to high levels of accuracy, they were not able to do it implicitly.
That is, they were not able to determine the user's handedness through the (stealth) observation of the users' spontaneous behavior while they browse freely through the web with their mobile devices. In the mentioned studies, a small number of users were asked to execute specific actions (such as swiping their fingers in the horizontal or vertical directions or unlocking their phones using a specific finger) that were not directly related to those required to execute the users' everyday tasks. Therefore, these actions were unfamiliar to the users.</ns0:p><ns0:p>Buttons and other kinds of relevant interactive web elements, like scroll controls, were not included in the tests either. <ns0:ref type='bibr' target='#b15'>Kaikkonen et al. (Kaikkonen et al., 2005)</ns0:ref> showed that, under such a controlled interaction environment, users show a strong bias to adapt their behavior to the one expected by the observers. Thus, their behavior may be different when they do similar tasks in their own natural environment.</ns0:p><ns0:p>All these solutions require users to perform unfamiliar actions that they don't usually execute in their everyday interaction environments. This makes the handedness categorization process explicit rather than implicit, so the predicting algorithms are hard to use at all in real web scenarios if they are intended to detect the handedness of anonymous users browsing the web.</ns0:p><ns0:p>Therefore, the research hypothesis we try to address is whether it is possible to reach a level of accuracy similar to that obtained by the above-mentioned studies but using implicit detection tactics instead. That is, trying to infer handedness through the stealth observation of the spontaneous behavior of users while they freely navigate through the web. All the data required is supposed to be captured by the web browser itself, without requiring access to the mobile device sensors (e.g. accelerometers or gyroscopes).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>DESIGN OF THE STUDY</ns0:head><ns0:p>In order to validate the previously described hypothesis, a study was conducted, composed of the following phases:</ns0:p><ns0:p>1. Work-space design. A generic E-commerce website prototype was developed to be freely and spontaneously explored by the volunteers participating in the study.</ns0:p><ns0:p>2. Data gathering. Software agents were deployed on the prototype to observe, by stealth, the actions performed by the users.</ns0:p><ns0:p>3. Selection of subjects. A volunteer recruiting process was carried out on a target population of E-commerce users, obtaining a probability sample of 174 volunteers.</ns0:p><ns0:p>4. Variable selection. The coordinates of the click and scroll operations, the scroll displacement, as well as the mean slope of the thumb sub-traces gathered during the navigation sessions, were considered as the dependent variables.</ns0:p><ns0:p>5. Statistical methods. As happens in the target population, the sample was highly unbalanced (there were many more right-handed users than left-handed users), so resampling techniques were required. To increase the Information Gain Ratio, feature selection techniques were also applied in order to discard attributes that added noise.</ns0:p><ns0:p>6. Algorithm evaluation. The processed sample was used to train and to evaluate a considerable number of classifiers. A ranking was obtained based on several accuracy markers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Design of the Test</ns0:head><ns0:p>To simulate a real mobile web environment, a web application was developed containing a series of tests which had to be completed using two types of interaction tasks: scrolling and tapping. To avoid the use of external non-standard libraries, the native TouchEvent API for JavaScript was used. This API, shared by the most popular web browsers, allows developers to detect when a user initiates or finishes a touch trace, as well as to gather information during each touch trace. The web application would later be distributed through social networks to gather data from a large user pool.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the flow of the experiment and the type of data recorded in each of the pages. To initiate the test, the users first had to click on a 'start' button. They would then be presented with instructions and the context of the test. To continue, they were asked to press another button. Then, the test began, allowing users to freely navigate through a web document in order to find an object that was located at the bottom of the document. The object represented the classic Call To Action found in modern E-commerce user interfaces. When the users clicked on that object, they were taken to the instructions page for the second test. Again, they were required to press a button to continue the tests. The second test involved finding an uppercase letter within a body of text in a web document. When the users clicked on the letter, they were taken to the last page, where a web form asked users to indicate the operating hand and posture used during the tests. Their age, gender and some other relevant information about the users was also collected in that web form.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Data Gathering</ns0:head><ns0:p>One of the issues found in described studies was the low number of test subjects: 14 <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref>, 32 <ns0:ref type='bibr' target='#b0'>(Azenkot and Zhai, 2012)</ns0:ref>, 12 <ns0:ref type='bibr' target='#b19'>(Löchtefeld et al., 2015)</ns0:ref>, 14 <ns0:ref type='bibr' target='#b27'>(Seipp and</ns0:ref><ns0:ref type='bibr'>Devlin, 2015), 10 (Goel et al., 2012)</ns0:ref>.</ns0:p><ns0:p>A major goal in this study was to gather information from a large number of users to better simulate the data that would be gathered from a real web application with many diverse users.</ns0:p><ns0:p>The data gathering phase consisted of a 3-day period in which the site was made public 1 and shared through social media (Facebook, Twitter and WhatsApp). The goal of this approach was to recruit users who had some experience in browsing the Internet with mobile devices as well as obtaining from them abundant real user data for the study.</ns0:p><ns0:p>Three main categories of data were collected: data related to button clicks, data related to the user's swiping behavior and data provided by the user. The testing web site included a total of seven pages. Four pages had buttons that spanned the width of the screen: the start page, the instructions pages, and the form.</ns0:p><ns0:p>The position where the users clicked on each button was recorded, measuring each click's coordinates.</ns0:p><ns0:p>Additionally, two pages were specifically designed so that mobile users were required to scroll up and down. Both pages required looking for a Call to Action object which was located at the bottom of the page and out of view. This way, users unwittingly generated scroll data while focusing on the search task. This idea is represented in Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>, where 'Call to action' represents the position of the object to be found. Scroll data was recorded as a collection of points, each with X and Y coordinates.</ns0:p><ns0:p>The button and scroll variables recorded were based on previous studies which showed the importance of the curve formed by a finger swipe <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Goel et al., 2012)</ns0:ref> and the X-position of a button click <ns0:ref type='bibr' target='#b27'>(Seipp and Devlin, 2015)</ns0:ref>. Finally, the users were asked to fill a form to provide information about the hand used to perform the experiment, as well as some other information to serve as sample description: gender, age, weekly computer usage (hours/week), and device type. The main goal behind this experimental design was to prevent subjects from being aware of the object of the study. To achieve this, the tests were designed as games, and the users were simply asked to use the same hand and posture throughout the experiment while looking for the items on the screen. Thus, they did not know that their scrolls and clicks were being recorded. Furthermore, the experiment was accessible through a website and open to the public. The raw data captured was directly stored for future analysis.</ns0:p></ns0:div>
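For readers who want to reproduce the pipeline, the sketch below shows one possible way to structure the three categories of recorded data described above; the field names are assumptions and not the exact schema stored by the data-gathering agents.

```python
# Illustrative sketch of a per-user record holding the three data categories described
# above (button clicks, scroll traces and self-reported form data). Field names are assumed.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Click:
    x: float                 # horizontal position relative to the clicked button
    y: float                 # vertical position relative to the clicked button

@dataclass
class Session:
    clicks: List[Click] = field(default_factory=list)
    traces: List[List[Tuple[float, float]]] = field(default_factory=list)  # (x, y) points per trace
    hand: str = ""           # 'left' or 'right', reported in the final form
    age: int = 0
    gender: str = ""
```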
<ns0:div><ns0:head n='3.3'>Analyzed Sample</ns0:head><ns0:p>The experiment yielded data from 174 voluntary users. Out of these, 35 completed the experiment with their left hand, whereas 139 performed it with the right hand, as can be seen in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. This meant that the resulting sample was highly unbalanced, which might have caused the machine learning algorithms to bias towards the majority class, providing low classification power for the minority class. Thus, as will be explained later, it was required to apply a re-sampling technique to balance the classes before training the classifiers <ns0:ref type='bibr' target='#b5'>(García et al., 2007)</ns0:ref>.</ns0:p></ns0:div>
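A quick back-of-the-envelope check shows why the imbalance in Table 1 matters: a trivial classifier that always predicts the majority class would already look deceptively good.

```python
# With 139 right-handed and 35 left-handed users, always predicting 'right' yields
# roughly 80% accuracy while never recognising a left-handed user, hence the need
# to re-balance the classes before training.
right_handed, left_handed = 139, 35
baseline = right_handed / (right_handed + left_handed)
print(f"majority-class baseline accuracy: {baseline:.3f}")   # ~0.799
```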
<ns0:div><ns0:head n='4'>VARIABLES OF THE STUDY</ns0:head><ns0:p>The gathered data for the scrolls was filtered to separate each touch trace into a set of sub-traces. This step was necessary because a single trace could be the result of users swiping up and down multiple times without lifting their finger. Therefore, every time the trace changed direction (upward to downward or downward to upward), a new sub-trace was started. The data gathered for the clicks did not require any such filtering. Finally, the resulting set of features is comprised of:</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Click X and Y Positions</ns0:head><ns0:p>The mean and median were calculated for the set of X coordinates and the set of Y coordinates from each of the user's clicks. Each X and Y coordinate is relative to the button being clicked. In the study by <ns0:ref type='bibr' target='#b8'>Guo et al. (Guo et al., 2016)</ns0:ref>, the only data considered was based on scrolls. In this study, click data was included to provide a more accurate description of the user's interactions with a mobile application.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Scroll X and Y Positions</ns0:head><ns0:p>For each user, the positions of each point recorded, along their scroll sub-traces, were examined. From this data, the means were calculated for the maximum, minimum, initial and overall X-values for each of those sub-traces. Lastly, the standard deviation and median were calculated for the user's set of X and Y-values from all of their sub-traces.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Scroll X and Y Displacements</ns0:head><ns0:p>The displacements from the maximum to the minimum X and Y values along each scroll sub-trace were calculated and then averaged over the set of sub-traces for each user.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Mean Slope</ns0:head><ns0:p>The mean slope for the user's sub-traces was calculated by computing the mean of the slopes from the starting point to the end point of each sub-trace (1). Guo et al. <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016</ns0:ref>) included more curve shape descriptors in their algorithms (Root Mean Squared Error, Maximum Curvature, Average Curvature and Curve Convex Orientation). Although these measures might provide a more exhaustive description of the curve, they increase the computational workload. In contrast, as we will evaluate later, the slope of a curve is a simpler operation which still serves its purpose as a trace descriptor.</ns0:p><ns0:formula xml:id='formula_0'>m = \frac{\Delta y}{\Delta x} = \frac{y_n - y_0}{x_n - x_0} \qquad (1)</ns0:formula></ns0:div>
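The sub-trace splitting of Section 4 and the slope of Eq. (1) can be sketched as follows; the helper names are assumptions, and the snippet is only meant to make the feature definitions concrete.

```python
# Illustrative sketch: split one touch trace into sub-traces at every vertical direction
# change, then compute the mean slope of Eq. (1) over the resulting sub-traces.
import statistics

def split_subtraces(points):
    """points: list of (x, y) tuples recorded along one touch trace."""
    subtraces, current, direction = [], [points[0]], 0
    for prev, curr in zip(points, points[1:]):
        step = curr[1] - prev[1]
        new_dir = (step > 0) - (step < 0)
        if direction and new_dir and new_dir != direction:
            subtraces.append(current)          # direction changed: close the sub-trace
            current = [prev]
        if new_dir:
            direction = new_dir
        current.append(curr)
    subtraces.append(current)
    return subtraces

def mean_slope(subtraces):
    """Mean of (y_n - y_0) / (x_n - x_0) over the sub-traces, as in Eq. (1)."""
    slopes = [(st[-1][1] - st[0][1]) / (st[-1][0] - st[0][0])
              for st in subtraces if st[-1][0] != st[0][0]]
    return statistics.mean(slopes) if slopes else 0.0

trace = [(100, 400), (110, 300), (118, 210), (121, 260), (125, 330)]
subs = split_subtraces(trace)
print(len(subs), round(mean_slope(subs), 2))   # 2 sub-traces: one upward, one downward swipe
```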
<ns0:div><ns0:head n='5'>STATISTICAL METHODS</ns0:head><ns0:p>Several pre-processing techniques were applied to the filtered data to balance the classes and remove irrelevant and redundant information from the feature set. The minority class examples were over-sampled until both classes were balanced; this re-sampling strategy was chosen owing to its good performance in solving the class imbalance problem <ns0:ref type='bibr' target='#b14'>(Japkowicz and Stephen, 2002;</ns0:ref><ns0:ref type='bibr' target='#b1'>Batista et al., 2004)</ns0:ref>.</ns0:p></ns0:div>
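The text does not name the exact over-sampling technique, so the sketch below assumes simple random over-sampling of the minority class with replacement; a SMOTE-style method could be substituted without changing the rest of the pipeline.

```python
# Minimal sketch of balancing the classes by randomly over-sampling the minority class
# (an assumption; the paper only states that the minority class was over-sampled).
import numpy as np
from sklearn.utils import resample

def balance(X, y, minority_label="left", random_state=42):
    X, y = np.asarray(X), np.asarray(y)
    X_min, y_min = X[y == minority_label], y[y == minority_label]
    X_maj, y_maj = X[y != minority_label], y[y != minority_label]
    X_up, y_up = resample(X_min, y_min, replace=True,
                          n_samples=len(X_maj), random_state=random_state)
    return np.vstack([X_maj, X_up]), np.concatenate([y_maj, y_up])
```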
<ns0:div><ns0:head n='5.0.2'>Feature Selection</ns0:head><ns0:p>Further data pre-processing was applied to discard any attributes that might add noise to the classification.</ns0:p><ns0:p>The complete set of features was evaluated and ranked using the Information Gain Ratio (IGR) as an attribute selection method. The results for the Information Gain Ratio evaluation can be found in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. The average X-position of the clicks was found to be the most informative feature. Average scroll X-position, along with the average maximum and minimum points of the scrolls, also showed a high degree of information gain towards the classification. This supports this study's approach for a mixed classification system, combining click and scroll data to better predict the user's operating hand. However, four features provided no useful information for this classification problem, each having a ranking of 0 after the evaluation. For the standard deviation measures, this might mean that both left and right-handed users are just as consistent with the areas of the screen they use.</ns0:p></ns0:div>
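The ranking in Table 2 was produced with the Information Gain Ratio criterion. As a rough way to reproduce a comparable ranking in Python, the sketch below scores features with mutual information, which is related to, but not identical to, the gain ratio used here.

```python
# Approximate, information-theoretic feature ranking; mutual information is used as a
# stand-in for the Information Gain Ratio criterion reported in Table 2.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, feature_names):
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]
    return [(feature_names[i], float(scores[i])) for i in order]

# Features whose score is (close to) zero would be discarded before training,
# mirroring the four zero-ranked attributes mentioned above.
```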
<ns0:div><ns0:head n='6'>ALGORITHM EVALUATION</ns0:head><ns0:p>It is possible that, as each individual user generates more and more touch data during their navigation session, their scrolling manner could be affected in some way, so that specific patterns present in their first scroll actions may vary in later scroll actions. Furthermore, it would be interesting to gauge whether the predictive power of the algorithms under evaluation varies when considering a specific number of scroll actions, and whether a different classifier could perform better than the proposed one based on that figure.</ns0:p><ns0:p>For this reason, the classification models were finely tuned for the prediction of the operating hand considering a specific number n of scroll actions as the main parameter. The initial goal was to obtain faster, but slightly less accurate, classifiers created from training data coming from the very few scroll actions done by users who have just arrived at the web document. The goal was also to define slower but more accurate classifiers created with more touch trace data, coming from users who spent more time exploring the website.</ns0:p><ns0:p>In order to perform individual scroll analysis, the original sample was split into several sub-samples.</ns0:p><ns0:p>Each of these sub-samples contains the features previously described for all the recorded scroll actions, from only 1 scroll action up to n. The re-sampling method previously described was consequently applied to each sub-sample. That is, the minority class examples were over-sampled until both classes were balanced.</ns0:p></ns0:div>
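The construction of the per-n sub-samples described above can be sketched as follows; `extract_features` is a hypothetical helper standing in for the computations of Section 4.

```python
# Illustrative sketch: build the sub-sample for a given n by truncating each user's
# record to their first n scroll sub-traces before recomputing the features.
def build_subsample(users, n, extract_features):
    rows, labels = [], []
    for user in users:
        if len(user["subtraces"]) >= n:      # users with fewer recorded scrolls drop out
            rows.append(extract_features(user["clicks"], user["subtraces"][:n]))
            labels.append(user["hand"])
    return rows, labels
```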
<ns0:div><ns0:head /><ns0:p>The amount of examples in each sub-sample decreases as the number of recorded scroll actions grows larger. This occurs because some users were faster when finding the Call to Action objects in the test, thus completing it without generating as much scroll data as other users.</ns0:p><ns0:p>A considerable number of classifiers were trained and evaluated. Some of them were chosen based on their success in previous studies, such as Random Forest, used by Guo et al. <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref> and Seipp and Devlin <ns0:ref type='bibr' target='#b27'>(Seipp and Devlin, 2015)</ns0:ref>, Multi-Layer Perceptron, used by Guo et al. <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref>, or C4.5, used by Guo et al. <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>Goel et al. (Goel et al., 2012)</ns0:ref> and <ns0:ref type='bibr' target='#b27'>Seipp and Devlin (Seipp and Devlin, 2015)</ns0:ref>. Sequential Minimal Optimization (SMO) <ns0:ref type='bibr' target='#b25'>(Platt, 1998)</ns0:ref> is one of the most popular algorithms for training Support Vector Machines (SVMs). It was not used in the aforementioned studies, but was chosen for its wide applicability in pattern recognition and classification problems <ns0:ref type='bibr' target='#b22'>(Naveed et al., 2019)</ns0:ref>. Others used by other researchers in similar studies, such as Naive Bayes <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref> and K-Nearest Neighbors <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Löchtefeld et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b27'>Seipp and Devlin, 2015)</ns0:ref>, were also included. Furthermore, other algorithms were tested and the best-performing ones were included in the experimental set of classifiers. These include K-Star, PART (partial decision tree), Adaptive Boosting with Decision Stumps, Adaptive Boosting with PART, and Logistic Regression.</ns0:p><ns0:p>The classifiers were evaluated by using a random 66% split on the data and averaging the results over 200 iterations. In a practical application, this approach would provide a quick prediction as the user starts scrolling, in exchange for some loss in the quality of said prediction. As the user generates more touch trace data, the classifiers learn from the new information while still considering the previous one. This means that the classification power increases and predictions become more accurate and robust. This is indeed confirmed by the results shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> for n=2, where the best classifier (AdaBoost PART) achieves a 98.94% TPR and the prediction is incorrect for only one instance. The corresponding tables for values of n from 3 to 6 are included in Tables A1, A2, A3 and A4 in the Appendix (see section 10).</ns0:p><ns0:p>Finally, the best classification was achieved for n = 7 by partial decision trees (PART), classifying incorrectly, on average, only 0.33 of the 42 testing instances in this sample (see Table <ns0:ref type='table'>5</ns0:ref>).
This means that in most iterations the classifier provided perfect results, predicting the user's operating hand with 100% accuracy.</ns0:p><ns0:p>The results obtained in these tests demonstrate that, when considering scroll actions individually and adding new information as the user generates touch trace data, the classifiers can provide highly educated predictions from the moment the user starts scrolling, and even more accurate ones with as few as 7 interface interactions.</ns0:p><ns0:p>For comparison with the 99.6% obtained here, it is remarkable that Guo et al. <ns0:ref type='bibr' target='#b8'>(Guo et al., 2016)</ns0:ref> reported an accuracy of 95.6% while relying on sensor data, and that the approaches built on PIN/password unlocking are constrained by the fact that the phone unlocking process is rarely used. Furthermore, these approaches require access to the device's sensors, whereas the approach proposed in this study is completely sensor-independent and applicable to any device with a touchscreen.</ns0:p><ns0:p>The improved results obtained in this study are probably due to the combination of click and scroll data, which provide more information for the classifiers than using only one of them. Furthermore, the addition of several descriptors for the values of the scrolls, such as the median or the starting points, and the inclusion of the slope descriptor might also have had a positive influence on the classification power.</ns0:p></ns0:div>
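To make the evaluation protocol above concrete, the sketch below repeats a random 66% train / 34% test split 200 times and averages the true-positive rate. The Weka learners used in the paper (e.g. PART, AdaBoost PART) are not available in scikit-learn, so AdaBoost with its default decision-stump base learner, one of the classifiers listed above, stands in for them.

```python
# Minimal sketch of the 200-iteration, 66%-split evaluation protocol described above.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

def evaluate(X, y, iterations=200):
    tprs = []
    for seed in range(iterations):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.66, stratify=y, random_state=seed)
        clf = AdaBoostClassifier(random_state=seed).fit(X_tr, y_tr)  # default base: decision stump
        tprs.append(recall_score(y_te, clf.predict(X_te), pos_label="left"))
    return float(np.mean(tprs))
```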
<ns0:div><ns0:head n='7'>LIMITATIONS</ns0:head><ns0:p>The age distribution of the sample comprised users from 15 to 74 years. While the most recurrent user age groups are those from 15 to 25 and from 35 to 55 years, only 2 of the subjects were in the range from 65 years onward, and no subject was younger than 15. This sample distribution is consistent with the target population of E-commerce users and a consequence of the volunteer recruitment strategy employed, as users belonging to those age groups tend to use these interaction scenarios more frequently than older or younger users <ns0:ref type='bibr' target='#b12'>(Hutchinson, 2017)</ns0:ref>.</ns0:p><ns0:p>Although this sample distribution reinforces the internal validity of the study, it weakens its external validity in specific age ranges, as the study cannot draw conclusions concerning user profiling of children and/or elderly users.</ns0:p><ns0:p>Regarding gender distribution, 44% of the test subjects were female, in comparison with the approximately 50% found in the global population <ns0:ref type='bibr'>(Worldbank, 2017)</ns0:ref>. This slight difference may be due to the method of distribution of the experiment. Nevertheless, this study's gender ratio remains representative of the global population.</ns0:p><ns0:p>Additionally, the data in this research was gathered from Spanish- and English-speaking users. These languages belong to the Western culture and share the same writing direction (from left to right), among other common characteristics. As a result, the profiling model may not be extensible to other cultures, specifically to users of the majority of the Semitic languages (such as Arabic or Amharic), which are written from right to left.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>The goal of this research was to implicitly determine the operating hand of a mobile device user in an E-commerce web application. Previous studies had mostly focused on sensor-based solutions, with few test subjects. The study by <ns0:ref type='bibr' target='#b8'>Guo et al. (Guo et al., 2016)</ns0:ref> was the closest we found to our goal (F-measure value of 0.956), although in their approach the user is required to do specific tasks on an Android-based platform, whereas our users can freely use their preferred device to navigate through a web document.</ns0:p><ns0:p>Our findings suggest that the best classification device is a partial decision tree trained using a combination of features gathered through the evaluation of button clicks and scroll traces from 174 voluntary users, detecting the user's operating hand with a TPR value of 99.92%. However, we must point out that whenever a quick classification based on the analysis of very few user interactions (n = 1, n = 2, ...) is required, the algorithms based on boosting techniques (such as AdaBoost Decision Stump or AdaBoost PART) are the ones that perform better.</ns0:p><ns0:p>To the best of our knowledge, this approach is the first to explore and propose a solution for operating hand detection in mobile web applications using only data gathered from the touchscreen while the user spontaneously carries out web browsing tasks. Although the tasks studied are focused on clicks and scrolling up/down, the relatively high level of classification accuracy obtained (99.6%) led us not to explore the effect of other, less common web browsing tasks, such as scrolling left/right or zooming in/out.</ns0:p><ns0:p>Unlike other proposals covered in this document, the proposed solution is based on implicit determination of the user's handedness through the stealth observation of natural interaction tasks. It does not require users to perform uncommon tasks in order to determine their handedness. Nor does it require installing platform-dependent software on the target mobile device or the use of sensors, avoiding calibration issues and extra battery consumption.</ns0:p><ns0:p>E-commerce web application developers can make use of these findings to detect the handedness of anonymous users visiting their web sites after observing their natural (spontaneous) interactions for a short time in stealth mode. The proposed algorithm is able to detect the user's handedness with a moderate to high level of accuracy, thus enriching the user model required by their applications for the user interface personalization process.</ns0:p><ns0:p>Although discussion about how this personalization process may be implemented is beyond the scope of this paper, E-commerce applications may improve their user experience by providing custom interfaces for left- and right-handed users. Hence, the personalization process may provide accessibility-based solutions to the specific user interaction requirements of each kind of user.</ns0:p><ns0:p>The data gathering agents developed in this study can be deployed in the target web applications to feed inference algorithms running on the server side. Boosting-based algorithms, like AdaBoost Decision Stump or AdaBoost PART, can be used to perform a first, quick classification round based on one or two user interactions, followed by the execution of partial decision trees (once more data from further user interactions is gathered) to get a more accurate classification.
These algorithms can classify the visiting anonymous users, updating these results in the user model. This information may be used in several ways to adapt the user interface (e.g., through the use of customized CSS), hence improving the overall user experience.</ns0:p><ns0:p>As mentioned in the limitation section, the way users create sub-traces while they freely perform scroll operations during their navigation sessions may be influenced by factors other than their handedness, such as the language spoken. This study revealed a few interesting future research topics.</ns0:p><ns0:p>One of these involves studying the influence of the user's culture. Activities such as swiping and clicking might be biased by cultural aspects like the writing direction of the user's language. Thus, studying cultural differences in performing these interactions might improve this research, widening the applicability of the solution.</ns0:p><ns0:p>As mentioned in the limitation section, activities such as swiping and clicking might be biased by cultural aspects like the writing direction of the user's language. The data samples obtained for this study</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>were based on languages written from the left to the right only. Thus, studying cultural differences behind these interactions might improve this research, widening the applicability of the solution to, for example, languages written from the right to the left.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Red line shows the maximum thumb's motion range on a 5.4' size mobile display. Source: own elaboration.</ns0:figDesc><ns0:graphic coords='3,245.13,187.06,206.79,303.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Thumb Zone mappings in a large 5.5' screen. Left: thumb areas for a left-handed user. Right: thumb areas for a right-handed user. Source: adapted from Scott Hurff (2014).</ns0:figDesc><ns0:graphic coords='5,193.43,63.78,310.17,269.60' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Example of an interface adaptation designed to make the interactive objects cited as an example in Figure 3 more usable. The 'share' and 'like' buttons are now easier to access for both left and right-handed users. Source: own elaboration.</ns0:figDesc><ns0:graphic coords='7,193.43,63.78,310.17,272.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56614:1:1:NEW 12 Mar 2021) Manuscript to be reviewed Computer Science operating hand. Those PIN based phone-unlocking tasks have been largely superseded by footprint recognition or by face recognition in modern mobile faces.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>located at the bottom of the document. The object represented the classic Call To Action found in modern and E-commerce user interfaces. When the users clicked on that object, they were taken to the instructions page for the second test. Again, they were required to press a button to continue the tests. The second test involved finding an uppercase letter within a body of text in a web document. When the users clicked on the letter, they were taken to the last page where a web form asked users to indicate the operating hand and posture used during the tests. Their age, gender and some other relevant information about the users was also collected in that web form.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Transition diagram for the experiment and type of data recorded in each step. Source: own elaboration.</ns0:figDesc><ns0:graphic coords='9,172.75,288.77,351.49,80.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Two scroll tests were designed specifically to allow users to scroll through the web document. Source: own elaboration.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:12:56614:1:1:NEW 12 Mar 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>For the purpose of this study, no preference was given in the prediction model for either the left or the right hand. However, as seen in the sample distribution, more subjects performed the test using their right hand, resulting in few left-hand operation examples, which produced an unbalanced data-set. Furthermore, the natural distribution in a classification problem seldom produces the best-performing classifier(Weiss and Provost, 2001). Therefore, a re-sampling technique was applied, randomly oversampling users from the 'left' class with replacement until both classes were balanced. The resulting data-set contained the original 139 right-hand examples and the re-sampled 139 left-hand examples. This method was used due</ns0:figDesc></ns0:figure>
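The re-balancing step described above can be made concrete with a minimal sketch; the two short lists stand in for the real feature vectors of the 35 left-hand and 139 right-hand users and are invented for illustration only.

```python
from sklearn.utils import resample

# Toy stand-ins for the real per-user feature vectors.
left_examples = [[0.21, 0.30], [0.19, 0.28]]
right_examples = [[0.80, 0.74], [0.83, 0.77], [0.79, 0.72], [0.81, 0.75], [0.78, 0.71]]

# Randomly over-sample the minority ("left") class with replacement until it
# matches the size of the majority ("right") class, as described above.
left_balanced = resample(
    left_examples,
    replace=True,
    n_samples=len(right_examples),  # 139 in the study
    random_state=42,                # arbitrary seed, only for reproducibility
)
balanced_dataset = right_examples + left_balanced
```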
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,193.43,63.77,310.16,272.12' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Sample Distribution ), a new sub-trace was created to preserve the validity of features such as start and end points, slope, and maximum and minimum X-positions. Next, each sub-trace was passed through a secondary filter to ensure it contained at least two touch-points to provide meaningful results, since a sub-trace with only one touch-point would represent a user's misclick when scrolling. The gathered data</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell cols='2'>Category Occurrences</ns0:cell></ns0:row><ns0:row><ns0:cell>2*Hand</ns0:cell><ns0:cell>Left</ns0:cell><ns0:cell>35</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Right</ns0:cell><ns0:cell>139</ns0:cell></ns0:row><ns0:row><ns0:cell>6*Age</ns0:cell><ns0:cell>[15-25]</ns0:cell><ns0:cell>81</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(25-35]</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(35-45]</ns0:cell><ns0:cell>34</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(45-55]</ns0:cell><ns0:cell>41</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(55-65]</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(65-75]</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>2*Gender Male</ns0:cell><ns0:cell>98</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Female</ns0:cell><ns0:cell>76</ns0:cell></ns0:row><ns0:row><ns0:cell>downward to upward</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
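The sub-trace handling described in the Table 1 caption above (splitting a scroll trace whenever the vertical direction flips, for example from downward to upward, and discarding sub-traces with fewer than two touch-points) could be sketched as follows; this is an illustrative reconstruction, not the study's code, and it assumes a non-empty list of (x, y) touch-points.

```python
def split_subtraces(trace):
    """trace: list of (x, y) touch-points of one scroll gesture, ordered in time."""
    subtraces, current, prev_direction = [], [trace[0]], None
    for prev, point in zip(trace, trace[1:]):
        direction = point[1] - prev[1]  # sign of the vertical displacement
        if prev_direction is not None and direction * prev_direction < 0:
            subtraces.append(current)   # direction changed: close the sub-trace
            current = [prev]            # the turning point starts the next one
        current.append(point)
        if direction != 0:
            prev_direction = direction
    subtraces.append(current)
    # Secondary filter: a sub-trace with a single touch-point is treated as a misclick.
    return [s for s in subtraces if len(s) >= 2]

print(split_subtraces([(50, 300), (52, 260), (53, 220), (55, 250), (56, 290)]))
# splits at the direction change into two sub-traces
```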
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Information Gain Ratio Attribute Selection Ranking</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature</ns0:cell><ns0:cell>IGR</ns0:cell><ns0:cell>Feature</ns0:cell><ns0:cell>IGR</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean X (clicks)</ns0:cell><ns0:cell cols='2'>0.487 Mean X (scrolls)</ns0:cell><ns0:cell>0.432</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Mean Minimum X (scrolls) 0.395 Median X (clicks)</ns0:cell><ns0:cell>0.384</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean Start X (scrolls)</ns0:cell><ns0:cell cols='2'>0.323 Median X (scrolls)</ns0:cell><ns0:cell>0.305</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean Slope</ns0:cell><ns0:cell cols='3'>0.178 Mean Y Displ. (scrolls) 0.148</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean Y (clicks)</ns0:cell><ns0:cell cols='2'>0.147 Median Y (clicks)</ns0:cell><ns0:cell>0.117</ns0:cell></ns0:row><ns0:row><ns0:cell>Median Y (scrolls)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell cols='2'>Mean X Displ. (scrolls) 0</ns0:cell></ns0:row><ns0:row><ns0:cell>Std. Dev. X (scrolls)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>Std. Dev. Y (scrolls)</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Table 3, shows the results obtained for n=1, that is when only the first scroll action done by the users in the web site is considered. As can be seen by considering only the first scroll action done by a user, the best model (AdaBoost Decision Stump) achieves on average a 98.16% TPR, classifying on average 2 examples incorrectly.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Table 3. Classification results for n=1</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>TPR</ns0:cell><ns0:cell cols='3'>Incorrect F-Measure AUROC</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>AdaBoost Decision Stump 98.16 2.00</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>AdaBoost PART</ns0:cell><ns0:cell cols='2'>98.01 2.16</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest</ns0:cell><ns0:cell cols='2'>97.80 2.40</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>PART</ns0:cell><ns0:cell cols='2'>96.98 3.28</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>C4.5</ns0:cell><ns0:cell cols='2'>96.85 3.43</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>k-Star</ns0:cell><ns0:cell cols='2'>96.75 3.54</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>KNN</ns0:cell><ns0:cell cols='2'>96.68 3.62</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.96</ns0:cell></ns0:row><ns0:row><ns0:cell>MLP</ns0:cell><ns0:cell cols='2'>95.32 5.10</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.95</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regression</ns0:cell><ns0:cell cols='2'>95.11 5.32</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>SMO</ns0:cell><ns0:cell cols='2'>94.34 6.16</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell cols='2'>84.32 17.05</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.64</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>achieved a precision of 95.6% on data gathered from with an Android-specific implementation and fewer test subjects and Löchtefeld et al.<ns0:ref type='bibr' target='#b19'>(Löchtefeld et al., 2015)</ns0:ref> attained a TPR of 98.5% by gathering data during the phone unlocking process, including gyroscope and accelerometer readings. Although their systems achieved</ns0:figDesc><ns0:table /><ns0:note>12/17PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56614:1:1:NEW 12 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification results for n=2</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>TPR</ns0:cell><ns0:cell cols='3'>Incorrect F-Measure AUROC</ns0:cell></ns0:row><ns0:row><ns0:cell>AdaBoost PART</ns0:cell><ns0:cell cols='2'>98.94 1.00</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>AdaBoost Decision Stump 98.88 1.06</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest</ns0:cell><ns0:cell cols='2'>98.76 1.18</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>KNN</ns0:cell><ns0:cell cols='2'>98.21 1.69</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>k-Star</ns0:cell><ns0:cell cols='2'>97.95 1.94</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>C4.5</ns0:cell><ns0:cell cols='2'>97.94 1.95</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>PART</ns0:cell><ns0:cell cols='2'>97.60 2.26</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regression</ns0:cell><ns0:cell cols='2'>94.30 5.39</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>MLP</ns0:cell><ns0:cell cols='2'>94.00 5.67</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>SMO</ns0:cell><ns0:cell cols='2'>93.78 5.87</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell cols='2'>92.32 7.26</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.95</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Table 5. 
Classification results for n=7</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>TPR</ns0:cell><ns0:cell cols='3'>Incorrect F-Measure AUROC</ns0:cell></ns0:row><ns0:row><ns0:cell>PART</ns0:cell><ns0:cell cols='2'>99.92 0.33</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>AdaBoost Decision Stump 99.28 0.30</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>C4.5</ns0:cell><ns0:cell cols='2'>99.19 0.33</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regression</ns0:cell><ns0:cell cols='2'>98.70 0.54</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>AdaBoost PART</ns0:cell><ns0:cell cols='2'>97.75 0.93</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.99</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest</ns0:cell><ns0:cell cols='2'>97.62 0.99</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>SMO</ns0:cell><ns0:cell cols='2'>95.46 1.88</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.95</ns0:cell></ns0:row><ns0:row><ns0:cell>k-Star</ns0:cell><ns0:cell cols='2'>95.26 1.96</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell>MLP</ns0:cell><ns0:cell cols='2'>95.06 2.05</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell>KNN</ns0:cell><ns0:cell cols='2'>94.98 2.08</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell cols='2'>93.60 2.65</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.98</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>similar results compared to ours, they are not always applicable, such as in mobile web applications where</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "February 18, 2021
Dear Editor,
Please find enclosed a revised version of the manuscript titled “Implicit detection of
user Handedness in touchscreen devices through interaction analysis” to be
reconsidered for publication in the PeerJ Computer Science.
We would like to thank you and the reviewers for the careful review of our
manuscript and for your time. We have modified it in keeping with the reviewers’
comments and detailed replies are hereby attached.
We believe that the manuscript is now suitable for publication in PeerJ Computer
Science.
Yours sincerely
Martín González-Rodríguez, PhD
Department of Computer Science
University of Oviedo
On behalf of all authors.
Martín González-Rodríguez
University of Oviedo - Department of Computer Science - Federico García Lorca s/n 33007 Oviedo/Uviéu; Spain
martin@uniovi.es - (+34) 985 10 50 95
Reviewer 1 (Anonymous)
Experimental design
This paper is completely within the scope of PeerJ Computer Science.
Section 3 presents the design study from its definition (section 3.1) to the execution.
It clearly presents from section 3.2 how the procedures from section 3.1 were applied,
all variables analyzed, the collected data, how this data was statistically analyzed, and
the conclusions obtained supported by the data. However, section 3.1 (Design of the
test) is limited to describing the test procedure (step by step) performed by the
participants. It is really important for the replicability of the study, to present in this
section the complete research protocol of the study establishing the plan followed
in the experiment, that is : the context selection, the hypothesis formulation,
variables selection (independent and dependent variables, selection of subjects (if
it’s a a probability or a non-probability sample), instrumentation, the statistical test
planned to analyze the results with justification, and validation. All this information
is presented as the results are described in the following sections, but in a dispersed
way. That means the experiment really followed all rules of empirical studies which
gives confidence to the reader and the researchers that intend to use the results
from this paper. My suggestion is just to value the study and allow others to not only
replicate it but use the protocol as a basis for other work. Moreover, having a
complete research protocol previously with a complete view of the study helps in
the understanding of the execution of the study and analysis of the results.
Thank you for your suggestion.
The manuscript was modified following your advice to include a description of the
research protocol. It was included just below section 3 (Design of the Study).
In order to improve the legibility, the description was organized using a numbered
list of the phases designed and executed for the test, briefly describing all of them (a
full description for each one is later included in the body text of this section).
We hope that this approach might help to clarify the connection between the
different phases of the study as well as helping other fellow researchers to replicate
the results obtained.
Validity of the findings
The paper presents very relevant results for the community that work on the design
of user interfaces for mobile applications. The conclusion is well stated. The goal of
the study is recalled and the performed procedures to address the goal are
summarized. The main results and originality of the study are also highlighted.
The conclusion (section 7 ) is followed by another section (section 8) that presents
the limitation of the work and future works. In fact, the future works is limited to the
last paragraph. I suggest that future works be transferred for the section of
conclusion and more explored. Moreover, that the limitations be presented before
the conclusion. In other words, I suggest that section 8 be divided and limitations be
presented before the conclusion section to better discuss the validity of the study.
The validity of the study is discussed in the limitation section explicitly explaining
some results and the conditions to be replicated. However, as an experimental
study, threats to validity should be analyzed to better discuss the limitations.
The way it is presented does not really address all validity criteria (internal validity,
construct validity, external validity and conclusion validity) of an empirical study. A
good suggestion about the threats to validity in experimentation can be found in the
book of C. Wohlin, P. Runeson, M. Höst, M. C. Ohlsson, B. Regnell, and A. Wesslén,
Experimentation in software engineering. Springer Science & Business Media, 2012.
As suggested by the reviewer, we reorganized the sections related to the
conclusions, limitations, and future research. The limitations sections are now
included before the Conclusions and Future Work.
The contents of both sections were reorganized as well following the reviewer's
advice to comply with the statistical conclusion validity, mostly referring to the
limitations in the external validity of the conclusions.
Reviewer 2 (Anonymous)
Experimental design
I suggest the author can describe more about the connection between different
phase of the experiments.
Thank you for your suggestion.
It is very similar to the one proposed by Reviewer 1, thus the changes done to the
manuscript were the same.
A numbered list of the phases of the study was included just after the beginning of
section 2 (Design of the Study).
The list describes the steps followed to replicate the study, describing the results
obtained at each phase and the connection between them.
" | Here is a paper. Please give your review comments after reading it. |
66 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Modern software development and operations rely on monitoring to understand how systems behave in production. The data provided by application logs and runtime environment are essential to detect and diagnose undesired behavior and improve system reliability. However, despite the rich ecosystem around industry-ready log solutions, monitoring complex systems and getting insights from log data remains a challenge. Researchers and practitioners have been actively working to address several challenges related to logs, e.g., how to effectively provide better tooling support for logging decisions to developers, how to effectively process and store log data, and how to extract insights from log data. A holistic view of the research effort on logging practices and automated log analysis is key to provide directions and disseminate the state-of-the-art for technology transfer. In this paper, we study 108 papers (72 research track papers, 24 journals, and 12 industry track papers) from different communities (e.g., machine learning, software engineering, and systems) and structure the research field in light of the life-cycle of log data. Our analysis shows that (1) logging is challenging not only in open-source projects but also in industry, (2) machine learning is a promising approach to enable a contextual analysis of source code for log recommendation but further investigation is required to assess the usability of those tools in practice, (3) few studies approached efficient persistence of log data, and ( <ns0:ref type='formula'>4</ns0:ref>) there are open opportunities to analyze application logs and to evaluate state-of-the-art log analysis techniques in a DevOps context.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Modern software development and operations rely on monitoring to understand how systems behave in production. The data provided by application logs and runtime environment are essential to detect and diagnose undesired behavior and improve system reliability. However, despite the rich ecosystem around industry-ready log solutions, monitoring complex systems and getting insights from log data remains a challenge.</ns0:p><ns0:p>Researchers and practitioners have been actively working to address several challenges related to logs, e.g., how to effectively provide better tooling support for logging decisions to developers, how to effectively process and store log data, and how to extract insights from log data. A holistic view of the research effort on logging practices and automated log analysis is key to provide directions and disseminate the state-of-the-art for technology transfer.</ns0:p><ns0:p>In this paper, we study 108 papers (72 research track papers, 24 journals, and 12 industry track papers) from different communities (e.g., machine learning, software engineering, and systems) and structure the research field in light of the life-cycle of log data.</ns0:p><ns0:p>Our analysis shows that (1) logging is challenging not only in open-source projects but also in industry, (2) machine learning is a promising approach to enable a contextual analysis of source code for log recommendation but further investigation is required to assess the usability of those tools in practice, (3) few studies approached efficient persistence of log data, and ( <ns0:ref type='formula'>4</ns0:ref>) there are open opportunities to analyze application logs and to evaluate state-of-the-art log analysis techniques in a DevOps context.</ns0:p></ns0:div>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Software systems are everywhere and play an important role in society and economy. Failures in those systems may harm entire businesses and cause unrecoverable loss in the worst case. For instance, in 2018, a supermarket chain in Australia remained closed nationwide for three hours due to 'minor IT problems' in their checkout system <ns0:ref type='bibr' target='#b58'>(Chung, 2018)</ns0:ref>. More recently, in 2019, a misconfiguration and a bug in a data center management system caused a worldwide outage in the Google Cloud platform, affecting not only Google's services, but also businesses that use their platform as a service, e.g., Shopify and Snapchat <ns0:ref type='bibr' target='#b151'>(Wired, 2019;</ns0:ref><ns0:ref type='bibr' target='#b82'>Google, 2019)</ns0:ref>.</ns0:p><ns0:p>While software testing plays an important role in preventing failures and assessing reliability, developers and operations teams rely on monitoring to understand how the system behaves in production.</ns0:p><ns0:p>In fact, the symbiosis between development and operations resulted in a mix known as DevOps <ns0:ref type='bibr' target='#b46'>(Bass et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b67'>Dyck et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b134'>Roche, 2013)</ns0:ref>, where both roles work in a continuous cycle. In addition, given the rich nature of data produced by large-scale systems in production and the popularization of machine learning, there is an increasingly trend to adopt artificial intelligence to automate operations. <ns0:ref type='bibr' target='#b79'>Gartner (2019)</ns0:ref> refers to this movement as AIOps and also highlights companies providing automated operations as a service. Unsurprisingly, the demand to analyze operations data fostered the creation of a multi-million dollar business <ns0:ref type='bibr' target='#b145'>(TechCrunch, 2017;</ns0:ref><ns0:ref type='bibr'>Investor's Business Daily, 2018)</ns0:ref> and plethora of open-source and commercial tools to process and manage log data. For instance, the Elastic stack 1 (a.k.a.</ns0:p><ns0:p>'ELK' stack) is a popular option to collect, process, and analyze log data (possibly from different sources) 1 https://www.elastic.co/what-is/elk-stack (e.g., machine learning, software engineering, and systems). We structure the research field in light of the life-cycle of log data, elaborate the focus of each research area, and discuss opportunities and directions for future work. Our analysis shows that (1) logging is a challenge not only in open-source projects but also in industry, (2) machine learning is a promising approach to enable contextual analysis of source code for log recommendations but further investigation is required to assess the usability of those tools in practice, (3) few studies address efficient persistence of log data, and (4) while log analysis is mature field with several applications (e.g., quality assurance and failure prediction), there are open opportunities to analyze application logs and to evaluate state-of-the-art techniques in a DevOps context.</ns0:p></ns0:div>
<ns0:div><ns0:head>SURVEY METHODOLOGY</ns0:head><ns0:p>The goal of this paper is to discover, categorize, and summarize the key research results in log-based software monitoring. To this end, we perform a systematic mapping study to provide a holistic view of the literature in logging and automated log analysis.</ns0:p><ns0:p>In our mapping study, we apply keywording of abstracts <ns0:ref type='bibr' target='#b129'>(Petersen et al., 2008)</ns0:ref> to derive the classification schema. Overall, our survey methodology is divided into four parts as illustrated in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>First, we perform preliminary searches to derive our search criteria and build an initial list of potential relevant studies based on five data sources. Next, we apply our inclusion/exclusion criteria to arrive at Overview of survey methodology: our four steps consists of the discovery of related studies ('Search Process'), the selection of relevant studies ('Study Selection'), the mapping process ('Classification'), and the update for papers published in 2019 ('Survey Update').</ns0:p><ns0:note type='other'>Search Process Study Selection</ns0:note><ns0:p>the eventual list of selected papers up to 2018 (when we first conducted the survey). We then conduct the classification and data extraction procedures. Finally, we update the results of our survey to include papers published in 2019.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Sources and Search Process</ns0:head><ns0:p>To conduct our study, we considered five popular digital libraries from different publishers based on other literature reviews in software engineering, namely, ACM Digital Library, IEEE Xplore, SpringerLink, Scopus, and Google Scholar. By considering five digital libraries, we maximize the range of venues and increase the diversity of studies related to logging. In addition, this decision reduces the bias caused by the underlying search engine since two digital libraries may rank the results in a different way for the same equivalent search.</ns0:p><ns0:p>We aim to discover relevant papers from different areas as much as possible. However, it is a challenge to build an effective query for the five selected digital libraries without dealing with a massive amount of unrelated results, since terms such as 'log' and 'log analysis' are pervasive in many areas. Conversely, inflating the search query with specific terms to reduce false positives would bias our study to a specific context (e.g., log analysis for debugging). To find a balance between those cases, we conducted preliminary searches with different terms and search scopes, e.g., full text, title, and abstract. We considered terms based on 'log', its synonyms, and activities related to log analysis. During this process, we observed that forcing the presence of the term 'log' helps to order relevant studies on the first pages. In case the data source is unable to handle word stemming automatically (e.g., 'log' and 'logging'), we enhance the query with the keywords variations. In addition, configured the data sources to search on titles and abstracts whenever it was possible. In case the data source provides no support to search on titles and abstracts, we considered only titles to reduce false positives. This process resulted in the following search query: Dealing with multiple libraries requires additional work to merge data and remove duplicates. In some cases, the underlying information retrieval algorithms yielded unexpected results when querying some libraries, such as duplicates within the data source and entries that mismatch the search constraints. To overcome those barriers, we implemented auxiliary scripts to cleanup the dataset. We index the entries by title to eliminate duplicates, and we remove entries that fail to match the search criteria. Furthermore, we keep the most recent work when we identify two entries with the same title and different publication date (e.g., journal extension from previous work).</ns0:p><ns0:p>As of December of 2018, when we first conducted this search, we extracted 992 entries from Google Scholar, 1,122 entries from ACM Digital Library, 1,900 entries from IEEE Xplore, 2,588 entries from Scopus, and 7,895 entries from SpringerLink (total of 14,497 entries). After merging and cleaning the data, we ended up with 4,187 papers in our initial list.</ns0:p></ns0:div>
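The cleanup step can be illustrated with a rough sketch of the deduplication logic described above; this is not the authors' actual script, and the entry schema with 'title' and 'year' keys as well as the example titles are assumptions for the example.

```python
def deduplicate(entries):
    """Index entries by normalized title and keep the most recent one per title."""
    by_title = {}
    for entry in entries:
        key = entry["title"].strip().lower()
        # Keep the most recent work, e.g., a journal extension of an earlier paper.
        if key not in by_title or entry["year"] > by_title[key]["year"]:
            by_title[key] = entry
    return list(by_title.values())

papers = [
    {"title": "Characterizing Logging Practices", "year": 2012},
    {"title": "characterizing logging practices ", "year": 2017},  # hypothetical extension
]
print(deduplicate(papers))  # keeps only the 2017 entry
```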
<ns0:div><ns0:head>Study Selection</ns0:head><ns0:p>We conduct the selection process by assessing the 4,187 entries according to inclusion/exclusion criteria and by selecting publications from highly ranked venues. We define the criteria as follows:</ns0:p><ns0:p>C 1 : It is an English manuscript. C 4 : The paper uses the term 'log' in a software engineering context, i.e., logs to describe the behavior of a software system. We exclude papers that use the term 'log' in an unrelated semantic (e.g., deforestation, life logging, well logging, log function).</ns0:p><ns0:p>The rationale for criterion C 1 is that major venues use English as the standard idiom for submission.</ns0:p><ns0:p>The rationale for criterion C 2 is to avoid including secondary studies in our mapping, as suggested by <ns0:ref type='bibr' target='#b101'>Kitchenham and Charters (2007)</ns0:ref>. In addition, the process of applying this criterion allows us to identify other systematic mappings and systematic literature reviews related to ours. The rationale for criterion C 3 is that some databases return gray literature as well as short papers; our focus is on full peer-reviewed research papers, which we consider mature research, ready for real-world tests. Note that different venues might have different page number specifications to determine whether a submission is a full or short paper, and these specifications might change over time. We consulted the page number from each venue to avoid unfair exclusion. The rationale for criterion C 4 is to exclude papers that are unrelated to the scope of this mapping study. We noticed that some of the results are in the context of, e.g, mathematics and environmental studies. While we could have tweaked our search criteria to minimize the occurrence of those false positives (e.g., NOT deforestation), we were unable to systematically derive all keywords to exclude; therefore, we favored higher false positive rate in exchange of increasing the chances of discovering relevant papers.</ns0:p><ns0:p>The first author manually performed the inclusion procedure. He analyzed the title and abstracts of all the papers marking the paper as 'in' or 'out'. During this process, the author applied the criteria and categorized the reasons for exclusion. For instance, whenever an entry fails the criteria C 4 , the authors classified it as 'Out of Scope'. The categories we used are: 'Out of Scope', 'Short/workshop paper', 'Not a research paper', 'Unpublished' (e.g., unpublished self-archived paper indexed by Google Scholar), 'Secondary study', and 'Non-English manuscript'. It is worth mentioning that we flagged three entries as 'Duplicate' as our merging step missed these cases due to special characters in the title. After applying the selection criteria, we removed 3,872 entries resulting in 315 entries.</ns0:p><ns0:p>In order to filter the remaining 315 papers by rank, we used the CORE Conference Rank (CORE Rank) 8 as a reference. We considered studies published only in venues ranked as A* or A. According to the CORE Rank, those categories indicate that the venue is widely known in the computer science community and has a strict review process by experienced researches. After applying the rank criteria, we removed 219 papers.</ns0:p><ns0:p>Our selection consists of (315 − 219 =) 96 papers after applying inclusion/exclusion criteria (step 1) and filtering by venue rank (step 2). Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Classification and Data Extraction</ns0:head><ns0:p>To conduct the classification process, we perform the keywording of abstracts <ns0:ref type='bibr' target='#b129'>(Petersen et al., 2008)</ns0:ref>.</ns0:p><ns0:p>In this process, we extract keywords in the abstract (or introduction, if necessary) and cluster similar keywords to create categories. We perform this process using a random sample of papers to derive an initial classification schema.</ns0:p><ns0:p>The first author conducted two separate blinded classifications on different periods of time to measure the degree of adherence to the schema given that classification is subject of interpretation, and thus, a source of bias, The same outcome converged on 83% of the cases (80 out of the 96 identified papers). The divergences were then discussed with the second author of this paper. Furthermore, the second author reviewed the resulting classification. Note that, while a paper may address more than one category, we choose the category related to the most significant contribution of that paper.</ns0:p><ns0:p>Later, with all the papers initially classified, the authors explored the specific objectives of each paper and review the assigned category. To that aim, the first and second authors performed card sorting <ns0:ref type='bibr' target='#b141'>(Spencer and Warfel, 2004;</ns0:ref><ns0:ref type='bibr' target='#b149'>Usability.gov, 2019)</ns0:ref> to determine the goal of each of the studied papers. Note that, in case new categories emerge in this process, we generalize them in either one of the existing categories or enhance our classification schema to update our view of different objectives in a particular research area. After the first round of card sorting, we noticed that some of the groups (often the ones with high number of papers) could be further broken down in subcategories (we discuss the categories and related subcategories in the Results section).</ns0:p><ns0:p>Finally, we collect metadata information about the papers. While a complete meta-analysis is out of scope from our study, we are interested in visualizing the communities involved in the research area, understand how the field grows over time, and to which extent industry and academia collaborate. To this end, we extract the year of publication, the venue, and the type of paper (i.g., journal, research track, or industry track paper).</ns0:p></ns0:div>
<ns0:div><ns0:head>Survey Update</ns0:head><ns0:p>As of October of 2020, we updated our survey to include papers published in 2019 since we first conducted this analysis during December in 2018. To this end, we select all 11 papers from 2018 and perform forward snowballing to fetch a preliminary list of papers from 2019. We use snowballing for simplicity since we can leverage the 'Cited By' feature from Google Scholar rather than scraping data of all five digital libraries. It is worth mentioning that we limit the results up to 2019 to avoid incomplete results for 2020.</ns0:p><ns0:p>For the preliminary list of 2019, we apply the same selection and rank criteria (see Section 'Study Selection'); then, we analyze and map the studies according to the existing classification schema (see Section 'Classification and Data Extraction'). In this process, we identify 12 new papers and merge them with our existing dataset. Our final dataset consists of (96 + 12 =) 108 papers.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We identified 108 papers (72 research track papers, 24 journals, and 12 industry track papers) published in 46 highly ranked venues spanning different communities (Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>). We manually aggregate venues by their main scope after checking their description and call for papers, and we merge closely related topics (e.g., Artificial Intelligence, Machine Learning, and Data Science). </ns0:p></ns0:div>
<ns0:div><ns0:head>Overview of Research Areas</ns0:head><ns0:p>We grouped the studied papers among the following three categories and their respective subcategories that emerged from our keywording process:</ns0:p><ns0:p>• Logging: Research in this category aims at understanding how developers conduct logging practices and providing better tooling support to developers. There are three subcategories in this line of work: (1) empirical studies on logging practices, (2) requirements for application logs, and (3) implementation of log statements (e.g., where and how to log).</ns0:p><ns0:p>• Log Infrastructure: Research in this category aims at improving log processing and persistence.</ns0:p><ns0:p>There are two subcategories in this line of work: (1) log parsing, and (2) log storage.</ns0:p><ns0:p>• Log Analysis: Research in this category aims at extracting knowledge from log data. There are eight subcategories in this line of work: (1) anomaly detection, (2) security and privacy, (3) root cause analysis, (4) failure prediction, (5) quality assurance, (6) model inference and invariant mining, (7) reliability and dependability, and (8) log platforms.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>, we illustrate an example of a common development and operations workflow, and we relate them to the research areas observed in the primary studies. In Table <ns0:ref type='table'>4</ns0:ref>, we provide a summary of our mapping study. In the following, we elaborate the mapping for each category and their respective subcategories.</ns0:p></ns0:div>
<ns0:div><ns0:head>Logging</ns0:head><ns0:p>Log messages are usually in the form of free text and may expose parts of the system state (e.g., exceptions and variable values) to provide additional context. The full log statement also includes a severity level to indicate the purpose of that statement. Logging frameworks provide developers with different log levels: debug for low level logging, info to provide information on the system execution, error to indicate unexpected state that may compromise the normal execution of the application, and fatal to indicate a severe state that might terminate the execution of the application. Logging an application involves several decisions such as what to log. These are all important decisions since they have a direct impact on the effectiveness of the future analysis. Excessive logging may cause performance degradation due the number of writing operations and might be costly in terms of storage. Conversely, insufficient information undermines the usefulness of the data to the operations team. It is worth mentioning that the underlying environment also provides valuable data. Environment logs provide insights about resource usage (e.g., CPU, memory and network) and this data can be correlated with application logs on the analysis process.</ns0:p><ns0:p>In contrast to application logs, developers are often not in control of environment logs. On the other hand, they are often highly structured and are useful as a complementary data source that provides additional context.</ns0:p><ns0:p>LOGGING deals with the decisions from the developer's perspective. Developers have to decide the placement of log statements, what message description to use, which runtime information is relevant to log (e.g., the thrown exception), and the appropriate severity level. Efficient and accurate log analysis rely on the quality of the log data, but it is not always feasible to know upfront the requirements of log data during development time.</ns0:p></ns0:div>
<ns0:div><ns0:head>7/26</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_6'>2020:11:55310:1:1:REVIEW 16 Feb 2021)</ns0:ref> Manuscript to be reviewed techniques to help developers make informed decisions when implementing log statements (e.g., where and how to log). In the following, we discuss the 24 log engineering papers in the light of these three types of studies.</ns0:p></ns0:div>
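Before turning to the individual studies, a minimal, hypothetical snippet illustrates the log-statement structure discussed above: a free-text message, runtime context such as variable values and exceptions, and a developer-chosen severity level. Python's logging module is used here only as an example; its CRITICAL level plays the role of the fatal level mentioned above, and the component and function names are invented.

```python
import logging

logging.basicConfig(level=logging.INFO)        # messages below INFO are suppressed
logger = logging.getLogger("payment-service")  # hypothetical component name

def charge(order_id, amount):
    logger.debug("charging order %s for %.2f", order_id, amount)  # low-level detail
    try:
        ...  # the call to a payment gateway would go here
        logger.info("order %s charged successfully", order_id)    # normal execution
    except ConnectionError as error:
        # Unexpected state that may compromise the normal execution of the application.
        logger.error("charging order %s failed: %s", order_id, error)
        raise
```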
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Empirical Studies</ns0:head><ns0:p>Understanding how practitioners deal with the log engineering process in a real scenario is key to identify open problems and provide research directions. Papers in this category aim at addressing this agenda through empirical studies in open-source projects (and their communities). <ns0:ref type='bibr' target='#b158'>Yuan et al. (2012a)</ns0:ref> conducted the first empirical study focused on understanding logging practices.</ns0:p><ns0:p>They investigated the pervasiveness of logging, the benefits of logging, and how log-related code changes over time in four open-source projects (Apache httpd, OpenSSH, PostgresSQL, and Squid). In summary, while logging was widely adopted in the projects and were beneficial for failure diagnosis, they show that logging as a practice relies on the developer's experience. Most of the recurring changes were updates to the content of the log statement.</ns0:p><ns0:p>Later, Chen and Jiang (2017b) conducted a replication study with a broader corpus: 21 Java-based projects from the Apache Foundation. Both studies confirm that logging code is actively maintained and that log changes are recurrent; however, the presence of log data in bug reports are not necessarily correlated to the resolution time of bug fixes <ns0:ref type='bibr' target='#b51'>(Chen and Jiang, 2017b)</ns0:ref>. This is understandable as resolution time also relates to the complexity of the reported issue.</ns0:p><ns0:p>It is worth mentioning that the need for tooling support for logging also applies in an industry setting.</ns0:p><ns0:p>For instance, in a study conducted by <ns0:ref type='bibr' target='#b127'>Pecchia et al. (2015)</ns0:ref>, they show that the lack of format conventions in log messages, while not severe for manual analysis, undermines the use of automatic analysis. They suggest that a tool to detect inconsistent conventions would be helpful for promptly fixes. In a different study, <ns0:ref type='bibr' target='#b164'>Zhi et al. (2019)</ns0:ref> analyses log configurations on 10 open-source projects and 10 Alibaba systems.</ns0:p><ns0:p>They show that developers often rely on logging configurations to control the throughput of data and quality of data (e.g., suppressing inconvenient logs generated from external dependencies, changing the layout format of the recorded events) but finding optimal settings is challenging (observed as recurrent changes on development history).</ns0:p><ns0:p>In the context of mobile development, <ns0:ref type='bibr' target='#b160'>Zeng et al. (2019)</ns0:ref> show that logging practices are different but developers still struggle with inconsistent logging. They observed a lower density of log statements compared to previous studies focused on server and desktop systems <ns0:ref type='bibr' target='#b51'>(Chen and Jiang, 2017b;</ns0:ref><ns0:ref type='bibr' target='#b158'>Yuan et al., 2012a</ns0:ref>) by analyzing +1.4K Android apps hosted on F-Droid. Logging practices in mobile development differ mainly because developers need to consider the overhead impact on user's device. The authors observed a statistically significant difference in terms of response time, battery consumption, and CPU when evaluating eight apps with logging enabled and disabled.</ns0:p><ns0:p>Understanding the meaning of logs is important not only for analysis but also for maintenance of logging code. However, one challenge that developers face is to actively update log-related code along functionalities. 
The code base naturally evolves but due to unawareness on how features are related to log statements, the latter become outdated and may produce misleading information <ns0:ref type='bibr' target='#b158'>(Yuan et al., 2012a;</ns0:ref><ns0:ref type='bibr' target='#b51'>Chen and Jiang, 2017b)</ns0:ref>. This is particularly problematic when the system is in production and developers need to react for user inquiries. In this context, <ns0:ref type='bibr' target='#b140'>Shang et al. (2014)</ns0:ref> manually analyzed mailing lists and sampled log statements from three open-source projects (Apache Hadoop, Zookeper, and Cassandra) to understand how practitioners and customers perceive log data. They highlight that common inquiries about log data relate to the meaning, the cause, the context (e.g., in which cases a particular message appears in the log files), the implications of a message, and solutions to manifested problems.</ns0:p><ns0:p>In a different study, <ns0:ref type='bibr' target='#b138'>Shang et al. (2015)</ns0:ref> investigated the relationship between logging code and the overall quality of the system though a case study on four releases from Apache Hadoop and JBoss. They</ns0:p><ns0:p>show that the presence of log statements are correlated to unstable source files and are strong indicators of defect-prone features. In other words, classes that are more prone to defects often contain more logs.</ns0:p><ns0:p>Finally, <ns0:ref type='bibr' target='#b95'>Kabinna et al. (2016)</ns0:ref> explored the reasons and the challenges of migrating to a different logging library. The authors noticed that developers have different drivers for such a refactoring, e.g., to increase flexibility, performance, and to reduce maintenance effort. Interestingly, the authors also observed that most projects suffer from post-migration bugs because of the new logging library, and that migration rarely improved performance. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Log Requirements</ns0:p><ns0:p>An important requirement of log data is that it must be informative and useful to a particular purpose.</ns0:p><ns0:p>Papers in this subcategory aim at evaluating whether log statements can deliver expected data, given a known requirement.</ns0:p><ns0:p>Fault injection is a technique that can be useful to assess the diagnosibility of log data, i.e., whether log data can manifest the presence of failures. Past studies conducted experiments in open-source projects and show that logs are unable to produce any trace of failures in most cases <ns0:ref type='bibr' target='#b59'>(Cinque et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b128'>Pecchia and Russo, 2012;</ns0:ref><ns0:ref type='bibr' target='#b60'>Cinque et al., 2013)</ns0:ref>. The idea is to introduce faults in the system under test, run tests (these have to manifest failures), and compare the log data before and after the experiment. Examples of introduced faults are missing method calls and missing variable assignment. The authors suggest the usage of fault injection as a guideline to identify and add missing log statements.</ns0:p><ns0:p>Another approach to address the diagnosability in log data was proposed by <ns0:ref type='bibr' target='#b159'>Yuan et al. (2012b)</ns0:ref>.</ns0:p><ns0:p>LOGENHANCER leverages program analysis techniques to capture additional context to enhance log statements in the execution flow. Differently from past work with fault injection, LOGENHANCER</ns0:p><ns0:p>proposes the enhancement of existing log statements rather than addition of log statements in missing locations.</ns0:p><ns0:p>In the context of web services, da Cruz et al. ( <ns0:ref type='formula'>2004</ns0:ref>) already explored the idea of enhancing log data.</ns0:p><ns0:p>An interesting remark pointed by the authors is that, in the context of complex system with third-party libraries, there is no ownership about the format and content of log statements. This is an issue if the log data generated is inappropriate and requires changes (as observed by <ns0:ref type='bibr' target='#b164'>Zhi et al. (2019)</ns0:ref>). To overcome this issue, they propose WSLOGA, a logging framework based on SOAP intermediaries that intercepts messages exchanged between client and server and enhances web logs with important data for monitoring and auditing, e.g., response and processing time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Implementation of Log Statements</ns0:head><ns0:p>Developers need to make several decisions at development time that influence the quality of the generated log data. Past studies in logging practices show that in practice, developers rely on their own experience and logging is conducted in a trial-and-error manner in open-source projects <ns0:ref type='bibr' target='#b158'>(Yuan et al., 2012a;</ns0:ref><ns0:ref type='bibr' target='#b51'>Chen and Jiang, 2017b)</ns0:ref> and industry <ns0:ref type='bibr' target='#b127'>(Pecchia et al., 2015)</ns0:ref>. Papers in this subcategory aim at studying logging decisions, i.e., where to place log statements, which log level to use, and how to write log messages. Deciding where to place log statements is critical to provide enough context for later analysis. One way to identify missing locations is to use fault injection (see 'Log Requirements'). However, the effectiveness of that approach is limited to the quality of tests and the ability of manifesting failures. Furthermore, log placement requires further contextual information that is unfeasible to capture only with static analysis.</ns0:p><ns0:p>Another approach to address consistent log placement in large code bases is to leverage source code Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>situations and two for regular monitoring. Unexpected situations cover log statements triggered by failed assertions ('assertion logging'), exception handling or throw statements ('exception logging'), and return of unexpected values after a checking condition ('return-value-check logging'). Regular monitoring cover the remaining cases of log statements that can be in logic branches ('logic-branch logging') or not ('observing-point logging'). Later, <ns0:ref type='bibr' target='#b166'>Zhu et al. (2015)</ns0:ref> proposed LOGADVISOR, a technique that leverages supervised learning with feature engineering to suggest log placement for unexpected situations, namely catch blocks ('exception logging') and if blocks with return statements ('return-value-check logging').</ns0:p><ns0:p>Some of the features defined for the machine learning process are size of the method, i.e., number of lines of source code, name of method parameters, name of local variables, and method signature. They evaluated LOGADVISOR on two proprietary systems from Microsoft and two open-source projects hosted on GitHub. The results indicate the feasibility of applying machine learning to provide recommendations for where to place new log statements. <ns0:ref type='bibr' target='#b102'>Li et al. (2018)</ns0:ref> approached the placement problem by correlating the presence of logging code with the context of the source code. The rationale is that some contexts (defined through topic models) are more likely to contain log statements (e.g., network or database operations) than others (e.g., getter methods). In this work, the authors analyze log placement at method level rather than block-level as in previous work <ns0:ref type='bibr' target='#b73'>(Fu et al., 2014a;</ns0:ref><ns0:ref type='bibr' target='#b166'>Zhu et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Choosing the appropriate severity level of log statements is a challenge. Recall that logging frameworks provide the feature of suppressing log messages according to the log severity. <ns0:ref type='bibr' target='#b103'>Li et al. (2017a)</ns0:ref> proposed a machine learning-based technique to suggest the log level of a new log statement. The underlying model uses ordinal regression, which is useful to predict classes, i.e., log level, but taking into account their severity order, e.g., info < warning < error. Their technique provides better accuracy than random guessing and guessing based on the distribution of log levels in the source code. They report that the log message and the surrounding context of the log statement are good predictors of the log level.</ns0:p><ns0:p>It is worth mentioned that <ns0:ref type='bibr' target='#b87'>Hassani et al. (2018)</ns0:ref> also addressed the problem of identifying appropriate log level in their study on log-related changes by examining the entropy of log messages and log levels.</ns0:p><ns0:p>The underlying idea is that log levels that are commonly associated with a log message also should be used on other log statements with similar log messages. While this approach is intuitive and precise, that log descriptors are repetitive and small in vocabulary. For this reason, they suggest that it is feasible to exploit information retrieval methods to automatically generate log descriptions.</ns0:p><ns0:p>In addition to log descriptors, the state of the system is another important information the event being logged. <ns0:ref type='bibr' target='#b111'>Liu et al. 
(2019b)</ns0:ref> proposed a machine learning-based approach to aid developers about which variables to log based on the patterns of existing log statements. The technique consists of four layers:</ns0:p><ns0:p>embedding, Recurrent Neural Network (RNN), self-attention mechanism, and output. Results indicate better performance than random guess and information retrieve approaches on the evaluation of nine Java projects.</ns0:p></ns0:div>
<ns0:div><ns0:head>Log Infrastructure</ns0:head><ns0:p>The infrastructure supporting the analysis process plays an important role because the analysis may involve the aggregation and selection of high volumes of data. The requirements for the data processing infrastructure depend on the nature of the analysis and the nature of the log data. For instance, popular log processors, e.g., Logstash and Fluentd, provide regular expressions out of the box to extract data from well-known log formats of popular web servers (e.g., Apache Tomcat and Nginx). However, extracting content from highly unstructured data into a meaningful schema is not trivial.</ns0:p><ns0:p>LOG INFRASTRUCTURE deals with the tooling support necessary to make further analysis feasible. For instance, data representation might influence the efficiency of data aggregation. Other important</ns0:p></ns0:div>
<ns0:div><ns0:p>concerns include the ability to handle log data for real-time or offline analysis and the scalability to handle the increasing volume of data.</ns0:p><ns0:p>We observed two subcategories in this area: (1) log parsing, and (2) log storage. In the following, we summarize the 16 studies on log infrastructure grouped by these two categories.</ns0:p></ns0:div>
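<ns0:p>As a small illustration of the out-of-the-box extraction mentioned above, the snippet below parses one line in the common web-server access-log format with a hand-written regular expression. The pattern and the sample line are our own illustrative assumptions, not patterns shipped by Logstash or Fluentd.</ns0:p>

```python
import re

# One line in the (widely used) combined access-log format.
LINE = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /index.html HTTP/1.0" 200 2326 "-" "Mozilla/5.0"')

PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

match = PATTERN.match(LINE)
if match:
    record = match.groupdict()          # structured fields extracted from raw text
    print(record["host"], record["status"], record["path"])
```

<ns0:p>The point of the example is the contrast drawn above: a fixed, well-known format can be handled with a single expression, whereas free-form application logs require the parsing techniques discussed next.</ns0:p>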
<ns0:div><ns0:head>Log Parsing</ns0:head><ns0:p>Parsing is the backbone of many log analysis techniques. Some analyses operate under the assumption that source code is unavailable; therefore, they rely on parsing techniques to process log data. Given that log messages often have variable content, the main challenge tackled by these papers is to identify which log messages describe the same event. For example, 'Connection from A port B' and 'Connection from C port D' represent the same event. The heart of studies in parsing is the extraction of templates from raw log data. Fundamentally, this process consists of identifying the constant and variable parts of raw log messages.</ns0:p><ns0:p>Several approaches rely on the 'textual similarity' between the log messages. <ns0:ref type='bibr' target='#b34'>Aharon et al. (2009)</ns0:ref> create a dictionary of all words that appear in the log messages and use the frequency of each word to cluster log messages together. Somewhat similar, IPLOM (Iterative Partitioning Log Mining) leverages the similarities between log messages related to the same event, e.g., number, position, and variability of tokens <ns0:ref type='bibr' target='#b115'>(Makanju et al., 2009</ns0:ref><ns0:ref type='bibr' target='#b114'>, 2012)</ns0:ref>. <ns0:ref type='bibr' target='#b107'>Liang et al. (2007)</ns0:ref> also build a dictionary out of the keywords that appear in the logs. Next, each log is converted to a binary vector, with each element representing whether the log contains that keyword. With these vectors, the authors compute the correlation between any two events.</ns0:p><ns0:p>Somewhat different from the others, <ns0:ref type='bibr' target='#b77'>Gainaru et al. (2011)</ns0:ref> cluster log messages by searching for the best place to split a log message into its 'constant' and its 'variable' parts. These clusters are self-adaptive as new log messages are processed in a streamed fashion. <ns0:ref type='bibr' target='#b85'>Hamooni et al. (2016)</ns0:ref> also use string similarity to cluster logs. Interestingly, the authors made use of map-reduce to speed up the processing. Finally, <ns0:ref type='bibr' target='#b165'>Zhou et al. (2010)</ns0:ref> propose a fuzzy match algorithm based on the contextual overlap between log lines.</ns0:p><ns0:p>Transforming logs into 'sequences' is another way of clustering logs. Lin et al. (2016) convert logs into vectors, where each vector contains a sequence of log events of a given task, and each event has a different weight, calculated in different ways. <ns0:ref type='bibr' target='#b144'>Tang and Li (2010)</ns0:ref> propose LOGTREE, a semi-structural way of representing a log message. The overall idea is to represent a log message as a tree, where each node is a token, extracted via a context-free grammar parser that the authors wrote for each of the studied systems. Interestingly, in this paper, the authors raise awareness of the drawbacks of clustering techniques that consider only word/term information for template extraction. According to them, log messages related to the same event often do not share a single word. A comparison between the two distributed parsing approaches <ns0:ref type='bibr' target='#b90'>(He et al., 2018b;</ns0:ref><ns0:ref type='bibr' target='#b33'>Agrawal et al., 2019)</ns0:ref> remains open.</ns0:p></ns0:div>
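<ns0:p>The sketch below illustrates the constant-versus-variable split that template extraction targets, using a deliberately crude heuristic: obviously variable tokens (IP addresses, numbers) are masked and messages are grouped by the resulting template. Real parsers such as the clustering and partitioning approaches surveyed above are far more robust; the masking rules and sample messages here are only illustrative assumptions.</ns0:p>

```python
import re
from collections import defaultdict

# Heuristic masks for tokens that are usually the "variable" part of a message.
MASKS = [
    (re.compile(r"\b\d+\.\d+\.\d+\.\d+\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def template_of(message: str) -> str:
    for pattern, placeholder in MASKS:
        message = pattern.sub(placeholder, message)
    return message

logs = [
    "Connection from 10.0.0.1 port 5432",
    "Connection from 10.0.0.7 port 8080",
    "Disk usage at 91 percent",
]

groups = defaultdict(list)
for line in logs:
    groups[template_of(line)].append(line)

for template, members in groups.items():
    print(template, "->", len(members), "occurrence(s)")
```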
<ns0:div><ns0:head>Log Storage</ns0:head><ns0:p>Modern complex systems easily generate gigabytes or even petabytes of log data a day. Thus, in the log data life-cycle, storage plays an important role as, when not handled carefully, it might become the bottleneck of the analysis process. Researchers and practitioners have been addressing this problem by offloading computation and storage to server farms and leveraging distributed processing.</ns0:p><ns0:p>Mavridis and Karatza (2017) frame the problem of log analysis at scale as a 'big data' problem. The authors evaluated the performance and resource usage of two popular big data solutions (Apache Hadoop and Apache Spark) with web access logs. Their benchmarks show that both approaches scale with the number of nodes in a cluster. However, Spark is more efficient for data processing since it minimizes reads and writes to disk. Results suggest that Hadoop is better suited for offline analysis (i.e., batch processing) while Spark is better suited for online analysis (i.e., stream processing). Indeed, as mentioned earlier, <ns0:ref type='bibr' target='#b90'>He et al. (2018b)</ns0:ref> leverage Spark for parallel parsing because of its fast in-memory processing.</ns0:p><ns0:p>Another approach to reduce storage costs consists of data compression techniques for efficient analysis <ns0:ref type='bibr' target='#b108'>(Lin et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b110'>Liu et al., 2019a)</ns0:ref>. <ns0:ref type='bibr' target='#b108'>Lin et al. (2015)</ns0:ref> argue that while traditional data compression algorithms are useful to reduce the storage footprint, the compression-decompression loop to query data undermines the efficiency of log analysis. The rationale is that traditional compression mechanisms (e.g., gzip) perform compression and decompression in blocks of data. In the context of log analysis, this results in wasted CPU cycles to compress and decompress unnecessary log data. They propose a compression approach named COWIK that operates at the granularity of log entries. They evaluated their approach in a log search and log joining system. Results suggest that the approach is able to achieve better performance on query operations and produce the same join results with less memory. <ns0:ref type='bibr' target='#b110'>Liu et al. (2019a)</ns0:ref> propose a different approach named LOGZIP based on an intermediate representation of raw data that exploits the structure of log messages. The underlying idea is to remove redundant information from log events and compress the intermediate representation rather than raw logs. Results indicate higher compression rates compared to baseline approaches (including COWIK).</ns0:p></ns0:div>
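<ns0:p>The snippet below sketches the intermediate-representation idea in its simplest form: each entry is stored as a template identifier plus its variable parts, the encoding is compressed, and the original messages can still be reconstructed on demand. This is a toy illustration of the general principle, not the encoding used by LOGZIP or COWIK.</ns0:p>

```python
import json
import re
import zlib

# Templates are stored once; entries reference them by index.
templates = ["Connection from <IP> port <NUM>", "User <STR> logged in"]
entries = [(0, ["10.0.0.1", "5432"]), (1, ["alice"]), (0, ["10.0.0.7", "8080"])]

# Compress the structured encoding instead of the raw text.
blob = zlib.compress(json.dumps({"templates": templates, "entries": entries}).encode())

# Decompression restores the original messages from the encoding.
decoded = json.loads(zlib.decompress(blob))
for template_id, params in decoded["entries"]:
    message = decoded["templates"][template_id]
    for value in params:
        message = re.sub(r"<[A-Z]+>", value, message, count=1)  # fill placeholders in order
    print(message)
```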
<ns0:div><ns0:head>Log Analysis</ns0:head><ns0:p>After the processing of log data, the extracted information serves as input to sophisticated log analysis methods and techniques. Such analyses, which make use of varying algorithms, help developers detect unexpected behavior, performance bottlenecks, or even security problems.</ns0:p><ns0:p>LOG ANALYSIS deals with knowledge acquisition from log data for a specific purpose, e.g., detecting undesired behavior or investigating the cause of a past outage. Extracting insights from log data is challenging due to the complexity of the systems generating that data.</ns0:p><ns0:p>We observed eight subcategories in this area: (1) anomaly detection, (2) security and privacy, (3) root cause analysis, (4) failure prediction, (5) quality assurance, (6) model inference and invariant mining, (7) reliability and dependability, and (8) platforms. In the following, we summarize the 68 studies on log analysis grouped by these eight different goals.</ns0:p></ns0:div>
<ns0:div><ns0:head>Anomaly Detection</ns0:head><ns0:p>Anomaly detection techniques aim to find undesired patterns in log data, given that manual analysis is time-consuming, error-prone, and infeasible in many cases. We observe that a significant part of the research in the logging area is focused on this type of analysis. Often, these techniques focus on identifying problems in software systems. Based on the assumption that an 'anomaly' is something worth investigating, these techniques look for anomalous traces in the log files. <ns0:ref type='bibr' target='#b123'>Oliner and Stearley (2007)</ns0:ref> raise awareness of the need for datasets from real systems to conduct studies and provide directions to the research community. They analyzed log data from five supercomputers and conclude that logs do not contain sufficient information for automatic detection of failures or root cause diagnosis, that small events might dramatically impact the number of logs generated, that different failures have different predictive signatures, and that messages that are corrupted or have inconsistent formats are not uncommon. Many of the challenges raised by the authors are well known nowadays and have been under continuous investigation in academia.</ns0:p><ns0:p>Researchers have been trying several different techniques, such as deep learning and NLP <ns0:ref type='bibr' target='#b66'>(Du et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b47'>Bertero et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b119'>Meng et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b162'>Zhang et al., 2019)</ns0:ref>, data mining, statistical learning methods, and machine learning <ns0:ref type='bibr' target='#b113'>(Lu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b91'>He et al., 2016b;</ns0:ref><ns0:ref type='bibr' target='#b80'>Ghanbari et al., 2014;</ns0:ref><ns0:ref type='bibr'>Dong Tang and Iyer, 1992;</ns0:ref><ns0:ref type='bibr' target='#b53'>Chinghway Lim et al., 2008;</ns0:ref><ns0:ref type='bibr'>Xu et al., 2009b,a)</ns0:ref>, control flow graph mining from execution logs <ns0:ref type='bibr' target='#b120'>(Nandi et al., 2016)</ns0:ref>, finite state machines <ns0:ref type='bibr' target='#b72'>(Fu et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b62'>Debnath et al., 2018)</ns0:ref>, frequent itemset mining <ns0:ref type='bibr' target='#b53'>(Chinghway Lim et al., 2008)</ns0:ref>, dimensionality reduction techniques <ns0:ref type='bibr' target='#b94'>(Juvonen et al., 2015)</ns0:ref>, grammar compression of log sequences <ns0:ref type='bibr' target='#b78'>(Gao et al., 2014)</ns0:ref>, and probabilistic suffix trees <ns0:ref type='bibr' target='#b44'>(Bao et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Interestingly, while these papers often make use of system logs (e.g., logs generated by Hadoop, a common case study in log analysis in general) for evaluation, we conjecture that these approaches are sufficiently general and could be explored on (or are worth trying on) other types of logs (e.g., application logs).</ns0:p><ns0:p>Researchers have also explored log analysis techniques within specific contexts. 
For instance, finding anomalies in HTTP logs by using dimensionality reduction techniques <ns0:ref type='bibr' target='#b94'>(Juvonen et al., 2015)</ns0:ref>, finding anomalies in cloud operations <ns0:ref type='bibr' target='#b70'>(Farshchi et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b71'>, 2018)</ns0:ref>, and finding anomalies in Spark programs <ns0:ref type='bibr' target='#b113'>(Lu et al., 2017)</ns0:ref>. In addition, <ns0:ref type='bibr' target='#b162'>Zhang et al. (2019)</ns0:ref> leverage LSTM models with an attention mechanism to handle unstable log data. They argue that log data changes over time due to the evolution of the software, and that models addressing log analysis need to take this into consideration.</ns0:p></ns0:div>
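<ns0:p>As a minimal, self-contained illustration of the count-vector view used by several of the studies above, the sketch below represents each session as a vector of per-template event counts and flags sessions with an unusual mix of events using an off-the-shelf unsupervised detector. The synthetic data and the choice of Isolation Forest are our own assumptions for the example, not a technique proposed by any of the cited papers.</ns0:p>

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row is a session; each column counts occurrences of one log template.
normal_sessions = rng.poisson(lam=[20, 5, 1, 0.1], size=(200, 4))
suspect_session = np.array([[2, 0, 40, 7]])   # unusual mix of events

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)
print(detector.predict(suspect_session))      # -1 means "flagged as anomalous"
```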
<ns0:div><ns0:head>Security and Privacy</ns0:head><ns0:p>Logs can be leveraged for security purposes, such as intrusion and attack detection. An interesting characteristic among these studies is that the most used log data is, by far, network data. We conjecture this is due to the fact that (1) network logs (e.g., HTTP, web, router logs) are independent from the underlying application, and that (2) the network tends to be, nowadays, a common way of attacking an application.</ns0:p><ns0:p>Differently from analysis techniques where the goal is to find a bug, which is represented in the logs as an anomaly, understanding which characteristics of log messages can reveal security issues is still an open topic. <ns0:ref type='bibr' target='#b45'>Barse and Jonsson (2004)</ns0:ref> extract attack manifestations to determine log data requirements for intrusion detection. The authors then present a framework for determining empirically which log data can reveal a specific attack. Similarly, <ns0:ref type='bibr' target='#b32'>Abad et al. (2003)</ns0:ref> argue for the need of correlating data from different logs to improve the accuracy of intrusion detection systems. The authors show in their paper how different attacks are reflected in different logs, and how some attacks are not evident when analyzing single logs. <ns0:ref type='bibr' target='#b131'>Prewett (2005)</ns0:ref> examines how the unique characteristics of cluster machines, including how they are generally operated in the larger context of a computing center, can be leveraged to provide better security.</ns0:p><ns0:p>Finally, regarding privacy, <ns0:ref type='bibr' target='#b49'>Butin and Le Métayer (2014)</ns0:ref> propose a framework for accountability based on 'privacy-friendly' event logs. These logs are then used to show compliance with respect to data protection policies.</ns0:p></ns0:div>
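<ns0:p>One concrete, if simple, example of a log characteristic that reveals a security issue is a burst of failed authentication attempts from a single source. The sketch below counts such failures per source host in already-parsed authentication events; the event format and threshold are illustrative assumptions, not taken from any of the studies above.</ns0:p>

```python
from collections import Counter

# Parsed authentication events (e.g., extracted from an auth log).
events = [
    {"src": "203.0.113.9", "outcome": "failure"},
    {"src": "203.0.113.9", "outcome": "failure"},
    {"src": "203.0.113.9", "outcome": "failure"},
    {"src": "198.51.100.4", "outcome": "success"},
]

failures_per_host = Counter(e["src"] for e in events if e["outcome"] == "failure")
THRESHOLD = 3  # illustrative cut-off for a brute-force suspicion
suspicious = [src for src, count in failures_per_host.items() if count >= THRESHOLD]
print(suspicious)   # ['203.0.113.9']
```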
<ns0:div><ns0:head>Root Cause Analysis</ns0:head><ns0:p>Detecting anomalous behavior, whether through automated techniques or monitoring solutions, is just part of the process. Maintainers need to investigate what caused that unexpected behavior. Several studies attempt to take the next step and provide users with, e.g., root cause analysis, accurate failure identification, and impact analysis. <ns0:ref type='bibr' target='#b84'>Gurumdimma et al. (2016)</ns0:ref>, for instance, argue that, while console logs are useful for administration tasks, they can be complex for operators to analyze. They propose a technique based on the correlation of console logs and resource usage information to link jobs with anomalous behavior and erroneous nodes.</ns0:p></ns0:div>
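<ns0:p>The correlation step can be as simple as aligning each error event with the closest preceding resource-usage sample on the same node, as in the pandas sketch below. The column names and time window are illustrative assumptions and do not reproduce the technique of any specific study above.</ns0:p>

```python
import pandas as pd

errors = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-01-01 10:05", "2021-01-01 10:42"]),
    "node": ["n1", "n2"],
    "event": ["job_failed", "job_failed"],
})
usage = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-01-01 10:00", "2021-01-01 10:40"]),
    "node": ["n1", "n2"],
    "mem_used_pct": [97, 35],
})

# For each error, pick the latest resource sample on the same node within 10 minutes.
joined = pd.merge_asof(errors.sort_values("timestamp"),
                       usage.sort_values("timestamp"),
                       on="timestamp", by="node",
                       direction="backward",
                       tolerance=pd.Timedelta("10min"))
print(joined[["node", "event", "mem_used_pct"]])
```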
<ns0:div><ns0:head>Failure Prediction</ns0:head><ns0:p>Being able to anticipate failures in critical systems not only represents a competitive business advantage but also prevents unrecoverable consequences to the business. Failure prediction is feasible once there is knowledge about abnormal patterns and their related causes. However, it differs from anomaly detection in the sense that identifying the patterns that precede an unrecoverable state requires insights from root cause analysis. This shifts monitoring from a reactive stance, i.e., acting once the problem has occurred, to a proactive one.</ns0:p><ns0:p>Work in this area, as expected, relies on statistical and probabilistic models, from standard regression analysis to machine learning. <ns0:ref type='bibr' target='#b150'>Wang et al. (2017)</ns0:ref> apply random forests to event logs to predict maintenance of equipment (in their case study, ATMs). <ns0:ref type='bibr' target='#b74'>Fu et al. (2014b)</ns0:ref> use system logs (from clusters) to generate causal dependency graphs and predict failures. <ns0:ref type='bibr' target='#b136'>Russo et al. (2015)</ns0:ref> mine system logs (more specifically, sequences of logs) to predict the system's reliability by means of linear radial basis functions and multilayer perceptron learners. <ns0:ref type='bibr' target='#b99'>Khatuya et al. (2018)</ns0:ref> propose ADELE, a machine learning-based technique to predict functional and performance issues. <ns0:ref type='bibr' target='#b137'>Shalan and Zulkernine (2013)</ns0:ref> utilize system logs to predict failure occurrences by means of regression analysis and support vector machines. <ns0:ref type='bibr' target='#b76'>Fu et al. (2012)</ns0:ref> also utilize system logs to predict failures by mining recurring event sequences that are correlated.</ns0:p><ns0:p>We noticed that, given that only supervised models have been used so far, feature engineering plays an important role in these papers. <ns0:ref type='bibr' target='#b99'>Khatuya et al. (2018)</ns0:ref>, for example, use event count, event ratio, mean inter-arrival time, mean inter-arrival distance, severity spread, and time-interval spread. Russo et al. (2015) use defective and non-defective sequences of events as features. Shalan and Zulkernine (2013), although not completely explicit about which features they used, mention CPU and memory utilization, read/write instructions, error counters, error messages, error types, and error state parameters as examples of features.</ns0:p></ns0:div>
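<ns0:p>Because these approaches are supervised, the modeling step itself is often straightforward once the features are engineered. The sketch below trains a random forest on hand-crafted window features inspired by those listed above; the feature values, labels, and model settings are purely illustrative.</ns0:p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["event_count", "error_ratio", "mean_inter_arrival_s", "severity_spread"]

# Each row summarizes a time window of log data; label 1 = failure observed shortly after.
X = np.array([
    [120, 0.01, 2.5, 1],
    [450, 0.15, 0.4, 3],
    [130, 0.02, 2.2, 1],
    [500, 0.20, 0.3, 4],
])
y = np.array([0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(dict(zip(feature_names, model.feature_importances_.round(2))))
```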
<ns0:div><ns0:head>Quality Assurance</ns0:head><ns0:p>Log analysis might support developers during the software development life cycle and, more specifically, during activities related to quality assurance. <ns0:ref type='bibr' target='#b37'>Andrews and Zhang (2000, 2003)</ns0:ref> have advocated the use of logs for testing purposes since the early 2000s.</ns0:p><ns0:p>In their work, the authors propose an approach called log file analysis (LFA). LFA requires the software under test to write a record of events to a log file, following a pre-defined logging policy that states precisely what the software should log. A log file analyzer, also written by the developers, then analyzes the produced log file and only accepts it if the run did not reveal any failures. The authors propose a log file analysis language to specify such analyses.</ns0:p><ns0:p>More than 10 years later, <ns0:ref type='bibr' target='#b52'>Chen et al. (2018)</ns0:ref> propose LOGCOCO, an automated approach to estimate code coverage via execution logs. The motivation for this use of log data comes from the need to estimate code coverage in production. The authors argue that, in a large-scale production system, code coverage from test workloads might not reflect coverage under the production workload. Their approach relies on program analysis techniques to match log data with the corresponding code paths. Based on this data, LOGCOCO estimates different coverage criteria, i.e., method, statement, and branch coverage. Their experiments on six different systems show that their approach is highly accurate (> 96%).</ns0:p></ns0:div>
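<ns0:p>At its core, this kind of estimation maps each observed log template back to the code location that emits it and then counts which instrumented locations were exercised. The sketch below shows only that counting step; the mapping, templates, and method names are hypothetical, and the real approach additionally uses program analysis to reason about the code paths implied by each match.</ns0:p>

```python
# Mapping from log templates to the methods that emit them (normally derived
# from the source code; hard-coded here for illustration).
template_to_method = {
    "Payment <NUM> authorized": "PaymentService.authorize",
    "Payment <NUM> declined": "PaymentService.decline",
    "Cart <NUM> checked out": "CheckoutService.checkout",
}

# Templates observed in the production logs under analysis.
observed_templates = {"Payment <NUM> authorized", "Cart <NUM> checked out"}

covered = {template_to_method[t] for t in observed_templates}
all_methods = set(template_to_method.values())
print(f"method coverage estimate: {len(covered)}/{len(all_methods)}")
```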
<ns0:div><ns0:head>Model Inference and Invariant Mining</ns0:head><ns0:p>Model-based approaches to software engineering seek to support understanding and analysis by means of abstraction. However, building such models is a challenging and expensive task. Logs serve as a source</ns0:p></ns0:div>
<ns0:div><ns0:p>for developers to build representative models and invariants of their systems. These models and invariants may help developers in different tasks, such as comprehension and testing. These approaches generate different types of models, such as (finite) state machines <ns0:ref type='bibr' target='#b148'>(Ulrich et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b116'>Mariani and Pastore, 2008;</ns0:ref><ns0:ref type='bibr' target='#b143'>Tan et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b48'>Beschastnikh et al., 2014)</ns0:ref>, directed workflow graphs <ns0:ref type='bibr' target='#b152'>(Wu et al., 2017)</ns0:ref>, client-server interaction diagrams <ns0:ref type='bibr' target='#b41'>(Awad and Menasce, 2016)</ns0:ref>, invariants <ns0:ref type='bibr' target='#b96'>(Kc and Gu, 2011;</ns0:ref><ns0:ref type='bibr' target='#b112'>Lou et al., 2010)</ns0:ref>, and dependency models <ns0:ref type='bibr' target='#b142'>(Steinle et al., 2006)</ns0:ref>.</ns0:p><ns0:p>State machines are the most common type of model extracted from logs. <ns0:ref type='bibr' target='#b48'>Beschastnikh et al. (2014)</ns0:ref>, for example, infer state machine models of concurrent systems from logs. The authors show that their models are sufficiently accurate to help developers in finding bugs. <ns0:ref type='bibr' target='#b148'>Ulrich et al. (2003)</ns0:ref> show how log traces can be used to build formal execution models. The authors use SDL, a model-checking description technique common in the telecommunications industry. <ns0:ref type='bibr' target='#b116'>Mariani and Pastore (2008)</ns0:ref> propose an approach where state machine-based models of valid behaviors are compared with log traces of failing executions. The models are inferred via the KBEHAVIOR engine <ns0:ref type='bibr' target='#b116'>(Mariani and Pastore, 2008)</ns0:ref>. <ns0:ref type='bibr' target='#b143'>Tan et al. (2010)</ns0:ref> extract state-machine views of the MapReduce flow behavior using the native logs that Hadoop MapReduce systems produce.</ns0:p><ns0:p>The mining of properties that a system should hold has also been possible via log analysis. <ns0:ref type='bibr' target='#b112'>Lou et al. (2010)</ns0:ref> derive program invariants from logs. The authors show that the invariants that emerge from their approach are able to detect numerous real-world problems. <ns0:ref type='bibr' target='#b96'>Kc and Gu (2011)</ns0:ref> aim to facilitate the troubleshooting of cloud computing infrastructures. Besides implementing anomaly detection techniques, their tool also performs invariant checks on log events, e.g., two processes performing the same task at the same time (these invariants are not automatically devised, but should be written by system administrators).</ns0:p><ns0:p>We also observe directed workflow graphs and dependency maps as other types of models built from logs. <ns0:ref type='bibr' target='#b152'>Wu et al. (2017)</ns0:ref> propose a method that mines structural events and transforms them into a directed workflow graph, where nodes represent log patterns, and edges represent the relations among patterns. Awad and Menasce (2016) derive performance models of operational systems based on system logs and configuration logs. Finally, <ns0:ref type='bibr' target='#b142'>Steinle et al. 
(2006)</ns0:ref> map dependencies among internal components through system logs, via data mining algorithms and natural language processing techniques.</ns0:p><ns0:p>Finally, and somewhat different from the other papers in this ramification, <ns0:ref type='bibr' target='#b64'>Di Martino et al. (2012)</ns0:ref> argue that an important issue in log analysis is that, when a failure happens, multiple independent error events appear in the log. Reconstructing the failure process by grouping together events related to the same failure (also known as data coalescence techniques) can therefore help developers in finding the problem.</ns0:p><ns0:p>According to the authors, while several coalescence techniques have been proposed over time <ns0:ref type='bibr' target='#b147'>(Tsao and Siewiorek, 1983;</ns0:ref><ns0:ref type='bibr' target='#b86'>Hansen and Siewiorek, 1992)</ns0:ref>, evaluating these approaches is a challenging task as the ground truth of the failure is often not available. To help researchers in evaluating their approaches, the authors propose a technique which basically generates synthetic logs along with the ground truth they represent.</ns0:p></ns0:div>
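<ns0:p>A minimal flavour of such model inference is mining the set of event transitions observed in passing traces and treating any unseen transition in a new trace as a potential violation, as sketched below. Real inference engines such as those cited above build far richer models (states, guards, concurrency); the traces here are invented for illustration.</ns0:p>

```python
from collections import defaultdict

# Traces of log events from runs considered correct.
training_traces = [
    ["open", "read", "read", "close"],
    ["open", "write", "close"],
]

allowed = defaultdict(set)
for trace in training_traces:
    for current, nxt in zip(trace, trace[1:]):
        allowed[current].add(nxt)

def violations(trace):
    """Return the transitions in a trace that were never observed during training."""
    return [(a, b) for a, b in zip(trace, trace[1:]) if b not in allowed[a]]

print(violations(["open", "close", "read"]))  # [('open', 'close'), ('close', 'read')]
```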
<ns0:div><ns0:head>Reliability and Dependability</ns0:head><ns0:p>Logs can serve as a means to estimate how reliable and dependable a software system is. Research in this subcategory often focuses on large software systems, such as web and mobile applications, which are generally distributed, and high-performance computers. <ns0:ref type='bibr' target='#b43'>Banerjee et al. (2010)</ns0:ref> estimate the reliability of a web Software-as-a-Service (SaaS) by analyzing its web traffic logs. The authors categorize log events by severity level, counting, e.g., successfully loaded (non-critical) images separately from core transactions, which provides different perspectives on reliability. <ns0:ref type='bibr' target='#b146'>Tian et al. (2004)</ns0:ref> evaluate the reliability of two web applications, using several metrics that can be extracted from web access and error logs (e.g., errors per page hit, errors per session, and errors per user). The authors conclude that using the workload and usage patterns present in log files during testing phases could significantly improve the reliability of the system. Later, <ns0:ref type='bibr' target='#b92'>Huynh and Miller (2009)</ns0:ref> expanded the previous work <ns0:ref type='bibr' target='#b146'>(Tian et al., 2004)</ns0:ref> by enumerating improvements for reliability assessment. They emphasize that some (HTTP) error codes require a more in-depth analysis, e.g., errors caused by factors that cannot be controlled by the website administrators should be separated from the ones that can be controlled, and that using IP addresses as a way to measure user count can be misleading, as many users often share the same IP address. Analyzing the performance of mobile applications can be challenging, especially when they depend on distributed back-end services. IBM researchers <ns0:ref type='bibr' target='#b132'>(Ramakrishna et al., 2017)</ns0:ref> proposed MIAS (Mobile Infrastructure Analytics System) to analyze the performance of mobile applications. The technique considers session data and system logs from instrumented applications and back-end services (i.e., servers and databases) and applies statistical methods to correlate them and reduce the size of relevant log data for further analysis.</ns0:p></ns0:div>
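<ns0:p>The metrics in this line of work are typically simple ratios over parsed access-log records, as the sketch below shows for an "errors per hit" style measure that also keeps client-side errors separate, echoing the point above about treating error codes differently. The records are illustrative.</ns0:p>

```python
# Parsed access-log records (status codes as integers).
records = [
    {"path": "/home", "status": 200},
    {"path": "/checkout", "status": 500},
    {"path": "/home", "status": 200},
    {"path": "/search", "status": 404},
]

hits = len(records)
server_errors = sum(1 for r in records if r["status"] >= 500)        # failures of the system
client_errors = sum(1 for r in records if 400 <= r["status"] < 500)  # tracked separately

print(f"server errors per hit: {server_errors / hits:.2f} "
      f"(client-side errors: {client_errors})")
```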
<ns0:div><ns0:head>Log Platforms</ns0:head><ns0:p>Monitoring systems often contain dashboards and metrics to measure the 'heartbeat' of the system. In the occurrence of abnormal behavior, the operations team is able to visualize the abnormality and conduct further investigation to identify the cause. Techniques to reduce/filter the amount of log data and efficient querying play an important role in supporting the operations team in diagnosing problems. One consideration is that, while visual aids are useful, handling several charts and dashboards at once can be overwhelming. In addition, it can be non-trivial to judge whether an unknown pattern on the dashboard represents an unexpected situation. In practice, operations engineers may rely on experience and past situations to make this judgment. Papers in this subcategory focus on full-fledged platforms that aim at providing a complete experience for monitoring teams.</ns0:p><ns0:p>Two studies were explicitly conducted in an industry setting, namely MELODY <ns0:ref type='bibr' target='#b35'>(Aharoni et al., 2011)</ns0:ref> at IBM and FLAP <ns0:ref type='bibr' target='#b105'>(Li et al., 2017b)</ns0:ref> at Huawei Technologies. MELODY is a tool for efficient log mining that features machine learning-based anomaly detection for proactive monitoring. It was applied with ten large IBM clients, and the authors reported that MELODY was useful to reduce the excessive amount of data faced by their users. FLAP is a tool that combines state-of-the-art processing, storage, and analysis techniques. One interesting feature that was not mentioned in other studies is the use of template learning for unstructured logs. The authors also report that FLAP is in production internally at Huawei.</ns0:p><ns0:p>While an industry setting is not always accessible to the research community, publicly available datasets are useful to overcome this limitation. <ns0:ref type='bibr' target='#b42'>Balliu et al. (2015)</ns0:ref> propose BIDAL, a tool to characterize the workload of cloud infrastructures. They use log data from Google data clusters for evaluation and incorporate support for popular analysis languages and storage back-ends in their tool.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Our results show that logging is an active research field that attracted not only researchers but also practitioners. We observed that most of the research effort focuses on log analysis techniques, while the other research areas are still in an early stage. In the following, we highlight open problems, gaps, and future directions per research area.</ns0:p><ns0:p>In LOGGING, several empirical studies highlight the importance of better tooling support for developers since logging is conducted in a trial-and-error manner (see subcategory 'Empirical Studies'). Part of the problem is the lack of requirements for log data. When the requirements are well defined, logging frameworks can be tailored to a particular use case and it is feasible to test whether the generated log data fits the use case (see subcategory 'Log Requirements'). However, when requirements are not clear, developers rely on their own experience to make log-related decisions. While static analysis is useful to anticipate potential issues in log statements (e.g., a null reference in a logged variable), other logging decisions (e.g., where to log) rely on the context of the source code (see subcategory 'Implementation of Log Statements'). Research in this area already shows the feasibility of employing machine learning to address those context-sensitive decisions. However, the implications of deploying such tools to developers are still unknown. Further work is necessary to address usability and operational aspects of those techniques. For instance, false positives are a reality in machine learning: no model is 100% accurate, and false positives will eventually emerge, even if at a low rate. Communicating results in a way that keeps developers productively engaged is important to bridge the gap between theory and practice. This also calls for closer collaboration between academia and industry.</ns0:p><ns0:p>In LOG INFRASTRUCTURE, most of the research effort focused on parsing techniques. We observed that most papers in the 'Log Parsing' subcategory address the template extraction problem as an unsupervised problem, mainly by clustering the static part of the log messages. While the analysis of system logs (e.g., web logs and other data provided by the runtime environment) has been extensively explored (mostly Hadoop log data), little has been explored in the field of application logs. We believe that this is due to the lack of publicly available datasets. In addition, application logs might not have a well-defined structure and can vary significantly from structured system logs. This could undermine the feasibility of exploiting clustering techniques. One way to address the availability problem could be using log data generated from test suites in open-source projects. However, test suites might not produce a comparable volume of data. Unless there is a publicly available large-scale application that could be used by the research community, we argue that the only way to explore log parsing at large scale is in partnership with industry. Industry would highly benefit from this collaboration, as researchers would be able to explore the latest techniques under real workloads. In addition to the exploration of application logs, there are other research opportunities for log parsing. Most papers exploit parsing for log analysis tasks. 
While this is an important application with its own challenges (e.g., data labeling), parsing could also be applied for efficient log compression and better data storage.</ns0:p><ns0:p>LOG ANALYSIS is the research area with the highest number of primary studies, and our study shows that the body of knowledge for data modeling and analysis is already extensive. For instance, logs can be viewed as sequences of events, count vectors, or graphs. Each representation enables the usage of different algorithms that might outperform other approaches under different circumstances. However, it remains open how different approaches compare to each other. To fill this gap, future research must address what trade-offs to apply and elaborate on the circumstances that make one approach more suitable than the other. A public repository on GitHub 9 contains several datasets used in many studies in log analysis. We encourage practitioners and researchers to contribute to this collective effort. In addition, most papers frame a log analysis task as a supervised learning problem. While this is the most popular approach for machine learning, the lack of representative datasets with labeled data is an inherent barrier. In projects operating in a continuous delivery culture, where software changes at a fast pace (e.g., hourly deploys), training data might become outdated quickly, and the cost of collecting and labeling new data might be prohibitive. We suggest that researchers also consider how their techniques behave in such dynamic environments. More specifically, future work could explore the use of semi-supervised and unsupervised learning to overcome the cost of creating and updating datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>THREATS TO VALIDITY</ns0:head><ns0:p>Our study maps the research landscape in logging, log infrastructure, and log analysis based on our interpretation of the 108 studies published from 1992 to 2019. In this section, we discuss possible threats to the validity of this work and possibilities for future expansions of this systematic mapping.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Search and Selection Procedures</ns0:head><ns0:p>Our procedure to discover relevant papers consists of querying popular digital libraries rather than looking into already known venues in Software Engineering (the authors' field of expertise). While we collected data from five different sources, it is unclear how each library indexes the entries. It is possible that we may have missed a relevant paper because none of the digital libraries reported it. Therefore, the search procedure might be unable to yield complete results. Another factor that influences the completeness of our dataset is the filtering of papers based on the venue rank (i.e., A and A* according to the CORE Rank). There are several external factors that influence the acceptance of a paper that are not necessarily related to the quality and relevance of the study. Overall, it is possible that relevant studies might be missing in our analysis.</ns0:p><ns0:p>One way to address this limitation is by analyzing the proceedings of conferences and journals in different years to identify missing entries. In our case, we have 46 venues after the selection process. Another approach could be applying backward/forward snowballing after the selection process. While Google Scholar provides a 'cited by' functionality that is useful for that purpose, the process still requires manual steps to query and analyze the results.</ns0:p><ns0:p>While the aforementioned approaches are useful to avoid missing studies, we argue that the number of papers and venues addressed in our work is a representative sample from the research field. The absence of relevant studies does not undermine our conclusions and results since we are not studying any particular dimension of the research field in depth (e.g., whether technique 'A' performs better than 'B' for parsing). Furthermore, we analyze a broad corpus of high-quality studies that cover the life-cycle of log data.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Paper Classification Procedure</ns0:head><ns0:p>The first author conducted the first step of the characterization procedure. Given that the entire process was mostly manual, this might introduce a bias in the subsequent analysis. To reduce its impact, the first author performed the procedure twice. Moreover, the second author revisited all the decisions made by the first author throughout the process. All divergences were discussed and settled throughout the study.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this work, we show how researchers have been addressing the different challenges in the life-cycle of log data. Logging provides a rich source of data that can enable several types of analyses that are beneficial to the operation of complex systems. LOG ANALYSIS is a mature field, and we believe that part of this success is due to the availability of datasets to foster innovation. LOGGING and LOG INFRASTRUCTURE, on the other hand, are still in an early stage of development. There are several barriers that hinder innovation in those areas, e.g., the lack of representative data for application logs and of access to developers. We believe that closing the gap between academia and industry can increase momentum and enable the future generation of tools and standards for logging.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Overview of research areas in logging practices and how they relate to the life-cycle of log data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>. (2012a);<ns0:ref type='bibr' target='#b51'>Chen and Jiang (2017b)</ns0:ref>;<ns0:ref type='bibr' target='#b140'>Shang et al. (2014</ns0:ref><ns0:ref type='bibr' target='#b138'>Shang et al. ( , 2015))</ns0:ref>;<ns0:ref type='bibr' target='#b127'>Pecchia et al. (2015)</ns0:ref>;<ns0:ref type='bibr' target='#b95'>Kabinna et al. (2016)</ns0:ref>;Li et al. (2019b);<ns0:ref type='bibr' target='#b160'>Zeng et al. (2019)</ns0:ref> Log requirements Assessment of log conformance given a known requirement<ns0:ref type='bibr' target='#b59'>Cinque et al. (2010)</ns0:ref>;<ns0:ref type='bibr' target='#b128'>Pecchia and Russo (2012)</ns0:ref>;<ns0:ref type='bibr' target='#b60'>Cinque et al. (2013)</ns0:ref>;<ns0:ref type='bibr' target='#b159'>Yuan et al. (2012b);</ns0:ref><ns0:ref type='bibr' target='#b61'>da Cruz et al. (2004)</ns0:ref> Implementation of log statementsFocus on what to log, where to log, and how to logChen and Jiang (2017a); Hassani et al. (2018); Fu et al. (2014a); Zhu et al. (2015); Li et al. (2018, 2017a); He et al. (2018a); Li et al. (2019a); Liu et al. (2019b); Anu et al. (2019); Zhi et al. (2019) Log Infrastructure: Techniques to enable and fulfil the requirements of the analysis process Parsing Extraction of log templates from raw log data Aharon et al. (2009); Makanju et al. (2009, 2012); Liang et al. (2007); Gainaru et al. (2011); Hamooni et al. (2016); Zhou et al. (2010); Lin et al. (2016); Tang and Li (2010); He et al. (2016a, 2018b); Zhu et al. (2019); Agrawal et al. (2019) Storage Efficient persistence of large datasets of logs Lin et al. (2015); Mavridis and Karatza (2017); Liu et al. (2019a) Log Analysis: Insights from processed log data Anomaly detection Detection of abnormal behaviourDong Tang and Iyer (1992); Oliner and Stearley (2007); Chinghway Lim et al. (2008); Xu et al. (2009b,a); Fu et al. (2009); Ghanbari et al. (2014); Gao et al. (2014); Juvonen et al. (2015); Farshchi et al. (2015); He et al. (2016b); Nandi et al. (2016); Du et al. (2017); Bertero et al. (2017); Lu et al. (2017); Debnath et al. (2018); Bao et al. (2018); Farshchi et al. (2018); Zhang et al. (2019); Meng et al. (2019) Security and privacy Intrusion and attack detection Oprea et al. (2015); Chu et al. (2012); Yoon and Squicciarini (2014); Yen et al. (2013); Barse and Jonsson (2004); Abad et al. (2003); Prewett (2005); Butin and Le Métayer (2014); Goncalves et al. (2015) Root cause analysis Accurate failure identification and impact analysis Gurumdimma et al. (2016); Kimura et al. (2014); Pi et al. (2018); Chuah et al. (2013); Zheng et al. (2011); Ren et al. (2019) Failure prediction Anticipating failures that leads a system to an unrecoverable state Wang et al. (2017); Fu et al. (2014b); Russo et al. (2015); Khatuya et al. (2018); Shalan and Zulkernine (2013); Fu et al. . (2003); Mariani and Pastore (2008); Tan et al. (2010); Beschastnikh et al. (2014); Wu et al. (2017); Awad and Menasce (2016); Kc and Gu (2011); Lou et al. (2010); Steinle et al. (2006); Di Martino et al. (2012) Reliability and dependability Understand dependability properties of systems (e.g., reliability, performance) Banerjee et al. (2010); Tian et al. (2004); Huynh and Miller (2009); El-Sayed and Schroeder (2013); Ramakrishna et al. (2017); Park et al. (2017) Log platforms Full-fledged log analysis platforms Li et al. (2017b); Aharoni et al. (2011); Yu et al. (2016); Balliu et al. (2015); Di et al. (2017); Neves et al. (2018); Gunter et al. (2007) Table 4. 
Summary of our mapping study. The 108 papers are grouped into three main research areas, and each area has subcategories according to the focus of the study. We observed three different subcategories in log engineering: (1) empirical studies on log engineering practices, (2) techniques to improve log statements based on known requirements for log data, and (3)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b87'>Hassani et al. (2018)</ns0:ref> proposed a set of checkers based on an empirical study of log-related changes in two open-source projects (Apache Hadoop and Apache Camel). They observed that typos in log messages, missing guards (i.e., conditional execution of a log statement according to the appropriate level), and missing exception-related logging (e.g., an unlogged exception or a missing exception in a log statement) are common causes for code changes. <ns0:ref type='bibr' target='#b104'>Li et al. (2019a)</ns0:ref> also analyze log changes across several revisions of 12 C/C++ open-source projects. However, they mine rules based on the type of modification (e.g., update on the log descriptor) and contextual characteristics from the revision. The rationale is that new code changes with similar contextual characteristics should have similar types of log modification. The authors proposed this method in the form of a tool named LOGTRACKER. In another study, Chen and Jiang (2017a) analyzed 352 pairs of log-related changes from ActiveMQ, Hadoop, and Maven (all Apache projects), and proposed LCANALYZER, a checker that encodes the anti-patterns identified in their analysis. Some of these patterns are the usage of nullable references, explicit type casts, and malformed output (e.g., referencing data types without a user-friendly string representation) in the log statement. Li et al. (2019b) addressed additional anti-patterns caused mainly by improper copy-and-paste, e.g., the same log statement reused in different catch blocks. They derived five duplication anti-patterns by studying 3K duplicated log statements on Hadoop, ElasticSearch, CloudStack, and Cassandra, and encoded those anti-patterns in a checker named DLFINDER. In the evaluation, they discovered new issues not only in the analyzed systems but also in two other systems (Camel and Wicket). Note that several of the aforementioned recurrent problems can be captured by static analysis before merging changes into the code base.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>analysis and statistical models to mine log patterns. <ns0:ref type='bibr' target='#b73'>Fu et al. (2014a)</ns0:ref> conducted an empirical study in two Microsoft C# systems and proposed five classifications for log placement: three for unexpected</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>In another study, <ns0:ref type='bibr' target='#b40'>Anu et al. (2019)</ns0:ref> also propose a classifier for log level recommendation. They focus on log statements located in if-else blocks and exception handling. In terms of feature engineering, the authors mostly leverage the terms in the surrounding code snippet (e.g., log message, code comments, and method calls), while Li et al. (2017a) use quantitative metrics extracted from the code (e.g., length of the log message and code complexity). However, it remains open how both techniques compare in terms of performance. An important part of log statements is the description of the event being logged. Inappropriate descriptions are problematic and delay the analysis process. He et al. (2018a) conducted an empirical study focused on what developers log. They analyzed 17 projects (10 in Java and 7 in C#) and concluded that log descriptors are repetitive and small in vocabulary. For this reason, they suggest that it is feasible to exploit information retrieval methods to automatically generate log descriptions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>From</ns0:head><ns0:label /><ns0:figDesc>an empirical perspective,<ns0:ref type='bibr' target='#b89'>He et al. (2016a)</ns0:ref> compared four log parsers on five datasets with over 10 million raw log messages and evaluated their effectiveness in a real log-mining task. The authors show, among many other findings, that current log parsing methods already achieve high accuracy, but do not scale well to large log data. Later,<ns0:ref type='bibr' target='#b167'>Zhu et al. (2019)</ns0:ref> extended the former study and evaluated a total of 13 parsing techniques on 16 datasets. In a different study,<ns0:ref type='bibr' target='#b90'>He et al. (2018b)</ns0:ref> also compared existing parsing techniques and proposed a distributed parsing technique for large-scale datasets on top of Apache Spark. The authors show that for small datasets, the technique underperforms due to the communication overhead between workers; however, for large-scale datasets (e.g., 200 million log messages), the approach overcomes traditional techniques. It is worth mentioning that the large-scale datasets were synthetically generated on top of two popular datasets due to the lack of real-world datasets.<ns0:ref type='bibr' target='#b33'>Agrawal et al. (2019)</ns0:ref> also proposes a distributed approach based on Apache Spark for distributed parsing.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>machine learning. As within many other fields of software engineering, we see an increasing adoption of machine and deep learning. In 2016, <ns0:ref type='bibr' target='#b91'>He et al. (2016b)</ns0:ref> evaluated six different algorithms (three supervised and three unsupervised machine learning methods) for anomaly detection. The authors found that supervised anomaly detection methods present higher accuracy when compared to unsupervised methods; that the use of sliding windows (instead of a fixed window) can increase the accuracy of the methods; and that the methods scale linearly with the log size. In 2017, <ns0:ref type='bibr' target='#b66'>Du et al. (2017)</ns0:ref> proposed DEEPLOG, a deep neural network model that uses Long Short-Term Memory (LSTM) to model system logs as a natural language sequence, and <ns0:ref type='bibr' target='#b47'>Bertero et al. (2017)</ns0:ref> explored the use of NLP, considering logs fully as regular text. In 2018, <ns0:ref type='bibr' target='#b62'>Debnath et al. (2018)</ns0:ref> (by means of the LOGMINE technique <ns0:ref type='bibr' target='#b85'>(Hamooni et al., 2016)</ns0:ref>) explored the use of clustering and pattern matching techniques. In 2019, <ns0:ref type='bibr' target='#b119'>Meng et al. (2019)</ns0:ref> proposed a technique based on unsupervised learning for unstructured data. It features a transformer, TEMPLATE2VEC (as an alternative to WORD2VEC), to represent templates extracted from logs and LSTMs to learn common patterns of log sequences.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b125'>Oprea et al. (2015)</ns0:ref> use (web) traffic logs to detect early-stage malware and advanced persistence threat infections in enterprise network, by modeling the information based on belief propagation inspired by graph theory.<ns0:ref type='bibr' target='#b54'>Chu et al. (2012)</ns0:ref> analyses access logs (in their case, from TACACS+, an authentication protocol developed by Cisco) to distinguish normal operational activities from rogue/anomalous ones.<ns0:ref type='bibr' target='#b156'>Yoon and Squicciarini (2014)</ns0:ref> focus on the analysis and detection of attacks launched by malicious or misconfigured nodes, which may tamper with the ordinary functions of the MapReduce framework.<ns0:ref type='bibr' target='#b155'>Yen et al. (2013)</ns0:ref> propose BEEHIVE, a large-scale log analysis for detecting suspicious activity in enterprise networks, based on logs generated by various network devices. In the telecommunication context,<ns0:ref type='bibr' target='#b81'>Goncalves et al. (2015)</ns0:ref> used clustering algorithms to identify malicious activities based on log data from firewall, authentication and DHCP servers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b100'>Kimura et al. (2014)</ns0:ref> identify spatial-temporal patterns in network events. The authors affirm that such spatial-temporal patterns can provide useful insights on the impact and root cause of hidden network events. <ns0:ref type='bibr' target='#b133'>Ren et al. (2019)</ns0:ref> explore a similar idea in the context of diagnosing non-reproducible builds. They propose a differential analysis among different build traces based on I/O and parent-child dependencies. The technique leverages common dependency patterns to filter abnormal patterns and to pinpoint the cause of the non-reproducible build. <ns0:ref type='bibr' target='#b130'>Pi et al. (2018)</ns0:ref> propose a feedback control tool for distributed applications in virtualized environments. By correlating log messages and resource consumption, their approach builds relationships between changes in resource consumption and application events. Somewhat related, <ns0:ref type='bibr' target='#b55'>Chuah et al. (2013)</ns0:ref> identify anomalies in resource usage and link such anomalies to software failures. <ns0:ref type='bibr' target='#b163'>Zheng et al. (2011)</ns0:ref> also argue for the need of correlating different log sources for better problem identification. In their study, the authors correlate the supercomputer BlueGene's reliability, availability, and serviceability logs with its job logs, and show that such a correlation revealed several important observations about why their systems and jobs fail. <ns0:ref type='bibr' target='#b84'>Gurumdimma et al. (2016)</ns0:ref> also leverage multiple sources of data for accurate diagnosis of malfunctioning nodes in the Ranger Supercomputer.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>Outside the web domain, El-Sayed and Schroeder (2013) explore a decade of field data from the Los Alamos National Lab and study the impact of different factors, such as power quality, temperature, fan activity, system usage, and even external factors, such as cosmic radiation, and their correlation with the reliability of High Performance Computing (HPC) systems. Among the lessons learned, the authors observe that, on the day following a failure, a node is 5 to 20 times more likely to experience an additional failure, and that power outages increase not only follow-up software failures but also infrastructure failures, such as problems in distributed storage and file systems. In a later study, <ns0:ref type='bibr' target='#b126'>Park et al. (2017)</ns0:ref> discuss the challenges of analyzing HPC logs. Log analysis of HPC data requires understanding the underlying hardware characteristics and demands processing resources to analyze and correlate data. The authors introduce an analytic framework based on NoSQL databases and Big Data technology (Spark) for efficient in-memory processing to assist system administrators.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>9</ns0:head><ns0:label /><ns0:figDesc>Loghub - https://github.com/logpai/loghub</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Distribution of study selection when the survey was first conducted.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Selection Step</ns0:cell><ns0:cell>Qty</ns0:cell></ns0:row><ns0:row><ns0:cell>Step 1. Exclusion by selection criteria</ns0:cell><ns0:cell>3,872</ns0:cell></ns0:row><ns0:row><ns0:cell>Out of scope (failed C 4 )</ns0:cell><ns0:cell>3,544</ns0:cell></ns0:row><ns0:row><ns0:cell>Short/workshop paper (failed C 3 )</ns0:cell><ns0:cell>276</ns0:cell></ns0:row><ns0:row><ns0:cell>Not a research paper (failed C 3 )</ns0:cell><ns0:cell>40</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-English manuscript (failed C 1 )</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Unpublished (failed C 3 )</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Duplicate</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Secondary study (failed C 2 )</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Preliminary inclusion of papers</ns0:cell><ns0:cell>315</ns0:cell></ns0:row><ns0:row><ns0:cell>Step 2. Exclusion by venue rank (neither A* nor A)</ns0:cell><ns0:cell>219</ns0:cell></ns0:row><ns0:row><ns0:cell>Unranked</ns0:cell><ns0:cell>143</ns0:cell></ns0:row><ns0:row><ns0:cell>Rank B</ns0:cell><ns0:cell>47</ns0:cell></ns0:row><ns0:row><ns0:cell>Rank C</ns0:cell><ns0:cell>30</ns0:cell></ns0:row><ns0:row><ns0:cell>Inclusion of papers (up to 2018, inclusive)</ns0:cell><ns0:cell>96</ns0:cell></ns0:row></ns0:table><ns0:note>3/26PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55310:1:1:REVIEW 16 Feb 2021)Manuscript to be reviewedComputer ScienceC 2 : It is a primary study.C 3 : It is a full research paper accepted through peer-review.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>highlights the distribution of venues</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Distribution of venues and publications grouped by research communities.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Top recurring venues ordered by number of papers. There are 14 (out of 46) recurring venues with at least three papers published (omitted venues with less than three papers for brevity).</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table 3 highlights the most recurring venues in our dataset (we omitted venues with less than three papers for brevity). The 'International Conference on Software Engineering (ICSE)', the 'Empirical Software Engineering Journal (EMSE)', and the 'International Conference on Dependable Systems and Networks (DSN)' are the top three recurring venues related to the subject and are well-established venues. DSN and ICSE are conferences with more than 40 editions each, and EMSE is a journal with an average of five issues per year since 1996. At a glance, we noticed that papers from DSN have an emphasis on log analysis of system logs, while papers from ICSE and EMSE have an emphasis on development aspects of logging practices (more details about the research areas in the next section). Note that Table 3 also concentrates 65% (71 out of 108) of the primary studies in our dataset. Figure 2 highlights the growth of publications from 1992 to 2019. The interest in logging has been continuously increasing since the early 2000s. During this time span, we observed the appearance of industry track papers reporting applied research in a real context. This gives some evidence that the growing interest in the topic attracted not only researchers from different areas but also companies, fostering the collaboration between academia and industry.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Growth of publication types (research track paper, industry track paper, journal) over the years. Labels indicate the number of publications per type in a specific year.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b63'>Di et al. (2017)</ns0:ref> propose LOGAIDER, a tool that integrates log mining and visualization to analyze different types of correlation (e.g., spatial and temporal). In this study, they use log data from Mira, an IBM Blue Gene-based supercomputer for scientific computing, and reported high accuracy and precision in uncovering correlations associated with failures. <ns0:ref type='bibr' target='#b83'>Gunter et al. (2007)</ns0:ref> propose a log summarization solution for time-series data integrated with anomaly detection techniques to troubleshoot grid systems. They used a publicly available testbed and conducted controlled experiments to generate log data and anomalous events. The authors highlight the importance of being able to choose which anomaly detection technique to use, since they observed different performance depending on the anomaly under analysis.</ns0:figDesc><ns0:table /><ns0:note>Open-source systems for cloud infrastructure and big data can also be used as representative objects of study. Yu et al. (2016) and Neves et al. (2018) conduct experiments based on OpenStack and Apache Zookeeper, respectively. CLOUDSEER (Yu et al., 2016) is a solution to monitor management tasks in cloud infrastructures. The technique is based on the characterization of administrative tasks as models inferred from logs. CLOUDSEER reports anomalies based on model deviation and aggregates associated logs for further inspection. Finally, FALCON <ns0:ref type='bibr' target='#b122'>(Neves et al., 2018)</ns0:ref> is a tool that builds space-time diagrams from log data. It features a happens-before symbolic modeling that allows obtaining ordered event scheduling from unsynchronized machines. One interesting feature highlighted by the authors is the modular design of the tool for ease of extension.</ns0:note></ns0:figure>
<ns0:note place='foot' n='2'>https://www.elastic.co/elasticsearch/ 3 https://www.elastic.co/logstash 4 https://www.elastic.co/kibana 5 https://lucene.apache.org 6 https://grafana.com 7 https://www.fluentd.org</ns0:note>
</ns0:body>
" | "Dear Editorial Office,
We would like to sincerely thank the editor and the reviewers again for the careful reading of
our manuscript. The reviewers’ comments helped us in clarifying many of the in-depth details
that were not clear in our first version.
This letter explains how we addressed the comments in this major review, point-by-point. The
main changes in this version of the paper are:
● Updated taxonomy used in our paper
● Elaborated the decisions we made in the survey methodology
● Addition of a Threats to Validity section
To facilitate the work of the reviewers, we also attach a version of our paper where changes are
automatically tracked using latexdiff. The Title, Tables, and Figures are also updated (although
not highlighted by latexdiff).
We look forward to reading back from you. Should there be any questions, please do not
hesitate to bring them to our attention.
Jeanderson, Maurício, Arie
Editor comments (Marieke Huisman)
The reviewers provide several comments, in particular on the use of the term logging, and how
you did your categorisation. They also would like to see more motivation for some of the
choices you have made. Please address their comments in a new version of this paper.
We are thankful for the time and comments from the editor and the reviewers. Our reviewed
manuscript adopts the recommended taxonomy and elaborates the decisions we made in the
survey methodology and results.
Reviewer 1 (Anonymous)
Basic reporting
1. The title is misleading, “Contemporary software monitoring: a systematic mapping study.” In
particular, while the title is about software monitoring, the survey mainly discusses the usage of
software logs, including logging, log parsing, log storage, and log analysis. I suggest the authors
either include papers about software monitoring that are not based on software logs or revise
the title to align with the main context. For example, “Log-based Software Monitoring: A
Systematic Mapping Study.”
We agree with the reviewer that the term 'log-based software monitoring' is a more
accurate title to our paper. The new version of the paper contains the updated title.
2. In abstract, the authors mention “A holistic view of the logging research field is key to
provide directions and to disseminate the state-of-the-art for technology transferring.” The
term “logging” may not be accurate. “Logging” typically refers to designing logging statements
in source codes and the practice that developers conduct logging is “logging practice”. The
research field should be “automated log analysis”.
Thank you for the question regarding the taxonomy. We rephrase this sentence to: 'A holistic
view of the research effort on logging practices and automated log analysis is key to provide
directions and to disseminate the state-of-the-art for technology transferring.'
We make the explicit distinction of 'logging practices' and 'automated log analysis' because
we think that only referencing 'automated log analysis' might not accurately refer to the
empirical studies and methods for logging practices (i.e., practices that support developers in
deciding, e.g., what to log or where to put a log line).
3. The authors mention that “In this paper, we study 108 papers … from different communities
(machine learning, software engineering, and systems) ” However, it is unclear how many
papers each category contains. The authors are recommended to add a table to show the
related statistics.
We added the following paragraph in the Results section of our paper:
'We manually aggregate venues by their main scope after checking their description and call
for papers, and we merge closely related topics (e.g., Artificial Intelligence, Machine Learning,
and Data Science). Table 2 highlights the distribution of venues grouped by the research
community, e.g., there are 44 papers published on 10 Software Engineering venues.'
In addition, we added the corresponding table (Table 2):
4. Line 221 (Overview of Research Areas). The term “Log Engineering” is inaccurate and
misleading. Could the authors explain why “log engineering” is an appropriate term here? I
would prefer “logging”.
Indeed, after reviewing the paper, we agree that 'log engineering' and 'logging' are
indistinguishable. Therefore, we changed the paper accordingly on Table 3, Results and
Discussion sections, and on Figure 3.
5. Table 3. The categorization is inaccurate. For example, “Logging practice” row mainly
includes empirical studies. “Implementation of log statements” row mainly includes
methodologies on what-to-log, where-to-log, and how-to-log. However, studies in the “Logging
practice” row also explores these three topics. Thus, I suggest the authors to use “Empirical
studies” instead of “Logging practice”.
We reviewed the categorization and we agree that 'Empirical studies' is the most
appropriate term for the respective category. We changed the paper accordingly on Table 3,
Results (subsection 'Empirical Studies') and Discussion.
6. In “log analysis”, the authors mention several tasks, including anomaly detection and failure
prediction. However, it is unclear what are the differences between them. The definition
provided in Table 3 looks similar: “Detection of abnormal behaviour” and “Anticipate abnormal
behaviour”.
The current definition is indeed a bit ambiguous.
Anomaly detection is about identifying undesired patterns in log data. Failure prediction is
about anticipating the occurrence of actual failures that leads to an unrecoverable state.
While papers in both categories might use similar approaches (e.g., supervised learning or
outlier detection), in our interpretation, not every anomaly necessarily leads to failures (e.g.,
increased latency that could be caused by external factors and needs to be addressed) but
predicting failures requires insights about preceding events (e.g. increased latency associated
with out-of-memory on services A and B results in a cascading effect of crashes). Those
insights usually happen after root cause analysis.
We clarified that in the paper. More specifically, in Table 3, we adjusted the definition of
Failure Prediction to 'Anticipating failures that leads a system to an unrecoverable state' and
updated the initial paragraph from subsection 'Failure Prediction' with the following
sentence: 'However, it differs from anomaly detection in the sense that identifying the
preceding patterns of an unrecoverable state requires insights from root cause analysis.'
Experimental design
1. A line of research on what kind of log we should use should be considered, for example:
[ICSE’16] Behavioral Log Analysis with Statistical Guarantees
[ESEC/FSE’18] Using Finite-State Models for Log Differencing
[ASE’19] Statistical Log Differencing
We analyzed the scraped data from the Digital Libraries present in our data collection and
confirmed that the aforementioned studies are not present. Note that this is not related to
selection criteria after inclusion/exclusion but for some reason, at the time we collected the
data, those entries were not indexed.
One way to avoid those missing studies would be checking the proceedings of the 46 venues
that we discovered in a range of years. We address this limitation in a newly added section of
Threats to Validity. More precisely, in the 'Paper selection procedure' subsection.
2. I suggest the authors add a section that describes existing open-source tools and datasets for
log-based software monitoring.
We find this suggestion very useful to our study. To this end, we point the reader in the
Discussion section about Log Analysis to an existing repository that contains a curated list of
papers (which are also present in our study) and describes the existing datasets (Loghub: https://github.com/logpai/loghub).
Reviewer 2 (Amleto Di Salle)
Basic reporting
This paper details a mapping study about the monitoring of modern software systems. The
main result is a classification about logging research areas through the analysis of 108 papers.
Then a discussion concerning the findings is reported.
The article is well written, and the authors show competence and demonstrate excellent
expertise in the field. I think that behind this paper, there is a certain amount of work that must
be acknowledged.
The Introduction highlights the problem adequately, and the motivations are well justified.
Relation with existing literature is satisfactory and well discussed.
We thank the reviewer for their kind words.
The paper is clear concerning the English language, but the authors mix English UK and English
US. For example, in the Introduction Section, they use both behavior (or analyze) and behaviour
(or analyse). Please check it in the overall paper.
We revised and fixed the inconsistencies in the paper. The paper follows English US as the
standard language.
Concerning the paper structure, it does not follow the template suggested by the journal for
literal review. In particular, the authors added the Discussion section as recommended by the
standard journal template. However, having two separate sections allows better to highlight
the results with respect to their analysis.
In a previous iteration of the paper prior submission, we didn't have the Discussion section.
After checking the paper, we also had the same feeling that having a separate Discussion
section would allow better highlighting of the results. Therefore, we made the decision of not
following strictly the recommended format. If allowed by the editor, we would like to keep a
separated Discussion section.
Experimental design
The authors used the well-known guidelines from Petersen K. et al. 'Systematic Mapping
Studies in Software Engineering' for having a mapping study in software engineering. However,
they did not follow the guidelines. In particular, they did not define any research questions
about the mapping study; they did not report a subsection containing validity threats.
In our mapping study, we used the guidelines from Petersen et al. to derive the classification
scheme in a systematic way by using 'keywording of abstracts'.
Regarding the research questions, the main question being answered is how the existing
literature is distributed across the research field. The answer is summarized in Table 3. We
also provide an overview of the venues and trends over the year (as typically reported on
mapping studies).
The main reason for not explicitly referring to RQs is that we find that just providing the
counting over the years and the number of papers on each category does not fully exploit the
richness of our dataset. On the other hand, defining more significant research questions is
challenging given the diversity of topics and facets within logging, log infrastructure, and log
analysis.
We decided to provide an overview of each area without discussing them in-depth (which
might be more appropriate to a Systematic Literature Review). We imagine our mapping
study as an introductory text to researchers and practitioners to the research field and how
the existing literature is connected to the life-cycle of log data.
Regarding threats to validity, we included a section in the updated version of the paper.
Concerning the period used during the search process, it comes from 1992 to 2019 (line 213).
However, in the paper's title, the authors refer to the monitoring of contemporary software,
and the starting year is far from being considered modern. So please motivate why they
decided to use 1992 as the starting date.
Indeed, we find that the use of 'contemporary' leads to confusion. One of our goals is to
discuss how the research field has evolved and demonstrate that it has attracted attention from the research community and
industry.
We renamed the title of the paper to 'Log-based Software Monitoring: A systematic mapping
study' as also suggested by the first reviewer.
Moreover, it is unclear why they decided to update the survey through a forward snowballing
and not through a repetition of the search process only for the considered period. Again, please
motivate it.
The main reason for using snowballing is that repeating the process would require running
the queries and scraping the results for all five digital libraries. We highlight this in the
'Survey Update' section:
'To this end, we select all 11 papers from 2018 and perform forward snowballing to fetch a
preliminary list of papers from 2019. We use snowballing for simplicity since we can leverage
the “Cited By” feature from Google Scholar rather than scraping data of all five digital
libraries. '
Please change the sentence at line 89 to better highlight the usage of the previous guidelines
described in the paper.
We modified the recommended line to emphasize how we use the guidelines: 'In our
mapping study, we apply keywording of abstracts (Petersen et al., 2008) to derive the
classification schema.'
Validity of the findings
The Results section contains the core of the paper, and it is clear and well written. However, as
already said in the study design, the results are not discussed for the hypothetical defined
research questions. So please, add the RQs and then modify the results section accordingly.
Also, The discussion and conclusions sections are clear and well written.
We thank the reviewer for the suggestion and the kind words. We mentioned our reasons for
not explicitly including research questions in the comments related to our experimental
design.
Comments for the Author
Please see the following from the PeerJ literal review template: Should not be used to
acknowledge funders – funding will be entered online in the declarations page as a separate
Funding Statement and appear on the published paper
We removed the Acknowledgment section. The confusion occurred due to the presence of
this section on the LaTeX template. We thank you for the reminder.
## Minor comments
- overall paper: please hyphenate open source when used as an adjective before a noun, i.e.,
open source projects --> open-source projects
- line 41: and also highlight --> and also highlights
- lines 228, 229: (2)security (6)model --> add space (2) security (6) model
- line 395: located located --> located
- line 456: to large log data Later, --> to large log data. Later,
- line 488: Results indicates --> Results indicate
- line 527: analysis, Farshchi et al. (2015); Why the reference? the sentence refers to another
reference i.e., Juvonen et al. (2015)
There was an improper use of citation macros in the latex sources. We updated the line with
the correct citation:
'For instance, finding anomalies in HTTP logs by using dimensionality reduction techniques
(Juvonen et al., 2015), finding anomalies in cloud operations (Farshchi et al., 2015; Farshchi et
al., 2018) and Spark programs (Lu et al., 2017) by using machine learning.'
- line 594: The propose --> They propose
- line 644: Ulrich et al. (2003) shows --> Ulrich et al. (2003) show
- line 651: has been also possible --> has also been possible
- line 741: allows to obtain --> allows obtaining
We have implemented all the fixes in the paper.
" | Here is a paper. Please give your review comments after reading it. |
67 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Today, redundancy in source code, so-called 'clones', caused by copy&paste can be found reliably using clone detection tools. Redundancy can arise also independently, however, caused not by copy&paste. At present, it is not clear how only functionally similar clones (FSC) differ from clones created by copy&paste. Our aim is to understand and categorise the syntactical differences in FSCs that distinguish them from copy&paste clones in a way that helps clone detection research. Methods. We conducted an experiment using known functionally similar programs in Java and C from coding contests. We analysed syntactic similarity with traditional detection tools and explored whether concolic clone detection can go beyond syntax. We ran all tools on 2,800 programs and manually categorised the differences in a random sample of 70 program pairs. Results. We found no FSCs where complete files were syntactically similar. We could detect a syntactic similarity in a part of the files in < 16 % of the program pairs. Concolic detection found 1 of the FSCs. The differences between program pairs were in the categories algorithm, data structure, OO design, I/O and libraries. We selected 58 pairs for an openly accessible benchmark representing these categories. Discussion. The majority of differences between functionally similar clones are beyond the capabilities of current clone detection approaches. Yet, our benchmark can help to drive further clone detection research.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>As software is now a key ingredient of current complex systems, the size of software systems is continuously increasing. While software with a code size of several thousand lines has been considered large in the seventies and eighties, we now reach code sizes of tens to hundreds of millions of lines of code. This has strong effects on the complexity and manageability of these systems and, as a result, on the cost of maintaining them.</ns0:p><ns0:p>By abstraction and code generation, modern programming languages and development techniques help to reduce the amount of code we have to understand. Nevertheless, it still tends to be overwhelming. A factor that aggravates the situation is that there is unnecessary code in these huge code bases: unreachable code, never executed code and redundant code. The latter has been of increasing interest in the software engineering research community under the term 'cloning'. Especially clones that resulted from copy&paste can now be detected reliably <ns0:ref type='bibr' target='#b25'>(Rattan et al., 2013)</ns0:ref>. In our own research with companies, we often found rates of redundant code caused by copy&paste in the range of 20 % -30 % <ns0:ref type='bibr' target='#b32'>(Wagner, 2013)</ns0:ref>.</ns0:p><ns0:p>More challenging is the detection of functionally similar source code. We will refer to it as functionally similar clones (FSCs). FSCs might not have been created by copy&paste but developers independently needed and implemented certain functionalities in their code base. We deliberately go beyond type-4 clones <ns0:ref type='bibr' target='#b19'>(Koschke, 2007)</ns0:ref> or simions <ns0:ref type='bibr' target='#b2'>(Deissenboeck et al., 2012)</ns0:ref> which only include functional equivalence. In a refactoring session with the goal to reduce the size of a code base, a developer would still be interested in mostly similar and not only exactly equivalent functionality. Although this problem is in general undecidable, there have been several heuristic efforts <ns0:ref type='bibr' target='#b2'>(Deissenboeck et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b10'>Jiang & Su, 2009;</ns0:ref><ns0:ref type='bibr' target='#b17'>Kim et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b24'>Marcus & Maletic, 2001)</ns0:ref>. <ns0:ref type='bibr' target='#b13'>Juergens, Deissenboeck, & Hummel (2010b)</ns0:ref> showed that traditional clone detection approaches and tools are hardly able to detect functionally equivalent clones because they rely on syntactic similarities.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Problem Statement</ns0:head><ns0:p>So far, the work by <ns0:ref type='bibr' target='#b13'>Juergens, Deissenboeck, & Hummel (2010b)</ns0:ref> is the only study investigating the syntactical differences in functionally similar clones. Furthermore, their study is limited: they use only programs implementing a single specification in Java. Therefore, we have no clear understanding of what differences make an individually developed functionally similar clone really different from copy&paste clones. Hence, a realistic, open benchmark for comparing and improving such approaches is also lacking although it is necessary for faster progress in the field <ns0:ref type='bibr' target='#b22'>(Lakhotia et al., 2003)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2'>Research Objectives</ns0:head><ns0:p>The objective of this study is to better understand the differences that make up functionally similar clones to support future research on their detection. In particular, we want to classify and rate differences and build a representative benchmark.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3'>Contribution</ns0:head><ns0:p>We contribute a large-scale quantitative study combined with a qualitative analysis of the differences. We selected 2,800 Java and C programs which are solutions to the Google Code Jam programming contest and are therefore functionally similar. We identified copy&paste clones by using two clone detection tools (ConQAT and Deckard) to quantify syntactic similarities. We explored how a type-4 detection tool (CCCD) using concolic detection performs in detecting the FSCs that are not syntactically similar. We created a categorisation of differences between undetected clones and quantified these categories. Finally, we derived a benchmark based on real FSCs covering the categories and degrees of differences which can drive the improvement of clone detection tools.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.4'>Terminology</ns0:head><ns0:p>The most important terminological issue for this study is whether we need a new term for the phenomenon we investigate. The most established term is type-4</ns0:p><ns0:p>clone. Yet, the definition in <ns0:ref type='bibr' target='#b27'>Roy et al. (2009)</ns0:ref> emphasises that the code fragments have to perform the same computation. We want to emphasise the similarity, however. This is expressed by the term simion, introduced in <ns0:ref type='bibr' target='#b13'>Juergens et al. (2010b)</ns0:ref>, but that term has not been used in other context. Therefore, we propose the more general term functionally similar clone. Nevertheless, we believe the community should agree on one of these terms to avoid confusion in the future.</ns0:p><ns0:p>We define functionally similar clones as two code fragments that provide a similar functionality w.r.t a given definition of similarity but can be implemented quite differently. With this definition, we are based on the general definition of a code clone: 'two code fragments form a clone pair if they are similar enough according to a given definition of similarity.' <ns0:ref type='bibr' target='#b1'>(Bellon et al., 2007)</ns0:ref> Intuitively, what we are interested in is similar enough so that the clones are interesting for a developer of the system while changing it. For our FSCs, we expect this to include similarity in terms of the same output on the same input in most cases.</ns0:p><ns0:p>The data types of the input and output should be somehow transformable into each other. The nonfunctional properties of the code fragments, however, can be very different.</ns0:p><ns0:p>For example, as a developer, I would be interested in the different implementations of search algorithms in my system also when they have quite different performance characteristics. I could then decide if the used algorithms and implementations are suitable for the context. Finally, as with other code clones, the similarity should be such that the clones could potentially be refactored. If the fragments are so different that a refactoring is very elaborate or results in a very complex design, a developer will not be interested in it.</ns0:p><ns0:p>As there is a large diversity in how the further terms around FSCs are used, we provide definitions for the clone types we investigate in this paper. Moreover, we define terms for granularities of the software programs under analysis in Tab. 1.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1: Terminology</ns0:head></ns0:div>
<ns0:div><ns0:head>Type-1 clone</ns0:head><ns0:p>Similar code fragments except for variation in whitespace, layout and comments <ns0:ref type='bibr' target='#b1'>(Bellon et al., 2007)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Type-2 clone</ns0:head><ns0:p>Similar code fragments except for variation in identifiers, literals, types, whitespaces layouts and comments <ns0:ref type='bibr' target='#b1'>(Bellon et al., 2007)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Type-3 clone</ns0:head><ns0:p>Similar code fragments except that some statements may be added or deleted in addition to variation in identifiers, literals, types, whitespaces, layouts or comments <ns0:ref type='bibr' target='#b1'>(Bellon et al., 2007)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Type-4 clone</ns0:head><ns0:p>Two or more code fragments that perform the same computation but are implemented by different syntactic variants. <ns0:ref type='bibr' target='#b27'>(Roy et al., 2009)</ns0:ref> Functionally similar clone (FSC)</ns0:p><ns0:p>Code fragments that provide a similar functionality w.r.t a given definition of similarity but can be implemented quite differently </ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Related Work</ns0:head><ns0:p>A code clone consists of at least two pieces of code that are similar according to a definition of similarity. Most commonly, clone detection approaches look for exact clones (also called type-1 ) and clones with simple changes such as renaming (also called type-2 ). These types of clones are detectable today in an efficient and effective way. Even clones with additional changes (inconsistent, near-miss or type-3 clones) can be detected by several detection approaches and tools <ns0:ref type='bibr' target='#b4'>(Deissenboeck et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b9'>Jiang et al., 2007a;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kamiya et al., 2002)</ns0:ref>. There are also two surveys <ns0:ref type='bibr' target='#b19'>(Koschke, 2007;</ns0:ref><ns0:ref type='bibr' target='#b26'>Roy & Cordy, 2007)</ns0:ref> and a systematic literature review <ns0:ref type='bibr' target='#b25'>(Rattan et al., 2013)</ns0:ref> on this topic. <ns0:ref type='bibr' target='#b31'>Tiarks et al. (2011)</ns0:ref> investigated in particular type-3 clones and also their differences. They concentrated, however, on differences in code metrics (e.g. fragment size), low level edits (e.g. variable) and abstracted them only slightly (e.g. to type substitution).</ns0:p><ns0:p>Juergens, <ns0:ref type='bibr' target='#b13'>Deissenboeck, & Hummel (2010b)</ns0:ref> reported on an experiment to investigate the differences between syntactical/representational and semantic/behavioural similarities of code and the detectability of these similarities. They used a simple student assignment called email address validator and also inspect the open-source software JabRef. Both of them are in Java. To detect the clones of types 1-3, they used the clone detection tools ConQAT and Deckard.</ns0:p><ns0:p>They reviewed the open-source system manually to identify if behaviourally similar code that does not result from copy&paste can be detected and occurs in real-world software. The results indicate that behaviourally similar code of independent origin is highly unlikely to be syntactically similar. They also reported that the existing clone detection approaches cannot identify more than 1 % of such redundancy. We build our work on their study but concentrate on understanding the differences in more detail based on a diverse sample with a larger sample size and different programming languages.</ns0:p><ns0:p>Several researchers have proposed to move away from the concrete syntax to detect what they call semantic clones. <ns0:ref type='bibr' target='#b24'>Marcus & Maletic (2001)</ns0:ref> used information retrieval techniques on source code to detect semantic similarities. <ns0:ref type='bibr' target='#b20'>Krinke (2001)</ns0:ref> proposed to use program dependence graphs (PDG) for abstracting source code. <ns0:ref type='bibr' target='#b7'>Higo et al. (2011)</ns0:ref> extended this to an incremental approach.</ns0:p><ns0:p>Komondoor & Horwitz (2001) also used PDGs for clone detection and see the possibility to find non-contiguous clones as a main benefit. <ns0:ref type='bibr' target='#b5'>Gabel, Jiang, & Su (2008)</ns0:ref> combined the analysis of dependence graphs with abstract syntax trees in the tool Deckard to better scale the approach.</ns0:p><ns0:p>A very different approach to detecting semantic clones comes from <ns0:ref type='bibr' target='#b17'>Kim et al. (2011)</ns0:ref> who use static analysis to extract the memory states for each procedure exit point. 
They could show that they find more semantically similar procedures as clones than previous clone detectors including PDG-based detectors.</ns0:p><ns0:p>Finally, <ns0:ref type='bibr' target='#b15'>Kamiya (2013)</ns0:ref> also proposed an approach that abstracts further from the concrete syntax. <ns0:ref type='bibr' target='#b2'>Deissenboeck et al. (2012)</ns0:ref> proposed the dynamic detection of functionally similar code fragments based on the fundamental heuristic that two functionally similar code fragments will produce the same output for the same randomly generated input. They implemented a prototype based on their toolkit ConQAT. The evaluation of the approach involved 5 open-source systems and an artificial system with independent implementations of the same specification in Java. They experienced low detection results due to the limited capability of the random testing approach.</ns0:p><ns0:p>Furthermore, they mention that the similarities are missed due to chunking, i.e. if the code fragments perform a similar computation but use different data structures at their interfaces. They emphasise that further research is required to understand these issues.</ns0:p><ns0:p>CCCD <ns0:ref type='bibr' target='#b21'>(Krutz & Shihab, 2013</ns0:ref>) also claims to detect functionally similar code for C programs based on concolic analysis. Its creators evaluated their implementation of the approach on the benchmarks mentioned below and found a 92 % recall even in the type-4 clones in those benchmarks. As the tool is freely available in a virtual machine, we were able to include it in our experiment.</ns0:p><ns0:p>A clear comparison and measurement of the improvement in clone detection research would require a comprehensive benchmark. There have been few approaches <ns0:ref type='bibr' target='#b22'>(Lakhotia et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b27'>Roy et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b30'>Tempero, 2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>3 Experimental Design</ns0:head><ns0:p>To reach our research objectives, we developed a study design based on the idea that we investigate sets of programs which we knew to be functionally similar: accepted submissions to programming contests. We formulated four research questions which we all answer by analysing these programs and corresponding detection results. All instrumentation, analysis scripts and results are freely available in <ns0:ref type='bibr' target='#b33'>Wagner et al. (2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Research Questions</ns0:head><ns0:p>As we have independently developed but functionally similar programs, we first wanted to establish how much syntactic similarity is in these programs. We can investigate this by quantifying the share of type-1-3 clones.</ns0:p><ns0:p>RQ 1: What share of independently developed similar programs are type-1-3 clones?</ns0:p><ns0:p>Then we wanted to understand what is different in clones not of type-1-3. This should result in a categorisation and rating of the differences between FSCs.</ns0:p><ns0:p>RQ 2: What are the differences between FSC that go beyond type-1-3 clones?</ns0:p><ns0:p>Although we could not fully evaluate type-4 detectors, we wanted at least to explore what a modern clone detection approach can achieve on our FSCs. This should give us an indication how much more research is needed on those detection approaches.</ns0:p><ns0:p>RQ 3: What share of FSC can be detected by a type-4 clone detector?</ns0:p><ns0:p>Finally, to make our results an operational help for clone detection research, we wanted to create a representative benchmark from the non-detected clones.</ns0:p><ns0:p>RQ 4: What should a benchmark contain that represents the differences between FSC?</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Hypotheses</ns0:head><ns0:p>We define two hypotheses regarding RQ 1. As we investigate the share of detectable Type-1-3 clones, we wanted to understand if there are differences between the used tools and analysed languages because this might have an influence on the generalisability of our results. We formulated the two null hypotheses:</ns0:p><ns0:p>H1: There is no difference in the share of detected Type-1-3 clones between the analysed programming languages.</ns0:p><ns0:p>H2: There is no difference in the share of detected Type-1-3 clones between the analysed clone detection tools.</ns0:p><ns0:p>Moreover, in RQ 2, we wanted to understand the characteristics of non-detected clone pairs and, therefore, categorised them. In this categorisation, we also rated the degree of difference in each category. An ideal categorisation would have fully orthogonal categories and, hence, categories would not be correlated in the degree of difference:</ns0:p><ns0:p>H3: There is no correlation between the degrees of difference between categories.</ns0:p><ns0:p>Furthermore, we could imagine that different programming languages might cause disparately strong differences in certain categories. As this again has an impact on the generalisability of our results, we formulated this null hypothesis:</ns0:p><ns0:p>H4: There is no difference in the degree of difference between the analysed programming languages.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Design</ns0:head><ns0:p>The overall study design is a combination of quantitative and qualitative analysis. For the quantitative part of our study, we used a factorial design with two factors (programming language and clone detection tool). As applying the treatments of both factors was mostly automated we could apply almost all factor levels to all study object programs (which we call solutions). Only if a detection tool did not support a certain programming language, we would not apply it. We tried to minimise that but to include a contemporary tool, we accepted an unbalanced design. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows the factors in our experiment. We will describe the programming languages, clone detection tools and corresponding programs under analysis in more detail in the next subsection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Objects</ns0:head><ns0:p>The general idea of this experiment was that we analyse accepted solutions to programming contests because we know that for a given problem, the solutions must be functionally similar. Therefore, our selection of study objects needed to include clone detection tools we could access and execute as well as solutions in programming languages supported by most of the detection tools.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4.1'>Clone Detection Tools</ns0:head><ns0:p>Primarily, we needed clone detection tools for detecting type-1-3 clones to investigate with RQ 1 the syntactic similarity of FSCs. We did a literature and web search for available tools.</ns0:p><ns0:p>Many research prototypes were not available or could not be brought to execute correctly. Several tools were not included in the study due to their lower performance and scalability or their lack of support for some clone types. CloneDR and CPMiner have lower performance and scalability compared to Deckard <ns0:ref type='bibr' target='#b9'>(Jiang et al., 2007a)</ns0:ref>. CCFinder also has lower performance than Deckard and does not support type-3 clones <ns0:ref type='bibr'>(Svajlenko & Roy, 2014)</ns0:ref>.</ns0:p><ns0:p>In the end, we chose two clone detection tools that both can analyse Java and C and that were used in <ns0:ref type='bibr' target='#b13'>Juergens et al. (2010b)</ns0:ref>. At the time of the study, those were the two tools which were both freely available and which we were able to make work for us.</ns0:p><ns0:p>ConQAT is a stable open-source dashboard toolkit also used in industry. It is a general-purpose tool for various kinds of code measurement and analysis. For our experiment, ConQAT offers several specific clone detection configurations for various programming languages including Java, C/C++, C# and Cobol. It has separate detection algorithms for type-1/2 clones and type-3 clones. We employed the latter algorithm. ConQAT has been used in various studies on clone detection <ns0:ref type='bibr' target='#b12'>(Juergens et al., 2010a;</ns0:ref><ns0:ref type='bibr' target='#b14'>Juergens et al., 2009)</ns0:ref> including the study we build on <ns0:ref type='bibr' target='#b13'>(Juergens et al., 2010b)</ns0:ref>.</ns0:p><ns0:p>The language-independent clone detection tool Deckard works on code in any programming language that has a context-free grammar. Deckard uses an efficient algorithm for identifying similar subtrees and applies it to tree representations of source code. It automatically generates a parse tree builder to build parse trees required by its algorithm. By a similarity parameter it is possible to control whether only type-1/2 clones or type-3 clones are detected. Deckard is a stable tool used in other studies <ns0:ref type='bibr' target='#b5'>(Gabel et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jiang et al., 2007b)</ns0:ref> including the study we build on.</ns0:p><ns0:p>To explore the state of type-4 clone detection tools, we also searched for such tools. Most existing tools, however, could not be used. For example, EqMiner <ns0:ref type='bibr' target='#b10'>(Jiang & Su, 2009)</ns0:ref> was too tightly coupled with the Linux kernel and MeCC <ns0:ref type='bibr' target='#b17'>(Kim et al., 2011)</ns0:ref> could not detect clones across files. Finally, we were able to only include a single type-4 detector.</ns0:p><ns0:p>CCCD <ns0:ref type='bibr' target='#b21'>(Krutz & Shihab, 2013</ns0:ref>) is a novel clone detection tool that uses concolic analysis as its primary approach to detect code clones.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4.2'>Solution Sets and Solutions</ns0:head><ns0:p>We looked at several programming contests and the availability of the submitted solutions. We found that Google Code Jam 1 provided us with the broadest selection of programming languages and the highest numbers of submissions.</ns0:p><ns0:p>Google Code Jam is an annual coding contest organised by Google. Several tens of thousands of people participate each year. In seven competition rounds, the programmers have to solve small algorithmic problems within a defined time frame. Although over one hundred different programming languages are used, the majority of the solutions are in C, C++, Java and Python. Most solutions of the participants are freely available on the web. 2</ns0:p><ns0:p>We define a solution as a single code file delivered by one participant during the contest. We define a solution set as a set of solutions all solving the same problem. A typical solution set consists of several hundred to several thousand solutions. We can be sure that all solutions in a solution set should be FSCs because they passed the judgement of the programming contest. This is also implemented as automated tests. 3 Even if there are differences in the programs, e.g. in the result representation, these are instances of similarity instead of equivalence.</ns0:p><ns0:p>We selected 14 out of 27 problem statements of the Google Code Jam 2014.</ns0:p><ns0:p>For every problem we randomly chose 100 solutions in Java and 100 solutions in C from sets of several hundreds to several thousands of solutions. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows a summary of the size of the chosen solution sets. Hence, on average a C solution has a length of 46 LOC and a Java solution of 94 LOC. In Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>, we detail the size of the selected solution sets. The solution sets differ in size but the means all lie between 33 and 133 LOC per solution.</ns0:p><ns0:p>3.5 Data Collection Procedure</ns0:p></ns0:div>
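To make the scale of the pairwise comparison concrete, the following short Python sketch (ours, for illustration; not part of the original Java instrumentation) derives how many within-set clone pairs the sampled solution sets induce per language.

```python
from math import comb

solutions_per_set = 100   # randomly sampled solutions per problem and language
solution_sets = 14        # selected Google Code Jam 2014 problems

# Every unordered pair of solutions within one solution set is a candidate FSC pair.
pairs_per_set = comb(solutions_per_set, 2)           # 100 * 99 / 2 = 4,950
pairs_per_language = solution_sets * pairs_per_set   # 14 * 4,950 = 69,300

print(pairs_per_set, pairs_per_language)
```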
<ns0:div><ns0:head n='3.5.1'>Preparation of Programs Under Analysis</ns0:head><ns0:p>We implemented an instrumentation which automatically downloaded the solutions from the website, sampled the solution sets and solutions and normalised the file names. The instrumentation is freely available as Java programs in <ns0:ref type='bibr' target='#b33'>Wagner et al. (2014)</ns0:ref>.</ns0:p></ns0:div>
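The actual instrumentation is a set of Java programs; the sketch below only illustrates the sampling and file-name normalisation step in Python. The directory layout, the `n = 100` sample size per set and the naming scheme are assumptions for the example, not the original implementation.

```python
import random
from pathlib import Path

def sample_solution_set(set_dir: Path, out_dir: Path, n: int = 100, seed: int = 1) -> None:
    """Randomly sample n solutions of one problem and copy them with normalised names."""
    random.seed(seed)
    files = sorted(set_dir.glob("*.java"))  # or "*.c" for the C solution sets
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, src in enumerate(random.sample(files, n)):
        # Normalised name: <solution set>_<running index>.<extension>, e.g. A_007.java
        dst = out_dir / f"{set_dir.name}_{i:03d}{src.suffix}"
        dst.write_bytes(src.read_bytes())
```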
<ns0:div><ns0:head n='3.5.2'>Configuration of Clone Detection Tools</ns0:head><ns0:p>We installed ConQAT, Deckard and CCCD and configured the tools with a common set of parameters. As far as the parameters between the tools were related to each other, we tried to set the same values based on the configuration in <ns0:ref type='bibr' target='#b13'>Juergens et al. (2010b)</ns0:ref>. We set the parameters conservatively so that the tools find potentially more clones than we would normally consider valid clones. This ensured that we do not reject our null hypotheses because of the configurations.</ns0:p><ns0:p>We based the minimal clone length on the parameters for practical use mentioned in <ns0:ref type='bibr' target='#b13'>Juergens et al. (2010b)</ns0:ref>.</ns0:p></ns0:div>
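Conceptually, the shared configuration boils down to a few values applied across all three tools. The snippet below only illustrates that idea; the parameter names and numbers are placeholders and do not reproduce the study's or the tools' actual options.

```python
# Illustrative only: placeholder names and values, not the study's actual configuration.
COMMON_PARAMETERS = {
    "min_clone_length_statements": 10,  # conservative lower bound on clone size
    "similarity_threshold": 0.8,        # values below 1.0 admit type-3 (gapped) clones
    "ignore_identifier_names": True,    # enables type-2 matches (renamed identifiers)
}
```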
<ns0:div><ns0:head n='3.5.3'>Executing Clone Detection Tools</ns0:head><ns0:p>We manually executed the clone detection tools for every solution set. ConQAT generated an XML file for every solution set containing a list of found clone classes and clones. Deckard and CCCD generate similar CSV files. Our instrumentation tool parsed all these result files and generated reports in a unified format. The reports are tables in which both rows and columns represent the solutions. The content of the table shows the lowest detected clone type between two files. Additionally, our tool calculated all shares of syntactic similarity as described in the next section and wrote the values into several CSV files for further statistical analysis. We also wrote all the detected clones into several large CSV files. Altogether, the tools reported more than 9,300 clones within the Java solutions and more than 22,400 clones within the C solutions.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.6'>Analysis Procedure</ns0:head><ns0:p>Because we expected the share of full syntactic similarity to be low, we wanted to check whether there are at least some parts with syntactic similarity. It would give traditional clone detection tools a chance to hint at the FSC.</ns0:p></ns0:div>
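A minimal Python sketch of the kind of aggregation described above: merging per-tool reports into one pair-level table and computing the share of detected pairs per solution set. The three-column CSV format (`file_a,file_b,clone_type`) is an assumption for the example; the real tool outputs differ per tool.

```python
import csv
from math import comb

def detected_pairs(report_csv):
    """Read one tool's clone report; assumes rows of the form file_a,file_b,clone_type."""
    pairs = {}
    with open(report_csv, newline="") as f:
        for file_a, file_b, clone_type in csv.reader(f):
            key = tuple(sorted((file_a, file_b)))
            # Keep the lowest (i.e. most syntactically similar) clone type per pair.
            pairs[key] = min(pairs.get(key, 4), int(clone_type))
    return pairs

def share_of_similar_pairs(pairs, n_solutions=100):
    """Share of solution pairs in a set for which any type-1-3 clone was reported."""
    return len(pairs) / comb(n_solutions, 2)
```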
<ns0:div><ns0:p>In ConQAT and Deckard, we can differentiate between type-1/2 clones and type-3 clones by configuration or result, respectively. In ConQAT, clones with a gap of 0 are type-1/2 clones. In Deckard, analysis results with a similarity of 1 are type-1/2 clones. The others are type-3 clones. The instrumentation tooling described in Sec. 3.5 directly calculated the various numbers. We computed means per clone type and programming language.</ns0:p><ns0:p>For a further statistical understanding and to answer the hypotheses H1-H4, we did statistical hypothesis tests. For answering H1 and H2, we performed an analysis of variance (ANOVA) on the recall data with the two factors programming language and detection tool. We tested the hypotheses at the 0.05 level.</ns0:p><ns0:p>All analyses implemented in R together with the data are available in <ns0:ref type='bibr' target='#b33'>Wagner et al. (2014)</ns0:ref>.</ns0:p><ns0:p>The combined descriptive statistics and hypothesis testing results answered RQ 1.</ns0:p></ns0:div>
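The published analyses are R scripts; as an illustration of the same two-factor ANOVA in Python, a sketch with statsmodels could look as follows. The file name and column names (`language`, `tool`, `share`) are assumptions for the example.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per solution set and tool with the detected share of similar pairs,
# e.g. columns: language ('Java'/'C'), tool ('ConQAT'/'Deckard'), share (0..1).
df = pd.read_csv("partial_similarity_shares.csv")

model = ols("share ~ C(language) * C(tool)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)                             # compare the p-values against 0.05
```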
<ns0:div><ns0:head n='3.6.2'>Classifying Differences (RQ 2)</ns0:head><ns0:p>For the categorisation of the differences of FSCs that were not syntactically similar, we took a random sample of these clone pairs. As we had overall 69,300 clone pairs for Java and C, we needed to restrict the sample for a manual analysis. In an initial classification, we distinguished between fully and partially syntactically different clone pairs.</ns0:p><ns0:p>The set of fully syntactically different clone pairs is the set of all pairs in all solution sets minus any pair detected by any of the type-1-3 detectors. We apply random sampling to get pairs for further analysis: First, we randomly selected one of the solution sets in a language. Second, we randomly selected a solution file in the solution set and checked if it was detected by Deckard or ConQAT. If it was detected, we would discard it and select a new one. Third, we randomly picked a second solution file, checked again if it was detected and discarded it if it was.</ns0:p><ns0:p>The set of partially syntactically different clone pairs is then the superset of all partially different clone pairs minus the superset of all fully different clone pairs. From that set, we randomly selected clone pairs from all partially different pairs of a programming language and checked if they were fully different. If that was the case, we would discard them and take a new random pair. We found their analysis to be useful to understand also smaller syntactic differences.</ns0:p><ns0:p>We then employed qualitative analysis. We manually classified the characteristics in the clone pairs that differed and, thereby, led to them not being detected as clones, and we created categories to describe the differences. We added quantitative analysis to it by also counting how many of the sampled clone pairs have characteristics of the found types.</ns0:p><ns0:p>After the creation of the categories we also assessed the degree of difference (high, medium, or low) per category. From the discussion of the categories, we discovered that this gave us a comprehensive yet precise way to assign clone pairs to the categories. Furthermore, it gave us additional possibilities for a quantified analysis. First, we wanted to understand better how we categorised and assessed the degrees of difference as well as answer H3. Therefore, we performed correlation analysis on them. We chose Kendall's tau as the correlation coefficient and tested all correlations at the 0.05 level.</ns0:p><ns0:p>For answering H4, we performed a multivariate analysis of variance (MANOVA) which allows more than one dependent variable to be used. Here, our dependent variables are the degrees of difference and the independent variable is the programming language. In this analysis, we have a balanced design because we ignored the category OO design which was only applicable to Java programs. We use the Pillai-Bartlett statistic for evaluating statistical significance. We checked H4 also at the 0.05 level.</ns0:p><ns0:p>These categories with frequencies as well as the results of the hypothesis tests answered RQ 2.</ns0:p></ns0:div>
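Again, the original analyses are in R; the following Python sketch only illustrates the two tests for H3 and H4. The data layout and the category column names are assumptions for the example.

```python
import pandas as pd
from scipy.stats import kendalltau
from statsmodels.multivariate.manova import MANOVA

# One row per inspected clone pair with ordinal degrees of difference
# (0 = low, 1 = medium, 2 = high) per category, plus the programming language.
ratings = pd.read_csv("degree_ratings.csv")

# H3: pairwise Kendall's tau between category ratings, tested at the 0.05 level.
tau, p = kendalltau(ratings["algorithm"], ratings["data_structure"])
print(f"tau={tau:.2f}, p={p:.3f}")

# H4: MANOVA with the degrees of difference as dependent variables
# (OO design left out to keep the design balanced across Java and C).
manova = MANOVA.from_formula(
    "algorithm + data_structure + io + libraries ~ language", data=ratings)
print(manova.mv_test())  # reports Pillai's trace among other test statistics
```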
<ns0:div><ns0:head n='3.6.3'>Running a Type-4 Detector (RQ 3)</ns0:head><ns0:p>As this part of the study is only for exploratory purposes, we focused on the recall of CCCD in the FSCs. As all solutions contain a main function, we expected it to find each main-pair as a clone. We calculate the recall as the number of detected clone pairs divided by the number of all clone pairs. A perfect clone detection tool would detect all solutions from a solution set as clones.</ns0:p></ns0:div>
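A minimal sketch of this recall computation, assuming a list with the number of detected pairs per solution set:

```python
from math import comb

def cccd_recall(detected_pairs_per_set, solutions_per_set=100):
    """Recall = detected clone pairs / all functionally similar pairs in the C solution sets."""
    total_pairs = comb(solutions_per_set, 2) * len(detected_pairs_per_set)
    return sum(detected_pairs_per_set) / total_pairs
```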
<ns0:div><ns0:head n='3.6.4'>Creating a Benchmark (RQ 4)</ns0:head><ns0:p>After the categorisation to answer RQ 2, we had a clear picture of the various differences. Therefore, we could select representative examples of each difference for each programming language and put them into our new benchmark. To check that the clone pairs cannot be detected by the tools, we ran the tools again on the benchmark. If one of the tools still detected a clone, we would replace the clone pair by another representative example until no clones are detected.</ns0:p><ns0:p>We created the benchmark by choosing clone pairs consisting of two source code files out of the same solution set. The two files therefore solve the same problem. For each of the categories we created by answering RQ 2, we selected three pairs where the difference between the files belongs to that category. We chose the three pairs to cover the three levels of difference. The other categories of the pairs are low or non-existent so that the pair is focused on one category. Additionally, we added one extra clone pair with extreme differences in all categories.</ns0:p><ns0:p>Preferably, we would provide the source code of the chosen solutions directly all in one place. Yet, the copyright of these solutions remains with their authors. Therefore, we provide source files following the same structure as the original files but not violating the copyright.</ns0:p><ns0:p>A final set of clone pairs that are not detected as full clones by any of the tools constitutes the benchmark and answered RQ 4.</ns0:p></ns0:div>
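The selection described above is essentially a rejection loop; a small sketch of the idea follows. Both `candidate_pairs` and `is_detected_by_any_tool` are assumed helpers (the latter standing for a re-run of ConQAT, Deckard and CCCD on the pair), not part of the original tooling.

```python
import random

def pick_benchmark_pair(candidate_pairs, is_detected_by_any_tool):
    """Draw representative pairs until one is found that no tool reports as a clone."""
    while True:
        pair = random.choice(candidate_pairs)
        if not is_detected_by_any_tool(pair):
            return pair
```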
<ns0:div><ns0:head n='3.7'>Validity Procedure</ns0:head><ns0:p>To avoid selection bias, we performed random sampling where possible. We randomly selected the solution sets and solutions that we use as study objects.</ns0:p><ns0:p>In addition, before we manually analysed the category of syntactically different clone pairs, we chose random samples of clone pairs.</ns0:p><ns0:p>To avoid errors in our results, we manually checked for false positives and clone types with samples of clones in the solution sets. Furthermore, by working in pairs during all manual work, we controlled each other and detected problems quickly. Overall, the manual inspection of 70 clone pairs for RQ 2 also was a means to detect problems in the detection tools or our instrumentation.</ns0:p><ns0:p>For the manual categorisation, we started by categorising 30 syntactically different clone pairs to freely create the categories of undetected clone pairs. Afterwards, we discussed the results among all researchers to come to a unified and agreed categorisation. The actual categorisation of clone pairs was then performed on a fresh sample. Additionally, we performed an independent categorisation of a sample of 10 categorised clone pairs and calculated the inter-rater agreement using Cohen's kappa.</ns0:p></ns0:div>
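For the inter-rater check, Cohen's kappa can be computed directly from the two raters' category labels; the sketch below uses scikit-learn and made-up labels purely for illustration, not the study's actual ratings.

```python
from sklearn.metrics import cohen_kappa_score

# Category labels assigned independently by two raters to the same 10 clone pairs
# (illustrative values only).
rater_a = ["algorithm", "io", "libraries", "algorithm", "data structure",
           "io", "algorithm", "oo design", "libraries", "data structure"]
rater_b = ["algorithm", "io", "libraries", "algorithm", "io",
           "io", "algorithm", "oo design", "libraries", "data structure"]

print(cohen_kappa_score(rater_a, rater_b))  # inter-rater agreement
```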
<ns0:div><ns0:head n='4'>Analysis and Results</ns0:head><ns0:p>We structure the analysis and results along our research questions. All quantitative and qualitative results are also available in <ns0:ref type='bibr' target='#b33'>Wagner et al. (2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Share of Syntactic Similarity (RQ 1)</ns0:head><ns0:p>We summarised the results of the calculated shares for fully and partially syntactically similar clone pairs in Tab. 5. We divided the results by programming languages, detection tools and detected clone types. The results differ quite strongly from tool to tool but only slightly between the programming languages.</ns0:p><ns0:p>The average syntactic similarities and the standard deviations (SD) are all very low. ConQAT detects more full and partial clones in clone pairs than Deckard does.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref> shows the ANOVA results which we need for answering hypotheses H1 and H2. As our experiment is unbalanced, we use the Type II sum of squares. This is possible because we found no significant interactions between the factors in any of the ANOVA results.</ns0:p><ns0:p>The results give us no single evaluation of the hypotheses H1 and H2. We have to differentiate between partial and full syntactic similarity. For the partial similarity, we consistently see a significant difference in the variation between the detection tools but not between the programming languages. Hence, for partial clone similarity, we corroborate H1 that there is no difference in recall between the analysed programming languages. Yet, we reject H2 in favour of the alternative hypothesis that there is a difference in the similarity share between different tools. For full similarity, we reject H1 in favour of the alternative hypothesis that there is a difference between the analysed programming languages. Instead, we accept H2 that there is no difference between the analysed detection tools.</ns0:p><ns0:p>How can we interpret these results? The overall interpretation is that the share of syntactic similarity in FSCs is very small. There seem to be many possibilities to implement a solution for the same problem with very different syntactic characteristics. When we only look at the full syntactic similarity, the results are negligible. Both tools detect none in Java and only a few clone pairs for C. Hence, the difference between the tools is marginal. The difference is significant between C and Java, however, because we found no full clone pairs in Java. As we saw in a manual inspection, the full detection is easier in C if the developers implement the whole solution in one main function.</ns0:p><ns0:p>For partial syntactic similarity, we get higher results but still stay below 12 %. Hence, for almost 90 % of the clone pairs, we do not even detect smaller similarities. We have no significant difference between the languages but between the tools. ConQAT has far higher results than Deckard in the type-1-3 clones. The distinct detection algorithms seem to make a difference here. For the further analysis, we accept an FSC as syntactically similar if one of the tools detected it.</ns0:p></ns0:div>
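<ns0:p>For illustration, the two-factor ANOVA with Type II sums of squares can be computed, for example, with statsmodels in Python. The following sketch is illustrative only; the file and column names are placeholders, not the names used in our scripts.</ns0:p>
<ns0:p>import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# shares.csv (placeholder): one row per solution set and tool with the similarity
# share in 'share' and the factors 'language' (C/Java) and 'tool' (ConQAT/Deckard).
shares = pd.read_csv("shares.csv")

model = smf.ols("share ~ C(language) + C(tool)", data=shares).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares, as reported in Table 6</ns0:p>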
<ns0:div><ns0:head n='4.2'>Categories of Differences (RQ 2)</ns0:head><ns0:p>Initially, we created 18 detailed categories. In our qualitative analysis and discussions, we finally reduced them to five main categories of characteristics describing the differences between the solutions in a clone pair. The five categories are algorithm, data structure, object-oriented design, input/output and libraries.</ns0:p><ns0:p>We could assign each of the 18 initial categories to one of these five main categories and realised that we can assign them to different degrees of difference. Therefore, we ended up with a categorisation including an ordinal quantification of the degree of difference with the levels low, medium and high. The overall categorisation is shown in Fig. <ns0:ref type='figure' target='#fig_8'>1</ns0:ref>.</ns0:p><ns0:p>The centre of the dimensions would be a type-1 clone. The further out we go on each dimension, the larger the difference.</ns0:p></ns0:div>
<ns0:div><ns0:p>The two main aspects in any program are its algorithms and its data structures. This is reflected in our two main categories. Our corresponding degrees of difference reflect that there might be algorithms that are almost identical, with e.g. only a switch instead of nested if statements, up to completely different solutions, e.g. iterative vs. recursive. Similarly, in data structures, we can have very simple type substitutions which change the behaviour but are still functionally very similar (e.g. from int to long) but also completely user-defined data types with strong differences.</ns0:p><ns0:p>Related to data structures is the category OO design. We made this a separate category because it only applies to OO languages and it had a particular kind of occurrence in the programs we inspected. Some developers tended to write Java programs as if there were no object-oriented features while others created several classes and used their objects.</ns0:p><ns0:p>As our programming environments and languages are more and more defined by available libraries, this was also reflected in the differences between solutions.</ns0:p><ns0:p>If one developer of a solution knew about a library with the needed functionality, and the other developer implemented it herself, this created code that looks strongly different but can have similar functionality.</ns0:p><ns0:p>Finally, the category usage of I/O arose, maybe because the programming contest did not specify whether the input and output should come from a console or a file. Nevertheless, we think that this might also be transferable to other FSCs and contexts because we might be interested in functionally similar code even if one program writes the output on a network socket while the other writes into a file.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_10'>8</ns0:ref> shows descriptive statistics for the categories in our sample of undetected clone pairs. The column Share shows the ratio of clone pairs with a degree of difference higher than 0 in relation to all clone pairs in that language. The median and median absolute deviation (MAD) give the central tendency and dispersion of the degrees in that category. For that, we encoded no difference = 0, low = 1, medium = 2 and high = 3. All categories occur in the majority of clone pairs. The categories algorithm and libraries even occur in about three quarters of the clone pairs. The occurrence of categories is consistently smaller in C than in Java. The medians are mostly low but with a rather large deviation. Only input/output in C has a median of 0. This is consistent with our observation during the manual inspection that I/O is done similarly in the C programs.</ns0:p><ns0:p>For evaluating H3, we calculated Kendall's correlation coefficients for all combinations of categories. The results are shown in Tab. 9. The statistical tests for these correlations showed significant results for all the coefficients.</ns0:p></ns0:div>
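<ns0:p>For illustration, these analyses can be sketched with pandas and statsmodels in Python. The file and column names below are placeholders, not the names used in our scripts; the MANOVA at the end refers to the evaluation of H4 discussed in the following paragraphs.</ns0:p>
<ns0:p>import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# degrees.csv (placeholder): one row per manually categorised clone pair, a 'language'
# column (C/Java) and one column per category, with the degree of difference encoded
# as 0 = no difference, 1 = low, 2 = medium, 3 = high.
degrees = pd.read_csv("degrees.csv")
categories = ["algorithm", "data_structure", "oo_design", "io", "libraries"]

print(degrees[categories].median())                # central tendency per category (Table 8)
print(degrees[categories].corr(method="kendall"))  # pairwise Kendall's tau (Table 9, H3)

# MANOVA for H4; statsmodels reports, among others, Pillai's trace.
manova = MANOVA.from_formula(
    "algorithm + data_structure + oo_design + io + libraries ~ language", data=degrees)
print(manova.mv_test())</ns0:p>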
<ns0:div><ns0:p>Therefore, we need to reject H3 in favour of the alternative hypothesis that there are correlations between the degrees of difference of different categories. Finally, for evaluating H4, we show the results of the MANOVA in Tab. 10.</ns0:p><ns0:p>We can reject H4 in favour of the alternative hypothesis that there is a difference in the degrees of difference between the analysed programming languages. This is consistent with the impression from the descriptive statistics in Tab. 8.</ns0:p><ns0:p>Table <ns0:ref type='table'>10</ns0:ref>: MANOVA results for variation in degree of differences (Type I sum of squares, * denotes a significant result): Language: Pillai-Bartlett = 0.1513, approx. F = 6.0196, Pr(>F) = 0.0002 *</ns0:p><ns0:p>In summary, we interpret these results such that there are differences in FSC pairs in their algorithms, data structures, input/output and used libraries.</ns0:p><ns0:p>In Java, there are also differences in the object-oriented design. On average, these differences are mostly small but the variance is high. Hence, we believe that with advances in clone detectors for tolerating the smaller differences, there could be large progress in the detection of FSCs. Yet, there will still be many medium to large differences. We also saw that the programming languages vary in the characteristics of undetected difference. Therefore, it might be easier to overcome those differences in non-object-oriented languages, such as C, than in object-oriented languages, which offer even more possibilities to express solutions for the same problem. Yet, we were impressed by the variety in implementing solutions in both languages during our manual inspections.</ns0:p><ns0:p>Our categories are significantly correlated with each other. This can mean that there might be other, independent categories with less correlation. Nevertheless, we believe the categories are useful because they describe major code aspects in a way that is intuitively understandable to most programmers. It would be difficult to avoid correlations altogether. For example, a vastly different data structure will always lead to a very different algorithm.</ns0:p></ns0:div><ns0:div><ns0:head n='4.3'>Detection by a Type-4 Detector (RQ 3)</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_12'>11</ns0:ref> shows the recall of fully and partially detected clone pairs in our sample.</ns0:p><ns0:p>CCCD has a considerable recall for partial clones in the clone pairs of about 16 %. It does, however, detect almost none of the clone pairs as full clones.</ns0:p><ns0:p>The overlap with ConQAT and Deckard, and therefore with type-1-3 clones, is tiny (0.05 % of the recall). We interpret this result such that also contemporary type-4 detection tools still have problems detecting real-world FSCs and handling the differences we identified in RQ 2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Benchmark (RQ 4)</ns0:head><ns0:p>The number of study objects used in our analysis is quite high. As described above, we examined 1,400 Java files and 1,400 C files. For many demonstrations and clone detection tool analyses a much smaller file set is sufficient. We call this smaller set of files the benchmark.</ns0:p><ns0:p>The first half of the benchmark we provide consists of 29 clone pairs. For Java, we include 16 clone pairs. The set of clone pairs we provide for C is structured in exactly the same way as the Java samples except that we do not have the three clone pairs that differ only in object-oriented design. Therefore, we do not have 16 samples here but 13, which makes 29 clone pairs for both languages.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_10'>2</ns0:ref> shows a rating of an example clone pair in the benchmark set where the two files only differ significantly in the kind of input/output, but not in the other categories. Figure <ns0:ref type='figure' target='#fig_11'>3</ns0:ref> shows how such a high difference in the category input/output could look between two solutions in a solution set.</ns0:p><ns0:p>We provide this distribution of clone pairs for both partial clones and full clones. Hence, the total number of clone pairs within the benchmark is 58.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_12'>4</ns0:ref> shows an overview of the structure of the whole benchmark set. This structure enables developers of a clone detection tool to test their tool easily as well as analyse the nature of the clones found and not found by a tool.</ns0:p><ns0:p>Our benchmark provides several advantages to the research community. </ns0:p></ns0:div>
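<ns0:p>As an illustration of how such a benchmark can be used by tool developers, the following Python sketch runs an arbitrary detector over all benchmark pairs and reports the fraction it detects. The directory layout assumed here (one directory per clone pair containing its two source files) and the detector callable are placeholders, not a prescription of the actual benchmark structure shown in Fig. 4.</ns0:p>
<ns0:p>from pathlib import Path
from typing import Callable

def evaluate(benchmark_root: Path,
             detects_clone: Callable[[Path, Path], bool]) -> float:
    """Fraction of benchmark clone pairs that the given detector reports as clones."""
    total = hits = 0
    for pair_dir in (d for d in benchmark_root.rglob("*") if d.is_dir()):
        files = sorted(p for p in pair_dir.iterdir() if p.is_file())
        if len(files) != 2:          # assume a clone pair is stored as exactly two source files
            continue
        total += 1
        if detects_clone(files[0], files[1]):
            hits += 1
    return hits / total if total else 0.0</ns0:p>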
<ns0:div><ns0:head n='5'>Threats to Validity</ns0:head><ns0:p>We analyse the validity threats for this study following common guidelines for empirical studies <ns0:ref type='bibr' target='#b34'>(Wohlin et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b35'>Yin, 2003)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Conclusion Validity</ns0:head><ns0:p>As most of our measurements and calculations were performed automatically, the threats to conclusion validity are low. For the corresponding hypothesis tests, we checked all necessary assumptions. Only the classification and rating of the degree of difference was done manually and, hence, could be unreliable.</ns0:p><ns0:p>We worked in pairs to reduce this threat. Furthermore, one of the researchers performed an independent classification of a random sample of 10 clone pairs to compare the results. We calculated Cohen's kappa for the categories of differences between clone pairs as presented in Table <ns0:ref type='table' target='#tab_13'>12</ns0:ref>. We interpret the kappa results according to the classification by <ns0:ref type='bibr' target='#b23'>Landis & Koch (1977)</ns0:ref>. Hence, our results show a moderate agreement for the categories data structures, algorithms and input/output. For the categories object-oriented design and libraries we have a fair agreement. We consider this to be reliable enough for our investigations.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Internal Validity</ns0:head><ns0:p>There is the threat that the implementation of our instrumentation tooling may contain faults and, therefore, compute incorrect results for the detected clones and recalls. We reduced this threat inherently by the manual inspections done to answer RQ 2 and independently to investigate the type-4 clones.</ns0:p><ns0:p>A further threat to internal validity is that we took our solution sets from Google Code Jam. We cannot be sure that all the published solutions of the Code Jam within a solution set are actually functionally similar. We rely on the fact that the organisers of the Code Jam must have checked the solutions to rank them. Furthermore, we assume that we would have noticed in the manual inspections if there were solutions in a solution set with highly differing functionality.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Construct Validity</ns0:head><ns0:p>To fully understand the effectiveness of a clone detection approach, we need to measure precision as well as recall. In our study, we could not measure precision directly because of the large sample size. We checked for false positives during the manual inspections and noted only a few rather short clones. Our minimal clone length is below recommended thresholds. This is a conservative approach to the problem. By that, we will find more clones than in an industrial approach.</ns0:p><ns0:p>We decided to use this threshold to be sure that we cover all the interesting clone pairs that would otherwise be lost due to variation in the precision of the tools.</ns0:p><ns0:p>There is a threat because we count each clone pair only once. In partial clones, one clone pair might contain a type-2 as well as a type-3 partial clone.</ns0:p><ns0:p>In those cases, we decided that the lower type (the easier detection) should be recorded. Hence, the assignment to the types might be imprecise. We accept this threat as it has no major implication for the conclusions of the experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4'>External Validity</ns0:head><ns0:p>There is also a threat to external validity in our usage of solutions from Google Code Jam. The submitted programs might not represent industrial software very well. Participants had a time limit set for turning in their solutions. Furthermore, the programming problems contained mostly reading data, performing some calculations on it and writing data. This might impact the method structure within the solutions. This threat reduces the generalisability of our</ns0:p><ns0:p>results. Yet, we expect that other, more complex software will introduce new kinds of difference categories (e.g. differences in GUI code) and only extend but not contradict our results.</ns0:p><ns0:p>For the study, we chose three well-known and stable clone detection tools.</ns0:p><ns0:p>Two of them analyse Java and C programs detecting type-1 to type-3 clones.</ns0:p><ns0:p>The third one detects type 4 clones and supports only programs written in C and only finds clones in complete functions. Overall, we are confident that these tools represent the available detection tools well.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>Conclusions and Future Work</ns0:head><ns0:p>In this paper, we investigated the characteristics of clones not created by copy&paste. We base our study on <ns0:ref type='bibr' target='#b13'>Juergens et al. (2010b)</ns0:ref>, but this is the first study with programs implementing different specifications in diverse programming languages and including CCCD as a concolic clone detector for type-4 clones. We found that a full syntactic similarity was detected in less than 1 % of clone pairs. Even partial syntactic similarity was only visible in less than 12 %. The concolic approach of CCCD can detect FSCs without syntactic similarity as type-4 clones.</ns0:p><ns0:p>Yet, a full detection was only possible in 0.1 % of clone pairs.</ns0:p><ns0:p>Our categorisation of the differences of clone pairs that were not syntactically similar showed that usually several characteristics make up these differences. On average, however, the differences were mostly small. Hence, we believe there is a huge opportunity to get a large improvement in the detection capabilities of type-4 detectors even with small improvements in tolerating additional differences. We provide a carefully selected benchmark with programs representing real FSCs. We hope it will help the research community to make these improvements.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Relation to Existing Evidence</ns0:head><ns0:p>We can most directly relate our results to <ns0:ref type='bibr' target='#b13'>Juergens, Deissenboeck, & Hummel (2010b)</ns0:ref>. We support their findings that using type-1-3 detectors, below 1 % is fully and below 10 % is partially detected. We can add that with the type-4 detection of CCCD, the partial clone recall can reach 16 %. They introduce categories which were derived from other sources but did not create them with a systematic qualitative analysis. Yet, there are similarities in the categories.</ns0:p><ns0:p>Their category syntactic variation covers 'if different concrete syntax constructs are used to express equivalent abstract syntax'. We categorised this as a small algorithm difference. Their category organisational variation 'occurs if the same algorithm is realized using different partitionings or hierarchies of statements or variables'. We categorise these differences as a medium algorithm difference. Their category delocalisation 'occurs since the order of statements that are independent of each other can vary arbitrarily between code fragments' is covered as a difference in algorithm in our categorisation. Their category generalisation 'comprises differences in the level of generalization', which we would cover under object-oriented design. They also introduce unnecessary code as a category with the example of a debug statement. We did not come across such code in our sample but could see it as a potential addition.</ns0:p><ns0:p>Finally, they clump together different data structure and algorithm, which we split into separate categories. We would categorise these variations as either data structure or algorithm differences with probably a high degree of difference. They found that 93 % of their clone pairs had a variation in the category different data structure or algorithm. We cannot directly support this value, but we support the tendency. We found that 91 % of the inspected clone pairs had a difference at least in either algorithm or data structure, and especially for algorithm the difference was on average large.</ns0:p><ns0:p>Tiarks, Koschke, & Falke (2011) created a categorisation for differences in type-3 clones. Therefore, their focus was on classifying syntactic differences that probably hail from independent evolution of initially copied code. Yet, the larger the differences, the more their categories are similar to ours. For example, they abstract edit operations to type substitution or different algorithms. We believe, however, that our categorisation is more useful for FSCs and for improving clone detection tools along its lines.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Impact</ns0:head><ns0:p>Independently developed FSCs have very little syntactic similarity. Therefore, type-1-3 clone detectors will not be able to find them. Newer approaches, such as CCCD, can find FSCs but their effectiveness still seems limited. Hence, more research on approaches that are more independent of syntactic representations is necessary. We will need to find ways to transfer the positive results of <ns0:ref type='bibr' target='#b10'>Jiang & Su (2009)</ns0:ref> with the Linux kernel to other languages and environments while overcoming the challenges in such dynamic detections as discussed, for example, in <ns0:ref type='bibr' target='#b2'>Deissenboeck et al. (2012)</ns0:ref>. We hope our benchmark will contribute to this.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3'>Limitations</ns0:head><ns0:p>The major limitation of our study is that we did not cover the wide variety of program types that exist in practice. The programs from Google Code Jam all solve structurally similar problems, for example, without any GUI code.</ns0:p><ns0:p>We expect, however, that such further differences would decrease the syntactic similarity even more. The categories might have to be extended to cover these further differences. Nevertheless, the investigated programs were all developed by different programmers and are not artificial.</ns0:p><ns0:p>Furthermore, we had to concentrate on three clone detectors and two programming languages. Other tools and languages might change our results but we are confident that our selection is representative of a large class of detectors and programming languages.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>Future Work</ns0:head><ns0:p>We plan to investigate the differences between the tools and the detected clone pairs of different types in more detail. In particular, we would like to work with researchers who have built type-4 detectors to test them against our clone database and to inspect the found and not found clones.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Solution file: A single program in one file implementing the solution to a programming problem. Solution set: A set of solution files all solving the same programming problem. Clone pair: Two solution files from the same solution set which we assume to be functionally similar. The structure of the remainder of the paper follows the guidelines in Jedlitschka & Pfahl (2005).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>C</ns0:head><ns0:label /><ns0:figDesc>programs: ConQAT<ns0:ref type='bibr' target='#b4'>(Deissenboeck et al., 2008)</ns0:ref> and Deckard<ns0:ref type='bibr' target='#b9'>(Jiang et al., 2007a)</ns0:ref>. ConQAT is described in<ns0:ref type='bibr' target='#b25'>Rattan et al. (2013)</ns0:ref> as modern, useful and fast open-source clone detector framework. In the studies we mentioned above, Deckard has shown to have good performance and scalability. Hence, both are well established and have been used in previous studies, especially Juergens</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Concolic analysis combines symbolic execution and testing. CCCD detects only clones in programs implemented in C. The concolic analysis allows CCCD to focus on the functionality of a program rather than the syntactic properties. Yet, it has the restriction that it only detects function-level clones.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>3.6.1 Share of Syntactic Similarity (RQ 1) All solutions in a solution set solve the same programming problem and were accepted by Google Code Jam. Hence, their functionality can only differ slightly and, therefore, they are functionally similar. To understand how much of this similarity is expressed in syntactic similarity, we calculate the share of FSCs which are also type-1-2 or type-1-3 clones. Inspired by <ns0:ref type='bibr' target='#b13'>Juergens et al. (2010b)</ns0:ref>, we distinguish partial and full syntactic similarity. The share of full syntactic similarity is the ratio of clone pairs where all but a defined number of the statements of the solutions of the pair were detected as a clone in relation to all clone pairs. We set the threshold of this to a maximum of 16 lines of code difference within a clone pair, which leads to ratios of 5 % to 33 % of difference based on the solutions' lines of code. Share of full syntactic similarity = |Found full clone pairs| / |All clone pairs| (1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Furthermore, it allowed</ns0:head><ns0:label /><ns0:figDesc>us to inspect more closely later on what was not detected as a clone. We called the ratio share of partial syntactic similarity. Share of partial syntactic similarity = |Found partial clone pairs| / |All clone pairs| (2) For a more differentiated analysis, we calculated two different shares each representing certain types of clones. We first computed the share for type-1-2 clones. This means we only need to accept exact copies, reformatting and renaming. Then, we determined the shares for type-1-3 clones which includes type-1-2 and adds the additional capability to tolerate smaller changes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>fully/partially different clone pairs is sufficient for finding repeating categories and getting a quantitative impression of the numbers of clone pairs in each category. With larger samples, the categories just kept repeating. Therefore, we took a sample of 2 of the syntactically different clone pairs: 70 pairs each of the fully and partially different clone pairs (35 C and 35 Java).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>type-1-3 clone. This classification work was done in pairs of researchers in three day-long workshops in the same room. It helped us to discuss the categories and keep them consistent. The result is a set of categories of characteristics that</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: The categories of characteristics of differences between clone pairs</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Example category rating of a clone pair in the benchmark set</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Example of a high difference in the category Input/Output</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Structure of the benchmark set (overview)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>proposed to analyse potential executions by analysing method calls on the Bytecode level in Java programs. Nevertheless, the used approach as well as the examples of found semantic clones suggest that the syn-</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row><ns0:row><ns0:cell>they also find all functionally similar clones as a completely different structure</ns0:cell></ns0:row><ns0:row><ns0:cell>and memory states can generate similar functionality.</ns0:cell></ns0:row><ns0:row><ns0:cell>Jiang & Su (2009) were the first to comprehensively detect functionally</ns0:cell></ns0:row><ns0:row><ns0:cell>similar code by using random tests and comparing the output. Hence, they</ns0:cell></ns0:row><ns0:row><ns0:cell>were also the first who were able to detect clones without any syntactic similar-</ns0:cell></ns0:row><ns0:row><ns0:cell>ity. They claim they are able to detect 'functionally equivalent code fragments,</ns0:cell></ns0:row><ns0:row><ns0:cell>where functional equivalence is a particular case of semantic equivalence that</ns0:cell></ns0:row><ns0:row><ns0:cell>concerns the input/output behavior of a piece of code.' They detected a high</ns0:cell></ns0:row><ns0:row><ns0:cell>number of functionally equivalent clones in a sorting benchmark and the Linux</ns0:cell></ns0:row><ns0:row><ns0:cell>kernel. Several of the detected clones are dubious, however, as it is not clear</ns0:cell></ns0:row><ns0:row><ns0:cell>how useful they are. They state: 'Assuming the input and output variables</ns0:cell></ns0:row><ns0:row><ns0:cell>identified by EQMINER for these code fragments are appropriate, such code</ns0:cell></ns0:row><ns0:row><ns0:cell>fragments are indeed functionally equivalent according to our definition. How-</ns0:cell></ns0:row><ns0:row><ns0:cell>ever, whether it is really useful to consider them functionally equivalent is still</ns0:cell></ns0:row><ns0:row><ns0:cell>a question worth of future investigation.'</ns0:cell></ns0:row><ns0:row><ns0:cell>Deissenboeck et al. (2012) followed an analogous approach to Jiang & Su</ns0:cell></ns0:row><ns0:row><ns0:cell>(2009) to detect functionally similar code fragments in Java systems based on</ns0:cell></ns0:row></ns0:table><ns0:note>tactic representation still plays a role and that the clones have been created by copy&paste. These semantic clone detection techniques cannot guarantee that 5 PeerJ Comput. Sci. reviewing PDF | (CS-2015:11:7721:1:1:NEW 9 Feb 2016)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The factorial design used in this experiment</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Programming language</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Java</ns0:cell><ns0:cell>C</ns0:cell></ns0:row><ns0:row><ns0:cell>Clone</ns0:cell><ns0:cell>CCCD</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>detection ConQAT</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>tool</ns0:cell><ns0:cell>Deckard</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary of the solution sets</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>#No. Sets #Files/Set #Procedures</ns0:cell><ns0:cell>LOC</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>2,908</ns0:cell><ns0:cell>64,826</ns0:cell></ns0:row><ns0:row><ns0:cell>Java</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell cols='2'>8,303 131,398</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Information on the solution sets</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Java</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>C</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Set #Files</ns0:cell><ns0:cell cols='2'>LOC #Proc.</ns0:cell><ns0:cell cols='2'>Set #Files</ns0:cell><ns0:cell cols='2'>LOC #Proc.</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell cols='2'>100 11,366</ns0:cell><ns0:cell>823</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>100 3,917</ns0:cell><ns0:cell>233</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>7,825</ns0:cell><ns0:cell>523</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>100 3,706</ns0:cell><ns0:cell>167</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='2'>100 10,624</ns0:cell><ns0:cell>575</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>100 4,750</ns0:cell><ns0:cell>265</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>6,766</ns0:cell><ns0:cell>473</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>100 3,928</ns0:cell><ns0:cell>219</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>7,986</ns0:cell><ns0:cell>585</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>100 4,067</ns0:cell><ns0:cell>187</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell cols='2'>100 10,137</ns0:cell><ns0:cell>611</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell cols='2'>100 6,840</ns0:cell><ns0:cell>166</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell cols='2'>100 13,300</ns0:cell><ns0:cell>869</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell cols='2'>100 4,701</ns0:cell><ns0:cell>263</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>8,568</ns0:cell><ns0:cell>614</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell cols='2'>100 4,679</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>8,580</ns0:cell><ns0:cell>717</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell cols='2'>100 6,831</ns0:cell><ns0:cell>227</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>9,092</ns0:cell><ns0:cell>459</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell cols='2'>100 4,063</ns0:cell><ns0:cell>159</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>8,536</ns0:cell><ns0:cell>584</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell cols='2'>100 4,624</ns0:cell><ns0:cell>266</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell cols='2'>100 11,412</ns0:cell><ns0:cell>648</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell cols='2'>100 3,574</ns0:cell><ns0:cell>163</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>9,436</ns0:cell><ns0:cell>465</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell cols='2'>100 3,335</ns0:cell><ns0:cell>168</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>7,770</ns0:cell><ns0:cell>357</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell cols='2'>100 5,811</ns0:cell><ns0:cell>249</ns0:cell></ns0:row></ns0:table><ns0:note><ns0:ref type='bibr' target='#b33'>Wagner et al. (2014)</ns0:ref>. Every downloaded solution consisted of a single source code file.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Full and partial syntactic similarity (in %) </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Lang.</ns0:cell><ns0:cell>Tool</ns0:cell><ns0:cell cols='2'>Partially similar Type 1-2 Type 1-3 Mean SD Mean SD</ns0:cell><ns0:cell cols='2'>Fully similar Type 1-2 Type 1-3 Mean SD Mean SD</ns0:cell></ns0:row><ns0:row><ns0:cell>Java</ns0:cell><ns0:cell>ConQAT Deckard Mean</ns0:cell><ns0:cell cols='2'>6.36 0.05 11.53 0.07 0.33 0.00 0.87 0.01 3.35 0.03 6.11 0.04</ns0:cell><ns0:cell>0.00 0.00 0.00 0.00 0.00 0.00</ns0:cell><ns0:cell>0.00 0.00 0.00 0.00 0.00 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ConQAT</ns0:cell><ns0:cell cols='2'>5.24 0.09 11.48 0.13</ns0:cell><ns0:cell>1.30 0.00</ns0:cell><ns0:cell>1.73 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>Deckard</ns0:cell><ns0:cell>0.28 0.00</ns0:cell><ns0:cell>1.44 0.01</ns0:cell><ns0:cell>0.01 0.00</ns0:cell><ns0:cell>0.01 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>1.82 0.00</ns0:cell><ns0:cell>4.32 0.06</ns0:cell><ns0:cell>0.47 0.00</ns0:cell><ns0:cell>0.58 0.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Grand mean</ns0:cell><ns0:cell>2.45 0.04</ns0:cell><ns0:cell>5.07 0.04</ns0:cell><ns0:cell>0.26 0.00</ns0:cell><ns0:cell>0.35 0.00</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>ANOVA results for variation in recalls (Type II sum of squares, * denotes a significant result)</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Partial type 1-2 Sum of Squares</ns0:cell><ns0:cell>F value</ns0:cell><ns0:cell>Pr(>F)</ns0:cell></ns0:row><ns0:row><ns0:cell>Language</ns0:cell><ns0:cell>0.0005</ns0:cell><ns0:cell>0.2352</ns0:cell><ns0:cell>0.6294</ns0:cell></ns0:row><ns0:row><ns0:cell>Tool</ns0:cell><ns0:cell cols='3'>0.0491 12.2603 3 • 10 −5 *</ns0:cell></ns0:row><ns0:row><ns0:cell>Partial type 1-3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Language</ns0:cell><ns0:cell>0.0010</ns0:cell><ns0:cell>0.0210</ns0:cell><ns0:cell>0.8853</ns0:cell></ns0:row><ns0:row><ns0:cell>Tool</ns0:cell><ns0:cell cols='3'>0.1884 20.5846 1 • 10 −7 *</ns0:cell></ns0:row><ns0:row><ns0:cell>Full type 1-2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Language</ns0:cell><ns0:cell>1 • 10 −7</ns0:cell><ns0:cell>7.8185</ns0:cell><ns0:cell>0.0072 *</ns0:cell></ns0:row><ns0:row><ns0:cell>Tool</ns0:cell><ns0:cell>2 • 10 −8</ns0:cell><ns0:cell>1.1566</ns0:cell><ns0:cell>0.2871</ns0:cell></ns0:row><ns0:row><ns0:cell>Full type 1-3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Language</ns0:cell><ns0:cell>2 • 10 −7</ns0:cell><ns0:cell>7.7757</ns0:cell><ns0:cell>0.0074 *</ns0:cell></ns0:row><ns0:row><ns0:cell>Tool</ns0:cell><ns0:cell>5 • 10 −8</ns0:cell><ns0:cell>1.9439</ns0:cell><ns0:cell>0.1692</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Examples for the levels in the degree of difference per category</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>low</ns0:cell><ns0:cell>Only syntactic variations</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>medium</ns0:cell><ns0:cell>Similarity in the control structure but different method structure</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>high</ns0:cell><ns0:cell>No similarity</ns0:cell></ns0:row><ns0:row><ns0:cell>Data structure</ns0:cell><ns0:cell>low</ns0:cell><ns0:cell>Different data types, e.g. int to long</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>medium</ns0:cell><ns0:cell>Related data types with different interface, e.g. array vs. List</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>high</ns0:cell><ns0:cell>Standard data types vs. own data classes or structs</ns0:cell></ns0:row><ns0:row><ns0:cell>OO design</ns0:cell><ns0:cell>low</ns0:cell><ns0:cell>Only one/few static methods vs. object creation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>medium</ns0:cell><ns0:cell>Only one/few static methods vs. data classes or several methods</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>high</ns0:cell><ns0:cell>Only one/few static methods vs. several classes with methods</ns0:cell></ns0:row><ns0:row><ns0:cell>Library</ns0:cell><ns0:cell>low</ns0:cell><ns0:cell>Different imported/included but not used libraries</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>medium</ns0:cell><ns0:cell>Few different libraries or static vs. non-static import</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>high</ns0:cell><ns0:cell>Many different or strongly different libraries</ns0:cell></ns0:row><ns0:row><ns0:cell>I/O</ns0:cell><ns0:cell>low</ns0:cell><ns0:cell>Writing to file vs. console with similar library</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>medium</ns0:cell><ns0:cell>Strongly different library, e.g. Scanner vs. FileReader</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>high</ns0:cell><ns0:cell>Strongly different library and writing to file vs. console</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Descriptive statistics of degrees of difference over categories and programming languages</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Lang. Category</ns0:cell><ns0:cell cols='3'>Share Median MAD</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Algorithm</ns0:cell><ns0:cell>96 %</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Libraries</ns0:cell><ns0:cell>86 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Java</ns0:cell><ns0:cell>I/O</ns0:cell><ns0:cell>83 %</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Data structure 72 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>OO design</ns0:cell><ns0:cell>71 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Algorithm</ns0:cell><ns0:cell>76 %</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell cols='2'>Libraries Data structure 66 % 73 %</ns0:cell><ns0:cell>1 1</ns0:cell><ns0:cell>1.5 1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>I/O</ns0:cell><ns0:cell>38 %</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Algorithm</ns0:cell><ns0:cell>86 %</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Libraries</ns0:cell><ns0:cell>79 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>OO design</ns0:cell><ns0:cell>71 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Data structure 69 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>I/O</ns0:cell><ns0:cell>60 %</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Correlation matrix with Kendall's correlation coefficient for the category degrees (all are significant)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>Algo. Data struct. OO design</ns0:cell><ns0:cell cols='2'>I/O Libraries</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell cols='2'>0.44 0.15</ns0:cell><ns0:cell>0.31</ns0:cell></ns0:row><ns0:row><ns0:cell>Data struct.</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell cols='2'>0.26 0.25</ns0:cell><ns0:cell>0.21</ns0:cell></ns0:row><ns0:row><ns0:cell>OO design</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell cols='2'>1.00 0.29</ns0:cell><ns0:cell>0.39</ns0:cell></ns0:row><ns0:row><ns0:cell>I/O</ns0:cell><ns0:cell>0.15</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell cols='2'>0.29 1.00</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>Libraries</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell cols='2'>0.39 0.27</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Full and partial clone recall means over solution sets for CCCD (in %)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Partial 16.03 0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>Full</ns0:cell><ns0:cell cols='2'>0.10 0.00</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Kappa values for difference categories</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Kappa</ns0:cell></ns0:row><ns0:row><ns0:cell>Data structures</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>OO design</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>Algorithms</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>Libraries</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>Input/Output</ns0:cell><ns0:cell>0.47</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "RESPONSE TO REVIEWER COMMENTS FOR
“HOW ARE FUNCTIONALLY SIMILAR CODE CLONES
DIFFERENT?”
We thank the editor and reviewers for their kind and helpful feedback. We tried to
carefully incorporate all your comments. Please find our responses below. We also corrected
typos while doing the revision. We marked all our changes in the “tracked changes”
document in bold red. Only changes made purely for a better layout of some of the tables are not marked
separately.
Editor
The third reviewer refers to earlier feedback you received. My impression is that you
have partially addressed this earlier feedback. Please double check the earlier feedback you
received, and kindly indicate in your rebuttal how you addressed the main points.
Response: We already incorporated most of the useful feedback from the conference.
We included all comments below with the feedback for reviewer 3.
I doubt whether the title does justice to the full paper. The question is not very clear (different from what?) and fails to clarify what the real contributions (such as the benchmark)
of the paper are. Maybe you want to reconsider the title?
Response: Thanks for the hint. Our new proposal is “How are functionally similar
code clones syntactically different? An empirical study and a benchmark”.
Details: - Usually there should be no space before a ”%”.
Response: We beg to differ. While there are several style guides proposing no space, the
ISO 31-0, for example, prescribes one. There is no consensus. (https://en.wikipedia.
org/wiki/Percent_sign#Spacing)
- There were quite a overfull hboxes in the pdf. Please double check the formatting.
Response: Yes, thank you. We fixed that.
Intro: - ”hundreds of millions” is a *lot*. There are not that many systems that large,
actually. Add a reference, or scale down maybe?
Response: Yes, hundreds of millions is rare. We changed it to “tens to hundreds”.
- I doubt Bellon et al were the first to introduce the type-1-4 terminology. Perhaps cite
the original paper?
Response: Actually, as far as we could find out, it was introduced by Rainer Koschke
in one of his lectures when he was still at the University of Stuttgart. The earliest “publication” we know of and found is the diploma thesis of Stefan Bellon (supervised by Rainer).
We think the earliest publication outside of a diploma thesis is in a Dagstuhl paper from
Rainer from 2007. But we thought the TSE paper from the same year is a better reference
than a Dagstuhl paper.
Reviewer 1
In the Intro. it is mentioned ”traditional clone detection approaches and tools are hardly
able to detect functionally equivalent clones...”. but the reference does not cover the recent
research in the past five years. Recently, there has been several work on Semantically
similar clone detection by exploiting various representation of program such as AST, PDG,
bytecode etc.
Response: There have been approaches like that before. We address this issue in the
related work section (Section 2, semantic clones). We believe it is better dealt with there, because
covering it in the intro would break the line of argument. We looked at these approaches, but all
examples they give as semantic clones are clones that would be considered by many
as type-3 clones, maybe with a high threshold for allowed modifications. We would not
consider those FSCs. Yet, we agree that we should have included more recent examples as
well and added them in Section 2.
This sentence in Section 1.1 is not clear: “. . . make a functionally similar clone really
different from copy. . . ”. A FSC clone can be the result of copy&paste and then some
immediate modification and customization. this kind of clones belong to which group?
Response: Yes, we agree that this was not clear. We added “an individually developed”
to the functionally similar clone. This is what we are especially interested in. In particular,
any type-1 to type-3 clone is also an FSC, but we are interested in the independently
developed clones.
The tools selected for the study are fine, but the justification in Section 3.4.1 is a little
bit imprecise. For example the sentence that is supporting your choice is outdated (from
2010): ”They have been described as most up-to-date implementations of token based and
AST based clone detection algorithms”. Or another case, ”Commercial tools were not exact
enough in what they detect.”
Response: We removed the outdated source and added a new source to support the
choice of tools. For the second case we removed the sentence about the commercial tools.
Please report Number of methods in Table 3. Also, in Table 3, the size column is unnecessary.
Response: We added the number of procedures for each data set and removed the size
column.
Section 3.5.2, it is mentioned the parameters are set in a way to find more clones.
However, min clone length == 6 seems too restrictive, specifically for small programs.
Response: We experimented with the minimal clone length and saw that if we lowered
the clone length much further than 6, there would be many code structures selected
by the tools as clones which we normally would not classify as clones. This might be due
to the mostly short programs. We also based our parameters on the recommendations for
practical work (e.g. given in Juergens, Elmar, Florian Deissenboeck, and Benjamin Hummel. ”Code similarities beyond copy & paste.” 14th European Conference on Software
Maintenance and Reengineering (CSMR), 2010. IEEE, 2010.), which are 10 to 15. However,
they finally chose 5 as the minimum clone length, but also called it ”very small”. So, we
don’t think 6 is too restrictive.
The most important aspect of Type-3 clone detection is the choice of similar threshold.
Please, discuss them in Section 3.5.2.
Response: The similarity threshold is mostly determined by the clone gap length.
Again, as with the clone length, we experimented with different values, starting at the
recommendation for ConQAT, to generate only reasonable clones. Too long gaps lead to
whole files being detected as clones although they are considerably different. Too short gaps
lead to nearly no type-3 clones being detected. We finally decided to set the gap ratio to
0.3, which also leads to a selection of clones similar to that of the Levenshtein score of 35 for CCCD.
The paper reports an interesting analysis which looks into the source code of programming competitions. I liked the idea of analyzing such code repositories for clone detection
research, and I believe this would be an interesting article for the community.
Response: Thank you!
Reviewer 2
The paper is written clearly, very easy to read. It introduces an intriguing problem of
detecting functionally similar code with sufficient background, and motives well why it is
needed to study the characteristics of functionally similar code that cannot be detected as
syntactically similar code. What is unclear though is the definition of ”functionally similar
code”, in comparison with the definition of ”functionally the same code”; probably better
to clarify what is considered ”similar”, e.g., same input producing same output except for
output formats, etc.
Response: Thank you for that comment. It led us to write down more concretely why
and how we used the term functionally similar clone. We added the new subsection 1.4 to
discuss this in detail. We do not have a fully clear definition but we hope it is useful and
addresses the comment.
The contents are mostly self-contained except for more details about the tools and parameters used.
Response: Thank you. We added some more details on the parameter choice based on
a comment of reviewer 1.
Although the paper says all the data are available in their github project, no RUL link is
provided.
Response: The data is on GitHub but we also added it to ZENODO to get a permanent
DOI. That is cited in the paper. We replaced all occurrences of GitHub with the ZENODO
reference.
To study functionally similar code that cannot be detected as syntactically similar code,
the paper chose to use code written in different programming languages from Google Code
Jam Contest as the subjects for the study, which is a good choice. Code clone detection
tools that can handle different programming languages are used so that the study can be
carried out for different programming languages and different tools, which is a good design
for achieving better generalizability.
Response: Thank you!
However in the actual study, only two languages, Java and C, and 3 clone detection
tools, ConQAT, Deckard, and CCCD, are considered; this limits the general applicability
of the results, especially the hypotheses H1, H2, H4 which consider the differences between
different languages and tools. May consider revising the hypotheses to be more specific with
respect to the actual data used.
Response: We agree. We added “the analysed” to the hypotheses to make clear that we
do not claim that this holds for other languages and tools.
The paper aims to better understand the characteristics of functionally similar code that
cannot be detected as syntactically similar code. With the limited dataset from Google
Code Jam Contest, it categorizes the code differences between functional clone pairs into 5
categories, each of which is split into 3 levels of difference degrees. The results provide a
new perspective on such differences, although the definition of ”low”, ”medium”, ”high” of
difference degrees are too vague and may not be easy to automate.
Response: The data set is limited in some sense as we only have small programmes
with a similar type of basic structure (reading something in, doing some calculations on
it, outputting it). Yet, we have a large number of those in two different languages and 14
different problems. Hence, we believe the systems are reasonably representative.
We agree that the degrees themselves are rather coarse-grained and somewhat vague.
Yet, we believe that, when looking into them, they are helpful for a stepwise improvement of FSC
detectors.
The benchmark containing categorized functional clones can be useful. However, the size
of the benchmark seems to be too small in comparison with all the code from Google Code
Jam Contest. Can the categorization of functional clones be automated, so as to make it
more scalable to have larger benchmarks?
Response: We would love to have a larger benchmark. Yet, the copyright of the Code
Jam solutions, as far as we know, lies with the authors. We cannot just copy them into the
benchmark.
The paper addresses an important, but challenging problem. It provides a categorization
of functionally similar code different from those categories in the literature, complementing
other studies. Although the results showing the categories are interesting, it would be much
better to have more insights and a more concrete future plan for designing a better functional
clone detection technique. Just based on the empirical study results, it is still unclear to me
how a better detection technique may be derived. So I think only the first part of the research
objectives (as stated in Section 1.2: to better understand the differences...to support future
research on their detection) is achieved.
Response: Thank you for your comment! We believe our categorisation as well as the
benchmark is a support for future research on their detection. We have not claimed to give
a simple plan how to get there. But with our classification and benchmark, it should be
much clearer what we have to work on (e.g. how can we tolerate differences in I/O?).
Minor issues about writing:
- Page 10, ”similar CVS Files” → CSV
- Page 14, ”The other categories of the pairs are very low, ideally zero.” I don’t understand
the rationale of the sentence.
Response: Thank you! Done!
- Page 14, "A final set of clone pairs that are not detected as full clones..." Does this mean the benchmark includes functional clones that can be detected as partial syntactic clones? Why not exclude partial clones too to make the benchmark "stronger"?
Response: Yes, it does. We found it interesting for the benchmark to contain such
clones so that future detectors can experiment with all variations.
- Page 22, among the 3 advantages mentioned, only the second one may be considered
as a contribution of this paper; may make this point clear when comparing to other studies
in the literature.
Response: We are sorry, but we did not fully get this point. All three advantages are advantages of our benchmark: we made it available, we chose it to fit our categorisation, and we had the idea to use Google Code Jam. We are not aware of other similar benchmarks, but we would be happy to include them in the paper.
Reviewer 3
The presented submission is almost identical to a previous submission to a conference
for which I have been a reviewer. The submission was rejected at the conference and the
authors received detailed and valuable feedback. However, the current submission has not
considered the feedback and the only changes are the addition of a few tables and four
paragraphs of text.
Response: We beg to differ. We took several of the comments we considered useful into account. We would consider four paragraphs and several tables a significant change.
Please see the detailed response below.
I appreciate the effort that the authors have invested in the paper and believe that in
the end, the work can lead to publishable material after the authors have considered the
comments they have received for their earlier version.
As the paper has not been significantly changed compared to the conference version, I
refer the authors to the comments they have received for that version.
Response: The manuscript had been sent to more than one conference before. The most recent rejection was at FSE. We include all the comments from FSE below together with our responses.
1. The paper should be considered as an experimental paper presenting empirical results rather than a pure research paper, as it does not propose a novel approach, algorithm, or concept.
Response: We fundamentally disagree with the reviewer that an experimental paper is not a research paper. In any case, nowhere in the paper do we state that it is not an empirical/experimental paper.
2. The reason for the existence of the problems tackled is questionable. It seems irrational to expect any significant correlation between syntactic and semantic data of the same code (irrespective of clones). Furthermore, if Type 1-3 clone detectors were able to find Type 4 clones, they would call themselves Type 1-4 clone detectors. The authors came to the same conclusion at the end, but I don't think anyone would expect other outcomes.
Response: We have not expected a significant correlation. Earlier work by Juergens
et al. has already pointed in the direction that the syntactic similarity will be small. The
reviewer accuses us of something we have not done. The point of the paper is not to
compare detectors but to understand what part of functionally similar code is also syntactically
similar and how we can categorise the syntactically dissimilar code so that we may make
progress in detecting the semantic similarities.
3. Regarding point 2 above, the authors performed a categorisation of the differences
of type-4 clones. They propose five aspects (which are still high level) which characterise
all the differences of functional clones. They say that understanding these differences may
help clone detectors to improve. However, it is not clear: if there were a clone detector able to find code parts that are similar in terms of these five categories, would they
be functionally similar? Furthermore, these five categories are still too high-level to provide any practical clue for the developers of clone detector tools. For example, how easy is it to check if two code parts are algorithmically similar compared to checking how functionally similar they are?
Response: We fear the reviewer misunderstood our categorisation completely. We have
not categorised where they are similar but where they are not syntactically similar. Hence,
if code parts are similar in those categories, it says nothing about their functional similarity. Yet, we believe the categorisation is very helpful. It allows researchers to think about
different solutions for different categories. For example, for the difference in algorithm, we
most likely need some kind of dynamic analysis. But differences in I/O might be possible
to be normalised by static analysis and heuristics.
4. There are a lot of statistical methodologies used throughout the paper, but there are almost no words about why a particular one is used, why it could be applied in the particular case, and what exactly the outcomes tell us. These should have been explained in a more detailed way.
Response: Actually, we use only a few statistical tests (besides a lot of means). ANOVA
and MANOVA are standard hypothesis tests straight out of the text book (Wohlin et al.
Experimentation in Software Engineering). All decisions are documented, e.g. on page
15 for ANOVA we state “As our experiment is unbalanced, we use the Type II sum of
squares.” If any particular information is missing, we would be happy to provide it.
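For illustration only, a Type II ANOVA of this kind can be computed with statsmodels as sketched below; the data file and column names are placeholders, not the study's materials:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Placeholder data set and column names for illustration.
    data = pd.read_csv("clone_share.csv")

    # Fit a model including the interaction first ...
    model = ols("share ~ C(language) * C(detector)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares

    # ... and, if the interaction is not significant, refit without it.
    model = ols("share ~ C(language) + C(detector)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))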
The structure of the paper is not so good, having a lot of sections, subsections, and sub-subsections, making the whole paper really fragmented and hard to read and follow. This is especially true for the experimental design part. It is disturbing that the reader reads
the research questions in the Experimental Setup, then in the Analysis Procedure section
the RQs are discussed again one-by-one, still not having any particular results at the end.
These two parts could have been merged. And in Section 4 only come the results.
Response: We follow standard ways of reporting empirical studies. The RQs are the
central means of structuring an empirical paper. Nothing is simply repeated. We cannot
follow this criticism.
There are some bad language constructs (e.g. ”... also was a means to detect problems
in the...”)
Response: We believe this is correct English.
It seems that the paper is actually research on a methodology for how to manually categorise the clones in a software system. There are too many unnecessary details about how
many people were sitting in the same room, looking at the clones and what discussions
actually lead to the final categories.
Response: No, it is not a methodology how to manually categorise clones. This is qualitative data analysis in an empirical study. This information is necessary to fully describe
the data collection and analysis.
In many places the authors write ”Tab.”, instead of ”Table”...would you please not abbreviate that?
Response: Why?
Introduction: ”Especially clones that resulted from copy&paste can now be detected reliably.” → some references to basic approaches would be desired.
Response: We added a reference to the recent literature review.
3.6.2 Classifying Differences (RQ 2): ”The set of fully syntactically different clone pairs
is the set of all pairs in all solution sets minus any pair detected by any of the type-1–3
detection.” → ”minus” is for numbers
Response: We have not found a rule that this use of minus is wrong. As none of the
other reviewers found this unsuitable, we would like to keep it.
”We apply random sampling to get pairs for further analysis: First, we randomly selected
one of the solution sets in a language.” → lowercase letters come after a double-dot. There
are some confusions around the tenses as well.
Response: We disagree. After a colon, uppercase is appropriate if a full sentence follows. We double-checked the tenses and found no problems. Only in the related work
section were there some inconsistencies with the tenses. We fixed that.
Counting the elements in a category does not mean doing quantitative analysis as stated
in many places in this section.
Response: Actually, it does. If counting is not quantitative, we are not sure what is.
3.7 Validity Procedure: Why is this section not under Section 3.6.2?
Response: Why should it? It is the validity procedure for the whole study.
4.1 Share of Syntactic Similarity (RQ 1): ”As our experiment is unbalanced, we use
the Type II sum of squares. This is possible because we found no significant interactions
between the factors in any of the ANOVA results.” → when did you actually look at the
interactions between the factors? You should have tested first if interactions exist.
Response: We did analyse the interactions.
4.2 Categories of Differences (RQ 2): ”For evaluating H3, we calculated Kendall’s correlation coefficients for all combinations of categories. ” → Is there any reason for not
using Pearson’s correlation instead?
Response: Pearson assumes a normal distribution.
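For illustration only, both coefficients are available in SciPy; the vectors below are made-up placeholders, not data from the study:

    from scipy.stats import kendalltau, pearsonr

    # Placeholder per-pair category counts (not study data).
    x = [3, 0, 1, 4, 2, 0, 5, 1]
    y = [2, 1, 0, 3, 2, 1, 4, 0]

    tau, p_tau = kendalltau(x, y)   # rank-based, no normality assumption
    r, p_r = pearsonr(x, y)         # assumes normally distributed, linearly related data
    print(f"Kendall tau={tau:.2f} (p={p_tau:.3f}), Pearson r={r:.2f} (p={p_r:.3f})")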
The contribution is close to Juergens et al. 2010, where the problem statement is the
same, as well as the two evaluated tools (ConQAT and Deckard). This paper uses a new
dataset that is larger and a new tool (CCCD). However, the results about CCCD are only
briefly discussed in section 4.3.
Response: Yes, to some degree it is a replication, which in itself is a valuable contribution (and replications were explicitly mentioned in the FSE CfP). Yet, our focus is different
than the Juergens et al. paper. In the end, we had so many differences that we did not
call it a replication anymore. However, we think it is useful to have some similarity so that
we can better compare the results.
Terminology: The literature already uses three terms for very similar things: type-4 clone, semantic clone, and simion. Are they equivalent in your opinion? Why introduce a fourth expression, "Functionally Similar Code"?
Response: As is explicitly stated in the paper on page 2, type-4 clones are defined
as functionally equivalent. We believe this is too restricted and instead want to analyse
functionally similar code. “Semantic clone” is often used if an analysis technique uses an
abstraction from source code. This is also not fitting to the clones we analyse. Because of
a comment from reviewer 2, we elaborated on that in section 1.4.
Dataset: how are the submissions to Google Code Jam evaluated? Using a test suite?
Do all considered programs completely pass this test suite? Is this test suite good? This is
important to assess whether your ground truth is a good one.
Response: We don’t know details about the used tests. It uses automated tests to give
the contestants direct feedback on their submissions. We added this to the paper. We
believe, however, that we can be confident about the tests as they judge about the success
or failure in a prestigious programming contest with hundreds of submissions.
Partiality: What is exactly a partial code clone pair? is it the same definition for Conqat
and Deckard? And for CCCD? (I really don’t see what partial equivalence means in the
context of concolic execution, hence what Table 8 means)
Response: Yes, it means that at least some part of the files in a pair have been identified
as a clone. For CCCD, it sometimes detected single functions but not the main function.
Manual evaluation: it’s not clear whether you annotated the 1400 programs for the
construction of the taxonomy. How many programs or program pairs were analyzed?
Response: We explicitly state in Sec. 3.6.2: “Therefore, we took a sample of 2 h of
the syntactically different 390 clone pairs: 70 pairs each of the fully and partially different
clone pairs (35 C and 35 Java).”
- H1, H2, H3 and H4 are about standard code clone detection, it’s noise with respect to
the core of the paper.
Response: We fully disagree. Only H2 is about “standard clone detection”. It helps to
establish how strong the influence of the usage of different detectors (and with it detection
algorithms) is.
- Table 5: give the absolute numbers. How is the average computed? over the Jam sets?
over the program pairs (n*(n-1)/2)? both?
Response: All numbers are given in the published data set as well. The unit of analysis is
always a solution pair.
The paper presents a repetition of a study on functionally similar programs to evaluate how good clone detection approaches are at detecting functionally similar code that is not syntactically similar. The presented study uses the same two clone detectors on a set of 14 problems with 100 Java solutions and the two detectors plus a concolic detector on the same set of problems with 100 C solutions. The results show that detectors are not good at detecting functionally similar code.
Response: Just to clarify: we analysed 100 C and 100 Java solutions per problem. Also, the focus of the study was not to show how good or bad clone detectors are at detecting functionally similar code, but what part of the code is syntactically similar.
The discussion of code that is functional similar but not syntactically similar is ongoing.
However, nobody would expect good results in using syntactic clone detectors to detect
functional similar code. The authors also don’t give any motivation why the detection of
functional similar code is necessary. I would have expected a discussion on plagiarism
and malware, but the only use case is that software developers in a refactoring session are
interested in functional similar code. Why is that the case? In practice, even identical
code is not refactored - so why would one think of refactoring functional similar code?
Libraries are full of functional similar code to allow clients to choose the most efficient
implementation for their problem.
Response: We believe the use case we gave is a valid one. One could discuss others, as the reviewer mentions them, but this would lead away from our original story. We would also like to disagree that identical code is not refactored in practice. While this is the case in many projects, there are also many developers who are aware of duplication and do refactor. We agree that there are situations where one would like to have different implementations of similar functionality, but we also believe there are many where one would not.
From the missing discussion on plagiarism and malware follows that the authors have
not looked at related work in that area.
Response: Could you please be more specific what related work we missed for example?
The whole setup of the study is based on Juergens et al.’s previous study. However,
while the description in Juergens et al’s paper is concise but sufficient, the most important
description on how to actually measure the similarity in the presented paper is vague, wrong
and too short. Without knowing the original experiment, it is not possible to understand
what is actually measured. Moreover, while the original experiment uses 90% statement
coverage as the threshold for full similarity, the present paper needs full coverage - but it
does not say *what has to be covered*.
Response: We clarified this already in comparison to the FSE submission. The coverage is defined in Sec. 3.6.1. We are not sure what else in the description of how to actually
measure the similarity is vague, wrong and too short.
The largest issue is the lack of any data on the 1400 programs. Without knowing anything
on the candidates, the presented data cannot be put into perspective by the reader. Even the
most fundamental data, the size of the programs, is not given. In the original experiment,
a detailed analysis has been done given statistics on the size. The original experiment only
looked at trivially sized programs, where the minimal overlap of 5 lines made sense. In the
presented paper, this is not given and the choice of minimal clone size probably introduces
bias.
Response: Again, in comparison to the FSE submission, we added more details in the PeerJ CS manuscript (in particular Table 4). The potential bias introduced by the minimal clone
length is discussed in Sec. 5.3.
The usage of only two or three detectors is also no longer state of the art. Deckard and ConQAT are very specific detectors, and the experiment should have been done with a much
larger set of detectors. In addition, some standard similarity measures should have been
applied, too.
Response: We extended this discussion already in comparison to the FSE submission.
What is a standard similarity measure and what would that yield?
The paper is in most parts easy to read. However, as discussed above, from Section 4
on, only relative data is given and no absolute numbers or statistics on the candidate set
which makes it hard to follow the arguments of the paper.
Response: The full data has always been available in our open data set. We already added more information about the solutions in the initial submission to PeerJ CS. Statistics, such as central tendencies and dispersions, are reported in the paper.
" | Here is a paper. Please give your review comments after reading it. |
68 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cross-Modal Retrieval (CMR) has attracted much attention in the research community because it enables flexible and comprehensive retrieval. The core challenge in CMR is the heterogeneity gap, which arises from the different statistical properties of Multi-Modal data. The most common solution to bridge the heterogeneity gap is representation learning, which generates a common sub-space. In this work, we propose a framework called 'Improvement of Deep Cross-Modal Retrieval (IDCMR)', which generates a real-valued representation. The IDCMR preserves both intra-modal and inter-modal similarity. The intra-modal similarity is preserved by selecting an appropriate training model for the text and image modalities. The inter-modal similarity is preserved by reducing the modality-invariance loss. The Mean Average Precision (mAP) is used as the performance measure of the CMR system. Extensive experiments are performed, and the results show that IDCMR outperforms state-of-the-art methods by margins of 4% and 2% in mAP in the text-to-image and image-to-text retrieval tasks on the MSCOCO and XMedia datasets, respectively.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In the era of big data, multimedia data such as text, image, audio, and video are growing at an unprecedented rate. Such Multi-Modal data has enriched people's lives and become a fundamental component to understand the real world. We access Multi-Modal data in various situations like education, entertainment, advertisements, social media, which are helpful to provide effective communication. Also, real-world articles use different modalities to provide comprehensive information about any concept or topic. In recent years, Image Captioning and Cross-Modal Retrieval (CMR) have become hot research directions in vision-language tasks <ns0:ref type='bibr' target='#b24'>(Xu, Li & Zhang, 2020;</ns0:ref><ns0:ref type='bibr' target='#b25'>Yanagi et al., 2020)</ns0:ref>. The difference between them is shown in Figure <ns0:ref type='figure'>1</ns0:ref>. The Image Captioning system, as shown in Figure <ns0:ref type='figure'>1</ns0:ref>(A) and Figure <ns0:ref type='figure'>1(B)</ns0:ref>, takes an image from the MSCOCO dataset <ns0:ref type='bibr' target='#b8'>(Lin et al., 2015b)</ns0:ref> and retrieves the description of an image in the form of text. Here the retrieved information is provided by both modalities (e.g. the word 'cat' and pixels of 'cat' are closed to each other in a learning space). On the other hand, the CMR system provides flexible retrieval where the user can give any modality as the input and retrieves any other modality as the output. As shown in Figure <ns0:ref type='figure'>1</ns0:ref>(C) and Figure <ns0:ref type='figure'>1</ns0:ref>(D), an image of 'owl' from the XMedia dataset ('PKU XMediaNet Dataset')can be explained with multiple texts, which is not visible from the image only. Such comprehensive retrieval from the CMR system is widely used in applications like hot topic detection and personalized recommendation <ns0:ref type='bibr' target='#b15'>(Peng et al., 2017)</ns0:ref>. The real challenge in CMR is the heterogeneity gap <ns0:ref type='bibr' target='#b22'>(Wang et al., 2016b)</ns0:ref>, which is generated due to the different statistical properties of each modality. For example, an image representation is real-valued and dense in the form of pixels, whereas text representation is sparse and discrete. Such a different representation of each modality does not allow a direct comparison for retrieval. The most common solution to bridge the heterogeneity gap is to generate a common sub-space <ns0:ref type='bibr' target='#b29'>(Zhen et al., 2019)</ns0:ref> using a function, which transforms the different representation of modalities into a common representation, such that direct retrieval is possible. Many approaches for CMR have been proposed in the past to generate a common sub-space, which is categorized into binary-valued and real-valued representation. The binary-valued representation maps heterogeneous data into the encoded form using a hash function. The advantage of binary-valued representation is less storage, which leads to faster retrieval because hamming distance can be computed faster with the help of binary code using bit operations. However, binary-valued representation suffers from information loss, which leads to unsatisfactory performance. In this paper, real-valued representation is considered, which stores actual representation. 
Previous CMR methods like spectral hashing (SH) <ns0:ref type='bibr' target='#b23'>(Weiss, Torralba & Fergus, 2009)</ns0:ref>, cross-view hashing(CVH) <ns0:ref type='bibr' target='#b5'>(Kumar & Udupa, 2011)</ns0:ref>, Inter-Media Hashing (IMH) <ns0:ref type='bibr' target='#b18'>(Song et al., 2013)</ns0:ref>, Collective Matrix Factorization Hashing (CMFH) <ns0:ref type='bibr'>(Ding, Guo & Zhou, 2014)</ns0:ref>, Semantic Correlation Maximization (SCM) <ns0:ref type='bibr' target='#b27'>(Zhang & Li, 2014)</ns0:ref>, Latent semantic sparse hashing (LSSH) <ns0:ref type='bibr'>(Zhou, Ding & Guo, 2014)</ns0:ref> and Semantic Preserving Hashing (SePH) <ns0:ref type='bibr' target='#b6'>(Lin et al., 2015a)</ns0:ref> perform feature learning and correlation learning as an independent process to generate a common subspace. All these CMR methods perform feature learning using scale-invariant feature transform (SIFT) <ns0:ref type='bibr' target='#b9'>(Lowe, 2004)</ns0:ref> and histogram of oriented gradients (HoG) <ns0:ref type='bibr' target='#b2'>(Hardoon, Szedmak & Shawe-Taylor, 2004)</ns0:ref>. However, the correlation learning ignores the correlation between different modalities during feature learning, which may not achieve satisfactory performance. The standard statistical correlation-based method is Canonical Correlation Analysis (CCA) <ns0:ref type='bibr' target='#b2'>(Hardoon, Szedmak & Shawe-Taylor, 2004)</ns0:ref>, which learns linear projections from heterogeneous data, and a common sub-space is generated. However, Multi-Modal data is involved with non-linear relations, which cannot be learned with CCA. So, some kernel-based approach <ns0:ref type='bibr' target='#b3'>(Hongtao & Qinchuan, 2016)</ns0:ref> has been proposed which can handle the problem, but the selection of the kernel function is one of the open challenges. Motivated with great power and success of deep learning in the domain of representation learning, a variety of approaches have been proposed, which generates a common sub-space. The work presented in <ns0:ref type='bibr' target='#b12'>(Ngiam et al., 2011)</ns0:ref>, proposes a deep auto-encoder (DAE) to learn the correlation between Multi-Modal data and a Restricted Boltzmann Machine (RBM) to learn a common sub-space in an unsupervised way. In <ns0:ref type='bibr' target='#b19'>(Srivastava & Salakhutdinov, 2014)</ns0:ref>, a graphicalbased model called deep Boltzmann machine (DBM) is used which does not need supervised data for training, and each layer of the Boltzmann machine adds more level of abstract information. In <ns0:ref type='bibr' target='#b4'>(Jiang & Li, 2016)</ns0:ref>, a framework called Deep Cross-Modal Hashing (DCMH) is proposed, which generates a common sub-space in a supervised way, and similarity is preserved by forcing image and text representation to be as close as possible. In <ns0:ref type='bibr' target='#b21'>(Wang et al., 2016a)</ns0:ref>, Convolutional Neural Network (CNN) for image modality and Neural Language Model for text modality is used to learn a common sub-space using a mapping function. The Euclidean distance calculates the distance between image and text representation, which is useful for Cross-Modal learning. In <ns0:ref type='bibr' target='#b29'>(Zhen et al., 2019)</ns0:ref>, a framework called Deep Supervised Cross-Modal Retrieval (DSCMR) is proposed, which uses CNN for image modality and word2vec for text modality, which generates real-valued representation. 
A lot of work is carried out in CMR, but the performance of the CMR system can be further improved by maintaining both intra-modal and inter-modal similarity as much as possible. In this paper, we propose a novel framework called 'Improvement of Deep Cross-Modal Retrieval (IDCMR)', which generates a common sub-space by preserving similarity between image and text modality. The objective function of IDCMR preserves both inter-modal and intra-modal similarity. The main contributions of IDCMR are summarized as follow:</ns0:p><ns0:p> The proposed framework IDCMR performs feature learning and correlation learning in the same framework.</ns0:p><ns0:p> Our proposed framework preserves intra-modal semantic similarity for text modality.</ns0:p><ns0:p>Experiments are performed using various vectorization methods on Multi-Modal datasets for the selection of an appropriate vectorization method for text modality.</ns0:p><ns0:p> The IDCMR generates real-valued representation in the common sub-space, which preserves inter-modal and intra-modal similarities between image and text modality.</ns0:p><ns0:p> The Mean Average Precision (mAP) is used as a performance measure, and a comparison of the proposed framework is made with state-of-the-art methods.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The rest of the paper is divided as follows. Section 2 gives the background of vectorization methods for text modality. Section 3 covers the proposed model and the proposed algorithm. Section 4 covers experiments and discussion. At last, Section 5 gives the conclusion of our work.</ns0:p></ns0:div>
<ns0:div><ns0:head>Background and Literature Survey</ns0:head><ns0:p>The biggest challenge in Natural Language Processing (NLP) is to design algorithms, which allows computers to understand natural language to perform different tasks. It is recommended to represent each word in form of a vector as most of the machine learning algorithms are not capable of processing text directly in its raw form. The process of converting a word into a vector is called vectorization, which represents each word into vector space. Broadly the vectorization methods are categorized into (a) local representation method and (b) distributional representation method. The most common local representation method is called bag-of-words (BoW), where each word is represented as vector with all 0's and one 1 at the index of</ns0:p><ns0:formula xml:id='formula_0'>ℝ |V| × 1</ns0:formula><ns0:p>the word in the corpus. However, the generated matrix is sparse in nature, which is inefficient for computation, and the similarity between different words is not preserved, as the inner product between two different one-hot vectors is zero. On the other hand, in distributional representation, each word in the corpus is represented by featurized representation, which is denoted as</ns0:p><ns0:formula xml:id='formula_1'>w i w i ∈</ns0:formula><ns0:p>, where each word is represented in d dimensions.</ns0:p></ns0:div>
<ns0:div><ns0:head>ℛ d</ns0:head><ns0:p>The distributional representation generates distributional word vectors, which follows the concept of the distributional hypothesis <ns0:ref type='bibr' target='#b11'>(Mikolov et al., 2013b)</ns0:ref>, which states that words that occur in the same contexts tend to have similar meanings. The distributional word vectors are generated from count-based models or prediction based models. The count-based models generate implicit distributional vectors using dimensionality-reduction techniques, which map data in the high-dimensional space to a space of fewer latent dimensions. The most popular method is Singular Value decomposition (SVD) <ns0:ref type='bibr' target='#b20'>(Van Loan, 1976)</ns0:ref>, which generates embedding of each word in the vocabulary using matrix factorization, but fails when the dimensionality of matrices is very large as the computational cost for m × n matrix is O (mn 2 ). The most popular count-based method is Glove <ns0:ref type='bibr' target='#b11'>(Mikolov et al., 2013b)</ns0:ref>, which generates implicit vector and achieve better performance in comparison with other matrix-based methods. Another broader classification for the generation of distributional word vector is prediction based models, which are neural network based algorithms. Such models directly create low-dimensional implicit distributional representations. An example of such a model is word2vec. The below section covers a detailed description of the generation of word vectors using Glove and word2vec. word2vec The word2vec is a feed-forward based neural network, which has two algorithms: continuous bag-of-words (CBOW) and skip-gram (SG) <ns0:ref type='bibr'>(Mikolov et al., 2013a,b)</ns0:ref>. Figure 2 <ns0:ref type='bibr' target='#b10'>(Mikolov et al., 2013a)</ns0:ref> shows a description of CBOW and SG where CBOW predicts the probability of center word w(t) and SG predicts the probability of surrounding words w(t+j).</ns0:p></ns0:div>
<ns0:div><ns0:head>Working of SG model</ns0:head><ns0:p>SG predicts the probability of surrounding given a center word. For training of the network, there is an objective function that maximizes the probability of surrounding words given center word for each position of text t in the window size of m.</ns0:p><ns0:formula xml:id='formula_2'>J ' (θ) = T ∏ t = 1 ∏ -m ≤ j ≤ m j ≠ 0 P(w t + j |w t ; θ)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Here is probability of surrounding words given center word . Eq. (1) can P(w t + j |w t ) w t + j w t be rewritten as equation ( <ns0:ref type='formula'>2</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_3'>J(θ) =- 1 T T ∑ t = 1 ∑ -m ≤ j ≤ m j ≠ 0 logP(w t + j |w t ; θ)</ns0:formula><ns0:p>(2) can be rewritten as , which specifies the probability of surrounding words</ns0:p><ns0:formula xml:id='formula_4'>P(w t + j |w t ) P(o│c) o</ns0:formula><ns0:p>given center word c, and softmax function is used to generate probability.</ns0:p><ns0:formula xml:id='formula_5'>P(o│c) = e u T 0 v c V ∑ w = 1 e u T w v c</ns0:formula><ns0:p>(3)</ns0:p><ns0:p>Where specifies vector representation of the surrounding word at index 0 and specifies the u 0 v c vector representation of center word. Equation ( <ns0:ref type='formula'>3</ns0:ref>) can be applied in equation (2),</ns0:p><ns0:formula xml:id='formula_6'>J(θ) = log e u T 0 v c V ∑ w = 1 e u T w v c (4)</ns0:formula><ns0:p>Now, the objective is to optimize and . So need to take the derivative with respect to and</ns0:p><ns0:formula xml:id='formula_7'>v c u w v c . u w J(θ) = ∂ ∂v c log e u T 0 v c - ∂ ∂v c log V ∑ w = 1 e u T w v c<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where,</ns0:p><ns0:formula xml:id='formula_8'>∂ ∂v c log e u T 0 v c = 𝑢 0 (5.a) ∂ ∂v c log V ∑ w = 1 e u T w v c = 1 V ∑ w = 1 e u T w v c × ∂ ∂v c V ∑ w = 1 e u T w v c</ns0:formula><ns0:p>(5.b)</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_9'>Computer Science = 1 V ∑ w = 1 e u T w v c × ∂ ∂v c 𝑉 ∑ x = 1 e u T x v c = 1 V ∑ w = 1 e u T w v c × V ∑ x = 1 ∂ ∂v c e u T x v c = 1 V ∑ w = 1 e u T w v c × 𝑉 ∑ x = 1 e u T x v c ∂ ∂v c e u T x v c = 1 V ∑ w = 1 e u T w v c × 𝑉 ∑ 𝑥 = 1 e u T x v c 𝑢 𝑥</ns0:formula><ns0:p>Combine (5.a) and (5.b),</ns0:p><ns0:formula xml:id='formula_10'>J(θ) = u 0 - 1 V ∑ w = 1 e u T w v c × V ∑ x = 1 e u T x v c u x</ns0:formula><ns0:p>Above equation can be rewritten as</ns0:p><ns0:formula xml:id='formula_11'>J(θ) = u 0 - V ∑ x = 1 P(x│c). u x (6)</ns0:formula><ns0:p>Where, is the actual ground truth and is the probability of each surrounding word x u 0 P(x│c)</ns0:p><ns0:p>given the center word c, and is the average of all possible surrounding words. So cost function u x of SG guarantees that the probability of occurring surrounding words maximizes given a center word.</ns0:p></ns0:div>
<ns0:div><ns0:head>Working of continuous bag-of-words model</ns0:head><ns0:p>CBOW predicts the probability of a center word given surrounding words. Input to CBOW is d dimensional one-hot vector representation of a center word. The representation of a center word is generated by multiplying d dimensional vector with the weight matrix W of size where p p × d is the featurized representation of a word.</ns0:p><ns0:formula xml:id='formula_12'>h p × 1 = W T p × d x d = V c (7)</ns0:formula><ns0:p>The above representation is a vector representation of the center word . The representation of V c outside words is generated by multiplying center representation with the weight matrix . W'</ns0:p><ns0:formula xml:id='formula_13'>u d × 1 = W' T d × p h p × 1 = V T w V c (8)</ns0:formula><ns0:p>Where, is a vector representation of the center word and is a vector representation of V c V w surrounding words. It is a prediction-based model so need to find the probability of a word given the center word P(w|c).</ns0:p><ns0:formula xml:id='formula_14'>y i = P(w│c) = φ(u i ) = e u i ∑ i' e u i = e V T w V c ∑ w' ∈ Text e V T w' V c<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>There is an objective function, which maximizes P(w|c) by adjusting the hyper parameters i.e., v c and .</ns0:p><ns0:formula xml:id='formula_15'>v w l(θ) = ∑ w ∈ Text log P(w|c;θ)<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Put value of equation ( <ns0:ref type='formula' target='#formula_14'>9</ns0:ref>) in equation ( <ns0:ref type='formula' target='#formula_15'>10</ns0:ref>),</ns0:p><ns0:formula xml:id='formula_16'>l(θ) = ∑ w ∈ Text log e V T w V c ∑ w' ∈ Text e V T w' V c = ∑ w ∈ Text log e V T w V c -∑ w ∈ Text log 1 ∑ w' ∈ Text e V T w' V c</ns0:formula><ns0:p>To optimize the hyper parameter, need to take derivation with respect to and .</ns0:p><ns0:formula xml:id='formula_17'>v c v w ∂𝑙 ∂v w = ∑ w ∈ Text v c - 1 ∑ w ' ∈ Text e V T w ' V c × ∂𝑙 ∂𝑤 e V T w' V c = ∑ w ∈ Text v c - 1 ∑ w ' ∈ Text e V T w ' V c × e V T w ' V c × v c = ∑ w ∈ Text v c -P(w|c)v c = ∑ w ∈ Text v c [1 -P(w|c)]</ns0:formula><ns0:p>For optimization, gradient descent algorithm is applied and hyper parameter is optimized.</ns0:p><ns0:formula xml:id='formula_18'>V w = V w -ηV c [1 -P(w│c)]<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>Similar steps are followed for hyperparameter .</ns0:p><ns0:formula xml:id='formula_19'>V c ] V c = V c -ηV 𝑤 [1 -P(w|c) (12)</ns0:formula><ns0:p>So CBOW and SG preserve the semantic similarity by following the distributional hypothesis in comparison with BoW model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Glove (Count Based Method)</ns0:head><ns0:p>In contrast to word2vec, Glove captures the co-occurrence of a word from the entire corpus <ns0:ref type='bibr' target='#b16'>(Pennington, Socher & Manning, 2014)</ns0:ref>. Glove first constructs the global co-occurrence matrix , which gives information about how often words i and j appear in the entire corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head>X ij</ns0:head><ns0:p>The size of the matrix can be minimized by the factorization process, which generates a lowerdimensional matrix such that reconstruction loss is minimized. The objective of the Glove model is to learn the vectors (vector representation of i word) and (vector representation of j word), v i v j which are fruitful to information which is in the form of . The similarity between words is X ij captured by finding the inner product , which gives similarity between words i and j. This v T i v j similarity is proportional to P(j|i) or P(i|j), where P(j|i) gives the probability of word j given the word i.</ns0:p><ns0:formula xml:id='formula_20'>v T i v j = log P(j |i)</ns0:formula><ns0:p>Where, log P(</ns0:p><ns0:formula xml:id='formula_21'>j |i) = X ij ∑X ij = X ij X i v T i v j = log X ij -log X i (13) Similarly, v T j v 𝑖 = log X ij -log X j<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Equation ( <ns0:ref type='formula' target='#formula_22'>15</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_23'>16</ns0:ref>) are added,</ns0:p><ns0:formula xml:id='formula_22'>2v T j v 𝑖 = 2log X ij -log X 𝑖 -log X j v T j v i = log X ij - 1 2 log X i - 1 2 log X j<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>Here and are learnable parameters and , is word specific biases, which will be learned v i v j X i X j as well. The above equation can be rewritten as</ns0:p><ns0:formula xml:id='formula_23'>v T j v i + b i + b j = log X ij<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>Where is word specific bias for word i and is word specific bias for word j. All these b i b j parameters are learnable parameters, whereas is the actual ground truth that can be known X ij from the global co-occurrence matrix. Eq. ( <ns0:ref type='formula' target='#formula_23'>16</ns0:ref>) can be formulated as an optimization problem, which gives the difference between predicted value using model parameters and the actual value computed from the given corpus.</ns0:p><ns0:formula xml:id='formula_24'>min v i ,v j ,b i ,b j ∑ i,j (v T j v i + b i + b j -log X ij ) 2<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In comparison with word2vec, Glove maintains both the local and global context of a word from the entire corpus. To select an appropriate vectorization method, which maintains intra-modal semantic coherence, the below section covers experiments performed using different vectorization methods on Multi-Modal datasets. The Convolutional Neural Network (CNN) is adopted for image modality in the proposed framework, as it has shown promising performance in many computer vision applications <ns0:ref type='bibr' target='#b3'>(Hongtao & Qinchuan, 2016)</ns0:ref>.</ns0:p></ns0:div>
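For illustration, the two prediction-based word2vec algorithms and a pretrained Glove model can be obtained with the gensim library as sketched below; the toy corpus, parameter values, and the chosen pretrained vector set are assumptions made for the example only and do not describe the exact configuration used in this work.

    from gensim.models import Word2Vec
    import gensim.downloader as api

    # Toy caption-like corpus standing in for the text side of a Multi-Modal dataset.
    corpus = [["a", "cat", "sits", "on", "the", "mat"],
              ["an", "owl", "is", "a", "nocturnal", "bird"]]

    # word2vec: CBOW (sg=0) predicts the center word from its context;
    # skip-gram (sg=1) predicts the context from the center word.
    cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)
    skipgram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

    # Glove vectors are trained from global co-occurrence counts; one of the
    # publicly available pretrained sets can be loaded via gensim's downloader.
    glove = api.load("glove-wiki-gigaword-100")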
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Proposed framework for cross-modal retrieval</ns0:head><ns0:p>In this section, we present our proposed framework, which generates real-valued common subspace. It also covers the learning algorithm outlined in Algorithm: IDCMR.</ns0:p></ns0:div>
<ns0:div><ns0:head>Problem formulation</ns0:head><ns0:p>The proposed framework has image and text modality, which is denoted by</ns0:p><ns0:formula xml:id='formula_25'>Ψ = {(X i , Y i )} n i = 1</ns0:formula><ns0:p>where and is image and text sample respectively. Each instance of has a semantic</ns0:p><ns0:formula xml:id='formula_26'>X i Y i (X i , Y i ) label vector</ns0:formula><ns0:p>, where C is the number of categories. The similarity matrix</ns0:p><ns0:formula xml:id='formula_27'>Z i = [z 1i , z 2i ,….,z Ci ] ∈ R C</ns0:formula><ns0:p>, if i th instance of image and text modality matches to the j th category, otherwise .</ns0:p><ns0:formula xml:id='formula_28'>S ij = 1 S ij = 0</ns0:formula><ns0:p>The feature vectors of image and text modality lie in different representation space, so direct composition is not possible for retrieval. The objective is to learn two functions, u i = f(x i ,θ x ) ∈ and for image and text modality respectively, where d is the dimension of ℝ 𝑑 v i = g(y i ,θ y ) ∈ ℝ 𝑑 a common sub-space. The and are hyper parameters of image and text modality, θ x θ y respectively. The generated common sub-space allows direct comparison for retrieval even though samples come from different statistical properties.</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed framework: Improvement of deep cross-modal retrieval (IDCMR)</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the proposed framework for image and text modality. The convolutional layers of Convolutional Neural Network (CNN) for image modality are pretrained on ImageNet, which generates high-level representation for each image. CNN has five convolutional layers and three fully connected layers. Detailed configuration of the convolutional layer is given in the proposed framework. Each convolutional layer contains 'f: num size size', which specifies the × × number of the filter with specific size, 's' indicates stride, 'pad' indicates padding, and 'pool' indicates downsampling factor. The common representation for each image is generated by fully connected layers. The number in the last fully connected layer (fc8) indicates the number of neurons or dimensionality of the output layer. Similarly, the Glove model for text modality is pretrained on Google News, which represents each word in form of feature vector. The text matrix is given to fully connected layers to learn the common representation for text. To learn a common representation from image and text modality, the two sub-networks share the weights of the last layers, which generate the same representation for semantic similar image and text modality. In this work, real-valued coordinated representation is generated, which preserves PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science intra-modal and inter-modal semantic similarity. The inter-modal similarity is preserved by minimizing the (i) discrimination loss in the label space . The prediction of label from feature J 1 spaces is possible, by connecting a linear classifier on top of each network. (ii) discrimination loss in text and image representation ,and (iii) modality-invariant loss in the common sub-J 2 J 3 space. Further, the intra-modal similarity is preserved by selecting an appropriate training model for each modality. The biggest challenge in text modality is to preserve semantic similarities between words. There are many distributional representation methods available and the challenge is to select an appropriate method, which preserves intra-modal similarity between different words of text modality. The below section covers the learning algorithm, experiments of different distributional models, and performance comparison of the proposed framework with state-of-the-art methods.</ns0:p></ns0:div>
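A minimal sketch of the two sub-networks described above is shown below for illustration. It assumes a pretrained AlexNet-style CNN from torchvision as a stand-in for the five convolutional and three fully connected layers, with illustrative layer sizes; it is not the exact configuration or released code of this work, and the torchvision weights argument varies across library versions.

    import torch.nn as nn
    from torchvision import models

    class IDCMRNet(nn.Module):
        """Illustrative two-branch network with a shared last layer."""
        def __init__(self, txt_dim=300, common_dim=1024, num_labels=20):
            super().__init__()
            # Image branch: convolutional layers pretrained on ImageNet.
            alexnet = models.alexnet(weights="IMAGENET1K_V1")
            self.img_conv = alexnet.features
            self.img_fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
                nn.Linear(4096, 4096), nn.ReLU(),
            )
            # Text branch: fully connected layers on top of Glove word vectors.
            self.txt_fc = nn.Sequential(nn.Linear(txt_dim, 4096), nn.ReLU())
            # The last projection layer is shared so that semantically similar
            # image and text samples map to the same common sub-space.
            self.shared = nn.Linear(4096, common_dim)
            # Linear classifier predicting labels from the common sub-space.
            self.classifier = nn.Linear(common_dim, num_labels)

        def forward(self, image, text_vec):
            u = self.shared(self.img_fc(self.img_conv(image)))  # image representation U
            v = self.shared(self.txt_fc(text_vec))              # text representation V
            return u, v, self.classifier(u), self.classifier(v)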
<ns0:div><ns0:head>Algorithm: IDCMR</ns0:head></ns0:div>
<ns0:div><ns0:head>Input:</ns0:head><ns0:p>where is the input image sample, and is the input text sample of ith</ns0:p><ns0:formula xml:id='formula_29'>Ψ = {(X i ,Y i )} n i = 1 X i Y i instance.</ns0:formula><ns0:p>where C is the number of categories</ns0:p><ns0:formula xml:id='formula_30'>Z i = [z 1i , z 2i ,….,z Ci ] ∈ R C {(X i ,Y i )→c ji } n i = 1 , 0 ≤ j ≤ C Output: The image representation , text representation V , hyper U = [u 1 ,u 2 ,….,u n ] = [v 1 ,v 2 ,….,v n ]</ns0:formula><ns0:p>parameter of image modality, hyper parameter of text modality, a common sub-space B.</ns0:p><ns0:formula xml:id='formula_31'>𝛉 𝐱 𝛉 𝐲 Initialization 0.1 ,batch size=128, , ,</ns0:formula><ns0:p>where n is</ns0:p><ns0:formula xml:id='formula_32'>θ x = ,θ 𝑦 = 0.1 η = 0.1,γ = 0.1 I x = ⌈ n batch size ⌉ I y = ⌈ n batch size ⌉ number of training data points. if S ij = 1 (X i ,Y i ) ∈ [z 1i , z 2i ,….,z Ci ] 1 ≤ j ≤ C otherwise S ij = 0 Method [Image Modality] for iteration = 1 to I x</ns0:formula><ns0:p>Step-1 Select 128 data points (batch size) from Image and Word Vector Y X</ns0:p><ns0:p>Step-2 Calculate learned image feature by forward propagation U i = f(X i ,θ x )</ns0:p><ns0:p>Step-3 Calculate the learned text feature by forward propagation</ns0:p><ns0:formula xml:id='formula_33'>V i = g(Y i ,θ y )</ns0:formula><ns0:p>Step-4 Calculate the discrimination loss in the label space. (Eq. ( <ns0:ref type='formula'>18</ns0:ref>))</ns0:p><ns0:p>Step-5 Calculate the discrimination loss of both text and image representation in the common sub-space. (Eq. ( <ns0:ref type='formula'>19</ns0:ref>))</ns0:p><ns0:p>Step-6 Calculate the modality wise invariance loss. [Eq. ( <ns0:ref type='formula' target='#formula_46'>23</ns0:ref>)]</ns0:p><ns0:p>Step-7 Update the linear classifier C parameters by minimizing the cost function</ns0:p><ns0:formula xml:id='formula_34'>C = (UU T ) -1 U T S + (VV T ) -1 V T S</ns0:formula><ns0:p>Step-8 Update the parameters of image network and text network using stochastic gradient descent, and</ns0:p><ns0:formula xml:id='formula_35'>θ x = θ x -η ∂J ∂θ x θ y = θ y -η ∂J ∂θ y end for</ns0:formula><ns0:p>[a] Calculate the discrimination loss in the label space.</ns0:p><ns0:p>Once the features are learned from image and text modality, a linear classifier C is connected to image and text sub networks, which predicts the semantic labels of the projected features. This predicted label should preserve the semantic similarity with label space. The discrimination loss in the label space is calculated by using the following equation:</ns0:p><ns0:formula xml:id='formula_36'>J 1 J 1 = 1 n ‖C T U -S‖ F + 1 n ‖C T V -S‖ F (18)</ns0:formula><ns0:p>Where, is Frobenius norm and n is the number of instances. ‖.‖ F [b] Calculate the discrimination loss of both text and image modality in the common subspace. The inter-modal similarity is further preserved by minimizing discrimination loss from image and text representation in the common sub-space, as denoted by Eq. ( <ns0:ref type='formula'>19</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_37'>J 2 = 1 n[ - n ∑ i,j = 1 [S ij θ ij -log (1 + θ ij )] ] + 1 n[ - n ∑ i,j = 1 [S ij φ ij -log (1 + 𝜑 ij )] ] + 1 n[ - n ∑ i,j = 1 [S ij 𝜙 ij -log (1 + 𝜙 ij )] ] (19)</ns0:formula><ns0:p>The first part of Eq. 
( <ns0:ref type='formula'>19</ns0:ref>), preserves the semantic similarity between image representation U and text representation V with similarity matrix S, which is denoted as,</ns0:p><ns0:formula xml:id='formula_38'>θ ij = U T * i V * j</ns0:formula><ns0:p>The above equation should maximize the likelihood Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_39'>P(S ij |U * i ,V * j ) = when σ (θ ij ) S ij = 1 = when 1 -σ (θ ij ) S ij = 0<ns0:label>(</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Where is a sigmoid function that exists between 0 to 1, and it is preferable when σ</ns0:p><ns0:formula xml:id='formula_40'>(θ ij ) = 1 1 + e -θ ij</ns0:formula><ns0:p>there is a need to predict the probability as an output. Since the probability of anything exists between a range of 0 to 1, sigmoid is the right choice. It is represented as,</ns0:p><ns0:formula xml:id='formula_41'>P(S ij |U * i ,V * j ) = π (σ (θ ij )) S ij (1 -σ (θ ij )) 1 -S ij = n ∑ i,j = 1 [S ij θ ij + log (1 -θ ij )]<ns0:label>(21)</ns0:label></ns0:formula><ns0:p>Eq. ( <ns0:ref type='formula' target='#formula_41'>21</ns0:ref>) can be rewritten as below cost function which forces representation to be larger when θ ij and vice versa.</ns0:p><ns0:formula xml:id='formula_42'>S ij = 1 J = - n ∑ i,j = 1 [S ij θ ij -log (1 + θ ij )] (22)</ns0:formula><ns0:p>So, here cost function forces to be larger when and vice versa.</ns0:p><ns0:formula xml:id='formula_43'>θ ij S ij = 1</ns0:formula><ns0:p>The second part and third part of the equation measures the similarities with image representation and text representations.</ns0:p><ns0:formula xml:id='formula_44'>φ ij = U T * i U * j</ns0:formula><ns0:p>is image representation, for instance, i and j whereas</ns0:p><ns0:formula xml:id='formula_45'>φ ij ϕ ij = V T * i V * j</ns0:formula><ns0:p>is text representation, for instance, i and j. 𝜙 ij</ns0:p><ns0:p>[c] Calculate the modality wise invariance loss</ns0:p><ns0:formula xml:id='formula_46'>J 3 = 1 n ‖U -V‖ F<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>The final objective function is, J = J 1 + λJ 2 + η J 3 (24) The final objective function of IDMR in Eq. ( <ns0:ref type='formula'>24</ns0:ref>) can be optimized during the stochastic gradient descent algorithm. The and are hyper parameters. The , , and are the loss functions, λ η J 1 J 2 J 3 used to preserve inter-modal similarity between image and text modality. The proposed framework has used the sigmoid activation function, which is a nonlinear function used to learn complex structures in the data. However, sometimes it suffers from vanishing gradient descent, which prevents deep networks to learn from learning effectively. The problem of vanishing gradient can be solved by using another activation function, like rectified linear activation unit(ReLU).</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>To evaluate the effectiveness of the proposed framework, we have performed experiments on well-known datasets MSCOCO <ns0:ref type='bibr' target='#b8'>(Lin et al., 2015b)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>studies. The MSCOCO dataset has total of 3,28,000 images, which is divided into 91 categories and each image is associated with at least 5 captions. The MSCOCO dataset consists of daily scene images and their descriptions. The training set consists of 15,000 images and the query set consists of 4000 images. The Flickr8k dataset contains 8000 images and each paired with 5 different captions. The training set consists of 6000 images and the testing set consists of 1000 images. The XMedia dataset has text, image, video, and audio modality, which has a total of 20 different categories and each category has 600 media instances. The training set consists of 8000 images and the testing set consists of 1500 images. We perform experiments on GPU architecture, NVIDIA cuda cores-3840, the memory size of 12 GB GDDR5X, 32 GB RAM, 2TB hard disk, and Intel Core i7 8 th generation. We have considered mean Average Precision (mAP) as a statistical measure, which is used to measure the performance of the CMR system <ns0:ref type='bibr' target='#b15'>(Peng et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b25'>Yanagi et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Following is the analysis after performing experiments. </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The work presented in the paper has proposed a framework called 'Improvement of Deep Cross-Modal Retrieval (IDCMR)', which is restricted to image and text modality. The generated heterogeneity gap is bridged by generating a common sub-space. The nature of the common subspace is real-valued, which preserves similarities between different modalities. The uniqueness PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>of our proposed framework is that we consider both the inter-modal and intra-modal similarities between various modalities. The proposed framework outperforms state-of-the-art methods in text image and image text retrieval tasks on Multi-Modal datasets. However, there exist many → → types of noise and redundancies in Multi-Modal data, which need to be resolved to improve the performance of the CMR system. Here the proposed framework is restricted to image and text modality, which can be extended to other modalities.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>20) PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54155:1:1:NEW 24 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>1. The biggest challenge in text modality is to preserve semantic similarity between different words. Here experiments are carried out between different vectorization methods like BoW, CBOW, SG, and Glove. The syntactic and semantic pairs are selected from each Multi-Modal dataset like MSCOCO, Flickr8k, and XMedia. The cosine similarity is used to calculate the similarity between different pairs of words. Figure4shows the performance of various vectorization methods on Multi-Modal datasets. The experiment shows that Glove outperforms other vectorization methods. The Glove achieves better performance as it can preserve the similarity of words by considering the entire corpus. Due to better performance, Glove is selected as a vectorization method in the proposed framework.2. The objective function of IDMR generates real-valued representation of image and text modality in the common sub-space, which preserves inter-modal and intra-modal similarity. The performance of IDCMR is compared with state-of-the-art CMR methods. Source codes of DCMH, DSCMR, SePH, SCM, DBM, and DAE are provided by the corresponding authors. Figure 5, Figure 6, Figure 7, and Figure 8 show the performance of IDCMR on MSCOCO and XMedia dataset for image text and text image retrieval, → → respectively. The experiment shows that IDCMR outperforms over state-of-the-art methods in both image text and text image retrieval. The advantage of IDCMR over → → other state-of-the-art methods is that the objective function of IDCMR preserves both inter-modal similarity and intra-modal similarity.</ns0:figDesc></ns0:figure>
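For reference, the cosine-similarity scoring of word pairs described in this figure can be expressed as the sketch below; the pair list and the vectorizer interface are illustrative assumptions rather than the exact experimental script.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def score_pairs(model, pairs):
        # `model` is any vectorizer exposing word vectors via model[word];
        # `pairs` is a list of (word, word) tuples such as [("cat", "kitten")].
        return [cosine(model[w1], model[w2]) for w1, w2 in pairs]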
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,327.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,322.50' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Charotar University of Science And Technology,
139, CHARUSAT Campus, Highway, Off, Nadiad - Petlad Rd,
Changa, Gujarat 388421, INDIA,
nikitabhatt.ce@charusat.ac.in
February 13th, 2021
Dear Editors,
We thank the editor and reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns.
In particular, we have made all the changes suggested by the reviewers regarding the justification of the common subspace, the reference for real-valued representation, the splits of the data, the experiments, and the disadvantages of the proposed method. In addition, we have corrected all the grammatical mistakes thoroughly, and the figures have been corrected and converted into an appropriate format for the journal.
We believe that the manuscript is now suitable for publication in PeerJ.
Nikita Bhatt,
Assistant Professor, CE, CSPIT, CHARUSAT.
On behalf of all authors.
1.
It is not clear how the common subspace (Line 264) for image and text modalities is generated. The explanation and discussion/ mathematical analysis to that end is necessary.
The common representation for each image is generated by fully connected layers. The number shown for the last fully connected layer (fc8) indicates the number of neurons, i.e., the dimensionality of the output layer. Similarly, the Glove model for the text modality is pretrained on Google News and represents each word as a feature vector. The text matrix is given to fully connected layers to learn the common representation for text. To learn a common representation from the image and text modalities, the two sub-networks share the weights of their last layers, which generates the same representation for semantically similar image and text inputs.
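A minimal sketch of this weight-sharing design is given below, written with the Keras functional API; the input sizes (4096-d image features, 300-d text features), the hidden widths, and the activation choices are illustrative assumptions rather than the exact IDCMR configuration.

from tensorflow.keras import layers, Model

# Two modality-specific inputs: pre-extracted image features and text vectors.
img_in = layers.Input(shape=(4096,), name="image_features")
txt_in = layers.Input(shape=(300,), name="text_features")

# Modality-specific fully connected branches.
img_h = layers.Dense(1024, activation="relu")(img_in)
txt_h = layers.Dense(1024, activation="relu")(txt_in)

# The last layer is a single shared object applied to both branches, so the
# same weights map both modalities into one common sub-space.
shared_fc = layers.Dense(256, activation="sigmoid", name="common_subspace")
img_code = shared_fc(img_h)
txt_code = shared_fc(txt_h)

model = Model(inputs=[img_in, txt_in], outputs=[img_code, txt_code])
model.summary()

Because shared_fc is one layer instance applied to both branches, semantically similar image and text inputs are pushed toward nearby points in the same 256-d common sub-space.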
Manuscript has been modified and suggestions are incorporated.
2. The sigmoid activation (line 272) typically demonstrates exploding gradient behavior. How is that handled? Can any other activation be more useful instead?
The proposed framework uses the sigmoid activation function, which is a nonlinear function used to learn complex structures in the data. However, it sometimes suffers from vanishing gradients, which prevent deep networks from learning effectively. The vanishing gradient problem can be mitigated by using another activation function, such as the rectified linear unit (ReLU).
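A small numerical illustration of this point (plain NumPy; the input values are arbitrary):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: s(x) * (1 - s(x)), which is close to zero for large |x|.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of ReLU: 1 for x > 0, else 0.
    return (x > 0).astype(float)

x = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
print(sigmoid_grad(x))  # near 0 for large |x|, so gradients shrink when layers are stacked
print(relu_grad(x))     # stays 1 for all positive inputs, so the gradient does not vanish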
Manuscript has been modified and suggestions are incorporated.
3. On line 218: ‘In this work, real-valued coordinated representation is generated, which preserves intra-modal and inter-modal semantic similarity’. Please provide a reference or solid reasoning for this.
The inter-modal similarity is preserved by minimizing (i) the discrimination loss in the label space, J1, (ii) the discrimination loss in the text and image representations, J2, and (iii) the modality-invariant loss, J3, in the common sub-space. Further, the intra-modal similarity is preserved by selecting an appropriate training model for each modality. The biggest challenge in the text modality is to preserve the similarity between different words. There are many vectorization methods available that map words into vectors; however, the selection of an appropriate method is a challenging task. Here experiments are performed using various vectorization methods on Multi-Modal datasets, and the selection is made based on the performance of the model that best preserves intra-modal similarity.
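Schematically, and only as an illustration (the additive weighting with coefficients λ1, λ2, λ3 and the joint parameter vector θ are assumptions here, not the exact objective given in the manuscript), the three terms are minimized jointly over the parameters θ of the two sub-networks:

\min_{\theta}\; J(\theta) = \lambda_{1}\, J_{1}(\theta) + \lambda_{2}\, J_{2}(\theta) + \lambda_{3}\, J_{3}(\theta)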
Manuscript has been modified and suggestions are incorporated.
4. The title states about real-valued representation, however, it is not so clear how it is achieved in the experimentation.
Manuscript has been modified and suggestions are incorporated.
5. How is the data split carried out?
The MSCOCO dataset has a total of 328,000 images divided into 91 categories, and each image is associated with at least 5 captions. The MSCOCO dataset consists of daily scene images and their descriptions. The training set consists of 15,000 images and the query set consists of 4,000 images. The Flickr8k dataset contains 8,000 images, each paired with 5 different captions. The training set consists of 6,000 images and the testing set consists of 1,000 images. The XMedia dataset has text, image, video, and audio modalities, with a total of 20 different categories, and each category has 600 media instances. The training set consists of 8,000 images and the testing set consists of 1,500 images.
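For quick reference, the split sizes stated above can be summarised in a small configuration dictionary; this is a hypothetical helper for illustration only, with the counts copied from the answer above.

# Image counts per split, as stated in the response above.
DATASET_SPLITS = {
    "MSCOCO":   {"train": 15000, "query": 4000},
    "Flickr8k": {"train": 6000,  "test": 1000},
    "XMedia":   {"train": 8000,  "test": 1500},
}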
Manuscript has been modified and suggestions are incorporated.
6. The uploaded code has files for DCMR; however, the results show multiple other models? Have the authors experimented and obtained the results reported in Fig 6, 7, 8? If not and these baseline results are taken directly from some standard papers, it is necessary to provide the references.
Yes. The source codes of the state-of-the-art methods are provided by the corresponding authors.
7. On lines 113-115 Authors state “Traditionally, words are mapped into vectors using bag-of-words (BoW) but fail to preserve semantic similarity between words.”
The goal of Natural Language Processing (NLP) is to design algorithms that allow computers to understand natural language. The process of converting a word into a vector is called vectorization, which represents each word in a vector space. The most common local representation method is called bag-of-words (BoW), where each word is represented as a vector with all 0s and a single 1 at the index of the word in the vocabulary. However, the generated matrix is sparse in nature, which is inefficient for computation, and the similarity between different words is not preserved, as the inner product between two different one-hot vectors is zero.
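A minimal sketch of this point follows (plain NumPy, with a toy three-word vocabulary and made-up dense vectors assumed purely for illustration):

import numpy as np

# Toy vocabulary; each word gets a one-hot vector of length |vocabulary|.
vocab = ["car", "automobile", "flower"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# The inner product between two different one-hot vectors is always zero,
# so BoW cannot express that "car" and "automobile" are related.
print(np.dot(one_hot["car"], one_hot["automobile"]))   # 0.0

# Dense embeddings (e.g., Glove) place related words close together;
# the 2-d vectors below are invented only to illustrate the contrast.
emb = {"car":        np.array([0.80, 0.10]),
       "automobile": np.array([0.75, 0.20]),
       "flower":     np.array([-0.60, 0.70])}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["car"], emb["automobile"]))  # close to 1: similarity preserved
print(cosine(emb["car"], emb["flower"]))      # much lower: dissimilar words stay apart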
Manuscript has been modified and suggestions are incorporated.
8. Are there any disadvantages of the proposed method?
There exist many types of noise and redundancies in Multi-Modal data, which need to be resolved to improve the performance of the CMR system. Here the proposed framework is restricted to the image and text modalities; it can be extended to other modalities.
9. Conclusion is weak and I suggest the authors elaborate more to bring out the contribution and novelty of the work.
Conclusion has been modified based on the given suggestions.
" | Here is a paper. Please give your review comments after reading it. |
69 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Deep neural networks have been widely explored and utilised as a useful tool for feature extraction in computer vision and machine learning. It is often observed that the last fully connected (FC) layers of convolutional neural network possess higher discrimination power as compared to the convolutional and maxpooling layers whose goal is to preserve local and low-level information of the input image and down sample it to avoid overfitting. Inspired from the functionality of local binary pattern (LBP) operator, this paper proposes to induce discrimination into the mid layers of convolutional neural network by introducing a discriminatively boosted alternative to pooling (DBAP) layer that has shown to serve as a favourable replacement of early maxpooling layer in a convolutional neural network (CNN). A thorough research of the related works show that the proposed change in the neural architecture is novel and has not been proposed before to bring enhanced discrimination and feature visualisation power achieved from the mid layer features. The empirical results reveal that the introduction of DBAP layer in popular neural architectures such as AlexNet and LeNet produces competitive classification results in comparison to their baseline models as well as other ultra-deep models on several benchmark data sets. In addition, better visualisation of intermediate features can allow one to seek understanding and interpretation of black box behaviour of convolutional neural networks, used widely by the research community.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Deep neural networks have been widely explored and utilised as a useful tool for feature extraction in computer vision and machine learning. It is often observed that the last fully connected (FC) layers of convolutional neural network possess higher discrimination power as compared to the convolutional and maxpooling layers whose goal is to preserve local and low-level information of the input image and down sample it to avoid overfitting.</ns0:p><ns0:p>Inspired from the functionality of local binary pattern (LBP) operator, this paper proposes to induce discrimination into the mid layers of convolutional neural network by introducing a discriminatively boosted alternative to pooling (DBAP) layer that has shown to serve as a favourable replacement of early maxpooling layer in a convolutional neural network <ns0:ref type='bibr'>(CNN)</ns0:ref>.</ns0:p><ns0:p>A thorough research of the related works show that the proposed change in the neural architecture is novel and has not been proposed before to bring enhanced discrimination and feature visualisation power achieved from the mid layer features. The empirical results reveal that the introduction of DBAP layer in popular neural architectures such as AlexNet and LeNet produces competitive classification results in comparison to their baseline models as well as other ultra-deep models on several benchmark data sets. In addition, better visualisation of intermediate features can allow one to seek understanding and interpretation of black box behaviour of convolutional neural networks, used widely by the research community.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Deep learning architectures such as convolutional neural networks, recurrent neural networks and deep belief networks have been applied to a wide range of applications in domains such as natural language processing, speech recognition, computer vision, and bioinformatics, where they have produced outstanding results approximately the same and in some scenarios better than the humans <ns0:ref type='bibr' target='#b17'>(He et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b54'>Silver et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b35'>LeCun et al., 1990;</ns0:ref><ns0:ref type='bibr' target='#b58'>Szegedy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b13'>Girshick et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b19'>Hinton et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b64'>Yu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b69'>Zhang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b42'>Masumoto et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Le and Nguyen, 2019;</ns0:ref><ns0:ref type='bibr' target='#b30'>Le, 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Do et al., 2020)</ns0:ref>. Among these deep models, convolutional neural network (CNN) is the most popular choice for automatically learning visually discriminative features memorised by the fully connected layers. The interest of researchers in CNN triggered when <ns0:ref type='bibr' target='#b27'>Krizhevsky et al. (Krizhevsky et al., 2012)</ns0:ref> showed record beating performance on ImageNet 2012 object classification data set with their CNN (AlexNet), achieving an error rate of 16.4% in comparison to 26.1% error shown by the runner up. Ever since then, various variants of deep convolutional models such as Visual Geometry Group (VGG)-VD (Very Deep) model <ns0:ref type='bibr' target='#b55'>(Simonyan and Zisserman, 2014)</ns0:ref>, GoogLeNet/Inception <ns0:ref type='bibr' target='#b58'>(Szegedy et al., 2015)</ns0:ref> and ResNet <ns0:ref type='bibr' target='#b18'>(He et al., 2016)</ns0:ref> have been introduced, increasing the depth of the models from 8 layers in AlexNet to 152 layers in ResNet. These models have not just progressed in depth but also their intricacy of connectivity, type of activation function and the training algorithm that prevents the diminishing gradient issue observed during training through back propagation in ultra deep models.</ns0:p><ns0:p>Keeping in account the success of deep neural models, many researchers have treated CNN as a black box feature extractor where end-to-end learning framework is utilised to draw discriminative features from the last fully connected (FC) layers. The last fully connected layers are successfully utilised to extract global image descriptors as they possess rich high level semantic information that can effectively PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science distinguish the object of interest from the background <ns0:ref type='bibr' target='#b53'>(Sharif Razavian et al., 2014)</ns0:ref>. 
In contrast, the intermediate layers of CNN are popular for extracting spatial and local characteristics of images which are important to extract expressive regions of objects, yet cannot serve very well as a global image descriptor.</ns0:p><ns0:p>To get improved performance on different classification tasks, most of the researchers have focused on increasing the depth of the convolutional neural model and varied the network's training strategy and activation functions <ns0:ref type='bibr' target='#b4'>(Bengio et al., 2013a;</ns0:ref><ns0:ref type='bibr' target='#b32'>LeCun et al., 2015)</ns0:ref>. We observe that designing and training such an ultra-deep architecture for global feature extraction is: (i) Expensive in terms of computation, <ns0:ref type='bibr'>(ii)</ns0:ref> results in model size large in terms of disk space utilisation and memory usage, (iii) prone to overfitting when the data set size is limited, and (iv) requires a large amount of labelled training data when fine tuning the model for new application domains. On account of these challenges, we take an introspective approach to understand the functionality and behaviour of the intermediate layers, in particular the convolutional and pooling layers of CNN, and propose a novel technique to improve their representational power along with increasing the discrimination performance of the model without going deeper with additional hidden layers. Visualisation of features allows one to functionally understand and interpret the behaviour of deep model's internal layers in connection to its progressive depth and output <ns0:ref type='bibr' target='#b48'>(Raghu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b47'>Poole et al., 2016)</ns0:ref>. Visualising the representational capacity of deep models has been a topic of recent interest to express general priors about the world that can help one identify the stimuli causing a certain output and ultimately design learning machines that can solve different AI tasks <ns0:ref type='bibr' target='#b5'>(Bengio et al., 2013b;</ns0:ref><ns0:ref type='bibr' target='#b67'>Zeiler and Fergus, 2013;</ns0:ref><ns0:ref type='bibr' target='#b43'>Mordvintsev et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b48'>Raghu et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b52'>Santos and Abel, 2019)</ns0:ref>. The topic has grabbed interest of the research community so much so that dedicated workshops in leading conferences like NIPS and CVPR are arranged to discuss the works under this theme. We have therefore laid our focus on developing a technique that can transform features from model's intermediate layers into a visually powerful tool for introspective analysis, as well as act as discriminative off the shelf feature extractor for image classification with simple and sophisticated machine learning classifiers. 
Our empirical results reveal that with the proposed technique, intermediate layers close to the input layer could also be made more competent for feature visualisation and discrimination tasks.</ns0:p><ns0:p>The main contributions of this work are outlined as follows: (1) Improving the classification performance of classical CNN architectures: LeNet and AlexNet on benchmark data sets without increasing their depth (hidden layers), (2) Improving the visualisation power of features learned by the intermediate layers of CNN, (3) Introducing discriminatively boosted alternative to pooling (DBAP) layer in the CNN architectures, that can serve independently as an efficient feature extractor for classification when used with classifiers such as k-nearest neigbour (k−NN) and support vector machines (SVM). The pretrained CNN with DBAP layer offers features that could be deployed in resource constrained environments where ultra-deep models could not be stored, retrieved and trained.</ns0:p><ns0:p>The remaining paper is structured as follows: Section 2 discusses the related research work carried out in the area of computer vision. Section 3 provides preliminary information required to understand the details of proposed methodology discussed in Section 4. Section 5 discusses the benchmark data sets, implementation details and evaluates the results of conducted experiments. We conclude this work in Section 6 with a discussion on the future work intended to further improve and extend this research in future. There is also a supplementary section (Section 7) that holds additional results to provide in depth analysis of the proposed change in convolutional neural models.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>There has been a recent surge of interest in understanding and visualising the intermediate layers of deep models for interpretability and explainability, leading to the development of more stable and reliable machine learning systems <ns0:ref type='bibr' target='#b68'>(Zeiler and Fergus, 2014;</ns0:ref><ns0:ref type='bibr' target='#b49'>Ren et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bau et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Hazard et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Gagne et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Hohman et al., 2018)</ns0:ref>. The visualisation techniques allow the researchers and practitioners understand what features are being learned by the deep model at each stage. Visualisation diagnostics may also serve as an important debugging tool to improve a model's performance, make comparisons and select optimal model parameters for the task at hand. This often requires monitoring the model during the training phase, identifying misclassified examples and then testing the model on a handful of well-known data instances to observe performance. Generally, the following parameters of deep model are visualised either during or after the training phase: (1) Weights on the neural connections <ns0:ref type='bibr' target='#b56'>(Smilkov et al., 2017)</ns0:ref>, (2) convolutional filters <ns0:ref type='bibr' target='#b68'>(Zeiler and Fergus, 2014;</ns0:ref><ns0:ref type='bibr' target='#b62'>Yosinski et al., 2015)</ns0:ref> (3) neuron activations in response to a single or group of instances <ns0:ref type='bibr' target='#b14'>(Goodfellow et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b62'>Yosinski et al., 2015)</ns0:ref>, Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>(4) gradients for the measurement and distribution of train error (D. <ns0:ref type='bibr' target='#b8'>Cashman and Chang, 2017)</ns0:ref>, and</ns0:p><ns0:p>(5) model metrics such as loss and accuracy computed at each epoch. This work focuses on improving the visualisation power of deep neural models in addition to enhancing their discrimination ability as a classifier and feature extractor.</ns0:p><ns0:p>The fully connected (FC) layers of deep convolutional neural network have often been utilised to extract features due to their higher discriminative ability and semantic representation of image concepts that makes them a powerful global descriptor <ns0:ref type='bibr' target='#b55'>(Simonyan and Zisserman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>He et al., 2016)</ns0:ref>. The FC features have demonstrated their advantage over VLAD (Vector of Locally Aggregated Descriptors) and Fisher vector descriptors and are known to be invariant to illumination and rotation to some extent, however they lack the description of local patterns captured by the convolutional layers. To address this limitation, some researchers have proposed to utilise the intermediate layers of deep models to improve their performance on various tasks <ns0:ref type='bibr' target='#b7'>(Cimpoi et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b2'>Babenko and Lempitsky, 2015;</ns0:ref><ns0:ref type='bibr' target='#b41'>Liu et al., 2017b;</ns0:ref><ns0:ref type='bibr' target='#b65'>Yue-Hei Ng et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b39'>Liu et al., 2015)</ns0:ref>. For instance, Ng et al. <ns0:ref type='bibr' target='#b45'>(Ng et al., 2015)</ns0:ref> aggregated convolutional layer activations using vector of locally aggregated descriptors (VLAD) and achieved competitive performance on image retrieval task. Tolias et al. <ns0:ref type='bibr' target='#b59'>(Tolias et al., 2015)</ns0:ref> max pooled the activations of the last convolutional layer to represent each image patch and achieved compelling performance for object retrieval. Lie et al. <ns0:ref type='bibr' target='#b40'>(Liu et al., 2017a)</ns0:ref> built a powerful image representation using activations from two consecutive convolutional layers to recognise images. <ns0:ref type='bibr' target='#b28'>Kumar et al. (Kumar et al., 2009</ns0:ref><ns0:ref type='bibr' target='#b66'>, 2012)</ns0:ref> introduced the use of Volterra theory for the first time to learn discriminative convolution filters (DCF) from the pixel features on gray-level images.</ns0:p><ns0:p>In addition to the convolutional layers, researchers have also explored the use of various types of pooling functions from simple ones such as max, average, and stochastic pooling to complex ones, like spatial pyramid pooling network (SPP-Net), which allows the convolutional neural model to take images of variable scales using spatial pyramid aggregation scheme <ns0:ref type='bibr' target='#b16'>(He et al., 2014)</ns0:ref>. The pooling layers have traditionally been utilised in CNN to avoid overfitting by reducing the size of the detected features by a factor of two. However, the fact that they lose spatial information and keep no track of the relationship between the features extracted by the convolutional layers, makes them less appealing and strongly criticised by front end researchers like Geoffrey Hinton. 
In order to avoid the limitations of pooling operations, it is suggested to use dynamic routing (routing-by-agreement) scheme, in replacement of the max-pooling operation and name this newly proposed model as Capsule Network <ns0:ref type='bibr' target='#b51'>(Sabour et al., 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b57'>Springenberg et al. (Springenberg et al., 2014</ns0:ref>) also proposed to discard the pooling layer in favour of architecture that only consists of repeated convolutional layers. In order to reduce the size of the representation, he suggested using larger stride in convolutional layer once in a while. Discarding pooling layers has also been found important in training good generative models, such as variational autoencoders (VAEs) or generative adversarial networks (GANs) <ns0:ref type='bibr' target='#b63'>(Yu et al., 2017)</ns0:ref>. From these moves, it seems likely that the future architectures will feature very few to no pooling layers. Keeping in view these recent trends of research to improve deep models as classifiers, we hereby take inspiration from the characteristics of local binary pattern (LBP) operator, known widely for its simplicity and discriminative power to improve the representational power of CNN's intermediate layers and utilise it for gaining better discrimination performance on image classification task. Similar work has been carried out by Xu et al. <ns0:ref type='bibr' target='#b23'>(Juefei et al., 2017)</ns0:ref>, who proposed an efficient non-linear approximation of convolutional layers in the convolutional neural network. Their proposed model namely local binary convolutional neural networks (LBCNN) <ns0:ref type='bibr' target='#b23'>(Juefei et al., 2017)</ns0:ref> utilises a hybrid combination of fixed sparse and learnable weights and local binary patterns (LBP). In contrast, this work deploys dense weights and resides on regularisation techniques like dropout and batch normalisation to avoid overfitting issues.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PRELIMINARIES</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Local Binary Patterns</ns0:head><ns0:p>Local binary pattern (LBP) is a non-parametric approach that extracts local features of images by comparing the intensity of each center pixel in a patch with adjacent pixels in its defined neighbourhood <ns0:ref type='bibr' target='#b46'>(Ojala et al., 1994)</ns0:ref>. If the neighbours have intensity greater than the center pixel, they are assigned the value of 1, otherwise 0. LBP has traditionally worked well with window patches of size 3 × 3, 5 × 5 and 7 × 7, etc, scanned through the image in an overlapping fashion. This bit string is read sequentially in a specified order and is mapped to a decimal number (using base 2) as the feature value assigned to the central pixel. <ns0:ref type='table' target='#tab_11'>2020:10:54285:1:2:CHECK 19 Mar 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>efficacy of the proposed approach on benchmark data sets. LeNet is the pioneering neural network proposed by Yann LeCun consisting of 7 layers (5 hidden), and is known to work very well for recognising digits and zip codes <ns0:ref type='bibr' target='#b37'>(LeCun et al., 1998)</ns0:ref>. AlexNet, named after Alex Krizhevsky <ns0:ref type='bibr' target='#b27'>(Krizhevsky et al., 2012)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>METHODOLOGY</ns0:head><ns0:p>Algorithm 1 Discriminatively Boosted Alternative to Pooling (DBAP) Layer in CNN. for Each Image do 3:</ns0:p><ns0:formula xml:id='formula_0'>Input: Input Image, X ( j) ={x ( j) i } d i=1 ;</ns0:formula><ns0:p>Mean normalise the incoming image pixels X ( j) and store them in X ( j) norm .</ns0:p></ns0:div>
<ns0:div><ns0:head>4:</ns0:head><ns0:p>Compute the convolutional features from normalised image X ( j) norm by convolving kernel K.</ns0:p></ns0:div>
<ns0:div><ns0:head>5:</ns0:head><ns0:p>Apply activation function on convolved features to map them in non-linear space.</ns0:p></ns0:div>
<ns0:div><ns0:head>6:</ns0:head><ns0:p>Forward propagate the non-linear result of activation function to DBAP layer.</ns0:p></ns0:div>
<ns0:div><ns0:head>7:</ns0:head><ns0:p>Partition the received image into overlapping blocks of equal size using the stride, S and filter size, F.</ns0:p></ns0:div>
<ns0:div><ns0:head>8:</ns0:head><ns0:p>Compute the LBP for each block using formula:</ns0:p><ns0:formula xml:id='formula_1'>9: LBP R,P = ∑ P−1 p=0 s(g p − g c ).</ns0:formula><ns0:p>2 p , where s(g p − g c ) = 1 if g p ≥ g c , 0 otherwise. % Here g p and g c denote the gray values of the central pixel and its neighbours. Continue forward pass and perform backpropagation to learn model parameters.</ns0:p><ns0:p>12: end for 13: end while In order to enhance the discrimination power and representation capability of intermediate layers in CNN, we reformulate its architecture by introducing a discriminatively boosted alternative to pooling (DBAP) layer embedded at early stage of feature learning. Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> demonstrates how LeNet and AlexNet models stack convolutional and pooling layers to learn local spatial features. We first preprocess each input image by performing standardization approach. The goal of standardization is to bring all the features at the same scale so that each feature is treated equally important and none dominates the other during features learning. Each image pixel x ( j) i is standardized by computing the mean, µ i and standard deviation, σ i of each feature i in an image j by utilising the following formula:</ns0:p><ns0:formula xml:id='formula_2'>x ( j) i = x ( j) i − µ i σ i (1)</ns0:formula><ns0:p>Standardizing input data is a common approach used in neural networks and machine learning in general, to learn parameters, optimise and converge the models faster <ns0:ref type='bibr' target='#b60'>(Xiang and Li, 2017)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>EXPERIMENTS AND RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.1'>Data Sets Used</ns0:head><ns0:p>We have evaluated the efficacy of the proposed approach on different benchmark data sets with baseline convolutional neural networks and their other very deep counterparts such as GoogleNet <ns0:ref type='bibr' target='#b58'>(Szegedy et al., 2015)</ns0:ref>, LBCNN <ns0:ref type='bibr' target='#b23'>(Juefei et al., 2017)</ns0:ref> and MobileNet <ns0:ref type='bibr'>(Howard et al., 2017)</ns0:ref>. There are four standard data sets used in this paper: MNIST, SVHN, FASHION-MNIST and CIFAR-10. These are benchmark computer vision data sets that are well understood and highly used by the researchers to provide basis for any improvement in the proposed learning algorithm or neural architecture. Their popularity has won them a The Modified National Institute of Standards and Technology (MNIST) data set <ns0:ref type='bibr' target='#b33'>(LeCun et al., 1989)</ns0:ref> consists of 60,000 training and 10,000 test images of hand written digits with a resolution of 28 × 28 pixels. The database contains grayscale images of digits 0 to 9. Despite the success of deep models with large scale data sets, MNIST enjoys the title of most widely used test bed in deep learning, surpassing CIFAR 10 ( <ns0:ref type='bibr' target='#b25'>Krizhevsky and Hinton, 2009)</ns0:ref> and ImageNet <ns0:ref type='bibr' target='#b9'>(Deng et al., 2009)</ns0:ref> in its popularity via Google trends 2 . We have therefore selected this data set to benchmark the results of our proposed approach with state of the art comparative methods.</ns0:p><ns0:p>The FASHION-MNIST (F-MNIST) data set <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.2'>Tools Used and Computational Requirements of the Proposed Model</ns0:head><ns0:p>The proposed neural model with DBAP layer was trained on Google Colab's (col, 2019) Tesla K-80 graphics processing unit (GPU) using Keras <ns0:ref type='bibr' target='#b6'>(Chollet et al., 2015)</ns0:ref> and TensorFlow deep learning frameworks implemented in Python. Colab is a cloud based service that allows researchers to develop deep learning applications with free GPU support. The system used had Intel(R) Xeon(R) 2.3GHz processor with two cores and 16GB of RAM. To achieve results in optimal time, it is recommended to run the deep learning framework on premium GPU cards with at least 8GB of RAM. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Evaluation Metrics Used for Monitoring Classification Performance</ns0:head><ns0:p>The evaluation metrics used to monitor the quality of classification framework are accuracy, precision, recall, F1-score, and, area under the curve (AUC). These are standard model evaluation metrics used in research to carry out investigation and perform analysis <ns0:ref type='bibr' target='#b30'>(Le, 2019;</ns0:ref><ns0:ref type='bibr' target='#b10'>Do et al., 2020)</ns0:ref>. Accuracy is not regarded as a good measure of judging model's performance when the class distribution is imbalanced, i.e. when the number of samples between two or more classes vary significantly. Such imbalance can affect the traditional classifiers as well as the deep models, commonly resulting in poor performances over the minority classes. Since, class instances of all the data sets used in this work are not balanced (in specific SVHN), we have demonstrated precision, recall, F-1 score, and receiver operating characteristics (in addition to accuracy to judge the performance of the proposed features and classifiers.) .</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4'>Visual Diagnostics Used to Evaluate Feature Information Quality</ns0:head><ns0:p>In order to understand how the input image is transformed by each intermediate layer of CNN, the activations of neurons in pooling layer and DBAP layer are visualised. The feature maps are visualised in three dimensions: Width, height and depth (channels). Since each channel encodes independent information, one appropriate way to visualise these features is to plot 2D images of each channel separately. Given our existing knowledge of deep neural models, the initial layers act as edge detectors and retain most of the information of the input image. As we go higher, the activations become increasingly abstract and less interpretable visually. The sparsity of activations increases with the depth of the layer, i.e. more and more filters would go blank and the pattern encoded in the image could not be seen. We thus expect that the activation filters of DBAP layer should be more interpretable and semantically meaningful given the input image, model is observing.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5'>Implementation Details for Model Training</ns0:head><ns0:p>In this section, we discuss how the choice of different hyper-parameters such as kernel's filter size, batch size, learning rate, epochs and optimisation algorithm is made to train the CNN models for each specific In order to assess if the model is overfitting with the chosen set of parameters or hyper-parameters, the performance is compared on train and validation sets in Figure <ns0:ref type='figure' target='#fig_10'>3 and 4</ns0:ref>. If the model behaves very well on the train set but fails to classify examples from the validation set by a huge margin, it means that it is overfitting and shall not perform well on unseen test examples. Some of the ways in which model overfitting could be avoided are: Cross-validation, usage of more train data, early stopping, regularisation and removal of features. We have regularised the models which were overfitting with the help of the validation set.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5.1'>Impact of Learning Rate and Epochs on Model Training</ns0:head><ns0:p>The training of CNN depends largely on the learning rate and number of epochs used to learn the parameters. The learning rate hyperparameter controls the speed at which the model learns. For small learning rate, large number of epochs are required to train the model, whereas for large learning rate, small number of epochs are needed to navigate in the parameter space of the neural model. A learning rate that is too large can cause the model to converge too quick to a sub-optimal solution, whereas a learning rate that is too small can cause the learning process to become very slow. Therefore, it is advised to choose a value that is neither too large nor too small. Its value typically ranges between 0 and 1. We have configured the best value for learning rate using grid search method. Grid search involves picking values approximately on a logarithmic scale within the set range:{10 −4 , 10 −3 , 10 −2 , 10 −1 , 10 0 }, and observes the validation loss while keeping the value of epochs fixed. We confined the value of epochs to 50 and observed the impact of changing learning rate on the validation set. Figures <ns0:ref type='figure' target='#fig_10'>3 and 4</ns0:ref> demonstrate the accuracy of LeNet and AlexNet models, when the learning rate was fixed at 0.01 and the model was run for 50 epochs. Since the validation error is lowest when η = 0.01, and the gap between the train and validation error is not significantly large, the model does not tend to overfit and 0.01 turns out to be the most suitable value for learning rate.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5.2'>Impact of Batch Size on Model Training</ns0:head><ns0:p>Batch size is also an important hyperparameter that impacts a model's performance. Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> shows the best batch size for each data set when learning rate and epochs are fixed at 0.01 and 50 respectively using the AlexNet architecture. A similar comparison was also performed for LeNet architecture and best batch sizes for MNIST, Fashion-MNIST, SVHN and CIFAR-10 were chosen as 128, 128, 128 and 256 respectively. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5.3'>Impact of Optimisers</ns0:head><ns0:p>In order to update the parameters of convolutional neural network, different popular optimisers such as stochastic gradient decent (SGD), adam <ns0:ref type='bibr' target='#b24'>(Kingma and Ba, 2014)</ns0:ref> and ADADELTA <ns0:ref type='bibr' target='#b66'>(Zeiler, 2012)</ns0:ref>, were tested and evaluated on the validated set. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5.4'>Impact of LBP Filter Size on CNN 323</ns0:head><ns0:p>We have also assessed different kernel sizes used in DBAP layer to capture local features of images that 324 add to the discriminative ability of neural models. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows that 3 × 3 window gives best accuracy 325 on the validation set in comparison to larger size filters on all the data sets. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.6'>Model Testing</ns0:head></ns0:div>
<ns0:div><ns0:head>327</ns0:head><ns0:p>After fine tuning the neural models with optimal parameters and hyperparameters, we next compute the (d) Architecture of AlexNet with DBAP Layer.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.6.1'>Analysis of CNN Model with DBAP Layer as a Classifier</ns0:head><ns0:p>When deploying CNN as a classifier, the test data is passed to the trained CNN model with DBAP layer, whose last layer consisting of softmax units is utilised for object categorisation. The discrimination performance of the model is assessed with the help of following evaluation metrics: Accuracy, precision, recall, F1-score, and area under the curve (AUC), discussed in Section 5.3 and 7. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> shows improvement in the discrimination performance yielded by the proposed approach in comparison to the baseline AlexNet and LeNet architectures on four different benchmark data sets. We have also compared our results with local binary convolutional neural network (LBCNN) that offers to provide an alternative to standard convolutional layers in the convolutional neural network <ns0:ref type='bibr' target='#b23'>(Juefei et al., 2017)</ns0:ref>, GoogleNet (also known as Inception V1) <ns0:ref type='bibr' target='#b58'>(Szegedy et al., 2015)</ns0:ref> and MobileNet <ns0:ref type='bibr'>(Howard et al., 2017)</ns0:ref>. GoogleNet is a In Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>, one may observe that unlike other data sets, the classification results of DBAP features on CIFAR-10 data set are a lot worse in comparison to LBCNN <ns0:ref type='bibr' target='#b23'>(Juefei et al., 2017)</ns0:ref>. This is because the images in CIFAR-10 possess natural objects with rich textures as compared to the hand written digit images present in other data sets. For this reason, LBCNN works exceptionally better on CIFAR-10 in comparison to AlexNet with DBAP features. Also LBCNN replaces all convolutional layers of AlexNet with LBP inspired layers which is popular for extracting discriminative texture descriptors, whereas our proposed model only replaces the first MaxPooling layer with LBP inspired feature detectors, hence the performance gap is higher in contrast. Similar impact in performance could also be observed in area under the curve graphs shown in the supplementary section.</ns0:p><ns0:p>We have conducted experiments to compare the discrimination power of LBP operator with DBAP features in Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.6.2'>Analysis of CNN Model with DBAP Layer as a Feature Extractor</ns0:head><ns0:p>In order to assess the discrimination power of features learned by DBAP layer, we have also checked their accuracy with simple off the shelf classifiers like k-nearest neighbour (k-NN) and support vector machines (SVM). We selected pre-trained CNN models with and without DBAP layer to extract features for image classification task. The results shown in Tables <ns0:ref type='table' target='#tab_5'>5, 6</ns0:ref>, 7 and 8 demonstrate that DBAP layer can serve as a competitive feature extractor in comparison to the intermediate layer features such as MaxPooling layer of AlexNet and LeNet. For SVM classifier, the optimal value of parameter C is searched via grid-search method on the validation set and shown against each data set in the tables. Similarly, for k-nearest neighbour (k-NN), the optimal value of k is searched using the validation set and then used for the test data in each benchmark data set. The empirical results reveal that DBAP features could be used as readily available features from a pre-trained model for applications where quick retrieval and classification results are required.</ns0:p><ns0:p>We have also assessed the impact of DBAP layer on FC layer features. The fully connected (FC) layers are known to retain better discrimination power for classification tasks, however with the inclusion of DBAP layer, their ability to classify objects is further improved as can be seen in the last two columns of Tables <ns0:ref type='table' target='#tab_5'>5, 6</ns0:ref>, 7 and 8 .</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.6.3'>Statistical Significance of Models</ns0:head><ns0:p>We have also applied hypothesis testing to estimate the statistical significance of the proposed models.</ns0:p><ns0:p>Statistical tests help us identify the behaviour of models if the test set changes. Since our data sets are standardised, we assume a normal distribution of features and have applied McNemar's test or 5 × 2 cross-validation with a modified paired Student t-test. The null hypothesis assumes that the two samples came from the same distribution. In contrast, the alternative hypothesis assumes that the samples came from two different distributions and hence there is a difference between the tested models or classifiers.</ns0:p><ns0:p>With 0.05 level of confidence/significance, the p values attained for LeNet with DBAP layer and AlexNet with DBAP layer models are 0.007 and 0.011 respectively. In both the cases, p < 0.05, shows the samples generated from the proposed architectures are statistically different from the ones without DBAP layer.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.7'>Visualisation of Filters</ns0:head><ns0:p>We have also visualised the mid-level features learned by DBAP layer and compared them with the features learned by max-pooling layers used in classical CNN architectures. Figures <ns0:ref type='figure' target='#fig_13'>5 and 6</ns0:ref> demonstrate the improvement in visual representation of intermediate features learned by the two CNN architectures in comparison to their baseline counterparts with maxpooling layer. One can observe that DBAP layer learns semantically better features from the input images as compared to the maxpooling layer used in classical LeNet and AlexNet architectures. As we go higher in the model hierarchy, the filters become more abstract and sparsity of the activations increases, i.e. the filters become more blank and the pattern encoded by the image is not showcased by the filter (Franc ¸ois, 2017).</ns0:p><ns0:p>Improving the visualisation strength of neural models can help us explore and understand the black box learning behaviour of deep models. Better visualisation can serve as a great diagnostic tool <ns0:ref type='bibr' target='#b38'>(Liu et al., 2019)</ns0:ref> for observing the evolution of features during model training and diagnose potential problems with the model via online/offline feature representations. This facilitates the researchers to fix their training practices and find models that can outperform an existing successful deep model. For example, the deconvolutional technique proposed for visualising the hidden layer features suggested an architectural change of smaller convolutional filters that lead to state of the art performance on the ImageNet benchmark in 2013 <ns0:ref type='bibr' target='#b68'>(Zeiler and Fergus, 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.8'>Proposed Model's Complexity</ns0:head><ns0:p>We next compare the count of trainable parameters in LeNet and AlexNet containing DBAP layers with their baseline counter parts in Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>. The total number of CNN parameters are the sum of all its weights and biases connecting the convolutional, input, output and fully connected layers. The pooling layers in the architecture do not contribute to the count of model parameters as they contain hyper-parameters such as pool size, stride, and padding which do not need to be learned during the training phase. The number of model parameters before the advent of DBAP layer remain fixed. However, when we replace the first pooling layer with DBAP layer, the output tensor of Layer 2 is not down sampled as it does in regular LeNet and AlexNet architectures, rather the tensor scale remains the same as its input (i.e. 26 × 26 × 6 for LeNet and 14 × 14 × 96 for AlexNet). This impacts the size of the kernel in the following Manuscript to be reviewed Computer Science convolutional layer, and the effect is carried out forward to the next maxpooling and fully connected layers.</ns0:p><ns0:p>Overall, there is an increase of 380.33% in LeNet parameters and an increase of 14.57% in AlexNet model parameters with the inclusion of DBAP layer.</ns0:p><ns0:p>Keeping in view the size of model parameters, the proposed model is not well suited for resource constrained environments, where storage and computation of large number of parameters becomes a bottleneck. However, it offers two fold advantage in comparison to the state of the art models: (1) Effective intermediate feature visualisation power and (2) competitive discrimination performance as a feature extractor and classifier. Models such as LBCNN <ns0:ref type='bibr' target='#b23'>(Juefei et al., 2017)</ns0:ref> propose to use a compact neural model whose convolutional layers are all replaced by LBP operator. This move reduces the number of learnable parameters massively to around 0.352million, thus making it very suitable for resource constrained environments.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION & FUTURE WORK</ns0:head><ns0:p>In this paper, we propose to induce discrimination into the intermediate layers of the convolutional neural network by introducing a novel local binary pattern layer that can serve as a replacement of the first standard maxpooling layer used at early stage of feature learning in the convolutional neural network.</ns0:p><ns0:p>The empirical results on benchmark data sets as well as the visual feature maps of intermediate layers demonstrate the strength of the proposed idea to learn more discriminative features without building ultra deep models. Our experiments reveal that the proposed approach can strengthen the discriminative power of mid-level features as well as high level features learned by fully connected (FC) layers of convolutional In future, we aim to improve the training complexity of the proposed approach by reducing the number of learnable parameters for model training. In this regard, we shall explore sparsity in the neural connections to adopt suitable regularisation technique for fast model learning.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>SUPPLEMENTARY</ns0:head><ns0:p>The supplementary section shows some additional results to support reproducible research and make the main text more readable and understandable. We have shown precision, recall and F1-score of LeNet and Alexnet models along with their improved counterparts in Tables <ns0:ref type='table' target='#tab_12'>10 and 11</ns0:ref>. These evaluation metrics in combination with the accuracy show how good the proposed models are in comparison to their baseline models.</ns0:p><ns0:p>One can also observe the area under the curve (AUC) for the developed classifiers in Figures <ns0:ref type='figure' target='#fig_20'>7, 8</ns0:ref>, 9 and 10. AUC ranges between 0 and 1. Higher the AUC, better the model is at predicting classes correctly as positive and negative, significantly above the random chance. AUC is good at catching the performance of models when the class distribution is skewed. We observe that with the addition of DBAP layer in CNN architecture, AUC in ROC either increases or remains the same as shown in few cases. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>These aggregate feature values represent the local texture in the image. The parameters and configurations 3/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021) Manuscript to be reviewed Computer Science of LBP could be tweaked by customising the window size, base, pivot (pixel treated as physical center of the patch) and ordering (clockwise/ anticlockwise encoding).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Classical CNN architectures: LeNet (LeCun et al., 1998) and AlexNet (Krizhevsky et al., 2012) used to outperform the state of the art image classification results on MNIST and ImageNet data sets.</ns0:figDesc><ns0:graphic coords='5,168.52,100.20,360.00,202.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Graphical abstract of DBAP layer embedded in classical convolutional neural network models for boosting discrimination performance and feature visualisation power.</ns0:figDesc><ns0:graphic coords='5,151.61,371.03,393.80,157.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>3. 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Convolutional Neural Networks (CNN) Convolutional neural network (CNN) is a multi-layered feed forward artificial neural network consisting of neurons in different layers to detect high level features from visual patterns automatically. Unlike the traditional feature extraction approaches where the features are hand engineered, CNN draws the features automatically by retaining their temporal and spatial information. The classical architecture of CNN consists of the following layers: (a) Input layer, (b) Convolutional layer, (c) Pooling layer, (d) Fully Connected/Dense layer and (e) Output layer. Except for the input and output layers, the remaining layers change their order and count giving rise to various types of neural architectures.Ever since the successful exhibit of CNN for large scale image classification and retrieval<ns0:ref type='bibr' target='#b27'>(Krizhevsky et al., 2012)</ns0:ref>, various architectures of CNN have been proposed that alter the hidden layers' order, count, types of activation functions and learning algorithm to improve the model's discrimination performance and retrieval speed. We have chosen two popular architectures: LeNet and AlexNet to showcase the 4/19 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>, is a groundbreaking CNN consisting of five convolutional and three fully connected layers showing outstanding performance on large scale image recognition data set. The two architectures are demonstrated in Figure 1. The gradient of CNN's cost function is computed through backpropagation algorithm and the model parameters are updated through stochastic gradient descent (SGD) learning algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>10:Concatenate all the feature blocks represented by DBAP layer and forward pass the learned features in vectorised form to the next layer in CNN.11:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>. After doing standardization, the d dimensional features are passed to the convolutional layer to capture the local features of the image. This result is next passed to the activation function to map the learned features in a non-linear space. Conventionally, the CNN architecture forward propagates the result of activation functions to a pooling layer that uses 2 × 2 filter window to down sample the features detected in non-linear space. The proposed framework replaces the first pooling layer of CNN with an alternative layer named as discriminatively boosted alternative to pooling (DBAP) layer. See Figure 2 for illustration of the proposed changes in the CNN architecture. The DBAP layer takes its inspiration from local binary pattern (LBP) operator that acts as a powerful descriptor to summarise the characteristics of local structures in an image. The layer processes the features received from the previous layer by following the steps 5/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021)Manuscript to be reviewed Computer Science outlined in Algorithm 1. A 3 × 3 window with replicated boundary pixel padding is deployed to capture the local features of the image. Each pixel in the image is treated as a pivot (center pixel) to replace its intensity in connection to the intensity of pixels in its surrounding defined by the filter window. For each image patch, the neighbouring pixel values acquire the value 1 if their magnitude is equivalent or greater than the magnitude of the centre pixel. The magnitude is taken as 0 otherwise. For the example demonstrated in Figure2, the resulting LBP value for the center pixel is 11000111, equivalent to 227 in decimal number system. We move the filter one stride forward to compute LBP feature for each pixel in the image. For the given filter size, the DBAP layer computes 8-bit binary values for all the image pixels and converts them into their decimal equivalent. These values are totally based on the properties of the pixels in relationship to their neighbours. Our proposed DBAP layer is non-parametric and extracts morediscriminative and visually powerful features as compared to the maxpooling layer used in benchmark CNN architectures. After processing the data through DBAP layer, it is forward propagated to the next layers in each architecture (LeNet and AlexNet) and treated in a conventional manner. In LeNet, this information passes on to the following layers in sequence: Convolutional, Pooling, Fully Connected, Fully Connected, and Fully Connected layers, whereas in AlexNet, the flow of information after DBAP takes the following route in sequence: Convolutional, Pooling, Convolutional, Convolutional, Convolutional, Pooling, Fully Connected, Fully Connected, Fully Connected layers. We discuss the implementation details regarding CNN model's training and testing in Section 5.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Train and test accuracy curves of LeNet with DBAP layer are demonstrated on state-of-the-art benchmark data sets. The softmax activation function is used to enable LeNet for classification task.</ns0:figDesc><ns0:graphic coords='7,141.73,291.89,425.40,271.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Train and test accuracy curves of AlexNet with DBAP layer are demonstrated on state-of-the-art benchmark data sets. The softmax activation function is used to enable AlexNet for classification task.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,425.40,271.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>photometric distortions making the recognition problem a challenge just as in a general-purpose object recognition or natural scene understanding system.The CIFAR-10 data set<ns0:ref type='bibr' target='#b26'>(Krizhevsky et al., 2014)</ns0:ref> contains 60,000 color images from 10 different classes: Trucks, cats, cars, horses, airplanes, ships, dogs, birds, deer and frogs. The images have spatial dimension of 32 × 32 pixels. The data set consists of 5 training batches with each batch comprising of 10,000 train images. The test batch contains 10,000 images with 1000 randomly-selected images from each class.1 https://trends.google.com/trends/explore?date=all&q=mnist,%2Fg%2F11gfhw_78y,SVHN,%2Fg% 2F11hz37p042,Imagenet 2 https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Visualising the response of neurons in the MaxPool layer and DBAP layer present in baseline LeNet and LeNet with DBAP layer respectively. With 6 filters/kernels deployed in the first MaxPool layer of LeNet, one can observe that the visualisations of DBAP layer demonstrate more meaningful information about the input image as compared to the MaxPool layer.</ns0:figDesc><ns0:graphic coords='9,141.73,164.82,425.40,354.60' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Visualising the response of neurons in MaxPool layer and DBAP layer with baseline AlexNet and AlexNet with DBAP layer respectively. AlexNet uses 96 filters/kernels of size 3 × 3. in the first MaxPool layer and one can see that DBAP layer retains most of the input image's information as compared to the MaxPool layer..</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>data set on board. To decide on this, we first divide our data set into three different subsets: the train set, the validation set and the test set. For the selected benchmark data sets discussed in Section 5.1, the train and test set segregation already exists. The validation set is obtained by splitting the train data randomly in an 80:20 ratio, reserving 20% of the data points for validation and 80% of the train instances for training. When deciding the optimal values of epochs, learning rate, batch size, filter size and optimiser, 80% of these train instances are used to train both neural models and their performance is judged on the 20% validation examples. Once the optimal values of these parameters are decided, the entire train set is used to train both neural models and their performance is assessed on the available test sets. The training time of the proposed CNN models varies between 2.5 and 3 hours of wall-clock time when run on Google Colab.</ns0:figDesc></ns0:figure>
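A minimal sketch of the 80:20 train/validation split described above, assuming the data already sit in NumPy arrays named x_train and y_train (the names are ours, not the paper's):

from sklearn.model_selection import train_test_split

# Hold out 20% of the official training split as a validation set; the
# remaining 80% is used to fit the models while the hyper-parameters
# (epochs, learning rate, batch size, filter size, optimiser) are selected.
x_fit, x_val, y_fit, y_val = train_test_split(
    x_train, y_train, test_size=0.2, random_state=42)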
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>classification performance of the proposed model on unseen test examples of each standard data set. Table 6. Accuracy of SVM classifier on DBAP features derived from pre-trained AlexNet with DBAP layer. The classification results are better than the results obtained by MaxPool features derived from a regular AlexNet. The inclusion of the DBAP layer also yields better FC features from the model, giving better classification results in comparison to the FC features from the regular AlexNet model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>22-layer CNN inspired by LeNet, whereas MobileNet is an efficient CNN architecture with 17 layers streamlined for mobile applications. We observe that the classification performance of the proposed model with DBAP layer is competitive with the state-of-the-art results shown by ultra-deep convolutional neural models. The precision, recall and F1 scores of the proposed model further confirm its precision and discrimination power on unseen test examples.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>neural network. The experiments with a simple classifier such as k-NN and a popular industry classifier such as SVM suggest the use of the intermediate DBAP layer and its following fully connected layers in the deep learning pipeline for off-line feature extraction and classification tasks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. ROC Curve of k-NN, Softmax and SVM Classifiers on MNIST Data Set.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. ROC Curve of k-NN, Softmax and SVM Classifiers on Fashion MNIST Data Set.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. ROC Curve of k-NN, Softmax and SVM Classifiers on SVHN Data Set.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 1 Classical</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,133.23,4.25,425.40,271.80' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,199.12,525.00,210.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,219.37,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,219.37,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,219.37,525.00,437.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,219.37,525.00,437.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Filter Size, F; Stride, S; Number of Neighbours, P; Index of Neighbour, p; Kernel, K. Output: DBAP features, 1 × d</ns0:figDesc><ns0:table /><ns0:note>1: while not converge do 2:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Overall Accuracy of the Proposed System on the Validation Set Using Different Batch Sizes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data Sets</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>Batch Sizes 128</ns0:cell><ns0:cell>256</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell cols='3'>99.1 % 98.6 % 99.5 %</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>FASHION-MNIST 90.8 % 90.2 % 91.8 %</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell cols='3'>94.4 % 92.5 % 93.1 %</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell cols='3'>78.3 % 78.8 % 80.6 %</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Overall Accuracy of the Proposed System on the Validation Set Using Different Types of Optimisers for Training AlexNet.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data Sets</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell cols='2'>Optimisers Adam AdaDelta</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell cols='2'>98.1 % 98.6 %</ns0:cell><ns0:cell>99.5 %</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>FASHION-MNIST 91.5 % 90.6 %</ns0:cell><ns0:cell>90.1 %</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell cols='2'>93.2 % 92.1 %</ns0:cell><ns0:cell>91.7 %</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell cols='2'>78.2 % 78.1 %</ns0:cell><ns0:cell>77.6 %</ns0:cell></ns0:row></ns0:table><ns0:note>Table 2 highlights the accuracy of AlexNet with DBAP layer when different types of optimisers were used. We observe that for MNIST data set, Adadelta optimiser shows the best results, whereas for FASHION-MNIST, SVHN and CIFAR-10 data sets, SGD optimiser outperforms the remaining optimisation algorithms. A similar analysis was also performed for LeNet with DBAP layer and best optimisers were selected accordingly. 10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Classification Accuracy on the Validation Set of Four Benchmark Data Sets with Varying Filter Size in DBAP Layer of AlexNet.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data Sets</ns0:cell><ns0:cell>3 × 3</ns0:cell><ns0:cell>Window Size 5 × 5</ns0:cell><ns0:cell>7 × 7</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell cols='3'>99.5% 97.1% 96.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>F-MNIST</ns0:cell><ns0:cell cols='3'>91.8% 88.9% 88.2%</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell cols='3'>94.4% 93.2% 91.5%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>CIFAR-10 80.6% 74.5% 76.2%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification accuracy yielded by LeNet and AlexNet (in %) after incorporation of DBAP layer. The classifier used is softmax by both the models. One can observe that the results are better than those achieved by the baseline models and competitive to the discrimination results of other popular deep models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Data Sets Baseline</ns0:cell><ns0:cell>LeNet</ns0:cell><ns0:cell>Baseline</ns0:cell><ns0:cell>AlexNet</ns0:cell><ns0:cell cols='2'>LBP Features LBP Features</ns0:cell><ns0:cell>MobileNet</ns0:cell><ns0:cell>GoogLeNet</ns0:cell><ns0:cell>LBCNN</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LeNet</ns0:cell><ns0:cell cols='3'>with DBAP AlexNet with DBAP</ns0:cell><ns0:cell>with k-NN</ns0:cell><ns0:cell>with SVM</ns0:cell><ns0:cell cols='3'>(Howard et al., 2017) (Szegedy et al., 2015) (Juefei et al., 2017)</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>99.0 %</ns0:cell><ns0:cell>99.1%</ns0:cell><ns0:cell>99.2 %</ns0:cell><ns0:cell>99.5%</ns0:cell><ns0:cell>88.7%</ns0:cell><ns0:cell>83.7%</ns0:cell><ns0:cell>94.59 %</ns0:cell><ns0:cell>97.98%</ns0:cell><ns0:cell>99.51%</ns0:cell></ns0:row><ns0:row><ns0:cell>F-MNIST</ns0:cell><ns0:cell>89.8 %</ns0:cell><ns0:cell>91.0%</ns0:cell><ns0:cell>90.5 %</ns0:cell><ns0:cell>91.5%</ns0:cell><ns0:cell>78.3 %</ns0:cell><ns0:cell>73.5%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>93.5 %</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell>86.7 %</ns0:cell><ns0:cell>88.3%</ns0:cell><ns0:cell>87.3 %</ns0:cell><ns0:cell>94.4%</ns0:cell><ns0:cell>29.6%</ns0:cell><ns0:cell>25.9%</ns0:cell><ns0:cell>90.8%</ns0:cell><ns0:cell>92.3%</ns0:cell><ns0:cell>94.50%</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell>72.3 %</ns0:cell><ns0:cell>74.8%</ns0:cell><ns0:cell>73.7 %</ns0:cell><ns0:cell>80.6%</ns0:cell><ns0:cell>28.3%</ns0:cell><ns0:cell>27.6%</ns0:cell><ns0:cell>65.6 %</ns0:cell><ns0:cell>76.5%</ns0:cell><ns0:cell>92.99%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Accuracy of SVM classifier on DBAP features derived from pre-trained LeNet with DBAP layer. The DBAP features show better classification results than the MaxPool features in LeNet. The fully connected (FC) layers of LeNet with DBAP also tend to show better discrimination ability as compared to FC layer features extracted from regular LeNet on all benchmark data sets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Data Sets MaxPool Layer</ns0:cell><ns0:cell>DBAP Layer</ns0:cell><ns0:cell cols='2'>FC Layer from FC Layer from LeNet</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Layer 2)</ns0:cell><ns0:cell>(Layer 2)</ns0:cell><ns0:cell cols='2'>LeNet (Layer 7) with DBAP (Layer 7)</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>98.1% (C=100)</ns0:cell><ns0:cell>98.3% (C=100)</ns0:cell><ns0:cell>98.4% (C=100)</ns0:cell><ns0:cell>99.0% (C= 1)</ns0:cell></ns0:row><ns0:row><ns0:cell>F-MNIST</ns0:cell><ns0:cell>88.6% (C=100)</ns0:cell><ns0:cell>89.0% (C=100)</ns0:cell><ns0:cell>90.4% (C=100)</ns0:cell><ns0:cell>91.3% (C=100)</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell>81.2% (C= 10)</ns0:cell><ns0:cell>82.0% (C= 10)</ns0:cell><ns0:cell>83.9% (C= 10)</ns0:cell><ns0:cell>86.8% (C=100)</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell>52.1% (C= 10)</ns0:cell><ns0:cell>52.9% (C=100)</ns0:cell><ns0:cell>57.4% (C= 10)</ns0:cell><ns0:cell>65.3% (C= 10)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Accuracy of k-NN classifier on DBAP features derived from pre-trained LeNet with DBAP layer. The results achieved are better than the ones obtained by MaxPool layer in a regular LeNet. The inclusion of DBAP layer also improves the FC features for discrimination task.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Data Sets MaxPool Layer DBAP Layer</ns0:cell><ns0:cell>FC Layer</ns0:cell><ns0:cell>FC Layer from LeNet</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Layer 2)</ns0:cell><ns0:cell>(Layer 2)</ns0:cell><ns0:cell cols='2'>from LeNet (Layer7) with DBAP (Layer 7)</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>97.4% (k=2)</ns0:cell><ns0:cell>97.7% (k=2)</ns0:cell><ns0:cell>98.8% (k=2)</ns0:cell><ns0:cell>99.0% (k=2)</ns0:cell></ns0:row><ns0:row><ns0:cell>F-MNIST</ns0:cell><ns0:cell>77.6% (k=2)</ns0:cell><ns0:cell>78.2% (k=2)</ns0:cell><ns0:cell>84.6% (k=2)</ns0:cell><ns0:cell>91.2% (k=16)</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell>77.2% (k=32)</ns0:cell><ns0:cell>78.6% (k=32)</ns0:cell><ns0:cell>85.9% (k=2)</ns0:cell><ns0:cell>86.6% (k=8)</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell>56.0% (k=2)</ns0:cell><ns0:cell>59.7% (k=9)</ns0:cell><ns0:cell>63.8% (k=4)</ns0:cell><ns0:cell>65.0% (k=27)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Accuracy of k-NN classifier on DBAP features derived from pre-trained AlexNet with DBAP layer. The classification results achieved are better than those obtained by MaxPool features derived from regular AlexNet. The inclusion of DBAP layer also improves the discrimination quality of FC features in AlexNet with DBAP layer.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Data Sets MaxPool Layer DBAP Layer</ns0:cell><ns0:cell>FC Layer from</ns0:cell><ns0:cell>FC Layer from AlexNet</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Layer 2)</ns0:cell><ns0:cell>(Layer 2)</ns0:cell><ns0:cell>AlexNet (Layer 11)</ns0:cell><ns0:cell>with DBAP (Layer 11)</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>97.9% (k=2)</ns0:cell><ns0:cell>98.0% (k=2)</ns0:cell><ns0:cell>98.7% (k=2)</ns0:cell><ns0:cell>99.1%(k=4)</ns0:cell></ns0:row><ns0:row><ns0:cell>F-MNIST</ns0:cell><ns0:cell>83.1% (k=2)</ns0:cell><ns0:cell>87.2% (k=2)</ns0:cell><ns0:cell>88.6% (k=2)</ns0:cell><ns0:cell>89.5% (k=16)</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell>66.0% (k=2)</ns0:cell><ns0:cell>68.4% (k=2)</ns0:cell><ns0:cell>88.0% (k=2)</ns0:cell><ns0:cell>94.4% (k=16)</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell>52.7% (k=32)</ns0:cell><ns0:cell>53.6% (k=41)</ns0:cell><ns0:cell>68.9% (k=16)</ns0:cell><ns0:cell>83.7% (k=16)</ns0:cell></ns0:row></ns0:table><ns0:note>12/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54285:1:2:CHECK 19 Mar 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
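Tables 5 to 8 evaluate off-line classifiers on features taken from an intermediate layer. The sketch below shows one plausible way to set this up; the stand-in network, the layer name 'feat_layer' and the data arrays are our assumptions, not the authors' code.

import tensorflow as tf
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Stand-in network with a named intermediate layer playing the role of the
# DBAP/MaxPool layer whose activations are reused off-line.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, name='feat_layer'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
# ... the network would normally be trained here ...

extractor = tf.keras.Model(inputs=model.input,
                           outputs=model.get_layer('feat_layer').output)

def features(x):
    # Flatten the layer activations into one feature vector per image.
    return extractor.predict(x, verbose=0).reshape(len(x), -1)

# x_fit, y_fit, x_test, y_test are assumed to hold a prepared data set.
svm = SVC(C=100).fit(features(x_fit), y_fit.ravel())
knn = KNeighborsClassifier(n_neighbors=2).fit(features(x_fit), y_fit.ravel())
print('SVM accuracy :', svm.score(features(x_test), y_test.ravel()))
print('k-NN accuracy:', knn.score(features(x_test), y_test.ravel()))

The C and k values reported in the tables would be selected on the validation set in the same way as the other hyper-parameters.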
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison of the number of trainable parameters in regular LeNet, LeNet with DBAP layer, regular AlexNet and AlexNet with DBAP layer.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Layer Name Tensor Size Number of Parameters</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Input Image</ns0:cell><ns0:cell>28 × 28 × 1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Conv-1</ns0:cell><ns0:cell>26 × 26 × 6</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DBAP</ns0:cell><ns0:cell>26 × 26 × 6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Conv-2</ns0:cell><ns0:cell>24 × 24 × 16</ns0:cell><ns0:cell>880</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>MaxPool-2</ns0:cell><ns0:cell>12 × 12 × 16</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FC-1</ns0:cell><ns0:cell>120 × 1</ns0:cell><ns0:cell>276,600</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FC-2</ns0:cell><ns0:cell>84 × 1</ns0:cell><ns0:cell>10,164</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FC-3</ns0:cell><ns0:cell>10 × 1</ns0:cell><ns0:cell>850</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Output</ns0:cell><ns0:cell>10 × 1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell cols='2'>288,554 (∼ 0.28M)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>(b) Architecture of LeNet with</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>DBAP Layer.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Layer Name Tensor Size Number of Parameters</ns0:cell><ns0:cell>Layer Name</ns0:cell><ns0:cell>Tensor Size</ns0:cell><ns0:cell>Number of Parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>Input Image</ns0:cell><ns0:cell>28 × 28 × 1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>Input Image</ns0:cell><ns0:cell>28 × 28 × 1</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv-1</ns0:cell><ns0:cell>14 × 14 × 96</ns0:cell><ns0:cell>960</ns0:cell><ns0:cell>Conv-1</ns0:cell><ns0:cell>14 × 14 × 96</ns0:cell><ns0:cell>960</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPool-1</ns0:cell><ns0:cell>7 × 7 × 96</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>DBAP</ns0:cell><ns0:cell>14 × 14 × 96</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv-2</ns0:cell><ns0:cell>7 × 7 × 256</ns0:cell><ns0:cell>614,656</ns0:cell><ns0:cell>Conv-2</ns0:cell><ns0:cell>14 × 14 × 256</ns0:cell><ns0:cell>614,656</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPool-2</ns0:cell><ns0:cell>3 × 3 × 256</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>MaxPool-2</ns0:cell><ns0:cell>6 × 6 × 256</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv-3</ns0:cell><ns0:cell>3 × 3 × 384</ns0:cell><ns0:cell>885,120</ns0:cell><ns0:cell>Conv-3</ns0:cell><ns0:cell>6 × 6 × 384</ns0:cell><ns0:cell>885,120</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv-4</ns0:cell><ns0:cell>3 × 3 × 384</ns0:cell><ns0:cell>1,327,488</ns0:cell><ns0:cell>Conv-4</ns0:cell><ns0:cell>6 × 6 × 384</ns0:cell><ns0:cell>1,327,488</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv-5</ns0:cell><ns0:cell>3 × 3 × 
256</ns0:cell><ns0:cell>884,992</ns0:cell><ns0:cell>Conv-5</ns0:cell><ns0:cell>6 × 6 × 256</ns0:cell><ns0:cell>884,992</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxPool-3</ns0:cell><ns0:cell>1 × 1 × 256</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>MaxPool-3</ns0:cell><ns0:cell>2 × 2 × 256</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>FC-1</ns0:cell><ns0:cell>4096 × 1</ns0:cell><ns0:cell>1,052,672</ns0:cell><ns0:cell>FC-1</ns0:cell><ns0:cell>4096 × 1</ns0:cell><ns0:cell>41,98,400</ns0:cell></ns0:row><ns0:row><ns0:cell>FC-2</ns0:cell><ns0:cell>4096 × 1</ns0:cell><ns0:cell>16,781,312</ns0:cell><ns0:cell>FC-2</ns0:cell><ns0:cell>4096 × 1</ns0:cell><ns0:cell>16,781,312</ns0:cell></ns0:row><ns0:row><ns0:cell>FC-3</ns0:cell><ns0:cell>10 × 1</ns0:cell><ns0:cell>40,970</ns0:cell><ns0:cell>FC-3</ns0:cell><ns0:cell>10 × 1</ns0:cell><ns0:cell>40,970</ns0:cell></ns0:row><ns0:row><ns0:cell>Output</ns0:cell><ns0:cell>10 × 1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>Output</ns0:cell><ns0:cell>10 × 1</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell /><ns0:cell>21,588,170 (∼ 21M)</ns0:cell><ns0:cell>Total</ns0:cell><ns0:cell /><ns0:cell>24,733,898 (∼ 24M)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(c) AlexNet Architecture</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
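The parameter counts in the architecture tables above follow the standard formulas for convolutional and fully connected layers; the small helper below reproduces a few of the listed entries (our code, included only as a check).

def conv_params(k, in_ch, out_ch):
    # (kernel height * kernel width * input channels + 1 bias) per filter
    return (k * k * in_ch + 1) * out_ch

def fc_params(in_features, out_features):
    # one weight per input feature plus one bias, per output neuron
    return (in_features + 1) * out_features

print(conv_params(3, 1, 6))    # LeNet-with-DBAP Conv-1: 60
print(conv_params(3, 6, 16))   # LeNet-with-DBAP Conv-2: 880
print(fc_params(120, 84))      # LeNet FC-2: 10,164
print(fc_params(84, 10))       # LeNet FC-3: 850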
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The classifiers used for the purpose are k-NN and SVM. One can observe that LBP operator on its own does not yield as good classification results as the DBAP layer introduced in LeNet and AlexNet architectures. The open source code developed for these experiments is available at :https://github.com/shakeel0232/DBAP-CNN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>13/19</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Precision, Recall and F1-Score of LeNet and LeNet with DBAP Layer Using Softmax Classifier.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>LeNet</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>LeNet-with-DBAP</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Data Sets</ns0:cell><ns0:cell cols='6'>Precision Recall F1-Score Precision Recall F1-Score</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION-MNIST</ns0:cell><ns0:cell>90 %</ns0:cell><ns0:cell>90 %</ns0:cell><ns0:cell>90 %</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell>86 %</ns0:cell><ns0:cell>85 %</ns0:cell><ns0:cell>86 %</ns0:cell><ns0:cell>88 %</ns0:cell><ns0:cell>87 %</ns0:cell><ns0:cell>87 %</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell>72 %</ns0:cell><ns0:cell>72 %</ns0:cell><ns0:cell>72 %</ns0:cell><ns0:cell>74 %</ns0:cell><ns0:cell>75 %</ns0:cell><ns0:cell>74 %</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Precision, Recall and F1-Score of AlexNet and AlexNet with DBAP Layer Using Softmax Classifier.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>AlexNet</ns0:cell><ns0:cell /><ns0:cell cols='2'>AlexNet-with-DBAP</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Data Sets</ns0:cell><ns0:cell cols='6'>Precision Recall F1-Score Precision Recall F1-Score</ns0:cell></ns0:row><ns0:row><ns0:cell>MNIST</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>100 %</ns0:cell><ns0:cell>99 %</ns0:cell><ns0:cell>100 %</ns0:cell></ns0:row><ns0:row><ns0:cell>FASHION-MNIST</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell><ns0:cell>91 %</ns0:cell></ns0:row><ns0:row><ns0:cell>SVHN</ns0:cell><ns0:cell>90 %</ns0:cell><ns0:cell>90 %</ns0:cell><ns0:cell>90 %</ns0:cell><ns0:cell>94 %</ns0:cell><ns0:cell>94 %</ns0:cell><ns0:cell>94 %</ns0:cell></ns0:row><ns0:row><ns0:cell>CIFAR-10</ns0:cell><ns0:cell>71 %</ns0:cell><ns0:cell>72 %</ns0:cell><ns0:cell>71 %</ns0:cell><ns0:cell>77 %</ns0:cell><ns0:cell>78 %</ns0:cell><ns0:cell>77 %</ns0:cell></ns0:row></ns0:table></ns0:figure>
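The per-class precision, recall and F1 scores reported in Tables 10 and 11 can be obtained from the softmax predictions with scikit-learn; the snippet below is a hedged sketch in which model, x_test and y_test are assumed to come from the training pipeline.

import numpy as np
from sklearn.metrics import classification_report

# Predicted class = arg max of the softmax output for each test image.
y_pred = np.argmax(model.predict(x_test, verbose=0), axis=1)
print(classification_report(y_test.ravel(), y_pred, digits=2))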
</ns0:body>
" | "We are highly grateful to the respected reviewers and editor for giving time and an in-depth analysis to the submitted manuscript. The comments and observations have made us think more deeply and further add the missing details and justifications. We have provided more validation results and details in the manuscript based on the weakness highlighted by the reviewers. We believe that a few details have been not properly conveyed and have now been addressed accordingly to further clarify the rationale of the proposed research technique. We hope that the manuscript is now more clear, concise and invites food for thought to researchers in the interesting field of computer vision.
S. No.
Comments/Suggestion/Recommendation
Action for Incorporation
Page No. and Section
Reviewer 1
1.
The graphical representation needs to be improved in Figure 1.
The arrow sizes are different, and some are not straight. Does arrow length carry any information? If not, they should all be of the same size.
Traditionally, the neural network layers are depicted from left to right, such as the schematic shown in Figure 2. Therefore, it is recommended to demonstrate the two CNN architectures in Figure 1 from left to right.
The graphical representation in Figure 1 has now been improved and the architecture reflects the flow of information from left to right, as suggested. The arrow length does not carry any specific information, so all arrows have been given the same length for consistency.
Figure 1, Page 4
2.
The English language should be improved in Lines 117-132 to ensure your points are clearly described in a scientific language
The paragraph in lines 117-132 has been proofread and corrected according to the norms of scientific writing and English language.
Page 3, Paragraph 2
3
Line 177: The terms normalization and standardization are sometimes used interchangeably, but they usually refer to different things. Normalization usually means to scale a variable to have a value between 0 and 1, while standardization transforms data to have a mean of zero and a standard deviation of 1 (z-score). Hence, it is recommended to use “standardization” instead of “normalization”.
As suggested by the reviewer, the change has been made at line 177 and in the subsequent lines that describe the technique.
Section 4, Page 5
4
I recommend reporting some of the tables presented in the main manuscript to supplementary data so that the key points are highlighted, and additional results are available for an interested reader. One key missing analysis on the performance evaluation is the significance test in comparison with previous architectures such as LeNet and AlexNet in Table 5-10. Reporting p-value will provide additional information about the effectiveness and significance of the proposed approach.
We have moved Tables 5 and 6 to the supplementary section. Tables 5 and 6 showcase the precision, recall and F1 score of the improved LeNet and AlexNet architectures against state-of-the-art models using the softmax classifier. For readers interested in more details, ROC curves of selected data sets are also provided to demonstrate the significance of the proposed approach.
Reviewer 2
1
English language should be improved.
We have proofread the paper again and have corrected all the grammatical errors.
2
It is necessary to provide more literature review related to the datasets that they used.
There are four standard data sets used in this paper: MNIST, SVHN, FASHION-MNIST and CIFAR-10.
These are benchmark computer vision data sets that are well understood and widely used by researchers to provide the basis for any improvement in a proposed learning algorithm or neural architecture. Their popularity has won them a regular place in many deep learning frameworks such as Keras, TensorFlow and Torch. Consequently, their off-the-shelf use is constantly on the rise, exceeding that of the PASCAL VOC and ImageNet data sets to date (see https://trends.google.com/trends/explore?date=all&q=mnist,%2Fg%2F11gfhw_78y,SVHN,%2Fg%2F11hz37p042,Imagenet).
There is hardly any deep learning architecture or algorithm that has not used these data sets as a test bed to showcase its strength. We have discussed some of the comparable CNN variants such as MobileNet, GoogLeNet and LBCNN in the literature review and experiments section, where their performance on these benchmark data sets is compared with our proposed neural architectures.
Section 5.1, page 6, lines 214-222
Table 4
3
Deep learning (i.e., CNN) has been used in previous works in a variety of fields such as https://doi.org/10.7717/peerj-cs.177 and https://doi.org/10.1016/j.neucom.2019.09.070. Therefore, it is suggested to provide more references in this description.
The research pointed out by the respected reviewer shows the use of CNN for classification purposes in the field of bioinformatics. When mentioning the scope of CNN for various fields in the introduction section, we have now cited several papers including these two publications.
Section 1, Line 30
Section 5.6.1.
4
How did the authors perform hyperparameter tuning in the models? It should be described clearly. Did the cross-validation technique apply?
The following hyperparameters of LeNet and AlexNet were fine-tuned using the validation set: batch size, optimisation algorithm, learning rate and kernel size. The validation set is a subset chosen randomly from the train set in an 80:20 ratio, reserving 20% of the data points for validation and 80% of the train instances for training. When deciding the optimal values of epochs, learning rate, batch size, filter size and optimiser, 80% of these train instances are used to train both neural models and their performance is judged on the 20% validation examples. Once the optimal values of these parameters are decided, the entire train set is used to train both neural models and their performance is assessed on the available test sets of each standard data set. A minimal sketch of this selection loop is given after this entry.
Section 5.5, Lines 274-282
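The sketch referred to in the preceding response follows; build_model is a placeholder for a function returning a freshly compiled LeNet/AlexNet-style network (assumed to be compiled with an accuracy metric), and the grid values mirror Tables 1 and 2.

# Grid search over batch size and optimiser on the held-out validation set.
best = None
for batch_size in (64, 128, 256):
    for optimiser in ('sgd', 'adam', 'adadelta'):
        model = build_model(optimizer=optimiser)   # placeholder factory
        model.fit(x_fit, y_fit, epochs=20, batch_size=batch_size, verbose=0)
        _, val_acc = model.evaluate(x_val, y_val, verbose=0)
        if best is None or val_acc > best[0]:
            best = (val_acc, batch_size, optimiser)
print('best validation accuracy %.3f (batch=%d, optimiser=%s)' % best)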
5
Hardware specification to train model should be described.
The proposed neural model with the DBAP layer was trained on Google Colab's Tesla K80 graphics processing unit (GPU) using the Keras and TensorFlow deep learning frameworks implemented in Python. Colab is a cloud-based service that allows researchers to develop deep learning applications with free GPU support. The system used had an Intel(R) Xeon(R) 2.3GHz processor with two cores and 16GB of RAM. To achieve results in optimal time, it is recommended to run the deep learning framework on premium GPU cards with at least 8GB of RAM.
Section 5.2,
Page 7
6
The authors should release source codes for replicating the results.
The source code is available at: https://github.com/shakeel0232/DBAP-CNN
Section 5.6.1, lines 336
7
There is a need to refer to some works related to measurement metrics (i.e., recall, precision, accuracy, ...) of deep learning models such as PMID: 31362508 and PMID: 32613242.
We have used the following evaluation metrics for comparing the proposed approach with state-of-the-art techniques: accuracy, precision, recall, F1-score and area under the curve. The comparison of accuracy is shown in Tables 4, 5, 6, 7 and 8, whereas the precision, recall and area under the curve are shown and discussed in the supplementary section. These papers have also been cited in Section 1 and Section 5.3.
Section1, Section 5.3.
8
In addition to the aforementioned metrics, the authors should report ROC curves and AUC values for prediction models.
We have calculated ROC and AUC for all the datasets and the graphs are shown in the supplementary section.
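A hedged sketch of how the one-vs-rest ROC curves and AUC values could be computed with scikit-learn; scores is assumed to hold the per-class softmax probabilities on the test set and y_test the true labels (both names are ours).

import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

# scores: per-class softmax probabilities on the test set (assumed).
classes = np.arange(10)
y_true = label_binarize(y_test.ravel(), classes=classes)

# One-vs-rest ROC curve and AUC for each class, plus a micro-average.
for c in classes:
    fpr, tpr, _ = roc_curve(y_true[:, c], scores[:, c])
    print('class %d: AUC = %.3f' % (c, auc(fpr, tpr)))
fpr, tpr, _ = roc_curve(y_true.ravel(), scores.ravel())
print('micro-average AUC = %.3f' % auc(fpr, tpr))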
9
The authors should have some validation data on the models.
The validation set was used throughout the experiments to optimise the models’ parameters and hyperparameters.
Section 5.5, Tables 1, 2 and 3.
10
It looks like some models contained a lot of overfitting after few epochs (i.e., Fig. 3-4). How did the authors address this critical point?
In order to assess whether the model is overfitting, a performance comparison on the train and validation sets is shown in Figures 3 and 4. If the model performs very well on the train set but much worse on the validation set, it is overfitting and will not perform well on unseen test examples. Some of the ways in which model overfitting can be avoided are cross-validation, using more training data, early stopping, regularisation and feature removal. We have regularised the models that were overfitting with the help of the validation set; an early-stopping sketch is included after this entry.
Section 5.5, page 9 and page 10
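A minimal early-stopping sketch for the regularisation step mentioned in the preceding response; model, x_fit, y_fit, x_val and y_val are assumed from the training pipeline and the patience value is illustrative.

import tensorflow as tf

# Stop training when the validation loss has not improved for 5 epochs and
# roll back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=5,
                                              restore_best_weights=True)
history = model.fit(x_fit, y_fit,
                    validation_data=(x_val, y_val),
                    epochs=100, batch_size=256,
                    callbacks=[early_stop], verbose=0)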
11
Training time should be reported.
The training time of the proposed CNN models varies between 2.5 and 3 hours of wall-clock time when run on Google Colab.
Section 5.5, paragraph 1, page 9.
12
The authors should perform some statistical tests to compare the performance results among different models.
The authors should compare the predictive performance with previous works that focused on the same datasets.
We have also applied hypothesis testing to estimate the statistical significance of the proposed models. Statistical tests help us identify how the models would behave if the test set changed. Since our data sets are standardised, we assume a normal distribution of features and have applied McNemar's test or 5x2 cross-validation with a modified paired Student t-test. The null hypothesis assumes that the two samples came from the same distribution, whereas the alternative hypothesis assumes that the samples came from two different distributions and hence there is a difference between the tested models or classifiers. At a significance level of 0.05, the p-values attained for the LeNet with DBAP layer and AlexNet with DBAP layer models are 0.007 and 0.011, respectively. In both cases p < 0.05, which shows that the samples generated from the proposed architectures are statistically different from the ones without the DBAP layer. A minimal sketch of the McNemar computation is given after this entry.
Page 14, Section 5.6.3
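The sketch referred to in the preceding response follows. It implements McNemar's test with a plain chi-squared statistic and continuity correction; the toy predictions at the bottom are made up purely to show the call and are not results from the paper.

import numpy as np
from scipy.stats import chi2

def mcnemar_test(y_true, pred_a, pred_b):
    # b: cases model A classifies correctly and model B does not; c: the
    # opposite. The statistic uses the standard continuity correction.
    a_ok = (pred_a == y_true)
    b_ok = (pred_b == y_true)
    b = int(np.sum(a_ok & ~b_ok))
    c = int(np.sum(~a_ok & b_ok))
    stat = (abs(b - c) - 1) ** 2 / float(b + c)
    return stat, chi2.sf(stat, df=1)

# Toy usage with synthetic labels and predictions (illustrative only).
rng = np.random.default_rng(0)
y = rng.integers(0, 10, 1000)
pred_a = np.where(rng.random(1000) < 0.95, y, rng.integers(0, 10, 1000))
pred_b = np.where(rng.random(1000) < 0.90, y, rng.integers(0, 10, 1000))
print(mcnemar_test(y, pred_a, pred_b))

A p-value below 0.05, as reported above, would reject the null hypothesis that the two models make the same kinds of errors.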
" | Here is a paper. Please give your review comments after reading it. |
70 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Simultaneous Escape Routing (SER) is escaping of circuit pins simultaneously from inside two or more pin arrays. This is comparatively difficult as compared to routing in a single array and has not been addressed by previous studies. The increase in pin array complexity has made the manual SER in PCBs a very inefficient and tedious task and there surely is need for the automated routing algorithms. In this work we propose network flow based optimal algorithm that uses integer linear program to solve SER problem and area routing problem in two stages. In the first stage, pins are escaped to the boundaries of pin arrays simultaneously. These escaped pins are connected with each other in the second stage. The proposed algorithm is tested for different benchmark sizes of grids and the results show that it is not only better in terms of routability but also outperforms existing state of the art algorithms in terms of time consumption. The existing algorithms either fails to achieve higher routability or have larger time complexities. Whereas the proposed algorithm achieves 99.9% routability and it is also independent of grid topology and component pin arrangement which shows the superiority of proposed algorithm over the existing algorithms.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Printed circuit boards (PCB) support and connect the electrical components and provide conduction between them. Design of PCB is one of the most important things in electrical circuit and plays a vital role in performance of the circuit. There are multiple Integrated circuits (ICs) which are placed on the PCB and ever evolving manufacturing techniques have increased the complexity of these ICs and PCBs.</ns0:p><ns0:p>The enhanced manufacturing techniques have not only reduced the size of the package but, have also exponentially increased the number of pins in an IC. The footprints of ICs have shrunk in size while the number of pins have increased. Around 2000 pins are present in the modern day high end packages <ns0:ref type='bibr' target='#b47'>Yan et al. (2012)</ns0:ref> with very small routing resources. This makes the footprint of an IC a very dense entity and manual routing becomes a hectic task in such a dense environment. Some other constraints, in addition to the PCB density, such as planar and pairwise routing, and length matching <ns0:ref type='bibr' target='#b17'>Lee W Ritchey (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Mitzner (2009)</ns0:ref> are also inflicted upon PCB routing.</ns0:p><ns0:p>These constraints combined together with the increasing density of the packages impose a bigger challenge on the PCB routing and therefore, take manual routing out of the consideration. This is where the automated routers come into the play for the solution of these issues and are currently used for the PCB routing. Automated routers help solve various PCB routing problems including escape routing and the area routing problem. The escaping of pins to the array boundary is known as escape routing and connecting these escaped pins in the intermediate area is known as area routing. Escape routing and area routing together play very important role in PCB routing. The pins on boundary of a pin array can easily be escaped and connected to their corresponding pins by keeping the via constraint in mind. But there are many pins which are inside the pin array and they cannot be connected directly and have to be escaped towards the boundary first which is known as escape routing problem. Escape routing problem is PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54435:1:1:NEW 22 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science one of the critical challenges in the PCB design because the modern ICs have a pin array that contains a large number of pins. The studies show that escape routing has three different types <ns0:ref type='bibr' target='#b48'>Yan and Wong (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b47'>Yan et al. (2012)</ns0:ref> namely Unordered Escape, Ordered Escape and Simultaneous Escape Routing (SER). Ordered and unordered escape routing consider a single pin array while two ICs are considered simultaneously in SER.</ns0:p><ns0:p>Simultaneous escape routing (SER) is relatively less explored as compared to the ordered and unordered escape routing. SER is also considerably difficult to achieve because of the reason that instead of a single IC, there are two ICs in SER and their pins need to be escaped simultaneously so that they could be connected to each other. There are very few studies on SER, however <ns0:ref type='bibr' target='#b29'>Ozdal et al. (2007)</ns0:ref> can be considered as the pioneer as this work consider two pin arrays simultaneously for the first time and minimize the net ordering mismatch. 
This work generated different patterns for a single pin and then selects one of the patterns such that the mismatching in net ordering along the boundaries is minimized.</ns0:p><ns0:p>They used polynomial time algorithm for smaller problems and use randomized algorithm approach for routing in the case of larger problems without the inclusion of performance constraints <ns0:ref type='bibr' target='#b29'>Ozdal et al. (2007)</ns0:ref>. Some heuristic solutions <ns0:ref type='bibr' target='#b16'>Lee et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Chin and Chen (2013)</ns0:ref> have also been proposed for SER problem via using boundary and planar routing. Either too much time is consumed by these solutions or the achieved routability is less than acceptable percentage. In our recent study <ns0:ref type='bibr' target='#b1'>Ali et al. (2017)</ns0:ref>, we propose the solution to routability problem through a network flow approach where we introduce two different models; one is based on the links and second on nodes. They are introduced to solve SER problem in PCBs through optimization technique. The models are successful in routing over small grids but consumed too much time when run over the larger grids.</ns0:p><ns0:p>In this work, we extend our previous study <ns0:ref type='bibr' target='#b1'>Ali et al. (2017)</ns0:ref> in order to solve both the problems including SER and area routing while considering time and routability constraints for smaller as well as larger grids. In our previous work, we did not consider larger grids. This research propose an optimal routing algorithms for both SER and area routing for smaller and larger grids. The proposed algorithm uses the network flow approach to develop an integer linear program for solution of these two problems.</ns0:p><ns0:p>The problems of simultaneous escape and area routing are solved in multiple stages. Two different integer linear programs are developed for these two stages. The first program simultaneously escapes the pin from inside of both the pin arrays in order to reach the boundary of pin arrays. The second program fetches the results from first program and the purpose of second program is to connect the escaped points with each other.</ns0:p><ns0:p>Existing SER solutions suffers from various challenges including fixed pattern generation and higher time complexity. Some solutions in literature also lead to the pin blockage problem and resource wastage.</ns0:p><ns0:p>These studies employed randomized approaches and heuristic algorithms but fail to provide efficient solution. In this study, we provide an optimal solution having a time complexity of under a minute along with 99.9% routability for the problem of SER in the printed circuit boards. This shows that the proposed solution is better than the solutions existing in the literature in terms of routability, time complexity and computational costs. 
Followings are the contributions of this work:</ns0:p><ns0:p>• Mapping of the PCB routing problem to network flow routing problem,</ns0:p><ns0:p>• Proposal of an algorithm for solution of SER that uses an integer linear program and provides optimal solution,</ns0:p><ns0:p>• Proposal of an algorithm for area routing that uses an integer linear program and provides optimal solution,</ns0:p><ns0:p>• Linkage between the proposed algorithms to obtain the end-to-end routing,</ns0:p><ns0:p>• 99.9% routability for pins,</ns0:p><ns0:p>• Reduction of time complexity.</ns0:p><ns0:p>Rest of the paper is organized as: Related literature is described in the next section followed by problem formulation. Then proposed network flow approach and dual node based routing are detailed.</ns0:p><ns0:p>We finally discuss results before concluding the work.</ns0:p><ns0:p>ICs that are to be placed on the PCBs but has also increased the pin count of ICs. This makes the process of manual PCB routing a very hectic task and demands automated PCB routing. The problems that are to be solved in the PCB routing are escape routing, area routing, length matching, and number of layers that are to be used. Many studies in the literature have addressed these routing problems related to the PCBs. Some studies have proposed the heuristics algorithms to obtain a length matching routing <ns0:ref type='bibr' target='#b50'>Zhang et al. (2015b)</ns0:ref> and others have used differential pair escape <ns0:ref type='bibr' target='#b20'>Li et al. (2012</ns0:ref><ns0:ref type='bibr' target='#b19'>Li et al. ( , 2019))</ns0:ref>, single signal escape routing <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref> or both <ns0:ref type='bibr' target='#b37'>Wang et al. (2014)</ns0:ref> for escape routing along with addressing the length matching problem. The most notable technique used for the PCB routing is optimization. There are various studies in literature that have mapped different PCB routing problems to the optimization problem such as longest common interval sequence problem <ns0:ref type='bibr' target='#b13'>Kong et al. (2007)</ns0:ref>, multi-layer assignment problem <ns0:ref type='bibr' target='#b46'>Yan et al. (2009)</ns0:ref>, network flow problem <ns0:ref type='bibr' target='#b32'>Sattar and Ignjatovic (2016)</ns0:ref>, pin assignment and escape routing problem <ns0:ref type='bibr' target='#b18'>Lei and Mak (2015)</ns0:ref>, and maximum disjoint set of boundary rectangles problem for bus routing <ns0:ref type='bibr' target='#b0'>Ahmadinejad and Zarrabi-Zadeh (2016)</ns0:ref>. A recent study has proposed a routing method that is based upon maze and it uses a hierarchical scheme for bus routing and the proposed method uses a rip-up and re-route technique as well in order to improve the efficiency <ns0:ref type='bibr' target='#b5'>Chen et al. (2019)</ns0:ref>.</ns0:p><ns0:p>There are different problems to be solved in the PCB routing as discussed earlier and one of the most important problem among these is the escape routing where the pins from inside of an IC are to be escaped to the boundary of IC. Many studies in the literature have proposed solutions for escape routing problem among which some have considered single layer <ns0:ref type='bibr' target='#b18'>Lei and Mak (2015)</ns0:ref> in the PCB while, multiple layers <ns0:ref type='bibr' target='#b2'>Bayless et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b49'>Zhang et al. (2015a)</ns0:ref> have also been considered. 
There are studies that only consider escape routing <ns0:ref type='bibr' target='#b24'>McDaniel et al. (2014)</ns0:ref> while, there are some studies which consider other problems like length matching <ns0:ref type='bibr' target='#b50'>Zhang et al. (2015b)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Yan (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Chang et al. (2019)</ns0:ref> along with the escape routing as well. Some have used staggered pin array <ns0:ref type='bibr'>Ho et al. (2013a,b)</ns0:ref> while others have used grid pin array <ns0:ref type='bibr' target='#b10'>Jiao and Dong (2016)</ns0:ref> as well. Apart from electrical circuits, escape routing is also used for designing micro-fluid biochips PCBs <ns0:ref type='bibr' target='#b25'>McDaniel et al. (2016)</ns0:ref>. The most widely used technique for the solution of escape routing is the optimization theory and it has been used for the past many years and also been employed by the current studies <ns0:ref type='bibr' target='#b11'>Katagiri et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Serrano et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b0'>Ahmadinejad and Zarrabi-Zadeh (2016)</ns0:ref>.</ns0:p><ns0:p>The problem of escape routing is not the only problem to which the optimization theory provides solution to. The optimization theory has also been deployed by several other fields in which routing of packets in the wireless networks is the most prominent one. There is a recent study in which optimization theory is used in the wireless sensor networks (WSNs) for the smart power grids <ns0:ref type='bibr' target='#b15'>Kurt et al. (2017)</ns0:ref>.</ns0:p><ns0:p>The authors have proposed a novel mixed integer program with the objective function of maximizing the lifetime of a WSN through joint optimization of data packet size and the transmission power level.</ns0:p><ns0:p>Apart from wireless sensor networks, the rechargeable sensor networks also face the problem of energy harvesting. Optimization techniques have also been used in these types of networks in order to jointly optimize the data sensing and transmission and achieve a balanced energy allocation scheme <ns0:ref type='bibr' target='#b51'>Zhang et al. (2015c)</ns0:ref>. The optimization theory is also used in addressing the power optimization issues of the communication systems to fulfill the different demands of different message types regarding the QoS, length of packet and transmission rates ?. The optimization theory has also been used to increase the efficiency of the urban rail timetable <ns0:ref type='bibr' target='#b43'>Xue et al. (2019)</ns0:ref>. The authors propose a nonlinear integer program in order to obtain a rail timetable which is efficient. The model is then simplified into an integer program with a single objective. A 9.5% reduction in wasted capacity is achieved through the use of a genetic algorithm in this study.</ns0:p><ns0:p>The application of optimization theory is not only limited to the fields of communication and networking but it is also been used in the vehicle routing problem <ns0:ref type='bibr' target='#b36'>Thongkham and Kaewman (2019)</ns0:ref>, network reconfiguration for Distribution Systems <ns0:ref type='bibr' target='#b7'>Guo et al. (2020)</ns0:ref>, microgrid systems <ns0:ref type='bibr' target='#b40'>Wu et al. 
(2019)</ns0:ref>, distributed energy resource allocation in virtual power plants <ns0:ref type='bibr' target='#b12'>Ko and Joo (2020)</ns0:ref>, control of humanoid robots Tedrake (2017), smart grid ecosystem <ns0:ref type='bibr' target='#b14'>Koutsopoulos et al. (2016)</ns0:ref> and Computation of the Centroidal Voronoi Tessellations (CVT) that is widely used in the computer graphics <ns0:ref type='bibr' target='#b21'>Liu et al. (2016)</ns0:ref>. <ns0:ref type='bibr'>Ozdal et al. Ozdal and</ns0:ref><ns0:ref type='bibr'>Wong (2004, 2006)</ns0:ref> can be regarded as the initial work on SER as they consider two pin arrays simultaneously and minimize the net ordering mismatch. There are some studies in the literature that have proposed a simultaneous pin assignment <ns0:ref type='bibr' target='#b41'>Xiang et al. (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b38'>Wang et al. (1991)</ns0:ref>, <ns0:ref type='bibr'>Ozdal et al. Ozdal and</ns0:ref><ns0:ref type='bibr'>Wong (2004, 2006)</ns0:ref> propose a methodology to escape the pins to boundaries in such a way that crossings are minimized in the intermediate area. Polynomial time algorithm is used for smaller problems and <ns0:ref type='table'>2020:10:54435:1:1:NEW 22 Mar 2021)</ns0:ref> Manuscript to be reviewed Computer Science randomized algorithm approach is proposed for routing in the case of larger examples.</ns0:p><ns0:p>It is stated that the routing resources available inside the pin array are less as compared to the routing resources that are available in the intermediate area. This is because of the reason that pins are densly packed inside the pin array. Also, using vias is not allowed within the pin array and the routing inside pin array must be free from conflicts and overlapping. Via usage is allowed in the intermediate area as opposed to the pin arrays. The problem is formulated as to find out best escape routing solution within the pin array so that the conflicts in intermediate area are minimized. There are two phases to solve the problem. In the first phase, a single layer is taken at a time and maximum possible non conflicting routes are packed on that layer. The conflicted routes are routed on the next layer and hence, escape routing is completed in this manner <ns0:ref type='bibr'>Ozdal and</ns0:ref><ns0:ref type='bibr'>Wong (2004, 2006)</ns0:ref>.</ns0:p><ns0:p>In the second phase of problem solution, a congestion based net-by-net approach is used for routing in the intermediate area. Routing conflicts are allowed at the beginning and optimal solution is found by rip-up and re-route approach. The ripping up of routes that are formed in the first phase is discouraged and in order to find a conflict free route for all pins which is obtained eventually. A number of routing patterns are defined for each net and a polynomial time algorithm is proposed in order to choose the best possible combination from all the given possible routing patterns. The results for different industrial data sets are obtained which show that optimal routes are increased and time required for routing is decreased as compared to other algorithms. Although it is a good approach, but 99.9% routability is not achieved in the escape routing problem and too much time is taken in the area routing problem <ns0:ref type='bibr'>Ozdal and</ns0:ref><ns0:ref type='bibr'>Wong (2004, 2006)</ns0:ref>. <ns0:ref type='bibr' target='#b29'>Ozdal et al. (2007)</ns0:ref> to solve the escape routing problems in multiple components simultaneously. 
Some design constraints have also been considered along with routing. The escape patterns considered in this algorithm are more generalized in comparison to the algorithm proposed in their previous research <ns0:ref type='bibr' target='#b28'>Ozdal and Wong (2004)</ns0:ref>. The objective is to minimize the crossover in intermediate area, hence less via usage. For smaller circuits, results given by their proposed randomized algorithm are within 3% of optimal solution, but optimal solution is not achieved in the case of larger circuits due to time complexity. They have also assumed fixed pattern generation which can be successful in the case of smaller circuits where components are well aligned, but complex problems cannot be solved easily by using these approaches <ns0:ref type='bibr' target='#b28'>Ozdal and Wong (2004)</ns0:ref>; <ns0:ref type='bibr' target='#b29'>Ozdal et al. (2007)</ns0:ref>. B-Escape <ns0:ref type='bibr' target='#b22'>Luo et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Luo et al. (2011)</ns0:ref> solves this issue by removing the consumed routing area by a pin and shrinking the boundary. B-Escape reduced the time complexity and also increased the routability but, it can cause pin blockage when certain area is removed and boundaries are shrunk. Also, the time taken to solve most of the problems is still greater than one minute which is not optimal and needs to be reduced further than that <ns0:ref type='bibr' target='#b22'>Luo et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Luo et al. (2011)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Same authors propose an algorithm in</ns0:head><ns0:p>In addition to the above mentioned studies, other techniques have also been used in the literature for SER solution and one of them is negotiated congestion based routing scheme Qiang <ns0:ref type='bibr'>Ma et al. (2010)</ns0:ref> . Negotiated congestion based routing already has its application in FPGA routing. However, it is not being used for solving SER problem. A negotiated congestion based router is proposed which applies the negotiated congestion routing scheme on an underlying routing graph. In negotiated congestion approach, all pins are forced to negotiate for a resource which is the routing space in case of PCB. The pin needing most resources is determined. Some resources are shared between pins and these pins are routed again and again iteratively till no resources are shared among the pins. Results have been compared with Cadence PCB router Allegro and it was found that proposed router is able to achieve 99.9% routability in case of such circuits which are not routed completely by the Allegro. Time consumption is also less, but the issue is that there are some examples in which the proposed router is not able to achieve 99.9% routability while Allegro is able to achieve. It is proposed that the router should be used in collaboration with Allegro so that all the problems can be solved.</ns0:p><ns0:p>There is another research <ns0:ref type='bibr' target='#b44'>Yan et al. (2011)</ns0:ref> which uses ordered escape routing in order to solve the SER problem. The SER problem is solved by determining the net order. A bipartite graph is created on the basis of location of escape pins and transformation of net numbers is done. After doing SER, the net numbers are recovered. Basically, the orders of escape nets are found along the boundary in such a way that there is no crossover in the intermediate area. This approach achieves 99.9% routability along with reduction in time consumption, but the achieved net order is not routable in all cases. This approach can fail in some specific cases where the ordering proposed by the approach is not routable. Quite recently, net ordering has been incorporated with the SER through proposal of a novel net ordering algorithm in Manuscript to be reviewed Computer Science order to reduce the running time <ns0:ref type='bibr' target='#b33'>Sattar et al. (2017)</ns0:ref>. Recently, we have proposed the solution to this problem through a network flow approach <ns0:ref type='bibr' target='#b1'>Ali et al. (2017)</ns0:ref>. We proposed the solution to this problem through a network flow approach where we introduced two different models; one is based on the links and called Link based routing, while the other is based on nodes and is called Nodes based routing. They are introduced to solve SER problem in PCBs through optimization techniques. The models were successful in routing over small grids but consumed too much time when run over the larger grids.</ns0:p><ns0:p>As per our findings, the SER has been explored by these studies and they have provided some good solutions with various limitations. There are some major issues that still seem to be unresolved and they include fixed pattern generation and time complexity. Most of these studies are unable to solve problems like pin blockage and resource under utilization. The studies have used randomized approaches and heuristic algorithms and optimal solutions are not provided. 
In this study, we provide an optimal solution having a time complexity of under a minute along with 99.9% routability for the problem of SER in the PCBs.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROBLEM FORMULATION</ns0:head><ns0:p>The number of pins in an IC has been increased considerably and the footprint of an IC has shrunk in recent years making the escape routing problem very difficult. The studies in literature have used randomized approaches and heuristic algorithms to solve the escape routing problem which have not been able to reduce time complexity, achieve 99.9% routability, and provide optimal solutions. The problem of SER has been solved in this study through proposal of an optimal algorithm that not only achieves the 99.9% routability but also consumes very less time and memory to find out the optimal solution.</ns0:p><ns0:p>The problem of SER basically boils down to connection of pins of two components. There are many components on a PCB but there are also many layers available for routing of these components. We solve this problem by taking two components at a time and producing the best possible routing solution for them. The routing of two components can be generalized for multiple components. The basic problem is to escape the pins from inside the pin array of two components and then to connect them by using area routing. As the size of a pin array is very small and routing resources are very less inside the pin array so vias cannot be used inside the pin array and all the pins inside pin array must be routed on a same layer.</ns0:p><ns0:p>Once these pins are escaped onto the boundary, different layers can be used for their routing to avoid the conflicts in area routing.</ns0:p><ns0:p>The problem is formulated such that these conflicts in the intermediate area are minimized and least possible vias are used. The problem can be understood with the help of illustrations. Three ICs can be seen in the Figure <ns0:ref type='figure'>1</ns0:ref>, in which some pins are represented with lines inside a hollow circle. Different patterns of the lines inside hollow circles can be seen. The pins represented by same patterned circles have to be connected with each other. The pins represented by the simple lines inside the hollow circle have to be connected with the pins represented by the simple lines inside the hollow circle and the pins represented by the crossed lines inside the hollow circle have to be connected with the pins represented by the crossed lines inside the hollow circle. Before connection, these points have to be escaped towards the boundary first as shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>It can be seen in the Figure <ns0:ref type='figure'>2</ns0:ref> that all the pins have been escaped to the boundary of pin arrays without any conflicts inside the pin array, but there is a conflict route in the intermediate area as shown in Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p><ns0:p>In this situation, one of these routes will be routed on the same layer while the other will be left to be routed on the second layer as shown in Figure <ns0:ref type='figure'>4</ns0:ref>. This shows that the basic problem is to escape the pins from inside of the pin array on the same layer and then connect these escaped pins by using minimum possible layers in the intermediate area. For this purpose, the pins will be escaped to the boundary in such a way so that conflicts are minimized in the intermediate area. For the sake of simplicity, routing is done only on a single layer. 
The problem can thus be stated as: 'simultaneously escape the pins from inside the pin arrays to the boundaries and connect the escaped pins, while achieving 99.9% routability and minimizing the time required'.</ns0:p></ns0:div>
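<ns0:div><ns0:p>As an illustration of the input to this problem, the following Python sketch represents two pin arrays and the nets that tie their pins together; pins carrying the same net label (the 'same pattern' of Figure 1) must eventually be connected. This is only a hypothetical representation invented for the example and is not a data format used in this work.</ns0:p><ns0:p>
# Hypothetical representation of a SER instance (illustration only).
# Each component maps a net label to a pin coordinate inside its own pin array;
# pins with the same net label on the two components must be connected, and all
# escapes inside a pin array have to stay on one layer.
component_a = {"net_plain": (3, 4), "net_crossed": (5, 2)}
component_b = {"net_plain": (1, 6), "net_crossed": (4, 4)}

pin_pairs = [(net, component_a[net], component_b[net]) for net in component_a]
for net, pin_a, pin_b in pin_pairs:
    print(f"{net}: escape {pin_a} and {pin_b} on one layer, then connect them")
</ns0:p></ns0:div>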
<ns0:div><ns0:head>NETWORK FLOW APPROACH</ns0:head><ns0:p>The problem formulated in the previous section is to escape the pins from inside the pin arrays to the boundaries of the pin arrays and then to connect them in the intermediate area. The components whose pins are to be connected are placed on a board in the manner shown in Figure <ns0:ref type='figure'>5</ns0:ref>. The square boxes and the remaining points in Figure <ns0:ref type='figure'>5</ns0:ref> represent the ICs and the grid points for intermediate area routing, respectively. The objective is to connect these pins with each other by first escaping them simultaneously from inside the ICs and then connecting the escaped pins together. This is similar to traffic flows in a network and can therefore be mapped to a network flow problem. In a network, a node needs to communicate with another node, so a route must be established between them. Similarly, the pins in our problem can be considered as nodes of a network and the connection between them can be considered as a route. The purpose is to create a route from one node to another node, i.e., to connect two pins with each other. The grid shown in Figure <ns0:ref type='figure'>5</ns0:ref> is similar to a network grid with multiple nodes connected in a mesh topology.</ns0:p><ns0:p>Different types of pins are shown in Figure <ns0:ref type='figure'>6</ns0:ref> as circular points, where the pins that need to be connected are shown as hollow circles with crossed lines inside them. The filled circles show the intermediate nodes through which these pins can be connected to each other. In terms of network flow, the cross-lined hollow circles are considered the source and destination nodes while the filled circles are considered the intermediate nodes. The objective is to create a route from the source node towards the destination node by making use of the intermediate nodes. There can be multiple pins that need to be connected with each other, and hence there can be multiple source-destination pairs; these pairs are considered as flows in the network.</ns0:p><ns0:p>These pairs of pins can thus be mapped to network flows. The immediate neighbors of a source and destination pin are mapped to the neighboring network nodes, and a link between network nodes is considered a pin connection. Table <ns0:ref type='table'>1</ns0:ref> shows how the mapping from routing in a PCB to flows in a network is carried out: the terms related to PCB routing are listed in the PCB Routing column and their mapping to network flow is shown in the Network Flow column.</ns0:p></ns0:div>
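<ns0:div><ns0:p>To make the mapping of Table 1 concrete, the following Python sketch builds a small grid of network nodes, defines the mesh links as 4-neighbour adjacency, and marks one source-destination pin pair as a flow. This is an illustrative sketch only, not code from this work: the grid size, the 4-neighbour connectivity and the names points, neighbours, flows and boundary are assumptions made for the example.</ns0:p><ns0:p>
# Illustrative sketch of the PCB-to-network-flow mapping (not the authors' code).
# Grid points become network nodes, adjacent points become links, and each
# source-destination pin pair becomes a flow.
ROWS, COLS = 10, 10
points = [(r, c) for r in range(ROWS) for c in range(COLS)]

def neighbours(p):
    """Return the 4-connected neighbours of a grid point (the network links)."""
    r, c = p
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [q for q in cand if 0 <= q[0] < ROWS and 0 <= q[1] < COLS]

flows = [((2, 2), (7, 7))]          # (source pin, destination pin) pairs
boundary = [p for p in points if p[0] in (0, ROWS - 1) or p[1] in (0, COLS - 1)]

print(len(points), "nodes,", len(boundary), "boundary points,", len(flows), "flow(s)")
print("links of source pin", flows[0][0], "->", neighbours(flows[0][0]))
</ns0:p></ns0:div>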
<ns0:div><ns0:head>DUAL MODEL NODE BASED ROUTING</ns0:head><ns0:p>The problem of SER and area routing is solved in two stages with the help of two different integer linear programs. We name this approach dual model node based routing. Its two stages are called local routing and global routing: the pins of the pin arrays are escaped to the boundaries of the pin arrays in the first stage (local routing), and these escaped pins are then connected with each other in the second stage (global routing).</ns0:p><ns0:p>The two stages are explained as follows:</ns0:p><ns0:p>• Local Routing,</ns0:p><ns0:p>• Global Routing. The idea of local and global routing can be understood with the help of Figure <ns0:ref type='figure'>7</ns0:ref>:</ns0:p><ns0:p>Local Routing The grid is divided into two parts, as can be seen in Figure <ns0:ref type='figure'>7</ns0:ref>. The hollow circles represent the points that are used in the local routing. The idea is to escape the points that are to be connected to one of the boundary points. The filled circles on the boundary of the hollow circles are the boundary points.</ns0:p><ns0:p>Once each of the desired points has selected a boundary point, these boundary points are saved and their coordinates are passed to the second model through a script. The script checks which boundary points have been selected for those points and then saves these boundary points as source and destination points for the second model. This is called local routing.</ns0:p></ns0:div>
<ns0:div><ns0:head>Global Routing</ns0:head><ns0:p>The first stage, local routing, ends after escaping the desired points to the boundary points. The second model starts from there and connects the source and destination boundary points; this is called global routing. Together, the two models ensure end-to-end routing with lower memory consumption.</ns0:p></ns0:div>
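<ns0:div><ns0:p>The handoff between the two stages can be pictured with a small helper like the one below, which collects the boundary point chosen for each pin by the local routing model and hands it to the global routing model as a start or end point. This is a hedged Python sketch of the idea only; the authors' actual linking script drives AMPL, and the names local_selection, startx, starty, endx and endy are illustrative assumptions.</ns0:p><ns0:p>
# Hypothetical sketch of the script linking the two stages (not the AMPL script
# used in this work). local_selection maps each pin to the set of grid points
# that the local routing model selected for it.
def boundary_point_for(pin, local_selection, boundary):
    """Return the single boundary point the local model selected for this pin."""
    chosen = [p for p in local_selection[pin] if p in boundary]
    assert len(chosen) == 1, "local model should pick exactly one boundary point"
    return chosen[0]

def build_global_inputs(flows, local_selection, boundary):
    """Translate local-routing output into start/end points for global routing."""
    startx, starty, endx, endy = {}, {}, {}, {}
    for src, dst in flows:
        sx, sy = boundary_point_for(src, local_selection, set(boundary))
        ex, ey = boundary_point_for(dst, local_selection, set(boundary))
        startx[(src, dst)], starty[(src, dst)] = sx, sy
        endx[(src, dst)], endy[(src, dst)] = ex, ey
    return startx, starty, endx, endy
</ns0:p></ns0:div>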
<ns0:div><ns0:head>Addressing Link Based and Node Based Routing Issues</ns0:head><ns0:p>Memory consumption is the main issue with the link based and node based routing proposed in our earlier work <ns0:ref type='bibr' target='#b1'>Ali et al. (2017)</ns0:ref>. A large number of variables is used in those approaches, and as the number of grid points increases, the memory consumption also increases. These issues are solved by the dual model node based routing proposed in this paper, which not only divides the total memory consumption between the two models (local routing and global routing) but also ensures a solution with unique routes. In the second model of this approach, the decision variable is: 3: X [flows, connecting points]</ns0:p><ns0:p>For the 10 x 10 grid: 4: Flows = 1, connecting points = 64</ns0:p><ns0:p>So the number of variables becomes: 5: Variables = flows x connecting points 6: Variables = 1 x 64 = 64</ns0:p><ns0:p>The total number of variables in both models is 100, which is equal to the number of variables used in the node based routing proposed in our previous study <ns0:ref type='bibr' target='#b1'>Ali et al. (2017)</ns0:ref>, but these variables are divided between two models that are solved separately. Hence there is no memory consumption issue, and an end-to-end route with uniqueness is also ensured by the dual model node based routing. The mathematical model and the related terminology of the dual model node based routing are explained in the next section.</ns0:p></ns0:div>
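<ns0:div><ns0:p>The variable counts quoted above for the 10 x 10 example can be reproduced with a few lines of arithmetic, shown below for illustration; the split into 36 connecting points for the local model and 64 for the global model follows the numbers given in Algorithms 1 and 2.</ns0:p><ns0:p>
# Reproducing the variable counts for the 10 x 10 example with a single flow.
# Each model uses one binary variable per (flow, connecting point) pair.
flows = 1
local_connecting_points = 36    # points handled by the local routing model
global_connecting_points = 64   # points handled by the global routing model

local_vars = flows * local_connecting_points      # 36
global_vars = flows * global_connecting_points    # 64
print(local_vars, global_vars, local_vars + global_vars)   # 36 64 100
</ns0:p></ns0:div>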
<ns0:div><ns0:head>Terminology and Mathematical Model</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows the terminology that is helpful in understanding the dual model node based routing.</ns0:p><ns0:p>The local routing algorithm of the dual model node based routing is summarized in Algorithm 3, and the constraints it uses are detailed in the next subsection. A flowchart for Algorithm 3 is shown in Supplementary Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>Algorithm 3 : Local Routing Model 1: The source and destination points must be selected 2: One neighbour of the source and destination point must also be selected 3: If a point other than a source or destination is selected, then two of its neighbours must also be selected 4: Not more than two neighbours of a point must be selected 5: At least one boundary point for each flow must be selected 6: At least one boundary point for the destination of each flow must be selected 7: A point should not be selected for more than one flow 8: Subject to: min</ns0:p><ns0:formula xml:id='formula_0'>∑ (i, j)εSD ∑ (l,m)εP X i, j,l,m</ns0:formula></ns0:div>
<ns0:div><ns0:head>Constraints for Local Routing Model</ns0:head><ns0:p>The objective function of the Local Routing Model is as follows:</ns0:p><ns0:formula xml:id='formula_1'>min ∑ (i, j)εSD ∑ (l,m)εP X i, j,l,m</ns0:formula><ns0:p>subject to:</ns0:p><ns0:formula xml:id='formula_2'>X i, j,i, j = 1, ∀ (i, j) εLF (1) ∑ (l,m)εN(LF) X i, j,l,m = 1, ∀ (i, j) εLF (2)</ns0:formula><ns0:p>Equation 1 ensures that the starting point of a particular flow is selected. Equation 2 ensures that exactly one of the neighbors of that starting point is selected. Together they start the routing from the source and move it towards one of its neighbors.</ns0:p><ns0:p>Equations (1, 2) start the flow but do not guarantee that it continues towards the boundary points. We use three equations for that. Equations 3, 4 and 5 ensure the continuity of the flow towards the boundary points and also make sure that a neighbor that is already selected does not get selected again, so that the flow does not go back towards the source.</ns0:p><ns0:formula xml:id='formula_3'>2 × X i, j,l,m ≤ ∑ (a,b)εN(l,m) X i, j,a,b , ∀ (i, j) εLF, ∀ (l, m) εCP − LF (3) ∑ (c,d)εN(l,m) X a,b,c,d ≤ 2, ∀ (a, b) εLF, ∀ (l, m) εCP (4) ∑ (l,m)εBP X i, j,l,m = 1, ∀ (i, j) εLF (5)</ns0:formula><ns0:p>Now that the source points are routed to the boundary points, we also need equations that route the destination points to the boundary points. Therefore, we use the same kind of equations for the destination points as for the source points. Equations 6 and 7 make sure that the destination point of each flow is selected along with one of its neighbors, so that the route starts from the destination point and moves towards one of its neighbors.</ns0:p><ns0:formula xml:id='formula_4'>X a,b,a,b = 1, ∀ (i, j) εLF, ∀ (a, b) εD (LF) (6) ∑ (l,m)εN(a,b) X a,b,l,m = 1, ∀ (i, j) εLF, ∀ (a, b) εD (LF)<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Equations 8, 9, 10 and 11 ensure the continuity of the flow from the destination point towards the boundary points. The previous equations only selected the destination point and one of its neighbors; these equations make sure that an already selected neighbor is not selected again and that the route continues towards the boundary points.</ns0:p><ns0:formula xml:id='formula_5'>2 × X i, j,l,m ≤ ∑ (a,b)εN(l,m) X i, j,a,b , ∀ (l, m) εCP − D (LF) , ∀ (i, j) εD (LF)<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>∑ (c,d)εN(l,m) X a,b,c,d ≤ 2, ∀ (a, b) εD (LF) , ∀ (l, m) εCP (9) ∑ (l,m)εBP X a,b,l,m = 1, ∀ (a, b) εD (LF)<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>∑ (i, j)εSD X i, j,a,b ≤ 1, ∀ (i, j) εLF, ∀ (a, b) εCP<ns0:label>(11)</ns0:label></ns0:formula></ns0:div>
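<ns0:div><ns0:p>For readers who prefer code to algebra, the sketch below shows how constraints (1), (2) and (5) of the local routing model can be expressed with an off-the-shelf ILP library (PuLP in Python). The implementation in this work uses AMPL with the Gurobi solver; this toy example, its 6 x 6 grid and its single flow are assumptions made only to illustrate the structure of the model, and the continuity constraints (3)-(4) and the destination constraints (6)-(11) are omitted.</ns0:p><ns0:p>
# Illustrative PuLP sketch of constraints (1), (2) and (5) of the local routing
# model; not the authors' AMPL implementation.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

ROWS, COLS = 6, 6
points = [(r, c) for r in range(ROWS) for c in range(COLS)]
boundary = [p for p in points if p[0] in (0, ROWS - 1) or p[1] in (0, COLS - 1)]

def neighbours(p):
    r, c = p
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [q for q in cand if 0 <= q[0] < ROWS and 0 <= q[1] < COLS]

local_flows = [(2, 2)]                      # source pins to be escaped (hypothetical)

prob = LpProblem("local_routing_sketch", LpMinimize)
X = LpVariable.dicts("X", [(f, p) for f in local_flows for p in points], cat="Binary")

# Objective: select as few points as possible in total.
prob += lpSum(X[(f, p)] for f in local_flows for p in points)

for f in local_flows:
    prob += X[(f, f)] == 1                                  # eq. (1): source selected
    prob += lpSum(X[(f, n)] for n in neighbours(f)) == 1    # eq. (2): one neighbour of the source
    prob += lpSum(X[(f, b)] for b in boundary) == 1         # eq. (5): reach a boundary point
# The full model adds the continuity constraints (3)-(4), the mirrored
# destination constraints (6)-(10) and the non-overlap constraint (11).

prob.solve()
print("selected:", [p for f in local_flows for p in points if X[(f, p)].value() == 1])
</ns0:p></ns0:div>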
<ns0:div><ns0:head>Constraints for Global Routing Model</ns0:head><ns0:p>The global routing algorithm of the dual model node based routing is summarized in Algorithm 4 and uses the objective function and constraints given in equations 12 to 17. A flowchart for Algorithm 4 is shown in Supplementary Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>Algorithm 4 : Global Routing Model 1: The source boundary point must be selected 2: The sum over all the neighboring boundary points of the source boundary point must be less than one 3: If a boundary point other than a source or destination boundary point is selected, then two of its neighboring boundary points must also be selected 4: Not more than two neighbors of a boundary point must be selected 5: The destination boundary point must also be selected 6: A boundary point should not be selected for more than one local flow objective function:</ns0:p><ns0:formula xml:id='formula_8'>max ∑ (i, j)εLF ∑ (c,d)εN(startx[i, j],starty[i, j]) Y i, j,c,d</ns0:formula><ns0:p>The local routing model has selected the boundary points for both the source and destination points and has provided these boundary points to the global routing model. It is now the responsibility of the global routing model to connect these boundary points together. The first step is to select these boundary points, and equations 12 and 13 ensure that the boundary points provided by the local routing model are selected in the global routing model. subject to:</ns0:p><ns0:formula xml:id='formula_9'>Y i, j,startx[i, j],starty[i, j] = 1, ∀ (i, j) εLF (12) Y a,b,endx[a,b],endy[a,b] = 1, ∀ (a, b) εLF<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>After the selection of the boundary points, the next step is to select one of the neighbors of the source boundary points and then continue the flow towards the destination boundary points. Equations 14, 15 and 16 are used to ensure the continuity of the flow from the source boundary point to the destination boundary point.</ns0:p><ns0:formula xml:id='formula_10'>∑ (c,d)εN(startx[i, j],starty[i, j]) Y a,b,c,d ≤ 1, ∀ (a, b) εLF (14) 2 ×Y a,b,l,m ≤ ∑ (c,d)εN(l,m) Y a,b,c,d , ∀ (a, b) εLF, ∀ (l, m) εBP − SDBP (LF) (15) ∑ (c,d)εN(l,m) Y a,b,c,d ≤ 2, ∀ (a, b) εLF, ∀ (l, m) εBP<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>There is a possibility that two or more flows select the same point in their routes. This would cause a crossover, and the uniqueness of the routes would be affected. Such flow crossovers in the second model are avoided by equation 17.</ns0:p><ns0:formula xml:id='formula_11'>∑ (i, j)εLF Y i, j,a,b ≤ 1, ∀ (a, b) εBP<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>The issues of memory consumption and route uniqueness are thus solved by the dual model node based routing. Apart from that, very little time is consumed to find the optimal path. In the next section, results are included to support the proposed models.</ns0:p></ns0:div>
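<ns0:div><ns0:p>In the same spirit, the sketch below expresses constraints (12), (13) and (17) of the global routing model in PuLP, taking the start and end boundary points produced by the local routing stage as given. Again, this is only an illustrative sketch under assumed data (the boundary ring, the single flow f1 and its start/end points); the actual model in this work is written in AMPL and also includes the continuity constraints (14)-(16).</ns0:p><ns0:p>
# Illustrative PuLP sketch of constraints (12), (13) and (17) of the global
# routing model; not the authors' AMPL implementation.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

boundary = [(0, c) for c in range(6)] + [(5, c) for c in range(6)] + \
           [(r, 0) for r in range(1, 5)] + [(r, 5) for r in range(1, 5)]
local_flows = ["f1"]                 # identifiers of the local flows (hypothetical)
start = {"f1": (0, 2)}               # boundary point chosen for the source by stage 1
end = {"f1": (5, 4)}                 # boundary point chosen for the destination

def boundary_neighbours(p):
    """Neighbours of a boundary point that also lie on the boundary (assumption)."""
    r, c = p
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [q for q in cand if q in set(boundary)]

prob = LpProblem("global_routing_sketch", LpMaximize)
Y = LpVariable.dicts("Y", [(f, p) for f in local_flows for p in boundary], cat="Binary")

# Objective (sketch): maximise the selected neighbours of the start points.
prob += lpSum(Y[(f, n)] for f in local_flows for n in boundary_neighbours(start[f]))

for f in local_flows:
    prob += Y[(f, start[f])] == 1    # eq. (12): source boundary point selected
    prob += Y[(f, end[f])] == 1      # eq. (13): destination boundary point selected
for p in boundary:                   # eq. (17): a boundary point serves at most one flow
    prob += lpSum(Y[(f, p)] for f in local_flows) <= 1

prob.solve()
print("route points:", [p for f in local_flows for p in boundary if Y[(f, p)].value() == 1])
</ns0:p></ns0:div>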
<ns0:div><ns0:head>RESULTS AND DISCUSSIONS</ns0:head><ns0:p>The dual model node based routing is composed of two separate stages. The pins are simultaneously escaped to the boundary of the pin arrays in the local routing stage, and these escaped pins are connected with each other in the global routing stage. A Mathematical Programming Language (AMPL) is used to write the script of the algorithm, and the Gurobi solver is used to solve it. Other solvers are available apart from Gurobi, but Gurobi proved the best among them in terms of calculation time and optimality. The dual model node based routing is based on an Integer Linear Programming approach, for which Gurobi, Bonmin, and Minos are a few good choices. Gurobi has been preferred over the others because of time and integrality considerations.</ns0:p><ns0:p>The Minos solver relaxes integrality in order to produce optimal results, and this loss of integrality is not recovered at the output, due to which the results are not desirable in some cases.</ns0:p><ns0:p>Integers are replaced with decimal numbers in some cases, which is not acceptable for the proposed model as integrality must be maintained. The Bonmin solver, on the other hand, maintains strict integrality, but it is also not feasible for the proposed model as it takes too much time to find the optimal solution. Gurobi solves both of these problems: it relaxes integrality during the solution of the problem and recovers it when the results are generated. In this way, less time is consumed to find optimal results with strict integrality. The time consumption of each solver depends upon the complexity of the problem. We selected Gurobi through trial and error because different solvers are suitable for different problems. In our case, Gurobi is the best possible selection as it retains integrality and also takes less time compared to other solvers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>These characteristics of the Gurobi solver make it a perfect choice for the solution of the dual model node based routing. AMPL is chosen for the execution of the algorithms because it provides the freedom of a high-level implementation. A wide range of optimization problems can be solved with the help of AMPL, and the same syntax is used for the declaration of data, model and commands. The local routing and global routing models are written in a model file which reads the data from a separate data file. The two models of the proposed algorithm are connected together with the help of a script that gathers values from the output of the local routing model and provides these values to the input of the global routing model.</ns0:p><ns0:p>The proposed algorithm was written in AMPL and solved with the help of the Gurobi solver. The Gurobi solver and the AMPL software were run on two different machines in order to compare the running time of the algorithm on different machines. The first machine used for algorithm execution was an Intel Core 2 Duo PC with a 2.10 GHz processor and 2 GB of RAM. The other machine was the NEOS server, which can be accessed on-line and consists of a 2.8 GHz 12-core CPU (2x Intel Xeon X5660), 64 GB RAM, and 2x 500GB/2TB SATA drives. The purpose of using these two machines was to show that the proposed algorithm can be used by PC users as well as server operators. We have mentioned the specifications of both machines so that anyone can easily use the same specifications, check the validity of the results, and compare the results with their own proposed solution. There was a clear difference between the times consumed by these two machines, which is mentioned in the following subsections.</ns0:p><ns0:p>The dual model node based routing is tested on three grids of different sizes: small, medium and large. The intention was to run the proposed model on grids of different sizes so that the efficiency of the model could be demonstrated for all types of grids. The three grids against which the model was tested are as follows:</ns0:p><ns0:p>• Small Grid,</ns0:p><ns0:p>• Medium Grid,</ns0:p><ns0:p>• Large Grid.</ns0:p></ns0:div>
<ns0:div><ns0:head>Small Grid</ns0:head><ns0:p>The first grid has a size of 20x20 and, following the dual model node based routing, it is divided into two parts. The desired pins are first escaped to the boundary, as shown in Figure 8, and these escaped pins are then connected through the second model, as shown in Figure 9. It can be seen that not all the escaped pins are connected by the global routing model. This is because not enough routing resources are available to route all these escaped pins on a single layer.</ns0:p><ns0:p>There are two solutions to this: either the remaining escaped pins can be routed on other layers, or the grid can be expanded sideways in order to route all the escaped pins on a single layer. The second solution is not feasible because the solver used for solving this problem is the academic version of Gurobi, which cannot handle more than a specified number of constraints. The commercial version of Gurobi would need to be purchased in order to solve the larger grid for the area routing problem. Therefore, it has been shown with the help of a relatively smaller grid that area routing can be done with the proposed model.</ns0:p><ns0:p>The proposed dual model node based routing is equally suitable for area routing of large grids.</ns0:p></ns0:div>
<ns0:div><ns0:head>Medium Grid</ns0:head><ns0:p>The results of the dual model node based routing have also been compared with a study in the literature <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref>. The solver used by <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref> is the min-cost flow solver CS2. Although the solver they used is different from ours, the choice of a solver depends upon the model, and in our case Gurobi proved to be the best solver in all cases. In <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref>, a network flow model is proposed for escaping the pins to the boundaries, and a few results are obtained using industrial data. In one of their cases, 42 pins were escaped in a grid of 11 x 27 in 0.08 seconds. We designed a 30x30 grid in which the first model was an 11 x 27 grid similar to the one in <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref>. We were able to escape 68 pins in the same grid within 0.12 seconds through the NEOS server, which shows that all the desired pins have been escaped to the boundary. The memory used for the processing was 68.48 MB. The memory consumption is very low, and the time taken to solve the routing problem is also quite short. The results of the escaped pins are shown in Figure <ns0:ref type='figure'>10</ns0:ref>. After this, the global routing model would connect these escaped pins, but this has not been shown because, as mentioned in the previous subsection, the academic version of Gurobi cannot handle more than a specified number of constraints. If we used another solver instead of Gurobi, we would not be able to get optimal results, and we do not have enough resources to buy the commercial version of Gurobi. We have already tested the global routing model on the smaller cases in order to show that our model works well.</ns0:p></ns0:div>
<ns0:div><ns0:head>Large Grid</ns0:head><ns0:p>The large grid taken to evaluate the dual model node based routing is a 50x50 grid. The results have been compared with another case of the same study <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref>, which uses a 17 x 34 grid. Our 50x50 grid contained a 17 x 34 grid similar to this case in the study <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref>. We were able to escape 112 pins in this grid within 0.02 seconds through the NEOS server, which shows that all the desired pins are escaped to the boundary within minimal time. The memory consumed during processing was 190.6 MB, which is also quite low. The number of escaped pins can be increased further by slightly changing our grid topology, which is out of the scope of this work. The results of the escaped pins are shown in Figure <ns0:ref type='figure'>11</ns0:ref>.</ns0:p><ns0:p>The case used for comparison uses only a 17 x 34 grid, while we use a 50 x 50 grid in which we escape 112 points using an internal grid of 17 x 34. If we use only a 17 x 34 grid and run the first model alone, then we obtain results in half the time, i.e., 0.01 seconds, and memory consumption is also reduced to 97.05 MB.</ns0:p><ns0:p>This is a considerable reduction in time compared to the time (0.16 seconds) taken by the corresponding case in the compared study. The results of area routing in both of the above mentioned cases have not been obtained due to the limitations of the academic version of the Gurobi solver. The results for the small, medium, and large grids are shown in Table <ns0:ref type='table'>3</ns0:ref>. It can be seen that as the number of pins increases, our proposed model takes less and less time compared to <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref>. This also shows the scalability of our model: as the number of pins increases, our model takes less time and hence it would still be suitable if the number of pins is scaled up. We could not test all the cases provided in <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref> because of the academic solver's limitations and relevance; we chose the most relevant cases from <ns0:ref type='bibr' target='#b39'>Wu and Wong (2013)</ns0:ref> for comparison.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> shows that the proposed dual model node based routing method performs well in terms of routability and takes very little time and memory to provide the routing solution. We have compared our model's efficacy with state-of-the-art models, and our model outperforms the state of the art in terms of time consumed. The memory consumed by the proposed dual model node based routing method is also considerably small, which shows that no high-end machines are required to run this model.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This work addresses two problems at the same time: simultaneous escape routing and area routing in PCBs. The main contributions of this research are the mapping of the PCB routing problem to a network flow problem and the proposal of two algorithms for SER and area routing using integer linear programs. This work also links the local and global routing algorithms in order to achieve end-to-end routing on a PCB. The proposed algorithms are efficient in terms of routability and time consumption, as they outperform the existing algorithms and achieve 99.9% routability. Currently, we are working on a blend of ordered escape routing with SER to further optimize routability and reduce time consumption.</ns0:p><ns0:p>We also aspire to scale this algorithm to dual-layer and multi-layer PCBs.</ns0:p></ns0:div>
<ns0:div><ns0:head>AUTHOR CONTRIBUTIONS</ns0:head><ns0:p>• Asad Ali wrote the initial draft, devised the methodology, and did the software development for the design and analysis of the proposed solution.</ns0:p><ns0:p>• Asad Ali and Anjum Naveed conceptualized the solution and did the formal analysis. Anjum Naveed also did the validation and supervised this work.</ns0:p><ns0:p>• Muhammad Zeeshan reviewed and edited the manuscript, did the validation, and finalized and approved the final draft.</ns0:p></ns0:div></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 1 : Local Routing Model 1: Decision variable = X [a,b] 2: Number of variables = a x b. So in the first model of this approach, the decision variable is: 3: X [flows, connecting points]. For the 10 x 10 grid: 4: Flows = 1, connecting points = 36. So the number of variables becomes: 5: Variables = flows x connecting points 6: Variables = 1 x 36 = 36. For a simple 10 x 10 grid, the number of variables used in the dual model node based routing can be calculated by using Algorithm 1; the local routing model of the dual model node based routing uses 36 variables. The number of variables in the global routing model can be calculated as presented in Algorithm 2. Algorithm 2 : Global Routing Model 1: Decision variable = X [a,b] 2: Number of variables = a x b</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Figure 8. These escaped pins are connected through the second model as shown in Figure 9.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,392.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,392.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | " Rebuttal Letter
Allow us to thank the reviewers for their valuable comments; we have tried to address all comments to improve the quality of the work and its presentation.
This document consists of two sections. The first section is a tabular response to each comment, and the second section further elaborates the changes made in the manuscript against each comment. We have used different colors to highlight the changes made for each reviewer; red, blue, and green correspond to the comments of Reviewers 1, 2, and 3, respectively.
1. Tabular Response
1.1 Reviewer 1
Reference
1.
There are many grammatical, typographical, and sentence construction errors here and there throughout the paper. In the following, I have listed some such problems. The authors are required to fix the rest of the similar problems and fix those before the resubmission.
1a. Please use one convention throughout the documents for abbreviation, i.e., Printed circuit boards (PCBs) or printed circuit boards (PCBs).
Abstract (Page 1)
Related work (1st & 2ndparagraph)
Page 4 (2nd last paragraph)
1b.Abbreviate one time and then use the abbreviation in the rest of the documents.
Abstract (Page 1)
Page 1 (Last paragraph)
Related work (1st & 2ndparagraph)
1c. Use one convention for referencing or citing Algorithms 1 or algorithm1
Page 2, bullet points
Page 8 (2nd last paragraph)
Page 9 (1st & 2nd paragraph)
1d. There are many complex sentences in the updated version of the manuscript, please make them small and clear to understand the meaning, i.e., First sentence of the last paragraph of page 8.
Page 10 (para at line 374)
Page 9 (para at line 349)
1e. Use the same pattern in the paper, i.e., use comma before every “and” if using at the start.
Related work (1st & 2nd paragraph)
Page 10- Results and Discussions - 1st paragraph
2.
Please compare your model efficacy with state-of-the-art model, i.e., when simultaneous Escape Routing is escaping of circuit pins simultaneously from inside two or more pin arrays
Page 12 (para at line 467)
3.
Please provide or figure in which you compare Gurabi, Minos and Bonmin in terms of time and integrity, which will strengthen your selection.
Page 10- Results and Discussions – 2nd paragraph
4.
Please provide figures in the results section that show time consumption and memory consumption.
Table 3 inserted
Page 12 (para at line 472)
5.
Authors claim that they achieve 100% routability but statistically it is not the correct statement. You can write 99.9% but not 100%.
Updated on multiple locations in abstract, page 2, page 4, page 5, and page 6
6.
Please plot some graphs to check the scalability of your proposed techniques.
Page 12, 2nd paragraph
7.
I suggest to please add flowchart also so it would be easy to understand your work.
Two flowchart Figures are added in supplementary section
1.2 Reviewer2
Reference
1.
INTRODUCTION states clearly what are the context, challenges and contributions
We thank the reviewer
2.
Quite sufficient relevant works and their limitations are posed and well-organized for the problem formulation of this paper.
We thank the reviewer
3.
Figures such as Figure 10 and 11 are suggested to use better quality in its resolution.
Figure 10 and 11 have been enhanced
4.
In RESULTS AND DISCUSSIONS, result statistics of test cases are better to be made into a table for a clear view of comparison with others’ previous works.
Table 3 has been inserted for comparison
5.
In INTRODUCTION and RELATED WORK, cross usage of abbreviations and full names are not preferred, such as PCBs vs. Printed Circuit Boards and SER vs. Simultaneous Escape Routing.
Printed Circuit Board has been used in full form the first time, and PCB is used afterwards.
SER has been used throughout the paper after introducing it for the first time.
Page 3-2nd and 3rd paragraph and many other places
6.
In DUAL MODEL NODE BASED ROUTING, algorithms are well explained and unambiguous.
We thank the reviewer
7.
The algorithm was designed in a way to ensure 100% routability with its proper constraints.
We thank the reviewer
8.
The lack of evaluation of the global routing model in both Medium Grid and Large Grid cases make the conclusion weak in Line 443-445 and Line 461-462.
Page 11- Last paragraph
9.
In Line 386-391 in RESULTS AND DISCUSSION, lack of provision of statistics from different solvers Bonmin and Minos as a proof of the advantages of utilizing Gurobi solver.
Page 10-4th and 5th paragraph
10.
Not all cases provided in Wu and Wong(2013) were tested in this paper, it is hard to conclude that the proposed algorithm outperforms over the one in Wu and Wong(2013) as stated in Line 463-464.
Page 12 (line 460-466)
11.
Different solver used than Wu and Wong(2013), the comparison of experiment results don’t have exactly the same base line.
Page 11- Last paragraph (Line 431-435)
12.
The paper claims significant improvement in both the memory consumption and runtime in comparison to the previous work Wu and Wong(2013), however, the machine specification used on NEOS server was not specified.
Page 10 (Line 393-398)
13.
The contribution stated in Line 90-91 is not convincing as over half of the cases are not tested for global routing model in Line 436-466.
Page 11(Line 437-445)
14.
The paper sounds promising in the beginning in INTRODUCTION and the background study is quite sufficient. The overall algorithm design is proper and unambiguous. However, there is a room for improvement in the experimental design. The experimental design makes the contribution of the paper unconvincing due to the incompletion of case testing on global routing and the test environment difference between the one in others’ work.
Page 10-11
15.
If academic version of Gurobi solver has a limitation in constraints, you may need to consider other solvers for a full comparison in both the local and global routing models instead of leaving the global routing model untested for over half of the test cases. Or it would be more convenient if you could borrow the commercial license from some other institutes. The lack of test results makes the conclusion and contribution really weak.
Page 11 & 12
16.
Machine specification is also a concern when comparing the runtime with others.
Page 10 (Last paragraph)
1.3 Reviewer3
Reference
1.
The research work is interesting and is organized well.
We thank Reviewer 3
2.
The manuscript require professional language editing.
The manuscript has been edited.
3.
The design and analysis is sound
We thank Reviewer 3
4.
Abstract and conclusion need to be rewritten for easy and clear understanding.
Abstract and Conclusion have been updated
2. Descriptive changes in Manuscript
2.1 Reviewer #1
1. There are many grammatical, typographical, and sentence construction errors here and there throughout the paper. In the following, I have listed some such problems. The authors are required to fix the rest of the similar problems and fix those before the resubmission.
1a. Please use one convention throughout the documents for abbreviation, i.e., Printed circuit boards (PCBs) or printed circuit boards (PCBs).
PCBs are widely used for the modern age electric circuit’s fabrication. The design of a PCB plays a vital role in the performance of electric circuits. The ever evolving technology has not only reduced the size of ICs that are to be placed on the PCBs but has also increased the pin count of ICs. This makes the process of manual PCB routing a very hectic task and demands automated PCB routing. The problems that are to be solved in the PCB routing are escape routing, area routing, length matching, and number of layers that are to be used. Many studies in the literature have addressed these routing problems related to the PCBs.
Some studies have proposed the heuristics algorithms to obtain a length matching routing Zhang et al. (2015b) and others have used differential pair escape Li et al. (2012, 2019), single signal escape routing Wu and Wong (2013) or both Wang et al. (2014) for escape routing along with addressing the length matching problem. The most notable technique used for the PCB routing is optimization. There are various studies in literature that have mapped different PCB routing problems to the optimization problem such as longest common interval sequence problem Kong et al. (2007), multi-layer assignment problem Yan et al. (2009), network flow problem Sattar and Ignjatovic (2016), pin assignment and escape routing problem Lei and Mak (2015), and maximum disjoint set of boundary rectangles problem for bus routing Ahmadinejad and Zarrabi-Zadeh (2016). A recent study has proposed a routing method that is based upon maze and it uses a hierarchical scheme for bus routing and the proposed method uses a rip-up and re-route technique as well in order to improve the efficiency Chen et al. (2019).
1b. Abbreviate one time and then use the abbreviation in the rest of the documents.
Simultaneous Escape Routing (SER) is the escaping of circuit pins simultaneously from inside two or more pin arrays. This is more difficult than routing in a single array and has not been addressed by previous studies. The increase in pin array complexity has made manual SER in PCBs a very inefficient and tedious task, and there is a clear need for automated routing algorithms. In
this work we propose a network flow based optimal algorithm that uses an integer linear program to solve the simultaneous escape routing problem and the area routing problem in two stages. In the first stage, pins are escaped to the boundaries of the pin arrays simultaneously. These escaped pins are connected with
each other in the second stage. We tested the proposed algorithm for different benchmark grid sizes, and the results show that it is not only better in terms of routability but also outperforms existing state-of-the-art algorithms in terms of time consumption. The existing algorithms either fail to achieve high routability or have larger time complexities, whereas the proposed algorithm achieves 99.9% routability
and is also independent of grid topology and component pin arrangement, which shows the superiority of the proposed algorithm over the existing state-of-the-art algorithms.
1c. Use one convention for referencing or citing Algorithms 1 or algorithm1
• Proposal of an algorithm for area routing that uses an integer linear program and provides optimal solution
• Linkage between the proposed algorithms to obtain the end-to-end routing.
For a simple 10 x 10 grid, the number of variables used in the dual model node based routing can be calculated by using algorithm 1. Local routing model of the dual model node based routing used 36 variables. The number of variables in the global routing model can be calculated as presented in algorithm 2.
The local routing algorithm of the dual model node based routing is explained in algorithm 3 and it uses the constraints which are detailed in algorithm 3.
The global routing algorithm of the dual model node based routing is explained in algorithm 4.
1d. There are many complex sentences in the updated version of the manuscript, please make them small and clear to understand the meaning.
The integrality is relaxed by the Minos solver in order to produce optimal results and this loss of integrality is not recovered at the output due to which the results are not desirable in some cases. Integers are replaced with decimal numbers in some cases which is not feasible for the proposed model as integrality must be maintained. Strict integrality is maintained by the Bonmin solver, on the other hand, which is also not feasible for the proposed model as too much time is taken to find out the optimal solution. The solution of these two problems is provided by Gurobi. The integrality is relaxed by Gurobi during the problem solution and it is recovered when the results are generated. In this way, less time is consumed to find optimal results with strict integrality. The time consumption of each solver depends upon the complexity of the problem. We selected Gurobi through hit and trial because different solvers are suitable for different problems. In our case, Gurobi is the best possible selection as it retains the integrity and also takes less time as compared to other solvers.
Now, we have routed the source points towards the boundary points but we also need an equation that routes the destination points to the boundary points. Therefore, we use the same equations for the destination points which were used for the source points. The equation 6 and equation 7 make sure that the destination points of the flow are selected along with one of their neighbors. This is done to make sure that the route starts from the destination point and moves towards one of its neighbors.
The local routing model has selected the boundary points for both the source and destination points. The local routing model has provided these boundary points to the global routing model. Now, it is the responsibility of the global routing model to connect these boundary points together. The first step is to select these boundary points and the equations 12 and 13 ensure that the boundary points provided by the local routing model are selected in the global routing model.
1e. Use the same pattern in the paper, i.e., use comma before every “and” if using at the start.
The problems that are to be solved in the PCB routing are escape routing, area routing, length matching, and number of layers that are to be used. Many studies in the literature have addressed these routing problems related to the PCBs.
There are various studies in literature that have mapped different PCB routing problems to the optimization problem such as longest common interval sequence problem Kong et al. (2007), multi-layer assignment problem Yan et al. (2009), network flow problem Sattar and Ignjatovic (2016), pin assignment and escape routing problem Lei and Mak (2015), and maximum disjoint set of boundary rectangles problem for bus routing Ahmadinejad and Zarrabi-Zadeh (2016).
The dual model node based routing is based upon Integer Linear Programming approach and for Integer Linear Programming approach, Gurobi, Bonmin, and Minos are a few good choices.
2. Please compare your model efficacy with state-of-the-art model, i.e., when simultaneous Escape Routing is escaping of circuit pins simultaneously from inside two or more pin arrays.
Table 3 shows that the proposed dual model node based routing method is great in terms of routability and it also takes very less time and memory to provide the routing solution. We have compared our model's efficacy with state-of-the-art models and our model outperforms state-of-the-art in the terms of time consumed. The memory consumed by the proposed dual model node based routing method is also considerably small which shows that no high end machines are required to run this model.
3. Please provide or figure in which you compare Gurabi, Minos and Bonmin in terms of time and integrity, which will strengthen your selection.
The time consumption of each solver depends upon the complexity of the problem. We selected Gurobi through hit and trial because different solvers are suitable for different problems. In our case, Gurobi is the best possible selection as it retains the integrity and also takes less time as compared to other solvers.
4. Please provide figures in the results section that show time consumption and memory consumption
We have inserted Table 3 for this purpose:
5. Authors claim that they achieve 100% routability but statistically it is not the correct statement. You can write 99.9% but not 100%.
Either too much time is consumed by these solutions or the achieved routability is less than 99.9 percent.
99.9 percent routability for pins,
Results have been compared with Cadence PCB router Allegro and it was found that proposed router is able to achieve 99.9 percent routability in case of such circuits which are not routed completely by the Allegro. Time consumption is also less, but the issue is that there are some examples in which the proposed router is not able to achieve 99.9 percent routability while Allegro is able to achieve. It is proposed that the router should be used in collaboration with Allegro so that all the problems can be solved.
The studies in literature have used randomized approaches and heuristic algorithms to solve the escape routing problem which have not been able to reduce time complexity, achieve 99.9 percent routability, and provide optimal solutions. The problem of SER has been solved in this study through proposal of an optimal algorithm that not only achieves the 99.9 percent routability but also consumes very less time and memory to find out the optimal solution
6. Please plot some graphs to check the scalability of your proposed techniques.
The results for small, medium, and large grid are shown in Table 3. It can be seen that as the number of pins increases, our proposed model takes less and less time as compared to Wu and Wong (2013). This also shows the scalability of our model. As the number of pins is increasing, our model is taking less time and hence, it would still be suitable if number of pins is scaled up.
7. I suggest to please add flowchart also so it would be easy to understand your work.
Two flow chart figures have been added for this purpose in Supplementary Section.
2.3 Reviewer #2
1. INTRODUCTION states clearly what are the context, challenges and contributions
We thank the reviewer for the comment.
2. Quite sufficient relevant works and their limitations are posed and well-organized for the problem formulation of this paper.
We thank the reviewer for the comment.
3. Figures such as Figure 10 and 11 are suggested to use better quality in its resolution.
Figure 10 and 11 have been enhanced.
4. In RESULTS AND DISCUSSIONS, result statistics of test cases are better to be made into a table for a clear view of comparison with others’ previous works.
Table 3 has been inserted for this purpose:
5. In INTRODUCTION and RELATED WORK, cross usage of abbreviations and full names are not preferred, such as PCBs vs. Printed Circuit Boards and SER vs. Simultaneous Escape Routing.
Printed Circuit Board has been used in full form the first time, and PCB is used afterwards. SER has been used throughout the paper after it is introduced for the first time.
Some studies have proposed the heuristics algorithms to obtain a length matching routing Zhang et al. (2015b) and others have used differential pair escape Li et al. (2012, 2019), single signal escape routing Wu and Wong (2013) or both Wang et al. (2014) for escape routing along with addressing the length matching problem. The most notable technique used for the PCB routing is optimization.
There are different problems to be solved in the PCB routing as discussed earlier and one of the most important problem among these is the escape routing where the pins from inside of an IC are to be escaped to the boundary of IC. Many studies in the literature have proposed solutions for escape routing problem among which some have considered single layer Lei and Mak (2015) in the PCB while, multiple layers Bayless et al. (2016); Zhang et al. (2015a) have also been considered.
There is another research Yan et al. (2011) which uses ordered escape routing in order to solve the SER problem. The SER problem is solved by determining the net order. A bipartite graph is created on the basis of location of escape pins and transformation of net numbers is done. After doing SER, the net numbers are recovered. Basically, the orders of escape nets are found along the boundary in such a way that there is no crossover in the intermediate area. This approach achieves 99.9 percent routability along with reduction in time consumption, but the achieved net order is not routable in all cases. This approach can fail in some specific cases where the ordering proposed by the approach is not routable. Quite recently, net ordering has been incorporated with the SER through proposal of a novel net ordering algorithm in order to reduce the running time Sattar et al. (2017). Recently, we have proposed the solution to this problem through a network flow approach Ali et al. (2017). We proposed the solution to this problem through a network flow approach where we introduced two different models; one is based on the links and called Link based routing, while the other is based on nodes and is called Nodes based routing. They are introduced to solve SER problem in PCBs through optimization techniques.
As per our findings, the SER has been explored by these studies and they have provided some good solutions with limitations. There are some major issues that still seem to be unresolved and they include fixed pattern generation and time complexity. Some studies have proposed solutions that solve many of the issues but in some cases, they could lead to the pin blockage and resource wastage. The studies have used randomized approaches and heuristic algorithms and optimal solution is not provided. In this study, we provide an optimal solution having a time complexity of under a minute along with 99.9 percent routability for the problem of SER in the PCBs
6. In DUAL MODEL NODE BASED ROUTING, algorithms are well explained and unambiguous.
We thank the reviewer for this comment.
7. The algorithm was designed in a way to ensure 100% routability with its proper constraints.
We thank the reviewer for this comment.
8. The lack of evaluation of the global routing model in both Medium Grid and Large Grid cases make the conclusion weak in Line 443-445 and Line 461-462.
We were able to escape 68 pins in the same grid within 0.12 seconds through NEOS server. This shows that all the desired pins have been escaped to the boundary. The memory used for the processing was 68.48 MB. The memory consumption is very less and the time taken to solve the problem of routing is also quite less. The results of escaped pins are shown in Figure 12. After this, the global routing model will connect these escaped pins but it has not been shown because of the reason mentioned in the previous sub section that the academic version of the gurobi cannot handle more than a specified number of constraints. If we use another solver instead of Gurobi, we would not be able to get optimal results. Also, we do not have enough resources to buy the commercial version of Gurobi. We have already tested the global routing model for the smaller cases in order to show that our model works perfectly well.
9. In Line 386-391 in RESULTS AND DISCUSSION, lack of provision of statistics from different solvers Bonmin and Minos as a proof of the advantages of utilizing Gurobi solver.
The dual model node based routing is based upon Integer Linear Programming approach and for Integer Linear Programming approach, Gurobi, Bonmin, and Minos are a few good choices. Gurobi has been preferred over the others because of time complexity and integrity issues.
The time consumption of each solver depends upon the complexity of the problem. We selected Gurobi through hit and trial because different solvers are suitable for different problems. In our case, Gurobi is the best possible selection as it retains the integrity and also takes less time as compared to other solvers.
10. Not all cases provided in Wu and Wong(2013) were tested in this paper, it is hard to conclude that the proposed algorithm outperforms over the one in Wu and Wong(2013) as stated in Line 463-464.
The results for small, medium, and large grid are shown in Table 3. It can be seen that as the number of pins increases, our proposed model takes less and less time as compared to Wu and Wong (2013). This also shows the scalability of our model. As the number of pins is increasing, our model is taking less time and hence, it would still be suitable if number of pins is scaled up. We could not test all the cases provided in Wu and Wong (2013) because of academic solver's limitations and relevancy. We choose the most relevant cases from Wu and Wong (2013) for comparison.
11. Different solver used than Wu and Wong (2013), the comparison of experiment results don’t have exactly the same base line.
The solver used by Wu and Wong (2013) is min-cost flow solver CS2. Although, the solver used by them is different from our solver but the choice of a solver depends upon the model and in our case, Gurobi proved to be the best solver in all cases.
12. The paper claims significant improvement in both the memory consumption and runtime in comparison to the previous work Wu and Wong(2013), however, the machine specification used on NEOS server was not specified.
The proposed algorithm was written in AMPL and solved with the help of Gurobi solver. The Gurobi solver and AMPL software were run on two different machines in order to have a comparison of algorithm running time on different machines. The first machine used for algorithm execution was an Intel Core 2 Duo PC with a processor of 2.10 GHz and a RAM of 2GB. The other machine used for the execution of the algorithm was NEOS server which can be accessed on-line. NEOS server consisted of a 2.8GHz (12 cores) CPU - 2x Intel Xeon X5660, 64GB RAM, and 2x 500GB/2TB SATA drives disk.
13. The contribution stated in Line 90-91 is not convincing as over half of the cases are not tested for global routing model in Line 436-466.
We were able to escape 68 pins in the same grid within 0.12 seconds through NEOS server. This shows that all the desired pins have been escaped to the boundary. The memory used for the processing was 68.48 MB. The memory consumption is very less and the time taken to solve the problem of routing is also quite less. The results of escaped pins are shown in Figure 12. After this, the global routing model will connect these escaped pins but it has not been shown because of the reason mentioned in the previous sub section that the academic version of the Gurobi cannot handle more than a specified number of constraints. If we use another solver instead of Gurobi, we would not be able to get optimal results. Also, we do not have enough resources to buy the commercial version of Gurobi. We have already tested the global routing model for the smaller cases in order to show that our model works perfectly well.
14. The paper sounds promising in the beginning in INTRODUCTION and the background study is quite sufficient. The overall algorithm design is proper and unambiguous. However, there is a room for improvement in the experimental design. The experimental design makes the contribution of the paper unconvincing due to the incompletion of case testing on global routing and the test environment difference between the one in others’ work.
We have added paragraphs on page 10 & 11 in order to support our choices for experimental design.
15. If academic version of Gurobi solver has a limitation in constraints, you may need to consider other solvers for a full comparison in both the local and global routing models instead of leaving the global routing model untested for over half of the test cases. Or it would be more convenient if you could borrow the commercial license from some other institutes. The lack of test results makes the conclusion and contribution really weak.
The academic version of the Gurobi cannot handle more than a specified number of constraints. If we use another solver instead of Gurobi, we would not be able to get optimal results. Also, we do not have enough resources to buy the commercial version of Gurobi. We have already tested the global routing model for the smaller cases in order to show that our model works perfectly well.
16. The machine specification is also a concern when comparing the runtime with others.
The first machine used for algorithm execution was an Intel Core 2 Duo PC with a processor of 2.10 GHz and a RAM of 2GB. The other machine used for the execution of the algorithm was NEOS server which can be accessed on-line. NEOS server consisted of a 2.8GHz (12 cores) CPU - 2x Intel Xeon X5660, 64GB RAM, and 2x 500GB/2TB SATA drives disk. The purpose of using these two machines was to show that the proposed algorithm can be used by PC users as well as server operators. We have mentioned specifications of both the machines so that anyone can easily use the same specifications and check the validity of results and also compare the results with their proposed solution.
2.2 Reviewer #3
1. The research work is interesting and is organized well.
We thank the reviewer for this comment.
2. The manuscript requires professional language editing.
We have edited the paper extensively and tried to improve the English language as well as the overall presentation.
3. The design and analysis are sound.
We thank the reviewer for this comment.
4. Abstract and conclusion need to be rewritten for easy and clear understanding.
Abstract and conclusion have been rewritten as follows:
Abstract:
Simultaneous Escape Routing (SER) is the simultaneous escaping of circuit pins from inside two or more pin arrays. It is considerably more difficult than routing in a single array and has not been addressed by previous studies. The increase in pin array complexity has made manual SER in PCBs a very inefficient and tedious task, so there is a clear need for automated routing algorithms. In
this work, we propose a network-flow-based optimal algorithm that uses an integer linear program to solve the simultaneous escape routing problem and the area routing problem in two stages. In the first stage, pins are escaped to the boundaries of the pin arrays simultaneously. These escaped pins are connected with each other in the second stage. We tested the proposed algorithm on different benchmark grid sizes, and the results show that it is not only better in terms of routability but also outperforms existing state-of-the-art algorithms in terms of time consumption. The existing algorithms either fail to achieve high routability or have larger time complexities, whereas the proposed algorithm achieves 99.9% routability and is also independent of grid topology and component pin arrangement, which shows its superiority over the existing state-of-the-art algorithms.
Conclusion:
This work addresses two problems at the same time: simultaneous escape routing and area routing in PCBs. The main contributions of this research are the mapping of the PCB routing problem to a network flow problem and the proposal of two algorithms, for SER and area routing, using integer linear programs. This work also links the local and global routing algorithms in order to achieve end-to-end routing on a PCB. The proposed algorithms are efficient in terms of routing and time consumption, as they outperform the existing algorithms and achieve 99.9% routability. Currently, we are working on a blend of ordered escape routing with SER to further optimize routability and improve time consumption. We also aspire to scale this algorithm to dual-layer and multi-layer PCBs.
" | Here is a paper. Please give your review comments after reading it. |
71 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware that is built into most modern personal computers. This scenario implies specific technical restrictions, which led to solutions combining both technical and artistic aspects of the installation.</ns0:p><ns0:p>The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Artistic interactive musical installations, like Aether <ns0:ref type='bibr' target='#b14'>(Sanchez and Castro, 2014)</ns0:ref> and Intrium <ns0:ref type='bibr' target='#b6'>(Guisan, 2005)</ns0:ref>, are devices that allow an audience to interact with a sonic environment or musical concept using electronic sensors. In some cases, the installation is built so as to augment the interaction between the public and a specific environment, as in the well-known piano staircase <ns0:ref type='bibr'>(TheFunTheory, 2009)</ns0:ref>, an installation in which each step of a staircase behaved like a piano key, thus causing music to be played when the audience went downstairs and upstairs. More recently, modern motion sensors have enabled new possibilities of musical performance and interaction <ns0:ref type='bibr' target='#b7'>(Jung et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b1'>Chen et al., 2014)</ns0:ref> by mapping movements into musical responses.</ns0:p><ns0:p>Interactive musical devices present both artistic and technological challenges <ns0:ref type='bibr' target='#b4'>(Garnett, 2001)</ns0:ref>. They create the possibility of generating music according to a dance, instead of constraining dance to a predefined musical piece <ns0:ref type='bibr' target='#b11'>(Morales-Manzanares et al., 2001)</ns0:ref>. Hence, they bring to the public a technology-enabled experience that is perceptibly different from simply listening to music or dancing to a recording.</ns0:p><ns0:p>Nevertheless, most installations are expensive artifacts that must be mounted by a well-trained team. This restricts their cultural experience to specific environments, such as art galleries, museums or particular events. Therefore, the cultural transformation derived from the interaction with a novel music concept reaches a limited audience.</ns0:p><ns0:p>The installation proposed in this article, namely MOTUS, aims at being deployed for a broad public. This is achieved by combining a web-deployed software stack, minimal hardware requirements and simple, yet engaging, methods for interaction. As a result, the experience provided by MOTUS is made accessible to any person with an internet connection and a laptop with a webcam.</ns0:p><ns0:p>The installation uses a camera as a sensor device, and a simple motion detection algorithm <ns0:ref type='bibr' target='#b19'>(Wirayuda et al., 2013)</ns0:ref> to characterize the audience's movements. The musical generation, based on Markov chains <ns0:ref type='bibr' target='#b16'>(Schulze and van der Merwe, 2011;</ns0:ref><ns0:ref type='bibr' target='#b13'>Pachet, 2002;</ns0:ref><ns0:ref type='bibr' target='#b2'>Cope, 1997)</ns0:ref>, aims at converting the detected movement intensity into the intensity of the musical manifestation without requiring previous musical knowledge from the audience. The installation software also comprises auditory and visual feedback, which may use the laptop's hardware (screen and speakers) or external devices such as sound reinforcement systems and projectors.</ns0:p><ns0:p>The remainder of this article is organized as follows. First, related work is presented in Section 2, followed by a discussion of the artistic concepts behind the development of MOTUS in Section 3. In Section 4, MOTUS is thoroughly described from both the artistic and the technical points of view. Further discussion, based on interactions with the audience, is conducted in Section 5.
Finally, Section 6 presents concluding remarks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>A great number of interactive art installations have been constructed in the last decade. Each of them embodies an underlying purpose, which is often discussed in academic publications. Some are especially related to MOTUS, as discussed below. <ns0:ref type='bibr' target='#b0'>Birchfield et al. (Birchfield et al., 2006)</ns0:ref> brought forward the question of the placement of an installation and its impact on the usage of a public space. After implementing sonification of a bus stop in a busy street, it was observed that the general public often feels self-conscious about producing sounds in this environment. Hence, audience engagement is an important, non-trivial issue to be considered in installations.</ns0:p><ns0:p>A possible technique to achieve audience engagement is to develop a specific space for the installation, providing both auditory and visual stimuli <ns0:ref type='bibr' target='#b9'>(Kobori et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b17'>Seo and Corness, 2007)</ns0:ref>. However, as observed in the Piano Staircase <ns0:ref type='bibr'>(TheFunTheory, 2009)</ns0:ref>, audience engagement may happen even if the installation is placed in a public space. This indicates that the placement of the installation does not, by itself, determine audience engagement.</ns0:p><ns0:p>In the evaluation of the interactive dance installation Hoppsa Universum <ns0:ref type='bibr' target='#b8'>(Kallblad et al., 2008)</ns0:ref>, it was shown that the audience's perception was frequently described with expressions like it was fun or being with friends. Later, Schacher <ns0:ref type='bibr' target='#b15'>(Schacher, 2009)</ns0:ref> noted that audience engagement is related to a fast understanding of the interaction model, which may restrict the usage of more complicated interfaces or algorithms.</ns0:p><ns0:p>Morreale et al. <ns0:ref type='bibr' target='#b12'>(Morreale et al., 2013)</ns0:ref> presented an algorithm, namely Robin, capable of generating piano music from the spatial position of members of the audience. The algorithm uses a rule-based system that models Western piano-style music, and may be used by untrained (non-musician) members of the audience. It was presented in an installation that was well received, with high acceptance rates.</ns0:p><ns0:p>MOTUS considers all of these aspects but, unlike the work discussed above, it does not require special hardware (other than that present in most current laptops) or preparations to be used. It aims at being easily used, including by an untrained audience, which is reflected in the simplicity of its interaction model, and its software is deployed as a web application, so it can be readily used in private spaces. MOTUS is thoroughly described in the next section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>ARTISTIC CONCEPT</ns0:head><ns0:p>MOTUS was first conceived around the idea of converting movements into music using a camera. Its name comes from the Latin word for 'motion'. This section describes the artistic concepts over which it was constructed.</ns0:p><ns0:p>The musical concept behind MOTUS was derived from improvised genres, like Free Jazz and some styles of ethnic Drum Circles. During an improvisation session, it is important to perceive the other members of the ensemble and create some form of communication with them. In this context, elements such as harmony and rhythm may be transformed to fit the communication process that emerges in each session.</ns0:p><ns0:p>According to the model presented by <ns0:ref type='bibr' target='#b3'>Dubberly et al. (Dubberly et al., 2009)</ns0:ref>, this type of interaction is mediated by the intention of each agent. This means that the correspondence to an intention is, for the improvisation group, more important than the achievement of technical precision. Therefore, MOTUS uses a music generation model that responds to the audience's intention.</ns0:p><ns0:p>For the construction of the interactive system, this intention must be assigned to control a measurable aspect of the generated music. Since MOTUS is intended to be used by an untrained audience, the musical aspect controlled by the audience's intention must be simple to understand. For this reason, the audience's intention was assigned to control the musical intensity.</ns0:p><ns0:p>To evaluate the audience's intention using the webcam, it was necessary to estimate the intensity of the captured movements. Instead of mapping particular movements to specific sonic representations, a general movement intensity was measured using pixel-by-pixel differences. This allows the audience to explore not only the interaction with MOTUS, but also the diverse possibilities of using their bodies, interacting with friends or using objects.</ns0:p><ns0:p>With the goal of inducing broader movements, the video area was divided into different regions, each related to a sonic representation. The audience can visualize the video feed, with a color scheme that highlights the regions that are most active. In addition to the aesthetic appeal, this feedback helps in understanding the interaction process.</ns0:p><ns0:p>For this same reason, piano sounds were used for audio rendering. They have the goal of being easy to recognize, as most of the general audience (at least in Western countries) is familiar with the instrument. The installation is described from a more technical point of view in the next section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>THE INSTALLATION</ns0:head><ns0:p>The main concern when developing MOTUS was that it could be used by as many people as possible.</ns0:p><ns0:p>Steps towards this goal were taken by requiring as little external hardware as possible and by deploying the software as a web application. The hardware necessary to mount the installation was restricted to that available in a common laptop, i.e., a webcam, a video screen and internal speakers, leading to an overall system as described in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p><ns0:p>The deployment problem can be solved by using JavaScript as the main programming language. It can be used to deploy the application directly in a host web browser. However, this choice also poses a performance restriction, as JavaScript applications are usually slower than native (compiled) programs.</ns0:p><ns0:p>On the artistic side, the concept behind MOTUS is that it should convert movement to music. This conversion means that user movements should trigger a musical response, and more intense movements should correspond to a more intense musical response. Therefore, two subsystems are necessary, one comprising a movement detection algorithm and another one containing a musicological model that generates musical responses.</ns0:p><ns0:p>Also, it quickly became clear that video feedback of the detection process could improve the audience's experience. This happens because the visual information allows the user to understand and appropriate their interaction with a novel musical device. As a result, a greater level of immersion could be provided.</ns0:p><ns0:p>Therefore, MOTUS can be detailed in a block diagram as shown in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. All blocks in gray are software, and will be executed within the computer shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The following sub-sections will present a thorough description of the movement detection, video rendering, the musicological model and the audio rendering system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Movement detection</ns0:head><ns0:p>The movement detection process applied in MOTUS is very simple, as the web-based implementation does not allow for computationally demanding algorithms. The algorithm begins with the calculation of the value v_p of each pixel p as the sum of its red, green and blue channels, as is common practice in computer vision algorithms <ns0:ref type='bibr' target='#b18'>(Szeliski, 2010)</ns0:ref>. Hence, it may be expressed by:</ns0:p><ns0:formula xml:id='formula_0'>v_p = r_p + g_p + b_p.</ns0:formula><ns0:p>(1)</ns0:p></ns0:div>
<ns0:div><ns0:p>In this algorithm, it is more important to detect the intensity of movements than the precise movement location. Such a parameter can be estimated using the mean absolute difference between the pixel values in a frame t and those in the previous frame t − 1 <ns0:ref type='bibr' target='#b10'>(Moeslund, 2012)</ns0:ref>, that is:</ns0:p><ns0:formula xml:id='formula_1'>µ[t] = (∑_{p=1}^{P} |v_p[t] − v_p[t − 1]|) / P,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where P is the number of pixels in the frame.</ns0:p><ns0:p>Calculating the amount of movement over the whole video feed, however, does not allow different types of movements to produce different interactions with the installation. Therefore, the video input was first split into four different partitions. Each partition has its own movement intensity estimate and, as will be seen later, is related to a different part of the interaction experience.</ns0:p><ns0:p>In preliminary tests, it was noticed that µ[t] changes too quickly, which gives an impression of chaos and lack of control. Hence, it was necessary to apply a filter to each µ[t] signal before using it for further purposes. An attack-release filter was applied, using the following expression:</ns0:p><ns0:formula xml:id='formula_2'>μ[t] = α µ[t] + (1 − α) μ[t − 1], if µ[t] > μ[t − 1]; μ[t] = β µ[t] + (1 − β) μ[t − 1], if µ[t] ≤ μ[t − 1].</ns0:formula><ns0:p>(3)</ns0:p><ns0:p>The attack-release filter acts as a low-pass filter whose cut-off frequency differs depending on whether the input signal is higher or lower than the last output. Higher values of the α and β coefficients correspond to shorter attack and release times, respectively. They were manually adjusted so that the resulting interaction was as smooth as desired.</ns0:p><ns0:p>Hence, the result of the movement detection process is a set of four movement estimates μ[t], one for each partition. This result was used both in the musicological model and in the video rendering process, as will be discussed later.</ns0:p></ns0:div>
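A minimal sketch of this detection step is given below in Python with NumPy, for illustration only; the installation itself runs as JavaScript in the browser, and the partition layout, the α and β values and the frame size used here are assumptions rather than the values tuned for MOTUS.

```python
# Illustrative Python/NumPy version of the movement detection of Section 4.1.
import numpy as np

ALPHA, BETA = 0.5, 0.05          # attack and release coefficients (assumed values)

def pixel_values(frame_rgb):
    """v_p = r_p + g_p + b_p (Expression 1)."""
    return frame_rgb.astype(np.float64).sum(axis=2)

def partition_movement(prev_frame, curr_frame, rows=2, cols=2):
    """Mean absolute pixel difference (Expression 2), one value per partition."""
    diff = np.abs(pixel_values(curr_frame) - pixel_values(prev_frame))
    h, w = diff.shape
    return [diff[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols].mean()
            for r in range(rows) for c in range(cols)]

def attack_release(mu, mu_smoothed):
    """Attack-release filter (Expression 3): fast rise, slow decay."""
    coeff = ALPHA if mu > mu_smoothed else BETA
    return coeff * mu + (1.0 - coeff) * mu_smoothed

# Example with random frames standing in for two webcam captures:
prev = np.random.randint(0, 256, (120, 160, 3))
curr = np.random.randint(0, 256, (120, 160, 3))
smoothed = [0.0, 0.0, 0.0, 0.0]
for i, mu in enumerate(partition_movement(prev, curr)):
    smoothed[i] = attack_release(mu, smoothed[i])
print(smoothed)
```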
<ns0:div><ns0:head n='4.2'>Video rendering</ns0:head><ns0:p>The visual feedback provided by MOTUS aims at two correlated, but different, goals. The first is to yield feedback on what the system is doing, that is, what is being detected. The second is to make the audience experience more immersive and engaging.</ns0:p><ns0:p>Three dimensions of the system's inner mechanisms were chosen to be conveyed: the captured image values v_p as in Expression 1, the differences between the current frame and the previous frame (|v_p[t] − v_p[t − 1]|), and the final detected movement intensity in each partition μ[t] as in Expression 3. To allow the audience to clearly visualize each aspect of the interaction, these parameters were mapped to different colors. These colors were arbitrarily chosen to be blue, red and green, which gives the feedback video a particular aesthetic character.</ns0:p><ns0:p>As stated before, the values of each frame were mapped to the blue channel of the feedback video. The blue color, then, becomes dominant at almost all times, which gives the installation a general feeling of blue. As a consequence, blue becomes a color related to musical rest. Each pixel's absolute difference to the previous frame was mapped to the red channel. This caused a red 'ghost' to appear at points where strong movements were detected, indicating that an interaction was detected. This piece of visual feedback is bound to the user and becomes subtle when compared to other cues.</ns0:p><ns0:p>The amount of movement μ[t] in each frame partition was mapped to the green channel of the corresponding pixels. This aimed at helping the audience to relate movements to sounds, as a particular category of sonic responses would be clearly correlated with specific blinks in a region of the screen. This piece of visual feedback is strongly correlated with the musicological model employed, as will be seen below.</ns0:p><ns0:p>A screenshot of the video feedback in action is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>, converted to grayscale to ensure visibility in printed media. As can be seen, screen areas in which there is more movement are highlighted, and it is possible to visualize both the body movement detection and the activation of screen areas related to musical responses. Thus, the audience's impact on the audiovisual environment is easily visualized.</ns0:p></ns0:div>
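The colour mapping can be sketched as follows, again in Python/NumPy and only as an illustration of the idea (the actual rendering happens in the browser); the scaling factors and the toy partition masks are assumptions.

```python
# Illustrative colour mapping for the feedback video of Section 4.2 (assumed scaling).
import numpy as np

def render_feedback(values, frame_diff, partition_mu, partition_masks):
    """Blue <- captured values v_p, red <- |v_p[t] - v_p[t-1]|, green <- smoothed movement."""
    h, w = values.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    rgb[..., 2] = values / 3.0                         # blue: v_p rescaled to 0..255
    rgb[..., 0] = np.clip(frame_diff, 0, 255)          # red: per-pixel difference 'ghost'
    for mu, mask in zip(partition_mu, partition_masks):
        rgb[..., 1][mask] = min(4.0 * mu, 255.0)       # green: partition movement estimate
    return rgb.astype(np.uint8)

# Tiny usage example: a 4x4 image split into a left and a right partition.
vals = np.full((4, 4), 300.0)
diff = np.zeros((4, 4))
left = np.zeros((4, 4), dtype=bool)
left[:, :2] = True
print(render_feedback(vals, diff, [10.0, 40.0], [left, ~left])[..., 1])
```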
<ns0:div><ns0:head n='4.3'>Musicological model</ns0:head><ns0:p>The generation of musical sequences was done by means of four musicological models, each receiving as input the amount of movement in a different video partition. In all cases, the model should yield a musical manifestation that is perceived as more intense when movements in that partition are more intense. Also, this correlation should be perceived almost immediately.</ns0:p><ns0:p>In addition to that, the models were built so that no strong sensation of downbeat would emerge, hence avoiding inducing the audience to perform known popular dance moves and favoring the exploration of different body movements. The sensation of closure commonly found in tonal music (e.g. in I-IV-V-I sequences) was also avoided, preventing the comparison of the generated music with known pieces and also favoring experimentation. To keep the interaction more interesting, each partition was bound to a different musicological behavior, which aimed at inducing the audience to explore the whole interactive space.</ns0:p><ns0:p>An aesthetic choice that fits all of these requirements was to make all models yield sequences of musical notes, which is a musical paradigm that is easily recognized by most of the audience. When the sequences are required to be more intense, their notes become increasingly faster and louder. In order to make all musicological models yield sequences that sound as part of the same piece, they were all bound to the same octatonic scale, and differences were added in the way each model creates a path within that scale.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, each generative system is independent from the others. They correspond to four different voices, namely upper, middle, harmony and bass. All of them yield note sequences, which are later rendered. One of the sequence generation models relies on a Markov chain <ns0:ref type='bibr' target='#b2'>(Cope, 1997)</ns0:ref>, adjusted so that the next note is equally likely to be equal to the previous note, a step down, or a step up the scale. This model was used in the upper and the middle voices, which were also restricted to particular note ranges. The note range restriction allows users to quickly recognize each one of the voices.</ns0:p><ns0:p>The other sequence generation model was the purely random choice. In the lower voice, a random note from the scale (within the range restriction) was yielded at each interaction. In the harmony voice, two random notes from the scale (also within the range restriction) were yielded each time.</ns0:p><ns0:p>All four voices had different functions to transform the input (the movement intensity in the corresponding partition) into values of note speed and loudness, so that more intense movements are mapped to faster and louder notes. These functions were manually adjusted to provide a balanced auditory response for each space partition, as well as an interesting experience. In all cases, it proved to be useful to apply a lower-bound filter on the input, below which it is considered noise and does not produce any sonic response.</ns0:p><ns0:p>As a result, MOTUS quickly responds to the audience's actions. It yields a sequence of notes that are almost always dissonant and out of sync with each other. Nevertheless, the note sequences aim to be perceived as correlated with the audience's movements.
This design fits the employed technology (JavaScript), which is known for having imprecise timing mechanisms in current implementations. As note lengths are continuous and not bound to the notes in other voices, the lack of synchronization does not harm the final result. This also allowed the audio rendering process to be performed by independent agents, as will be discussed below.</ns0:p></ns0:div>
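A minimal sketch of this note-generation logic is shown below in Python, for illustration; the scale root, the noise threshold and the mapping constants are assumptions, and the installation implements the equivalent logic in JavaScript.

```python
# Illustrative note-sequence generation for Section 4.3 (assumed scale root and constants).
import random

OCTATONIC = [60 + i for i in (0, 2, 3, 5, 6, 8, 9, 11)]   # one octave, as MIDI pitches

def markov_voice(scale, n_notes):
    """Upper/middle voices: the next degree is equally likely to stay, step down or step up."""
    idx = random.randrange(len(scale))
    notes = []
    for _ in range(n_notes):
        idx = max(0, min(len(scale) - 1, idx + random.choice((-1, 0, 1))))
        notes.append(scale[idx])
    return notes

def intensity_to_note(movement):
    """More movement -> faster (shorter) and louder notes; silence below a noise floor."""
    if movement < 0.05:                        # lower-bound noise filter (assumed value)
        return None
    duration = max(0.1, 1.0 - 0.8 * movement)  # seconds per note
    loudness = min(1.0, 0.2 + 0.8 * movement)
    return duration, loudness

print(markov_voice(OCTATONIC, 8), intensity_to_note(0.7))
```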
<ns0:div><ns0:head n='4.4'>Audio rendering</ns0:head><ns0:p>The audio rendering process was based on agents that receive pitch, loudness and duration information from the note sequence generated by the musical models. When a note is finished (i.e., its duration has expired), the system proceeds to render the next note (or notes, in the case of the harmony voice), and so on. To keep the interactive process in real time, the note generation and rendering must be synchronized, so that a request for a new note triggers its calculation.</ns0:p><ns0:p>Since this system should be easy to understand, it was chosen that all voices would be rendered as a piano sound, using sampling. This way, it was expected that the majority of the audience would be able to identify the sounds that came from the system, even when low-quality speakers are used. The rendering system was implemented using a ready-made sequencing library (MIDI.js).</ns0:p><ns0:p>After being built, the system was tested with both online and live audiences. This process provided rich feedback, as will be discussed below.</ns0:p></ns0:div>
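The agent-style scheduling can be sketched as follows, in Python and with a stand-in play_note function in place of the MIDI.js sampler used by the installation; the demo note generator is a placeholder rather than one of the actual voice models.

```python
# Illustrative agent-style note scheduling for Section 4.4.
import random
import threading
import time

def play_note(voice, pitch, loudness, duration):
    # Stand-in for the real sampler; the installation triggers piano samples instead.
    print(f"{voice}: pitch={pitch} loudness={loudness:.2f} duration={duration:.2f}s")

def demo_generator():
    # Placeholder note source; the real voices come from the models of Section 4.3.
    return random.choice((60, 62, 63, 65)), random.uniform(0.3, 1.0), random.uniform(0.2, 0.5)

def voice_agent(voice, next_note, n_notes=4):
    """Render a note, wait until it expires, and only then request the next one."""
    for _ in range(n_notes):
        pitch, loudness, duration = next_note()
        play_note(voice, pitch, loudness, duration)
        time.sleep(duration)

threads = [threading.Thread(target=voice_agent, args=(name, demo_generator))
           for name in ("upper", "middle", "harmony", "bass")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```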
<ns0:div><ns0:head n='5'>AUDIENCE FEEDBACK AND DISCUSSION</ns0:head><ns0:p>MOTUS was displayed both online and for a live audience, which are respectively discussed in Sections 5.1 and 5.2. These are very different situations, as a live context demands a dedicated space for people to move without harming others, a stronger audio system capable of competing with other environmental sounds, and a screen that allows visualization from a few meters away. This is not the case for online displays, which can be visualized from one's living room or office, thus requiring less powerful hardware.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Online</ns0:head><ns0:p>For the online interactions, the system was advertised on social networks, and feedback was obtained both spontaneously and from an optional evaluation form. In total, 20 people voluntarily filled in the evaluation form. The questions in the form covered the audio hardware used, the subjects' prior interest in dancing and in playing instruments or singing, their interest in MOTUS (Question 5), a description of their experience (Question 6), suggestions for future versions (Question 7) and intended future actions (Question 8). In total, 19 volunteer subjects responded to the questionnaire, and the majority (16) classified MOTUS as 'very interesting' or 'extremely interesting' in Question 5. Although this shows the device was well evaluated, it is also interesting to highlight the conditions that led to this evaluation. Therefore, these results were jointly analyzed with the answers regarding the hardware used by each subject and their prior interest in dance and music.</ns0:p><ns0:p>As will be seen, no subject classified MOTUS as 'not interesting'. This is an encouraging result, but it can mean that uninterested subjects simply chose not to answer the questionnaire. Nevertheless, the provided answers gave important insight into the audience's usage and perception of the installation.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the number of subjects with each type of audio reproduction hardware, grouped by their reported interest in MOTUS (all subjects reported using their default screen for the interaction). It may be noted that using laptop (embedded) speakers did not harm the interaction. On the other hand, no subject using headphones reported MOTUS as 'extremely interesting', which may indicate that full-body movements are an important part of the experience.</ns0:p></ns0:div>
<ns0:div><ns0:p>The data indicates that MOTUS was successfully deployed over the web using ubiquitous hardware, as it was designed to be. According to the audience, the use of minimal hardware does not harm the overall experience. However, it is important to detect which aspects impact the subjects' reported interest level.</ns0:p><ns0:p>To investigate this, the reported interest levels were grouped according to the subjects' prior interest in dancing or in playing instruments and singing, as shown in Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. The gathered data shows that users with a greater interest in dancing tend to report a greater interest in MOTUS, but a similar behavior is not observed when considering their interest in playing instruments or singing. This is further evidence that performing body movements is related to a more interesting experience with the installation.</ns0:p><ns0:p>All the subjects chose at least one option from Question 8. This shows that possibilities for future use were considered. As shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, most subjects would like to keep interacting or to recommend the installation to friends, which are indicators of a positive experience. The answers with 'other' regarded using MOTUS in different situations (in a classroom and with a video feed from a landscape), which also points to a positive experience. The experience descriptions (related to Question 6) showed that most subjects first engaged in an exploratory stage, in an attempt to detect the rules governing MOTUS, and then started applying their own repertoire to the interaction. According to the reports, the exploration of one's own sensations and body movements tended to generate more pleasant experiences than attempts to generate specific musical movements. The musical generation was perceived as simple, as most subjects were able to quickly understand it.</ns0:p><ns0:p>The majority of the suggestions for future work provided as an answer to Question 7 point toward changing the installation's musicological model. Also, each suggestion was very different from the others, for example: 'add more instruments', 'I tried to play the blues' and 'I don't like the way notes fade out'. This is an indication that the musicological model, and probably the screen division for interaction, should be freely composed by users, possibly sharing their results. Almost all comments regarding the interaction with the webcam related to the division of the screen. Again, each user had a different suggestion, including increasing the number of divisions and constructing neutral areas that could be used to silently move between other areas. Only one comment suggested the use of a finer motion acquisition algorithm, allowing finger positions to be detected.</ns0:p><ns0:p>The spontaneous feedback was obtained through messages sent by e-mail and on social networks. Most of it expressed pleasant surprise, as the presence of such a web application was found to be novel. It also provided interesting comments regarding the online interaction.</ns0:p><ns0:p>The most common one was a desire to record the interaction and share it on social networks. In an extreme case, a user posted a screenshot of the visual feedback online. This was not implemented in the system, but the demand clearly states a direction for future work.</ns0:p><ns0:p>There was also a demand for porting the system to mobile environments.
This is not possible at the moment because of the reduced processing power of mobile devices. However, it was not tested how the algorithm would behave if implemented as a native application.</ns0:p><ns0:p>Interestingly, some online users did not allow MOTUS to use their camera, which harmed the experience. This happened because the system was mistaken for privacy-invading malware. A more informative website may be used in the future to prevent this from happening.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Live</ns0:head><ns0:p>When shown live, MOTUS was mounted using a big screen for video feedback (either a monitor or a projector, depending on the venue) and an amplifier for audio playback. It was ensured that there would be some free space for movements (as much as possible, which also depended on the venue). Care was taken so that the camera was pointed towards somewhere with no accidental movements (e.g., people passing by or strong shadows from outside) and with more or less uniform illumination, allowing the camera to work properly.</ns0:p><ns0:p>It was found that the interaction with the system made a small part of the audience too self-conscious to engage in participation. Two possible reasons for this are the mirror characteristic of the visual feedback and the awkwardness of executing random movements in a public space. However, this was not true for everyone.</ns0:p><ns0:p>Part of the audience quickly engaged in exploring the limits within which the sonic response of the installation could be controlled. They tended to approach the camera and perform finer arm and hand movements. Some described the sensation of playing an imaginary spatial piano.</ns0:p><ns0:p>The more extroverted part of the audience quickly engaged in exploring different body movements. An interesting interaction appeared when groups of people started interacting with the system together, which is perfectly possible due to the nature of the movement detection algorithm. These interactions generally took a longer time and usually comprised smiles and laughter.</ns0:p><ns0:p>Finally, interesting feedback was given by the audience, especially by those who previously played musical instruments. They clearly expressed frustration with the lack of control possibilities in the interaction, as the same movement is not always related to the exact same musical response. Also, there were comments on the simplicity of the interaction process, which made it boring after a few minutes of exploration.</ns0:p><ns0:p>Although ambient lighting is usually a problem in camera-based movement detection systems, it was found that the system is robust to many different conditions. The system worked under different lighting conditions and presented adequate behavior except when lights blinked. However, the interaction experience changed slightly depending on the colors of the background and of the audience's clothes.</ns0:p></ns0:div><ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>This paper described MOTUS, a digital interactive audio-visual installation that requires only hardware available in most computers and has its software deployed as a web application. The aesthetic concept behind the system is that it should convert movement to music. These premises (the technical deployment and the desired artistic result) led to a series of design and aesthetic decisions, which are thoroughly described.</ns0:p><ns0:p>MOTUS was shown in live performances, as well as on a website. Feedback was collected from the audience using a questionnaire and through spontaneous comments, which made it possible to evaluate how the interaction with the system happened. It was found that this interaction was, most of the time, positive, but it was sometimes found not very engaging, as it does not allow many musical aspects to be explored.</ns0:p><ns0:p>It can be accessed at the URL http://www.dca.fee.unicamp.br/~tavares/auxiliary_material/Motus/index.html, and freely used by anyone. Currently, it requires the Google Chrome browser. The source code is also available online at https://github.com/tiagoft/motus.</ns0:p><ns0:p>The installation system is ready to be presented to large audiences, and there seem to be two clear directions for future work. The first is to allow recording of audio and video, as well as sharing of this content on social networks. The second is to allow users to compose and share their own interaction models, with a broader range of musical material.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>This paper described MOTUS, a digital interactive audio-visual installation that requires only hardware that is available in most computer and has its software deployed as a web application. The aesthetic concept behind the system is that it should convert movement to music. These premises -the technical deployment and the desired artistic result -led to a series of design and aesthetic decisions, which are thoroughly described.</ns0:p><ns0:p>MOTUS was shown in live performances, as well as in a website. Feedback was collected from the audience using a questionnaire and by spontaneous comments, which allowed to evaluate how the interaction with the system happened. It was found that this interaction was, most of the times, positive, but sometimes found not very engaging as it doesn't allow many musical aspects to be explored.</ns0:p><ns0:p>It can be accessed at the URL http://www.dca.fee.unicamp.br/ ˜tavares/auxiliary_ material/Motus/index.html, and freely used by anyone. Currently, it requires the Google Chrome browser. The source code is also available online, at https://github.com/tiagoft/ motus.</ns0:p><ns0:p>The installation system is ready to be presented for large audiences, and there seem to be two clear directions for future work. The first is to allow recording of audio and video, as well as sharing of this content in social networks. The second is allow users to compose and share their own interaction models, with a broader range of musical material.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>reviewing PDF | (CS-2015:03:4249:1:1:NEW 6 May 2015)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Installation overview.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Block diagram describing the installation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Screenshot of the video render.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Block diagram for the musical interactivity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Interest in MOTUS according to audio hardware. 'Other' refers to one subject that reported using a mobile device for the interaction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Interest in MOTUS according to frequency of playing instruments or singing.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Interest in MOTUS according to frequency of artistic activities.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,183.09,291.76,330.86,249.19' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,42.52,204.37,525.00,394.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Number of times each option in Question 8 was chosen.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Action</ns0:cell><ns0:cell>Votes</ns0:cell></ns0:row><ns0:row><ns0:cell>Keep interacting</ns0:cell><ns0:cell>13</ns0:cell></ns0:row><ns0:row><ns0:cell>Recommend to friends</ns0:cell><ns0:cell>13</ns0:cell></ns0:row><ns0:row><ns0:cell>Download as mobile app</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Contribute to next version 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Other</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Rebuttal Letter
Manuscript: 'An interactive audio-visual installation using ubiquitous hardware and web-based
software deployment' (#CS-2015:03:4249:0:0:REVIEW)
Tiago F. Tavares – tiagoft@gmail.com
Dear editor and reviewers,
I would like to thank you for your effort in reading and commenting on my manuscript. I have found all suggestions to be very useful for the improvement of the text. They were all considered, as I discuss below:
Editor's comments
Comment
The review of earlier research must be extended. Reviewer 1's pointers to relevant
conference series and journals should be particularly helpful in updating the literature
review.
Reply
I have expanded the bibliographical review. In particular, I have added a Related Work section, in which I discuss the articles most relevant to this work.
Comment
The point that Reviewer 2 raises about violating the objective of having all hardware
embedded in a single laptop should be addressed.
Reply
This point was clarified in the text. The experimental section was divided into two parts,
respectively discussing the online and the live interactions. I have also added a user
survey for the online part. As a result of this, it becomes clear that using embedded
hardware does not harm the experience provided by Motus beyond practical factors (e. g.,
if a particular environment is too noisy it might require more powerful sound
reinforcement).
Comment
Both Reviewers 1 and 2 ask for further details on evaluation. If it is not possible to
provide a deeper discussion of the evaluation already conducted, then it may be necessary
to do further evaluation. This evaluation can be quantitative or qualitative.
Reply
As stated before, I have conducted an additional online user survey, which has provided
material to formalize and expand the discussion section.
Comment
Both Reviewers 2 and 3 raise questions about the emphasis on technical aspects of the
project over the user experience and the music chosen. Please address these additional
aspects to balance out the technical presentation.
Reply
I have added more references to artistic literature, as well as a section dedicated to
explaining the artistic concepts and the rationale behind Motus.
Reviewer Comments
Reviewer 1 (Anonymous)
Basic reporting
Comment
Analysis of background is based on a few papers and should be extended. For example, I
think that a search in journals such as Computer Music Journal or Journal on New Music
Research or in the proceedings of conferences such as NIME, SMC, and ICMC would
provide useful references for a deeper analysis of the state of the art.
Reply
The literature review was expanded considering both the cited venues and others.
Comment
One or more figures showing the visual output of the installation would be interesting and
useful as well. A short video showing the installation at work could be supplied as
additional material.
Reply
I have added a figure showing the visual output of the installation, and provided a link to
the installation online application. A link to the source code was also provided.
Experimental design
Comment
Whereas, on the one hand, this submission cannot be evaluated using the classical
standards as if it were a scientific experiment, on the other hand I think that a deeper
evaluation would be worth pursuing. Given the possibly large number of people
experiencing the installation, questionnaires for evaluating audience experience with it
(e.g., in terms of engagement, pleasantness, interest, and so on) could be administered and
a statistical analysis could be performed on them. Such an evaluation would make the
paper stronger.
Reply
The evaluation was conducted and added to the discussion section. It has provided great insight into the audience's perception of the experience.
Validity of the findings
Comment
Nevertheless, a more formal and quantitative evaluation of the installation with the
audience would be needed and would provide interesting information about how the
installation is perceived.
Reply
This evaluation was conducted, as stated above.
Comments for the author
Comment
In conclusion, I think this paper addresses an interesting topic, i.e., making audio-visual
musical installations that can be experienced by a large and possibly distributed audience,
being the installation grounded on simple techniques running on low-cost devices.
However, as it is, the paper is quite weak for journal publication: the approach seems
quite naïve and does not sufficiently explore the possibilities that novel technologies open
in this field. For example, several different computer vision techniques could be exploited
and several different mapping could be applied. Such different versions/installations could
be tested (quantitatively) with the audience in order assess e.g., the extent to which they
are interesting, engaging, of artistic value.
Reply
According to the audience feedback, as observed in the user study conducted for this review, the computer vision technique is sufficient for the application. However, it is clear
that different mappings and musical generative algorithms could be tested. Evaluation
data shows that each person has a different idea on what the mapping should be. This
indicates that a direction for future work is to make the device configurable, maybe
through a composing interface, as it is now discussed in the text. If users are allowed to
share their compositions, then a simple evaluation process through star-rating could be
implemented, and this data could hypothetically be used in the future.
Interest and engagement, according to the audience feedback, seems to be more correlated
to their prior interest in exploring body movements than to the mapping itself.
Reviewer 2 (Anonymous)
Basic reporting
Comment
With respect to the musical model, I am confused about the decision of the author to
'avoid inducing the audience to perform known popular dance moves” and 'preventing
the induction of expectations on the audience”. Both dancing and fulfilment of
expectations are typical conditions of engagement, so it is not clear why they should be
avoided. Also, I believe that piano music is not easier to understand than, say, classical
music with richer orchestration. A reference to confirm author’s claim is missing
Reply
The idea of avoiding inducing the audience to perform popular dance moves, as it is better
explained in the text, aims at favouring the exploration of new body movements, as
opposed to being attached to a common repertoire.
“preventing the induction of expectations” was re-phrased to “preventing the comparison
of the generated music with known pieces, also favoring experimentation”. I believe the
new phrasing better carries the desired meaning.
About the piano: in fact, this is a design decision. The piano is easily recognizable even
when bad speakers are used, as clarified by the phrasing of the reviewed sentence. As
discussed above and in the user evaluation, several users reported the desire to use
different instruments, which means that the installation should be made configurable, as
no instrument can be considered “better” than others.
Experimental design
Comment
I found a relevant experimental design flaw. Unexplainably, the set-up of the live
audience condition comprises 'a big screen for video feedback and an amplifier for audio
playback”. This set-up is not aligned with the initial objectives of the work about having
all the hardware embedded in a single laptop.
Reply
This is true. For this reason, I have conducted additional evaluation through a user survey. The survey results, in particular the written comments, show that the experience provided by the live showings is very similar to the one in the context of a web deployment, except for the magnitude of the audio and video devices. As is now discussed in the paper, this means that Motus remains meaningful with respect to its original objective.
Validity of the findings
Comment
Given the objective of the paper, providing an extensive evaluation of the audience
experience is necessary to assess the quality of the system. The key contribution of this
work would indeed be to provide the audience with an experience that could be
comparable, in terms of engagement, to that usually achieved with more expensive
installations. Unfortunately, the evaluation methodology is weak, and that’s the biggest
limitation of this work. There is no information about the evaluation techniques and about
how data were analysed and processed. The author makes use of vague statements such as
“it was found that”, missing to reference how these findings came to light. Missing this,
the scientific contribution of the paper is highly affected.
Reply
A formal user study was conducted and has addressed this point, together with the
previously observed results.
Comments for the author
Comment
The paper should be better ground on the literature. To this end, I suggest the author to
improve the related work section given the numerous musical installations about music
and movements (Camurri 2013; Morreale 2014) - as well as algorithmic music creation
(e.g. the works of Cope, Miranda, Legaspi, ecc).
Reply
The literature review was expanded, as stated above.
Comment
Finally, pictures of the visual interface and the user interaction would be appreciated.
Reply
A picture was provided, as well as a link to the installation web application.
Reviewer 3 (Enrica Colabella)
Basic reporting
Comment
My suggestion is to focus better, in the beginning, the cultural field where this experiment
was born and the character of the used music, by discovering references as ancient
architecture buildings, i.e. Battistero di Pisa.
Reply
A section on the artistic motivations behind Motus was added. It discusses the creative process and the musical references that led to the creation of the installation as it is.
Experimental design
Comment
The character of the generated music is defined through technical references, parameters
and only some cultural references. It's better if the author will try in deep to identify some
of these characters also by discovering them inside his own music references.
Reply
I have added a new section on the artistic concepts behind the creation of Motus.
Comment
It's interesting that the generative system is performed with four different voices whose
generations are independent each one from the others, by performing a different music
logical model. But it's not clarified how they are different and which character will
identify each voice, apart from the range of notes.
Reply
An additional paragraph in the Musicological Model section clarifies this. The final output is composed of notes of more or less the same character (a random sequence),
which creates a musical response that is perceived as correlated to the movements of the
audience.
Validity of the findings
Comment
My suggestion is to identify the word MOTUS with a peculiar character, being the Latin
word used a lot from all the world.
Reply
The word MOTUS was used because it means “movement” or “motion”, which is the
base concept used in the creative process. However, it is possible that further versions will
have a different name.
Comment
Also if it should be necessary to hear MOTUS for a global evaluation.
Reply
Please, refer to:
http://www.dca.fee.unicamp.br/~tavares/auxiliary_material/Motus/index.html. Motus
requires Google Chrome, a webcam and loudspeakers (might be embedded laptop
speakers).
Comments for the author
Comment
I suggest to identify better the different characters and the music vision at the base of the
different voices and of the ensamble, focusing better the type of control of random
factors.
Reply
This was clarified in the Musicological Model section. More intense movements are
mapped to faster and louder notes.
Moreover, I would like to reaffirm my interest in publishing this manuscript in PeerJ Computer
Science. In addition to the suggestions, I have made the source code of the installation software freely
available on GitHub (https://github.com/tiagoft/motus). I hope the text modifications were able to
address all points raised by the reviewers.
Best regards,
Tiago F. Tavares
" | Here is a paper. Please give your review comments after reading it. |
72 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware that is built into most modern personal computers. This scenario implies specific technical restrictions, which led to solutions combining both technical and artistic aspects of the installation.</ns0:p><ns0:p>The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Artistic interactive musical installations, like Aether <ns0:ref type='bibr' target='#b14'>(Sanchez and Castro, 2014)</ns0:ref> and Intrium <ns0:ref type='bibr' target='#b6'>(Guisan, 2005)</ns0:ref>, are devices that allow an audience to interact with a sonic environment or musical concept using electronic sensors. In some cases, the installation is built so as to augment the interaction between the public and a specific environment, as in the well-known piano staircase <ns0:ref type='bibr'>(TheFunTheory, 2009)</ns0:ref>, an installation in which each step of a staircase behaved like a piano key, thus causing music to be played when the audience went downstairs and upstairs. More recently, modern motion sensors have enabled new possibilities of musical performance and interaction <ns0:ref type='bibr' target='#b7'>(Jung et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b1'>Chen et al., 2014)</ns0:ref> by mapping movements into musical responses.</ns0:p><ns0:p>Interactive musical devices present both artistic and technological challenges <ns0:ref type='bibr' target='#b4'>(Garnett, 2001)</ns0:ref>. They create the possibility of generating music according to a dance, instead of constraining dance to a predefined musical piece <ns0:ref type='bibr' target='#b11'>(Morales-Manzanares et al., 2001)</ns0:ref>. Hence, they bring to the public a technology-enabled experience that is perceptibly different from simply listening to music or dancing to a recording.</ns0:p><ns0:p>Nevertheless, most installations are expensive artifacts that must be mounted by a well-trained team. This restricts their cultural experience to specific environments, such as art galleries, museums or particular events. Therefore, the cultural transformation derived from the interaction with a novel music concept reaches a limited audience.</ns0:p><ns0:p>The installation proposed in this article, namely MOTUS, aims at being deployed for a broad public. This is achieved by combining a web-deployed software stack, minimal hardware requirements and simple, yet engaging, methods for interaction. As a result, the experience provided by MOTUS is made accessible to any person with an internet connection and a laptop with a webcam.</ns0:p><ns0:p>The installation uses a camera as a sensor device, and a simple motion detection algorithm <ns0:ref type='bibr' target='#b19'>(Wirayuda et al., 2013)</ns0:ref> to characterize the audience's movements. The musical generation, based on Markov chains <ns0:ref type='bibr' target='#b16'>(Schulze and van der Merwe, 2011;</ns0:ref><ns0:ref type='bibr' target='#b13'>Pachet, 2002;</ns0:ref><ns0:ref type='bibr' target='#b2'>Cope, 1997)</ns0:ref>, aims at converting the detected movement intensity into the intensity of the musical manifestation without requiring previous musical knowledge from the audience. The installation software also comprises auditory and visual feedback, which may use the laptop's hardware (screen and speakers) or external devices such as sound reinforcement systems and projectors.</ns0:p><ns0:p>The remainder of this article is organized as follows. First, related work is presented in Section 2, followed by a discussion of the artistic concepts behind the development of MOTUS in Section 3. In Section 4, MOTUS is thoroughly described from both the artistic and the technical points of view. Further discussion, based on interactions with the audience, is conducted in Section 5.
Finally, Section 6 presents concluding remarks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>A great number of interactive art installations have been constructed in the last decade. Each of them embodies an underlying purpose, which is often discussed in academic publications. Some are especially related to MOTUS, as discussed below. <ns0:ref type='bibr' target='#b0'>Birchfield et al. (Birchfield et al., 2006)</ns0:ref> brought forward the question of the placement of an installation and its impact on the usage of a public space. After implementing sonification of a bus stop in a busy street, it was observed that the general public often feels self-conscious about producing sounds in this environment. Hence, audience engagement is an important, non-trivial issue to be considered in installations.</ns0:p><ns0:p>A possible technique to achieve audience engagement is to develop a specific space for the installation, providing both auditory and visual stimuli <ns0:ref type='bibr' target='#b9'>(Kobori et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b17'>Seo and Corness, 2007)</ns0:ref>. However, as observed in the Piano Staircase <ns0:ref type='bibr'>(TheFunTheory, 2009)</ns0:ref>, audience engagement may happen even if the installation is placed in a public space. This indicates that the placement of the installation does not, by itself, determine audience engagement.</ns0:p><ns0:p>In the evaluation of the interactive dance installation Hoppsa Universum <ns0:ref type='bibr' target='#b8'>(Kallblad et al., 2008)</ns0:ref>, it was shown that the audience's perception was frequently described with expressions like it was fun or being with friends. Later, Schacher <ns0:ref type='bibr' target='#b15'>(Schacher, 2009)</ns0:ref> noted that audience engagement is related to a fast understanding of the interaction model, which may restrict the usage of more complicated interfaces or algorithms.</ns0:p><ns0:p>Morreale et al. <ns0:ref type='bibr' target='#b12'>(Morreale et al., 2013)</ns0:ref> presented an algorithm, namely Robin, capable of generating piano music from the spatial position of members of the audience. The algorithm uses a rule-based system that models Western piano-style music, and may be used by untrained (non-musician) members of the audience. It was presented in an installation that was well received, with high acceptance rates.</ns0:p><ns0:p>MOTUS considers all of these aspects but, unlike the work discussed above, it does not require special hardware (other than that present in most current laptops) or preparations to be used. It aims at being easily used, including by an untrained audience, which is reflected in the simplicity of its interaction model, and its software is deployed as a web application, so it can be readily used in private spaces. MOTUS is thoroughly described in the next section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>ARTISTIC CONCEPT</ns0:head><ns0:p>MOTUS was first conceived around the idea of converting movements to music using a camera. Its name comes from the Latin word that means 'motion'. This section describes the artistic concepts on which it was constructed.</ns0:p><ns0:p>The musical concept behind MOTUS was derived from improvised genres, like Free Jazz and some styles of ethnic Drum Circles. During an improvisation session, it is important to perceive the other members of the ensemble and create some form of communication with them. In this context, elements such as harmony and rhythm may be transformed to fit the communication process that emerges in each session.</ns0:p><ns0:p>According to the model presented by <ns0:ref type='bibr' target='#b3'>Dubberly et al. (Dubberly et al., 2009)</ns0:ref>, this type of interaction is mediated by the intention of each agent. This means that the correspondence to an intention is, for the improvisation group, more important than the achievement of technical precision. Therefore, MOTUS uses a music generation model that responds to the audience's intention.</ns0:p><ns0:p>For the construction of the interactive system, this intention must be assigned to control a measurable aspect of the generated music. Since MOTUS is intended to be used by an untrained audience, the musical aspect controlled by the audience's intention must be simple to understand. For this reason, the audience's intention was assigned to control the musical intensity.</ns0:p><ns0:p>To evaluate the audience's intention using the webcam, it was necessary to estimate the intensity of captured movements. Instead of mapping particular movements to specific sonic representations, a general movement intensity was measured using pixel-by-pixel differences. This allows the audience to explore not only the interaction with MOTUS, but also the diverse possibilities of using their bodies, interacting with friends or using objects.</ns0:p><ns0:p>With the goal of inducing broader movements, the video area was divided into different regions, each related to a sonic representation. The audience can visualize the video feed, with a color scheme that highlights the regions that are most active. In addition to the aesthetic appeal, this feedback helps in understanding the interaction process.</ns0:p><ns0:p>For this same reason, piano sounds were used for audio rendering. They have the goal of being easy to recognize, as most of the general audience (at least in Western countries) is familiar with the instrument. The installation is described from a more technical point of view in the next section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>THE INSTALLATION</ns0:head><ns0:p>The main concern when developing MOTUS was that it could be used by as many people as possible.</ns0:p><ns0:p>Steps towards this goal were taken by requiring as little external hardware as possible and by deploying the software as a web application. The hardware necessary to mount the installation was restricted to that available in a common laptop, i.e., a webcam, a video screen and internal speakers, leading to an overall system as described in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p><ns0:p>The deployment problem can be solved by using JavaScript as the main programming language. It can be used to deploy the application directly on a host web browser. However, this choice also poses a performance restriction, as JavaScript applications are usually slow when compared to native (compiled) programs.</ns0:p><ns0:p>On the artistic side, the concept behind MOTUS is that it should convert movement to music. This conversion means that user movements should trigger a musical response, and more intense movements should correspond to a more intense musical response. Therefore, two subsystems are necessary, one comprising a movement detection algorithm and another one containing a musicological model that generates musical responses.</ns0:p><ns0:p>Also, it quickly became clear that video feedback of the detection process could improve the audience's experience. This happens because the visual information allows the user to understand and appropriate their interaction with a novel musical device. As a result, a greater level of immersion could be provided.</ns0:p><ns0:p>Therefore, MOTUS can be detailed in a block diagram as shown in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. All blocks in gray are software, and will be executed within the computer shown in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The following sub-sections will present a thorough description of the movement detection, video rendering, the musicological model and the audio rendering system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Movement detection</ns0:head><ns0:p>The movement detection process applied in MOTUS is very simple, as the web-based implementation does not allow for computationally demanding algorithms. The algorithm begins with the calculation of the value v p of each pixel p as the sum of its red, green and blue channels, as is common practice in computer vision algorithms <ns0:ref type='bibr' target='#b18'>(Szeliski, 2010)</ns0:ref>. Hence, it may be expressed by:</ns0:p><ns0:formula xml:id='formula_0'>v p = r p + g p + b p .</ns0:formula><ns0:p>(1)</ns0:p></ns0:div>
<ns0:div><ns0:p>In this algorithm, it is more important to detect the intensity of movements than the precise movement location. Such a parameter can be estimated using the mean absolute difference between the pixel values in a frame t and those in the previous frame t − 1 <ns0:ref type='bibr' target='#b10'>(Moeslund, 2012)</ns0:ref>, that is:</ns0:p><ns0:formula xml:id='formula_1'>µ[t] = ∑ P p=1 |v p [t] − v p [t − 1]| P ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where P is the number of pixels in the frame.</ns0:p><ns0:p>Calculating the amount of movement over the whole video feed, however, does not allow the installation to respond differently when different types of movements are performed. Therefore, the video input was first split into four different partitions. Each partition had its own movement intensity estimate and, as will be seen later, is related to a different part of the interaction experience.</ns0:p><ns0:p>In preliminary tests, it was noticed that µ[t] changes too quickly, which gives an impression of chaos and lack of control. Hence, it was necessary to apply a filter to each µ[t] signal before using it for further purposes. An attack-release filter was applied, using the following expression:</ns0:p><ns0:formula xml:id='formula_2'>μ[t] = α µ[t] + (1 − α) μ[t − 1] if µ[t] > μ[t − 1] β µ[t] + (1 − β ) μ[t − 1] if µ[t] ≤ μ[t − 1].</ns0:formula><ns0:p>(3)</ns0:p><ns0:p>The attack-release filter acts as a low-pass filter whose cut-off frequency differs depending on whether the input signal is higher or lower than the last output. Higher values of the α and β coefficients correspond to shorter attack and release times, respectively. They were manually adjusted so that the resulting interaction was as smooth as desired.</ns0:p><ns0:p>Hence, the result of the movement detection process is a set of four movement estimates μ[t], one for each partition. This result was used both in the musicological model and the video rendering process, as it will be discussed later.</ns0:p></ns0:div>
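The processing described by Expressions 2 and 3 is easy to prototype. The sketch below is a minimal NumPy illustration of the partitioned mean absolute difference and the attack-release filter; note that the actual installation is written in JavaScript, and that the 2x2 partition layout, the α and β values and the toy random frames used here are illustrative assumptions rather than the deployed settings.

```python
import numpy as np

def partition_intensities(prev, curr, n_rows=2, n_cols=2):
    """Mean absolute pixel difference (Eq. 2), computed separately for each partition.

    prev, curr: (H, W) arrays of pixel values v_p = r_p + g_p + b_p (Eq. 1)."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    mus = []
    for i in range(n_rows):
        for j in range(n_cols):
            block = diff[i * h // n_rows:(i + 1) * h // n_rows,
                         j * w // n_cols:(j + 1) * w // n_cols]
            mus.append(block.mean())
    return np.array(mus)

def attack_release(mu, mu_smooth_prev, alpha=0.5, beta=0.05):
    """Attack-release filter of Eq. 3, applied element-wise to the four partitions."""
    coef = np.where(mu > mu_smooth_prev, alpha, beta)
    return coef * mu + (1 - coef) * mu_smooth_prev

# toy usage: random frames standing in for webcam captures
rng = np.random.default_rng(0)
smooth = np.zeros(4)
prev = rng.integers(0, 3 * 255, size=(120, 160))
for _ in range(10):
    curr = rng.integers(0, 3 * 255, size=(120, 160))
    smooth = attack_release(partition_intensities(prev, curr), smooth)
    prev = curr
print(smooth)  # four smoothed movement estimates, one per partition
```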
<ns0:div><ns0:head n='4.2'>Video rendering</ns0:head><ns0:p>The visual feedback provided by MOTUS aims at two correlated, but different goals. The first is to yield feedback on what the system is doing, that is, what is being detected. The second is to make the audience experience more immersive and engaging.</ns0:p><ns0:p>Three dimensions of the system's inner mechanisms were chosen to be conveyed: the captured image values v p as in Expression 1, the differences between the current frame and the previous frame (|v p [t] − v p [t − 1]|), and the final detected movement intensity in each partition μ[t] as in Expression 3. To allow the audience to clearly visualize each aspect of the interaction, these parameters were mapped to different colors. These colors were arbitrarily chosen to be blue, red and green, which colored the feedback video, creating a particular aesthetic environment.</ns0:p><ns0:p>As stated before, the values of each frame were mapped to the blue channel of the feedback video. The blue color, then, becomes dominant at almost all times, which gives the installation a general feeling of blue. As a consequence, blue became a color associated with musical rest. Each pixel's absolute difference to the previous frame was mapped to the red channel. This caused a red 'ghost' to appear at points where strong movements were detected, indicating that an interaction had been registered. This piece of visual feedback is bound to the user and remains subtle when compared to the other cues.</ns0:p><ns0:p>The amount of movement μ[t] in each frame partition was mapped to the green channel of the corresponding pixels. This aimed at helping the audience to relate movements to sounds, as a particular category of sonic responses would be clearly correlated to specific blinks in a region of the screen. This piece of visual feedback is strongly correlated to the musicological model employed, as will be seen below.</ns0:p><ns0:p>A screenshot of the video feedback in action is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>, converted to gray scale to ensure visibility in printed media. As can be seen, screen areas in which there is more movement are highlighted, and it is possible to visualize both the body movement detection and the activation of screen areas related to musical responses. Thus, the audience's impact on the audiovisual environment is easily visualized.</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure 3. Screenshot of the video render.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Musicological model</ns0:head><ns0:p>The generation of musical sequences was done by means of four musicological models, each receiving as input the amount of movement of a different video partition. In all cases, the model should yield a musical manifestation that is perceived as more intense when movements in that partition are more intense. Also, this correlation should be perceived almost immediately.</ns0:p><ns0:p>In addition to that, the models were built so that no strong sensation of downbeat would emerge, hence avoiding inducing the audience to perform known popular dance moves and favoring the exploration of different body movements. The sensation of closure commonly found in tonal music (e.g. in I-IV-V-I sequences) was also avoided, preventing the comparison of the generated music with known pieces, also favoring experimentation. To keep the interaction more interesting, each partition was bounded to a different musicological behavior, which aimed at inducing the audience to explore the whole interactive space.</ns0:p><ns0:p>An aesthetic choice that fits all of these requirements was to make all models yield sequences of musical notes, which is a musical paradigm that is easily recognized by most of the audience. When the sequences are required to be more intense, their notes were increasingly faster and louder. In order to make all musicological models yield sequences that sounded as part of the same piece, they were all bounded to the same octatonic scale, and differences were added on the way each model creates a path within that scale.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, each generative system is independent from the others. They correspond to four different voices, namely upper, middle, harmony and bass. All of them yield note sequences, which will be later rendered. One sequence generation model applied relies on a Markov chain <ns0:ref type='bibr' target='#b2'>(Cope, 1997)</ns0:ref>, adjusted so that the next note is equally likely to be equal to the previous note, a step down or a step up the scale. This model was used in the upper and the middle voices, which were also restricted to particular note ranges. The note range restriction allows users to quickly recognize each one of the voices.</ns0:p><ns0:p>The other sequence generation model was the purely random choice. In the lower voice, a random note from the scale (within the range restriction) was yielded at each interaction. In the harmony voice, two random notes from the scale (also within the range restriction) were yielded at each time.</ns0:p><ns0:p>All four voices had different functions to transform the input (movement intensity in the corresponding partition) into values of note speed and loudness, so that more intense movements are mapped to faster and louder notes. These functions were manually adjusted to provide a balanced auditory response related to each space partition, as well as an interesting experience. In all cases, it has proved to be interesting to have a lower bound filtering on the input below which it is considered as noise and does not produce any sonic response.</ns0:p><ns0:p>As a result, MOTUS quickly responds to the audience's actions. It yields a sequence of notes that are almost always dissonant and out of sync related to each other. Nevertheless, the note sequences aim to be perceived as correlated to the audience's movements. 
This design fits the employed technology (JavaScript), as it is known for having a bad timing mechanism in current implementations. As note lengths are continuous and not bounded to the notes in other voices, the lack of synchronization does not harm the final result. This also allowed the audio rendering process to be performed by independent agents, as it will be discussed below.</ns0:p></ns0:div>
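To make the note-generation behaviour concrete, the sketch below simulates one of the Markov-chain voices in Python (the installation itself runs this logic in JavaScript). The next scale degree is equally likely to repeat, step down or step up, and a movement estimate is mapped to note duration and loudness. The specific octatonic spelling, the voice range, the noise threshold and the intensity-to-speed/loudness mappings are illustrative assumptions, not the installation's exact values.

```python
import random

# One octatonic scale shared by all voices (an assumed spelling, as MIDI note numbers).
OCTATONIC = [60, 62, 63, 65, 66, 68, 69, 71, 72]

def markov_voice(n_notes, movement, lo=0, hi=8, seed=1):
    """Markov-chain melody: the next note is equally likely to repeat, step down or
    step up within the scale, restricted to the [lo, hi] degree range of this voice."""
    rng = random.Random(seed)
    degree = rng.randint(lo, hi)
    notes = []
    for m in movement[:n_notes]:
        degree = min(hi, max(lo, degree + rng.choice([-1, 0, 1])))
        if m < 0.05:                        # lower-bound filter: weak input treated as noise
            continue
        duration = 1.0 / (1.0 + 4.0 * m)    # more movement -> faster notes (assumed mapping)
        loudness = min(1.0, 0.2 + m)        # more movement -> louder notes (assumed mapping)
        notes.append((OCTATONIC[degree], duration, loudness))
    return notes

print(markov_voice(8, [0.0, 0.1, 0.4, 0.9, 0.6, 0.2, 0.03, 0.5]))
```

A purely random voice, as used for the bass and harmony parts, would simply replace the step update with an independent draw from the allowed range.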
<ns0:div><ns0:head n='4.4'>Audio rendering</ns0:head><ns0:p>The audio rendering process was based on agents that receive pitch, loudness and duration information from the note sequence generated by the musical models. When a note is finished (i.e., its duration has expired), the system proceeds to render the next note (or notes, in the case of the harmony voice), and so on. To keep the interactivity process in real time, the note generation and rendering processes must be synchronized, so that a request for a new note triggers its calculation.</ns0:p><ns0:p>Since this system should be easy to understand, it was chosen that all voices would be rendered as a piano sound, using sampling. This way, it was expected that the majority of the audience would be able to identify the sounds that came from the system, even when low-quality speakers are used. The rendering system was implemented using a ready-made sequencing library (MIDI.js).</ns0:p><ns0:p>After being built, the system was tested both with online and live audiences. This process provided rich feedback, as will be discussed below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>AUDIENCE FEEDBACK AND DISCUSSION</ns0:head><ns0:p>MOTUS was displayed both online and for a live audience, which are respectively discussed in Sections 5.1 and 5.2. These are very different situations, as a live context demands a dedicated space for people to move without harming others, a stronger audio system that is capable of competing with other environmental sounds and a screen that allows visualization from a few meters away. This is not the case for online displays, which can be visualized from one's living room or office, thus requiring less powerful hardware.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Online</ns0:head><ns0:p>For the online interactions, the system was advertised on social networks, and feedback was obtained both spontaneously and from an optional evaluation form. The questions in the form were: In total, 19 volunteer subjects responded the questionnaire, and the majority (16) classified MOTUS as 'very interesting' or 'extremely interesting' for question 5. Although this shows the device was well evaluated, it is also interesting to highlight the conditions that lead to this evaluation. Therefore, these results were jointly analyzed with the answers regarding the hardware used by each subject and their prior interest in dance and music.</ns0:p><ns0:p>As it will be seen, no subject classified MOTUS as 'not interesting'. This is an encouraging result, but can mean that uninterested subjects simply chose not to answer the questionnaire. Nevertheless, provided answers gave important insight about the audience's usage and perception of the installation.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the number of subjects with each type of audio reproduction hardware, grouped by their reported interest in MOTUS (all subjects reported using their default screen for the interaction). It may be noted that using laptop (embedded) speakers did not harm the interaction. On the other hand, no subject using headphones reported MOTUS as 'extremely interesting', which can indicate that full body movements are an important part of the experience. Other' refers to one subject that reported using a mobile device for the interaction.</ns0:p></ns0:div>
<ns0:div><ns0:p>Data indicates that MOTUS was successfully deployed over the web and using ubiquitous hardware, as it was designed to be. According to the audience, the use of minimal hardware does not harm the overall experience. However, it is important to detect which aspects impact the subjects' reported interest level.</ns0:p><ns0:p>To detect that, the reported interest levels were grouped according to the subjects' prior interest in dancing or playing instruments and singing, as shown in Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. Gathered data shows that users with a greater interest in dancing tend to report a greater interest in MOTUS, but a similar behavior is not observed when considering their interest in playing instruments or singing. This is further evidence that performing body movements is related to a more interesting experience with the installation.</ns0:p><ns0:p>All the subjects chose at least one option from Question 8. This shows that possibilities for future use were considered. As shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, most subjects would like to keep interacting or recommend it to friends, which are indicators of a positive experience with the installation. The answers with 'other' regarded using MOTUS in different situations - in a classroom and using a video feed from a landscape - which also points to a positive experience. The experience descriptions (related to question 6) showed that most subjects first engaged in an exploratory stage, in an attempt to detect the rules governing MOTUS, and then started applying their own repertoire to the interaction. According to the reports, the exploration of their own sensations and body movements tended to generate more pleasant experiences than attempts to generate specific musical movements. The musical generation was perceived as simple, as most subjects were able to quickly understand it.</ns0:p><ns0:p>The majority of the suggestions for future work provided as an answer to question 7 point toward changing the installation's musicological model. Also, each suggestion was very different from the others, for example: 'add more instruments', 'I tried to play the blues' and 'I don't like the way notes fade out'. This is an indication that the musicological model, and probably the screen division for interaction, should be freely composed by users, possibly sharing their results. Almost all comments regarding the interaction with the webcam related to the division of the screen. Again, each user had a different suggestion, including increasing the number of divisions and constructing neutral areas that could be used to silently move between other areas. Only one comment suggested the use of a finer motion acquisition algorithm, allowing finger positions to be detected.</ns0:p><ns0:p>Spontaneous feedback was obtained through messages sent by e-mail and on social networks. Most of it expressed pleasant surprise, as such a web application was perceived as novel. These messages also provided interesting comments regarding the online interaction.</ns0:p><ns0:p>The most common one was a desire to record the interaction and share it on social networks. In an extreme case, a user posted a screenshot of the visual feedback online. This was not implemented in the system, but the demand clearly indicates a direction for future work.</ns0:p><ns0:p>There was also a demand for porting the system to mobile environments. 
This is not possible at the moment because of the reduced processing power of mobile devices. However, it was not tested how the algorithm would behave if implemented as a native application.</ns0:p><ns0:p>Interestingly, some online users did not allow MOTUS to use their camera, which harmed the experience. This happened because the system was mistaken for privacy-invading malware. A more informative website may be used in the future to prevent this from happening.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Live</ns0:head><ns0:p>When shown live, MOTUS was mounted using a big screen for video feedback (either a screen or a projector, depending on the venue) and an amplifier for audio playback. It was ensured that there would be some free space for movements (as much as possible, which also depended on the venue). Care was taken so that the camera was pointed at an area with no accidental movements (e.g., people passing by or strong shadows from outside) and with a more or less uniform illumination, allowing the camera to work properly.</ns0:p><ns0:p>It was found that the interaction with the system made a small part of the audience too self-conscious to engage in participation. Two possible reasons for that are the mirror-like characteristic of the visual feedback and the awkwardness of executing random movements in a public space. However, this was not true for everyone.</ns0:p><ns0:p>Part of the audience quickly engaged in exploring the limits within which the sonic response of the installation could be controlled. They tended to approach the camera and perform finer arm and hand movements. Some manifested the sensation of playing an imaginary spatial piano.</ns0:p><ns0:p>The more extroverted part of the audience quickly engaged in exploring different body movements. An interesting interaction appeared when groups of people started interacting with the system together, which is perfectly possible due to the nature of the movement detection algorithm. These interactions generally took a longer time and usually comprised smiles and laughter.</ns0:p><ns0:p>Finally, an interesting manifestation was given by the audience, especially by those who already played musical instruments. They clearly expressed frustration with the lack of control possibilities in the interaction, as the same movement is not always related to the exact same musical response. Also, there were comments on the simplicity of the interaction process, which made it boring after a few minutes of exploration.</ns0:p><ns0:p>Although ambient lighting is usually a problem in camera-based movement detection systems, it was found that the system is robust to many different conditions. The system worked under different lighting conditions and presented adequate behavior except when lights blinked. However, the interaction experience was slightly changed depending on the colors of the background and the clothes of the audience.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>This paper described MOTUS, a digital interactive audio-visual installation that requires only hardware that is available in most computers and has its software deployed as a web application. The aesthetic concept behind the system is that it should convert movement to music. These premises - the technical deployment and the desired artistic result - led to a series of design and aesthetic decisions, which are thoroughly described.</ns0:p><ns0:p>MOTUS was shown in live performances, as well as on a website. Feedback was collected from the audience using a questionnaire and through spontaneous comments, which allowed us to evaluate how the interaction with the system happened. It was found that this interaction was, most of the time, positive, but sometimes not very engaging, as it does not allow many musical aspects to be explored.</ns0:p><ns0:p>It can be accessed at the URL http://www.dca.fee.unicamp.br/~tavares/auxiliary_material/Motus/index.html, and freely used by anyone. Currently, it requires the Google Chrome browser. The source code is also available online, at https://github.com/tiagoft/motus.</ns0:p><ns0:p>The installation system is ready to be presented for large audiences, and there seem to be two clear directions for future work. The first is to allow recording of audio and video, as well as sharing of this content in social networks. The second is to allow users to compose and share their own interaction models, with a broader range of musical material.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Installation overview.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Block diagram describing the installation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Block diagram for the musical interactivity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure5. Interest in MOTUS according to audio hardware. 'Other' refers to one subject that reported using a mobile device for the interaction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Interest in MOTUS according to frequency of playing instruments or singing.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Interest in MOTUS according to frequency of artistic activities.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,183.09,291.76,330.86,249.19' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,42.52,204.37,525.00,394.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Number of times each option in Question 8 was chosen.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Action</ns0:cell><ns0:cell>Votes</ns0:cell></ns0:row><ns0:row><ns0:cell>Keep interacting</ns0:cell><ns0:cell>13</ns0:cell></ns0:row><ns0:row><ns0:cell>Recommend to friends</ns0:cell><ns0:cell>13</ns0:cell></ns0:row><ns0:row><ns0:cell>Download as mobile app</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Contribute to next version 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Other</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Rebuttal Letter
Manuscript: 'An interactive audio-visual installation using ubiquitous hardware and web-based
software deployment' (#CS-2015:03:4249:0:0:REVIEW)
Tiago F. Tavares – tiagoft@gmail.com
Dear editor,
I would like to thank you again for your effort on re-evaluating my manuscript. The suggested
revisions were all addressed, as detailed below:
Comment
The introduction includes the phrase 'a technology-enabled experience that is sensibly
different from simply listening to music...'. 'Sensibly different' doesn't quite fit here.
Perhaps 'substantively different' or 'perceptively different'?
Reply
I agree. The wording was changed to “perceptively different”.
Comment
In Section 5, it is first stated that 20 people responded to the questionnaire and then a few
paragraphs later that 19 people responded. Can you clarify this?
Reply
The correct number is 19. I also noted that the phrase was redundant with the information
provided by the following paragraph. Hence, I removed this first mention to the number
of participants, increasing the text's readability.
Thanks again for these reviews.
Best regards,
Tiago
" | Here is a paper. Please give your review comments after reading it. |
73 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We consider the problem of detecting and quantifying the periodic component of a function given noise-corrupted observations of a limited number of input/output tuples.</ns0:p><ns0:p>Our approach is based on Gaussian process regression which provides a flexible nonparametric framework for modelling periodic data. We introduce a novel decomposition of the covariance function as the sum of periodic and aperiodic kernels. This decomposition allows for the creation of sub-models which capture the periodic nature of the signal and its complement. To quantify the periodicity of the signal, we derive a periodicity ratio which reflects the uncertainty in the fitted sub-models. Although the method can be applied to many kernels, we give a special emphasis to the Matérn family, from the expression of the reproducing kernel Hilbert space inner product to the implementation of the associated periodic kernels in a Gaussian process toolkit. The proposed method is illustrated by considering the detection of periodically expressed genes in the arabidopsis genome.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The periodic behaviour of natural phenomena arises at many scales, from the small wavelength of electromagnetic radiations to the movements of planets. The mathematical study of natural cycles can be traced back to the nineteenth century with Thompson's harmonic analysis for predicting tides <ns0:ref type='bibr' target='#b26'>[Thomson, 1878]</ns0:ref> and Schuster's investigations on the periodicity of sunspots <ns0:ref type='bibr' target='#b22'>[Schuster, 1898]</ns0:ref>. Amongst the methods that have been considered for detecting and extracting the periodic trend, one can cite harmonic analysis <ns0:ref type='bibr' target='#b9'>[Hartley, 1949]</ns0:ref>, folding methods <ns0:ref type='bibr' target='#b24'>[Stellingwerf, 1978</ns0:ref><ns0:ref type='bibr' target='#b12'>, Leahy et al., 1983]</ns0:ref> which are mostly used in astrophysics and periodic autoregressive models <ns0:ref type='bibr' target='#b27'>[Troutman, 1979</ns0:ref><ns0:ref type='bibr' target='#b28'>, Vecchia, 1985]</ns0:ref>. In this article, we focus on the application of harmonic analysis in reproducing kernel Hilbert spaces (RKHS) and on the consequences for Gaussian process modelling. Our approach provides a flexible framework for inferring both the periodic and aperiodic components of sparsely sampled and noise-corrupted data, providing a principled means for quantifying the degree of periodicity. We demonstrate our proposed method on the problem of identifying periodic genes in gene expression time course data, comparing performance with a popular alternative approach to this problem.</ns0:p><ns0:p>Harmonic analysis is based on the projection of a function onto a basis of periodic functions.</ns0:p><ns0:p>For example, a natural method for extracting the 2π-periodic trend of a function f is to decompose it in a Fourier series: f (x) → f p (x) = a 1 sin(x) + a 2 cos(x) + a 3 sin(2x) + a 4 cos(2x) + . . .</ns0:p><ns0:p>where the coefficients a i are given, up to a normalising constant, by the L 2 inner product between f and the elements of the basis. However, the phenomenon under study is often observed at a limited number of points, which means that the value of f (x) is not known for all x but only for a small set of inputs {x 1 , . . . , x n } called the observation points. With this limited knowledge of f , it is not possible to compute the integrals of the L 2 inner product so the coefficients a i cannot be obtained directly. The observations may also be corrupted by noise, further complicating the problem.</ns0:p><ns0:p>A popular approach to overcome the fact that f is partially known is to build a mathematical model m to approximate it. A good model m has to take into account as much information as possible about f . In the case of noise-free observations it interpolates f for the set of observation points m(x i ) = f (x i ) and its differentiability corresponds to the assumptions one can have about the regularity of f . The main body of literature tackling the issue of interpolating spatial data is scattered over three fields: (geo-)statistics <ns0:ref type='bibr' target='#b15'>[Matheron, 1963</ns0:ref><ns0:ref type='bibr' target='#b23'>, Stein, 1999]</ns0:ref>, functional analysis <ns0:ref type='bibr'>[Aronszajn, 1950, Berlinet and</ns0:ref><ns0:ref type='bibr' target='#b2'>Thomas-Agnan, 2004]</ns0:ref> and machine learning <ns0:ref type='bibr' target='#b20'>[Rasmussen and Williams, 2006</ns0:ref>]. 
In the statistics and machine learning framework, the solution of the interpolation problem corresponds to the expectation of a Gaussian process, Z, which is conditioned on the observations. In functional analysis the problem reduces to finding the interpolator with minimal norm in a RKHS H. As many authors pointed out (for example <ns0:ref type='bibr' target='#b2'>Berlinet and Thomas-Agnan [2004]</ns0:ref> and Scheuerer et al.</ns0:p><ns0:p>[2011]), the two approaches are closely related. Both Z and H are based on a common object which is a positive definite function of two variables k(., .). In statistics, k corresponds to the covariance of Z and for the functional counterpart, k is the reproducing kernel of H.</ns0:p><ns0:p>From the regularization point of view, the two approaches are equivalent since they lead to the same model m <ns0:ref type='bibr' target='#b29'>[Wahba, 1990]</ns0:ref>. Although we will focus hereafter on the RKHS framework to design periodic kernels, we will also take advantage of the powerful probabilistic interpretation offered by Gaussian processes.</ns0:p><ns0:p>We propose in this article to build the Fourier series using the RKHS inner product instead of the L 2 one. To do so, we extract the sub-RKHS H p of periodic functions in H and model the periodic part of f by its orthogonal projection onto H p . One major asset of this approach is to give a rigorous definition of non-periodic (or aperiodic) functions as the elements of the sub-RKHS H a = H ⊥ p . The decomposition H = H p ⊕ H a then allows discrimination of the periodic component of the signal from the aperiodic one. Although some expressions of kernels leading to RKHS of periodic functions can be found in the literature <ns0:ref type='bibr' target='#b20'>[Rasmussen and Williams, 2006]</ns0:ref>, they do not allow to extract the periodic part of the signal. Indeed, usual periodic kernels do not come with the expression of an aperiodic kernel. It is thus not possible to obtain a proper decomposition of the space as the direct sum of periodic and aperiodic subspaces and the periodic sub-model cannot be obtained rigorously.</ns0:p><ns0:p>The last part of this introduction is dedicated to a motivating example. In section 2, we focus on the construction of periodic and aperiodic kernels and on the associated model decomposition. Section 3 details how to perform the required computations for kernels from the Matérn familly. Section 4 introduces a new criterion for measuring the periodicity of the signal. Finally, the last section illustrates the proposed approach on a biological case study where we detect, amongst the entire genome, the genes showing a cyclic expression.</ns0:p><ns0:p>The examples and the results presented in this article have been generated with the version 0.8 of the python Gaussian process toolbox GPy. This toolbox, in which we have implemented the periodic kernels discussed here, can be downloaded at http://github. com/SheffieldML/GPy. Furthermore, the code generating the figures 1 to 3 is provided in the supplementary materials as jupyter notebooks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Motivating example</ns0:head><ns0:p>To illustrate the challenges of determining a periodic function, we first consider a benchmark of six one dimensional periodic test functions (see Fig. <ns0:ref type='figure'>1 and appendix A</ns0:ref>). These functions include a broad variety of shapes so that we can understand the effect of shape on methods with different modelling assumptions. A set X = (x 1 , . . . , x 50 ) of equally spaced observation points is used as training set and a N (0, 0.1) observation noise is added to each evaluation of the test function: F i = f (x i ) + ε i (or F = f (X) + ε with vector notations). We consider three different modelling approaches to compare the facets of different approaches based on harmonic analysis:</ns0:p><ns0:p>• COSOPT <ns0:ref type='bibr' target='#b25'>[Straume, 2004]</ns0:ref>, which fits cosine basis functions to the data,</ns0:p><ns0:p>• Linear regression in the weights of a truncated Fourier expansion,</ns0:p><ns0:p>• Gaussian process regression with a periodic kernel.</ns0:p><ns0:p>COSOPT COSOPT is a method that is commonly used in biostatistics for detecting periodically expressed genes <ns0:ref type='bibr'>[Hughes et al., 2009, Amaral and</ns0:ref><ns0:ref type='bibr' target='#b0'>Johnston, 2012]</ns0:ref>. It assumes the following model for the signal:</ns0:p><ns0:formula xml:id='formula_1'>y(x) = α + β cos(ωx + ϕ) + ε,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where ε corresponds to white noise. The parameters α, β, ω and ϕ are fitted by minimizing the mean square error.</ns0:p></ns0:div>
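For reference, the single-cosine model of Eq. 2 can be fitted by non-linear least squares. The short sketch below is only an illustration of that fitting step with scipy, not the COSOPT implementation itself; the noisy square-wave test signal, the initial values and the grid are arbitrary choices made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosine_model(x, alpha, beta, omega, phi):
    # y(x) = alpha + beta * cos(omega * x + phi), as in Eq. (2)
    return alpha + beta * np.cos(omega * x + phi)

rng = np.random.default_rng(0)
X = np.linspace(0, 3, 50)
F = np.sign(np.sin(2 * np.pi * X)) + rng.normal(0, 0.1, X.shape)   # noisy square wave

p0 = [0.0, 1.0, 2 * np.pi, 0.0]                 # initial guess (assumed values)
params, _ = curve_fit(cosine_model, X, F, p0=p0)
rmse = np.sqrt(np.mean((cosine_model(X, *params) - F) ** 2))
print(params, rmse)   # a single cosine cannot capture the square shape, hence a large RMSE
```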
<ns0:div><ns0:head>Linear regression</ns0:head><ns0:p>We fit a more general model with a basis of sines and cosines with periods 1, 1/2, ..., 1/20 to account for periodic signals that do not correspond to a pure sinusoid.</ns0:p><ns0:formula xml:id='formula_2'>y(x) = \alpha + \sum_{i=1}^{20} \beta_i \cos(2\pi i x) + \sum_{i=1}^{20} \gamma_i \sin(2\pi i x) + \varepsilon.<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Again, model parameters are fitted by minimizing the mean square error, which corresponds to linear regression over the basis weights.</ns0:p><ns0:p>Gaussian Process with periodic covariance function We fit a Gaussian process model with an underlying periodic kernel. We consider a model,</ns0:p><ns0:formula xml:id='formula_3'>y(x) = α + y p (x) + ε,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where y p is a Gaussian process and where α should be interpreted as a Gaussian random variable with zero mean and variance σ 2 α . The periodicity of the phenomenon is taken into account by choosing a process y p such that the samples are periodic functions. This can be achieved with a kernel such as</ns0:p><ns0:formula xml:id='formula_4'>k_p(x, x') = \sigma^2 \exp\left(-\sin^2(\omega(x - x'))\right)<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>or with the kernels discussed later in the article. For this example we choose the periodic Matérn 3/2 kernel which is represented in panel (b) of Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. For any kernel choice, the Gaussian process regression model can be summarized by the mean and variance of the conditional distribution:</ns0:p><ns0:formula xml:id='formula_5'>m(x) = E[y(x)|y(X) = F] = k(x, X)(k(X, X) + \tau^2 I)^{-1} F, \qquad v(x) = Var[y(x)|y(X) = F] = k(x, x) - k(x, X)(k(X, X) + \tau^2 I)^{-1} k(X, x),<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where k = σ 2 α + k p and I is the 50 × 50 identity matrix.</ns0:p><ns0:p>Figure 1: Plots of the benchmark test functions, observation points and fitted models. For improved visibility, the plotting region is limited to one period. The RMSE is computed using a grid of 500 evenly spaced points spanning [0, 3], and the values indicated on each subplot correspond respectively to COSOPT, the periodic Gaussian process model and linear regression. The Python code used to generate this figure is provided as a jupyter notebook in Supplemental Information.</ns0:p><ns0:p>The models fitted with COSOPT, linear regression and the periodic Gaussian process model are compared in Fig. <ns0:ref type='figure'>1</ns0:ref>. It can be seen that the latter clearly outperforms the other models since it can approximate non-sinusoidal patterns (in opposition to COSOPT) while offering good noise filtering (no high-frequency oscillations corresponding to noise overfitting, as observed for linear regression).</ns0:p><ns0:p>The Gaussian process model gives an effective non-parametric fit to the different functions. In terms of root mean square error (RMSE) in each case, it is either the best performing method, or it performs nearly as well as the best performing method. Both linear regression and COSOPT can fail catastrophically on one or more of these examples.</ns0:p><ns0:p>Although highly effective for purely periodic data, the use of a periodic Gaussian process is less appropriate for identifying the periodic component of a pseudo-periodic function such as f (x) = cos(x) + 0.1 exp(−x). 
An alternative suggestion is to consider a pseudo-periodic Gaussian process y = y 1 + y p with a kernel given by the sum of a usual kernel k 1 and a periodic one k p (see e.g. <ns0:ref type='bibr' target='#b20'>Rasmussen and Williams [2006]</ns0:ref>). Such a construction allows decomposition of the model into a sum of sub-models m(x) = E[y 1 (x)|y(X) = F ] + E[y p (x)|y(X) = F ], where the latter is periodic (see section 2.2 for more details). However, the periodic part of the signal is scattered over the two sub-models so it is not fully represented by the periodic sub-model. It would therefore be desirable to introduce new covariance structures that allow an appropriate decomposition in periodic and non-periodic sub-models in order to tackle periodicity estimation for pseudo-periodic signals.</ns0:p></ns0:div><ns0:div><ns0:head n='2'>Kernels of periodic and aperiodic subspaces</ns0:head><ns0:p>The challenge of creating a pair of kernels that stand respectively for the periodic and aperiodic components of the signal can be tackled using the RKHS framework. We detail in this section how decomposing a RKHS into a subspace of periodic functions and its orthogonal complement leads to periodic and aperiodic sub-kernels.</ns0:p></ns0:div>
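The conditioning formulas of Eq. 6 translate directly into a few lines of linear algebra. The sketch below builds a sum kernel from a Matérn 3/2 term and the standard periodic kernel of Eq. 5, computes the full posterior mean, and also evaluates the mean of the 'periodic' sub-model E[y_p(x)|y(X) = F]. The kernel parameters and test signal are arbitrary; as discussed above, with this off-the-shelf periodic kernel the periodic component is not cleanly separated, which is precisely what motivates the construction of this section.

```python
import numpy as np

def k_matern32(x, y, variance=1.0, lengthscale=10.0):
    r = np.abs(x[:, None] - y[None, :]) / lengthscale
    return variance * (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)

def k_periodic(x, y, variance=1.0, omega=1.0):
    # standard periodic kernel in the spirit of Eq. (5)
    return variance * np.exp(-np.sin(omega * (x[:, None] - y[None, :])) ** 2)

rng = np.random.default_rng(1)
X = np.linspace(0, 20, 40)
F = np.sin(X) + X / 20 + rng.normal(0, 0.1, X.shape)   # pseudo-periodic test signal
x = np.linspace(0, 20, 200)
tau2 = 0.1 ** 2

K = k_matern32(X, X) + k_periodic(X, X) + tau2 * np.eye(len(X))
weights = np.linalg.solve(K, F)

m_full = (k_matern32(x, X) + k_periodic(x, X)) @ weights   # posterior mean, cf. Eq. (6)
m_per = k_periodic(x, X) @ weights                          # 'periodic' sub-model mean
print(m_full[:3], m_per[:3])
```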
<ns0:div><ns0:head n='2.1'>Fourier basis in RKHS</ns0:head><ns0:p>We assume in this section that the space H p spanned by a truncated Fourier basis</ns0:p><ns0:formula xml:id='formula_7'>B(x) = \left( \sin\left(\tfrac{2\pi}{\lambda}x\right), \cos\left(\tfrac{2\pi}{\lambda}x\right), \dots, \sin\left(\tfrac{2\pi}{\lambda}qx\right), \cos\left(\tfrac{2\pi}{\lambda}qx\right) \right)^\top<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>is a subspace of the RKHS H. Under this hypothesis, it is straightforward to confirm that the reproducing kernel of H p is</ns0:p><ns0:formula xml:id='formula_8'>k_p(x, x') = B^\top(x)\, G^{-1} B(x')<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where G is the Gram matrix of B in H: G_{i,j} = ⟨B_i, B_j⟩_H. Hereafter, we will refer to k p as the periodic kernel. In practice, the computation of k p requires computation of the inner product between sine and cosine functions in H. We will see in the next section that these computations can be done analytically for Matérn kernels. For other kernels, a more comprehensive list of RKHS inner products can be found in <ns0:ref type='bibr'>Berlinet and Thomas-Agnan [2004, Chap. 7</ns0:ref>].</ns0:p><ns0:p>The orthogonal complement of H p in H can be interpreted as a subspace H a of aperiodic functions. By construction, its kernel is k a = k − k p <ns0:ref type='bibr' target='#b2'>[Berlinet and Thomas-Agnan, 2004</ns0:ref>]. An illustration of the decomposition of Matérn 3/2 kernels is given in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> (in these plots, one of the kernel's input variables is fixed to 5, the three graphs on each panel correspond to different values of the lengthscale parameter, the input space is D = [0, 4π] and the cut-off frequency is q = 20; the Python code used to generate this figure is provided as a jupyter notebook in Supplemental Information). The decomposition of the kernel comes with a decomposition of the associated Gaussian process into two independent processes and the overall decompositions can be summarised as follows:</ns0:p><ns0:formula xml:id='formula_10'>\mathcal{H} = \mathcal{H}_p \overset{\perp}{\oplus} \mathcal{H}_a \quad\leftrightarrow\quad k = k_p + k_a \quad\leftrightarrow\quad y = y_p + y_a \ \ \text{with } y_p \perp\!\!\!\perp y_a.<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>Many stationary covariance functions depend on two parameters: a variance parameter and a lengthscale parameter. The periodic and aperiodic sub-kernels can then be given their own parameters, say (σ_p², ℓ_p) and (σ_a², ℓ_a), which results in a richer family of kernels σ_p² k_p + σ_a² k_a.</ns0:p><ns0:p>Such reparametrisations of k p and k a induce changes in the norms of H p and H a . However, if the values of the parameters are not equal to zero or +∞, these spaces still consist of the same elements so H p ∩ H a = {0}. This implies that the RKHS generated by k p + k a corresponds to H p + H a where the latter are still orthogonal but endowed with a different norm. Nevertheless, the approach is philosophically different since we build H by adding two spaces orthogonally whereas in Eq. 9 we decompose an existing space H into orthogonal subspaces.</ns0:p></ns0:div>
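A compact sketch of Eqs. 7-8: given a routine that evaluates the RKHS inner product between two basis functions, the periodic kernel is a quadratic form in the truncated Fourier basis and the aperiodic kernel follows by subtraction. The `rkhs_inner` function below is only a placeholder (it uses a crude L2 quadrature so the sketch runs end to end); the article uses the exact Matérn inner products of Section 3 instead, and the period, cut-off frequency and integration domain here are arbitrary.

```python
import numpy as np

lam, q = 2 * np.pi, 3   # period and cut-off frequency (small values for illustration)

def basis(x):
    """Truncated Fourier basis B(x) of Eq. (7), stacked row-wise for a vector x."""
    x = np.atleast_1d(x)
    cols = []
    for i in range(1, q + 1):
        cols += [np.sin(2 * np.pi * i * x / lam), np.cos(2 * np.pi * i * x / lam)]
    return np.stack(cols)           # shape (2q, len(x))

def rkhs_inner(bi, bj):
    """Placeholder for <B_i, B_j>_H: an L2 quadrature stand-in, NOT the true RKHS product."""
    t = np.linspace(0, 4 * np.pi, 2001)
    return np.trapz(basis(t)[bi] * basis(t)[bj], t)

G = np.array([[rkhs_inner(i, j) for j in range(2 * q)] for i in range(2 * q)])

def k_p(x, y):
    return basis(x).T @ np.linalg.solve(G, basis(y))   # Eq. (8): B(x)^T G^{-1} B(y)

def k_a(x, y, k):
    return k(x, y) - k_p(x, y)                          # aperiodic kernel k_a = k - k_p

print(k_p(np.array([0.0, 1.0]), np.array([0.0, 1.0])))
```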
<ns0:div><ns0:head n='2.2'>Decomposition in periodic and aperiodic sub-models</ns0:head><ns0:p>The expression y = y p + y a of Eq. 9 allows us to introduce two sub-models corresponding to conditional distributions: a periodic one y p (x)|y(X) = F and an aperiodic one y a (x)|y(X) = F .</ns0:p><ns0:p>These two distributions are Gaussian and their mean and variance are given by the usual Gaussian process conditioning formulas</ns0:p><ns0:formula xml:id='formula_11'>m_p(x) = E[y_p(x)|y(X) = F] = k_p(x, X)\, k(X, X)^{-1} F, \qquad m_a(x) = E[y_a(x)|y(X) = F] = k_a(x, X)\, k(X, X)^{-1} F,<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>v_p(x) = Var[y_p(x)|y(X) = F] = k_p(x, x) - k_p(x, X)\, k(X, X)^{-1} k_p(X, x), \qquad v_a(x) = Var[y_a(x)|y(X) = F] = k_a(x, x) - k_a(x, X)\, k(X, X)^{-1} k_a(X, x).<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>The linearity of the expectation ensures that the sum of the sub-models' means is equal to the full model mean:</ns0:p><ns0:formula xml:id='formula_14'>m(x) = E[y_p(x) + y_a(x)|y(X) = F] = E[y_p(x)|y(X) = F] + E[y_a(x)|y(X) = F] = m_p(x) + m_a(x),<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>so m p and m a can be interpreted as the decomposition of m into its periodic and aperiodic components. However, there is no similar decomposition of the variance: v(x) ≠ v p (x) + v a (x) since y p and y a are not independent given the observations. The sub-models can be interpreted as usual Gaussian process models with correlated noise. For example, m p is the best predictor based on kernel k p with an observational noise given by k a . For a detailed discussion on the decomposition of models based on a sum of kernels see <ns0:ref type='bibr' target='#b5'>Durrande et al. [2012]</ns0:ref>.</ns0:p><ns0:p>We now illustrate this model decomposition on the test function f (x) = sin(x) + x/20 defined over [0, 20]. Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows the obtained model after estimating (σ_p², ℓ_p, σ_a², ℓ_a) of a decomposed Matérn 5/2 kernel: the decomposition allows for recognition of both the periodic and aperiodic parts. In this case maximum likelihood estimation was used to determine the parameters of the kernel and we recovered (σ_p², ℓ_p, σ_a², ℓ_a) = (52.96, 5.99, 1.18, 47.79); the Python code used to generate this figure is provided as a jupyter notebook in Supplemental Information. In this example, the estimated values of the lengthscales are very different, allowing the model to capture efficiently the periodic component of the signal and the low frequency trend.</ns0:p></ns0:div>
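Equations 10-12 map directly onto a few matrix operations. The sketch below assumes two callables `k_p` and `k_a` that return covariance matrices between input vectors (for instance the Matérn-based sub-kernels of Section 3, or the toy construction of the previous sketch) and computes the sub-model means and variances, checking that the means add up to the full posterior mean.

```python
import numpy as np

def sub_models(X, F, x, k_p, k_a, tau2=0.0):
    """Periodic/aperiodic sub-model means and variances, Eqs. (10)-(11).

    k_p, k_a: callables returning covariance matrices between two input vectors."""
    K = k_p(X, X) + k_a(X, X) + tau2 * np.eye(len(X))   # full kernel matrix on the data
    Kinv_F = np.linalg.solve(K, F)

    m_p = k_p(x, X) @ Kinv_F                             # Eq. (10)
    m_a = k_a(x, X) @ Kinv_F
    v_p = np.diag(k_p(x, x)) - np.einsum('ij,ji->i', k_p(x, X), np.linalg.solve(K, k_p(X, x)))
    v_a = np.diag(k_a(x, x)) - np.einsum('ij,ji->i', k_a(x, X), np.linalg.solve(K, k_a(X, x)))

    # sanity check: the sub-model means sum to the full model mean (Eq. 12)
    m_full = (k_p(x, X) + k_a(x, X)) @ Kinv_F
    assert np.allclose(m_p + m_a, m_full)
    return m_p, m_a, v_p, v_a
```

Note that, as stated in the text, v_p + v_a is not returned as the full predictive variance, since the two sub-processes are correlated given the data.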
<ns0:div><ns0:head n='3'>Application to Matérn Kernels</ns0:head><ns0:p>The Matérn class of kernels provides a flexible class of stationary covariance functions for a Gaussian process model. The family includes the infinitely smooth exponentiated quadratic (i.e. Gaussian or squared exponential or radial basis function) kernel as well as the non-differentiable Ornstein-Uhlenbeck covariance. In this section we show how the Matérn class of covariance functions can be decomposed into periodic and aperiodic subspaces in the RKHS.</ns0:p><ns0:p>Matérn kernels k are stationary kernels, which means that they only depend on the distance between the points at which they are evaluated: k(x, y) = k(|x − y|). They are often introduced by the spectral density of k <ns0:ref type='bibr' target='#b23'>[Stein, 1999]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_15'>S(\omega) = \left( \frac{\Gamma(\nu)\,\ell^{2\nu}}{2\sigma^2\sqrt{\pi}\,\Gamma(\nu + 1/2)\,(2\nu)^{\nu}} \left( \frac{2\nu}{\ell^2} + \omega^2 \right)^{\nu + 1/2} \right)^{-1}.<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>Three parameters can be found in this equation: ν, which tunes the differentiability of k; ℓ, which corresponds to a lengthscale parameter; and σ², which is homogeneous to a variance.</ns0:p><ns0:p>The actual expressions of Matérn kernels are simple when the parameter ν is half-integer. For ν = 1/2, 3/2, 5/2 we have</ns0:p><ns0:formula xml:id='formula_17'>\begin{aligned} k_{1/2}(x, x') &= \sigma^2 \exp\left(-\frac{|x - x'|}{\ell}\right) \\ k_{3/2}(x, x') &= \sigma^2 \left(1 + \frac{\sqrt{3}\,|x - x'|}{\ell}\right) \exp\left(-\frac{\sqrt{3}\,|x - x'|}{\ell}\right) \\ k_{5/2}(x, x') &= \sigma^2 \left(1 + \frac{\sqrt{5}\,|x - x'|}{\ell} + \frac{5\,|x - x'|^2}{3\ell^2}\right) \exp\left(-\frac{\sqrt{5}\,|x - x'|}{\ell}\right). \end{aligned}<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Here the parameters ℓ and σ² respectively correspond to a rescaling of the abscissa and ordinate axis. For ν = 1/2 one can recognise the expression of the exponential kernel (i.e. the covariance of the Ornstein-Uhlenbeck process) and the limit case ν → ∞ corresponds to the squared exponential covariance function <ns0:ref type='bibr' target='#b20'>[Rasmussen and Williams, 2006]</ns0:ref>.</ns0:p><ns0:p>As stated in <ns0:ref type='bibr'>Porcu and Stein [2012, Theorem 9</ns0:ref>.1] and <ns0:ref type='bibr' target='#b30'>Wendland [2005]</ns0:ref>, the RKHS generated by k ν coincides with the Sobolev space W 2 ν+1/2 . Since the elements of the Fourier basis are C ∞ , they belong to the Sobolev space and thus to Matérn RKHS. The hypothesis H p ⊂ H made in Section 2 is thus fulfilled and all previous results apply.</ns0:p><ns0:p>Furthermore, the connection between Matérn kernels and autoregressive processes allows us to derive the expression of the RKHS inner product. As detailed in Appendix B, we obtain for an input space D = [a, b]:</ns0:p><ns0:formula xml:id='formula_19'>\text{Matérn 1/2 (exponential kernel):}\quad \langle g, h \rangle_{\mathcal{H}_{1/2}} = \frac{\ell}{2\sigma^2} \int_a^b \left(\frac{g}{\ell} + g'\right)\left(\frac{h}{\ell} + h'\right) \mathrm{d}t + \frac{1}{\sigma^2}\, g(a)h(a) \qquad (15)\\ \text{Matérn 3/2:}\quad \langle g, h \rangle_{\mathcal{H}_{3/2}} = \frac{\ell^3}{12\sqrt{3}\,\sigma^2} \int_a^b \left(\frac{3}{\ell^2} g + \frac{2\sqrt{3}}{\ell} g' + g''\right)\left(\frac{3}{\ell^2} h + \frac{2\sqrt{3}}{\ell} h' + h''\right) \mathrm{d}t + \frac{1}{\sigma^2}\, g(a)h(a) + \frac{\ell^2}{3\sigma^2}\, g'(a)h'(a) \qquad (16)\\ \text{Matérn 5/2:}\quad \langle g, h \rangle_{\mathcal{H}_{5/2}} = \int_a^b L_t(g)\, L_t(h)\, \mathrm{d}t \;+\; \text{boundary terms at } a \text{ involving } g, g', g'' \text{ and } h, h', h'' \text{ (given in Appendix B)},</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_20'>L_t(g) = \sqrt{\frac{3\ell^5}{400\sqrt{5}\,\sigma^2}} \left( \frac{5\sqrt{5}}{\ell^3}\, g(t) + \frac{15}{\ell^2}\, g'(t) + \frac{3\sqrt{5}}{\ell}\, g''(t) + g'''(t) \right).<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>Although these expressions are direct consequences of <ns0:ref type='bibr' target='#b4'>Doob [1953]</ns0:ref> and <ns0:ref type='bibr' target='#b8'>Hájek [1962]</ns0:ref> they cannot be found in the literature to the best of our knowledge.</ns0:p><ns0:p>The knowledge of these inner products allows us to compute the Gram matrix G and thus the sub-kernels k p and k a .</ns0:p>
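A quick numerical sanity check for the Matérn 1/2 inner product of Eq. 15 is to verify the reproducing property ⟨k(·, x0), h⟩_H = h(x0). The sketch below does this by simple quadrature with finite-difference derivatives; the interval, parameter values and test function are arbitrary choices for the illustration.

```python
import numpy as np

a, b, ell, s2 = 0.0, 1.0, 0.3, 2.0      # interval [a, b], lengthscale, variance
x0 = 0.37

g = lambda t: s2 * np.exp(-np.abs(t - x0) / ell)   # g = k(., x0)
h = lambda t: np.cos(3 * t) + 0.5 * t              # a smooth test function in the RKHS

t = np.linspace(a, b, 200001)
gp, hp = np.gradient(g(t), t), np.gradient(h(t), t)  # finite-difference derivatives

# Eq. (15): <g, h> = ell/(2 s2) * int_a^b (g/ell + g')(h/ell + h') dt + g(a) h(a) / s2
inner = ell / (2 * s2) * np.trapz((g(t) / ell + gp) * (h(t) / ell + hp), t) + g(a) * h(a) / s2

print(inner, h(x0))   # the two values agree up to discretisation error
```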
<ns0:p>A result of great practical interest is that inner products between the basis functions have a closed form expression. Indeed, all the elements of the basis can be written in the form cos(ωx + ϕ) and, using the notation L x for the linear operators in the inner product integrals (see Eq. 17), we obtain:</ns0:p><ns0:formula xml:id='formula_21'>L_x(\cos(\omega x + \varphi)) = \sum_i \alpha_i \left(\cos(\omega x + \varphi)\right)^{(i)} = \sum_i \alpha_i\, \omega^i \cos\left(\omega x + \varphi + \frac{i\pi}{2}\right).<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>The latter can be factorised in a single cosine ρ cos(ωx + φ) with</ns0:p><ns0:formula xml:id='formula_23'>\rho = \sqrt{r_c^2 + r_s^2}, \qquad \phi = \begin{cases} \arcsin(r_s/\rho) & \text{if } r_c \geq 0 \\ \arcsin(r_s/\rho) + \pi & \text{if } r_c < 0 \end{cases}<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_25'>r_c = \sum_i \alpha_i\, \omega^i \cos\left(\varphi + \frac{i\pi}{2}\right) \quad \text{and} \quad r_s = \sum_i \alpha_i\, \omega^i \sin\left(\varphi + \frac{i\pi}{2}\right).</ns0:formula><ns0:p>Eventually, the computation of the inner product between functions of the basis boils down to the integration of a product of two cosines, which can be solved by linearisation.</ns0:p></ns0:div>
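The reduction of Eqs. 18-19 can be checked numerically in a few lines. In the sketch below, the quadrant of the phase is resolved with the two-argument arctangent rather than the explicit case distinction, which is a convenient implementation choice; the coefficients, frequency and phase are arbitrary.

```python
import numpy as np

def collapse(alpha, omega, phi):
    """Collapse sum_i alpha_i * omega**i * cos(omega*x + phi + i*pi/2)
    into a single cosine rho * cos(omega*x + phi_tilde), cf. Eqs. (18)-(19)."""
    i = np.arange(len(alpha))
    rc = np.sum(alpha * omega**i * np.cos(phi + i * np.pi / 2))
    rs = np.sum(alpha * omega**i * np.sin(phi + i * np.pi / 2))
    rho = np.hypot(rc, rs)
    phi_tilde = np.arctan2(rs, rc)   # phase whose cosine and sine are proportional to rc, rs
    return rho, phi_tilde

alpha, omega, phi = np.array([1.0, 0.3, -0.2]), 2.0, 0.7
rho, phi_tilde = collapse(alpha, omega, phi)

x = np.linspace(0, 5, 7)
i = np.arange(len(alpha))
lhs = (alpha * omega**i * np.cos(omega * x[:, None] + phi + i * np.pi / 2)).sum(axis=1)
print(np.allclose(lhs, rho * np.cos(omega * x + phi_tilde)))   # True
```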
<ns0:div><ns0:head n='4'>Quantifying the Periodicity</ns0:head><ns0:p>The decomposition of the model into a sum of sub-models is useful for quantifying the periodicity of pseudo-periodic signals. In this section we propose a criterion based on the ratio of signal variance explained by the sub-models.</ns0:p><ns0:p>In sensitivity analysis, a common approach for measuring the effect of a set of variables x 1 , . . . , x n on the output of a multivariate function f (x 1 , . . . , x n ) is to introduce a random vector R = (r 1 , . . . , r n ) with values in the input space of f and to define the variance explained by a subset of variables x I = (x I 1 , . . . , x I m ) as V I = Var(E(f (R)|R I )) <ns0:ref type='bibr' target='#b16'>[Oakley and O'Hagan, 2004]</ns0:ref>. Furthermore, the prediction variance of the Gaussian process model can be taken into account by computing the indices based on random paths of the conditional Gaussian process <ns0:ref type='bibr' target='#b14'>[Marrel et al., 2009]</ns0:ref>.</ns0:p><ns0:p>We now apply these two principles to define a periodicity ratio based on the sub-models. Let R be a random variable defined over the input space and y p , y a be the periodic and aperiodic components of the conditional process y given the data-points. y p and y a are normally distributed with respective mean and variance (m p , v p ), (m a , v a ) and their covariance is given by Cov(y p (x), y a (x')) = −k p (x, X) k(X, X)^{-1} k a (X, x'). To quantify the periodicity of the signal we introduce the following periodicity ratio:</ns0:p><ns0:formula xml:id='formula_26'>S = \frac{\mathrm{Var}_R[y_p(R)]}{\mathrm{Var}_R[y_p(R) + y_a(R)]}.<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>Note that S cannot be interpreted as the percentage of periodicity of the signal in a rigorous way since Var R [y p (R) + y a (R)] ≠ Var R [y p (R)] + Var R [y a (R)]. As a consequence, this ratio can be greater than 1.</ns0:p><ns0:p>For the model shown in Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>, the mean and standard deviation of S are respectively 0.86 and 0.01.</ns0:p></ns0:div>
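The ratio of Eq. 20 can be estimated by Monte Carlo: draw R uniformly over the input space, sample (y_p(R), y_a(R)) jointly from the conditional distribution, and average the resulting variance ratios. The sketch below assumes the conditional means, covariances and the cross-covariance term have already been assembled into the joint Gaussian of (y_p(R), y_a(R)); the number of samples follows the 1000 realisations used later in the article.

```python
import numpy as np

def periodicity_ratio(m_p, m_a, C_joint, n_samples=1000, seed=0):
    """Monte Carlo estimate of S = Var_R[y_p(R)] / Var_R[y_p(R) + y_a(R)]  (Eq. 20).

    m_p, m_a : conditional means of y_p and y_a at n points R drawn over the input space.
    C_joint  : (2n, 2n) joint conditional covariance of (y_p(R), y_a(R)), including the
               cross-covariance -k_p(R, X) k(X, X)^{-1} k_a(X, R)."""
    n = len(m_p)
    rng = np.random.default_rng(seed)
    mean = np.concatenate([m_p, m_a])
    draws = rng.multivariate_normal(mean, C_joint, size=n_samples)   # (n_samples, 2n)
    yp, ya = draws[:, :n], draws[:, n:]
    S = yp.var(axis=1) / (yp + ya).var(axis=1)   # one ratio per conditional sample path
    return S.mean(), S.std()
```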
<ns0:div><ns0:head n='5'>Application to Gene Expression Analysis</ns0:head><ns0:p>The 24-hour cycle of days can be observed in the oscillations of biological mechanisms at many spatial scales. This phenomenon, called the circadian rhythm, can for example be seen at a microscopic level in gene expression changes within cells and tissues. The cellular mechanism ensuring this periodic behaviour is called the circadian clock. For arabidopsis, which is a widely used organism in plant biology and genetics, the study of the circadian clock at a gene level shows an auto-regulatory system involving several genes <ns0:ref type='bibr' target='#b3'>[Ding et al., 2007]</ns0:ref>.</ns0:p><ns0:p>As argued by <ns0:ref type='bibr' target='#b7'>Edwards et al. [2006]</ns0:ref>, it is believed that the genes involved in the oscillatory mechanism have a cyclic expression, so the detection of periodically expressed genes is of great interest for completing current models.</ns0:p><ns0:p>Within each cell, protein-coding genes are transcribed into messenger RNA molecules which are used for protein synthesis. To quantify the expression of a specific protein-coding gene it is possible to measure the concentration of messenger RNA molecules associated with this gene. Microarray analysis and RNA-sequencing are two examples of methods that take advantage of this principle.</ns0:p><ns0:p>The dataset considered here was originally studied by <ns0:ref type='bibr' target='#b7'>Edwards et al. [2006]</ns0:ref>. It corresponds to gene expression for nine-day-old arabidopsis seedlings. After eight days under a 12h-light/12h-dark cycle, the seedlings are transferred into constant light. A microarray analysis is performed every four hours, from 26 to 74 hours after the last dark-light transition, to monitor the expression of 22810 genes. <ns0:ref type='bibr' target='#b7'>Edwards et al. [2006]</ns0:ref> use COSOPT <ns0:ref type='bibr' target='#b25'>[Straume, 2004]</ns0:ref> for detecting periodic genes and identify a subset of 3504 periodically expressed genes, with an estimated period between 20 and 28 hours. We now apply to this dataset the method described in the previous sections. The kernel we consider is a sum of a periodic and aperiodic Matérn 3/2 kernel plus a delta function to reflect observation noise:</ns0:p><ns0:formula xml:id='formula_29'>k(x, x') = \sigma_p^2\, k_p(x, x') + \sigma_a^2\, k_a(x, x') + \tau^2 \delta(x, x').<ns0:label>(21)</ns0:label></ns0:formula><ns0:p>Although the cycle of the circadian clock is known to be around 24 hours, circadian rhythms often depart from this figure (indeed circa dia is Latin for around a day) so we estimate the parameter λ to determine the actual period. The final parametrisation of k is based on six variables: (σ_p², ℓ_p, σ_a², ℓ_a, τ², λ). For each gene, the values of these parameters are estimated using maximum likelihood. The optimization is based on the standard options of the GPy toolkit with the following boundary limits for the parameters: σ_p, σ_a ≥ 0; ℓ_p, ℓ_a ∈ [10, 60]; τ² ∈ [10⁻⁵, 0.75] and λ ∈ [20, 28]. Furthermore, 50 random restarts are performed for each optimization to limit the effects of local minima.</ns0:p><ns0:p>Eventually, the periodicity of each model is assessed with the ratio S given by Eq. 20. As this ratio is a random variable, we approximate the expectation of S with the mean value of 1000 realisations.</ns0:p>
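For orientation, the per-gene fitting loop could look roughly like the sketch below. It is only a schematic illustration: the kernel class names, parameter paths and constraint calls are assumptions about the toolkit interface and may differ from the periodic kernels actually implemented in GPy for this work, the aperiodic part here is a plain Matérn 3/2 stand-in rather than the exact k_a = k − k_p construction, and the data loading is omitted.

```python
import numpy as np
import GPy  # assumes a GPy version exposing periodic Matern kernels

def fit_gene(X, Y):
    """Fit k = sigma_p^2 k_p + sigma_a^2 k_a + tau^2 delta (in the spirit of Eq. 21).

    X: (n, 1) sampling times in hours, Y: (n, 1) expression levels for one gene.
    Class and attribute names below are assumptions, not the documented API."""
    k_per = GPy.kern.PeriodicMatern32(input_dim=1)   # periodic sub-kernel (assumed class name)
    k_aper = GPy.kern.Matern32(input_dim=1)          # stand-in for the aperiodic sub-kernel
    model = GPy.models.GPRegression(X, Y, k_per + k_aper)

    # boundary limits reported in the article (attribute paths are assumptions)
    k_per.lengthscale.constrain_bounded(10, 60)
    k_aper.lengthscale.constrain_bounded(10, 60)
    k_per.period.constrain_bounded(20, 28)
    model.likelihood.variance.constrain_bounded(1e-5, 0.75)

    model.optimize_restarts(num_restarts=50, verbose=False)
    return model
```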
To obtain results comparable with the original paper on this dataset, we label as periodic the set of 3504 genes with the highest periodicity ratio. The cut-off periodicity ratio associated with this quantile is S = 0.76. As can be seen in Fig. <ns0:ref type='figure'>4</ns0:ref>, this cut-off value does not appear to be of particular significance according to the distribution of the Gaussian process models. On the other hand, the distribution spike that can be seen at S = 1 corresponds to a gap between models that are fully-periodic and others. We believe this gap is due to the maximum likelihood estimation since the estimate of σ 2 a is zero for all models in the bin S = 1. The other spike at S = 0 can be interpreted similarly and it corresponds to estimated σ 2 p equal to zero.</ns0:p><ns0:p>Let P COSOP T and P GP be the sets of selected periodic genes respectively by Edwards et al. [2006] and the method presented here and let P COSOP T and P GP denote their complements. The overlap between these sets is summarised in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. Although the results cannot be compared to any ground truth, the methods seem coherent since 88% of the genes share the same label. Furthermore the estimated value of the period λ is consistent for the genes labelled as periodic by the two methods, as seen in Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>.</ns0:p><ns0:p>One interesting comparison between the two methods is to examine the genes that are classified differently. The available data from <ns0:ref type='bibr' target='#b7'>Edwards et al. [2006]</ns0:ref> allows focusing on the worst classification mistakes made by one method according to the other. This is illustrated in Fig. <ns0:ref type='figure'>6</ns0:ref>, which shows the behaviour of the most periodically expressed genes in P GP according to COSOPT and, conversely, the genes in P COSOP T with the highest periodicity ratio S. [Figure 6 caption: Examples of genes with different labels. Panel A corresponds to genes labelled as periodic by COSOPT but not by the Gaussian process approach, whereas in B they are labelled as periodic only by the latter. In each panel, the four selected genes are those with the highest periodic part according to the method that labels them as periodic. The titles of the graphs correspond to the name of the genes (AGI convention).]</ns0:p><ns0:p>Although it is undeniable that the genes selected only by COSOPT (panel A) present some periodic component, they also show a strong non-periodic part, corresponding either to noise or trend. For these genes, the value of the periodicity ratio is: 0.74 (0.10), 0.74 (0.15), 0.63 (0.11), 0.67 (0.05) (means and standard deviations, clockwise from top left) which is close to the classification boundary. On the other hand, the genes selected only by the Gaussian process approach show a strong periodic signal (we have for all genes S = 1.01 (0.01)) with sharp spikes. We note from panel B that there is always at least one observation associated with each spike, which ensures that the behaviour of the Gaussian process models cannot simply be interpreted as overfitting. The reason COSOPT is not able to identify these signals as periodic is that it is based on a single cosine function which makes it inadequate for fitting non sinusoidal periodic functions. This is typically the case for gene expressions with spikes as in panel B but it can also be seen on the test functions of Fig.
<ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>This comparison shows very promising results, both for the capability of the proposed method to handle large datasets and for the quality of the results. Furthermore we believe that the spike shape of the newly discovered genes may be of particular interest for understanding the mechanism of the circadian clock. The full results, as well as the original dataset can be found in the Supplemental Information section.</ns0:p><ns0:p>The main purpose of this article is to introduce a new approach for estimating, extracting and quantifying the periodic component of a pseudo-periodic function f given some noisy observations y i = f (x i ) + ε. The proposed method is typical in that it corresponds to the orthogonal projection onto a basis of periodic functions. The originality here is to perform this projection in some RKHS where the partial knowledge given by the observations can be dealt with elegantly. Previous theoretical results from the mid-1900s allowed us to derive the expressions of the inner product of RKHS based on Matérn kernels. Given these results, it was then possible to define a periodic kernel k p and to decompose k as a sum of sub-kernels</ns0:p><ns0:formula xml:id='formula_32'>k = k p + k a .</ns0:formula><ns0:p>We illustrated three fundamental features of the proposed kernels for Gaussian process modelling. First, as we have seen on the benchmark examples, they allow us to approximate periodic non-sinusoidal patterns while retaining appropriate filtering of the noise. Second, they provide a natural decomposition of the Gaussian process model as a sum of periodic and aperiodic sub-models. Third, they can be reparametrised to define a wider family of kernel which is of particular interest for decoupling the assumptions on the behaviour of the periodic and aperiodic part of the signal.</ns0:p><ns0:p>The probabilistic interpretation of the decomposition in sub-models is of great importance when it comes to define a criterion that quantifies the periodicity of f while taking into account the uncertainty about it. This goal was achieved by applying methods commonly used in Gaussian process based sensitivity analysis to define a periodicity ratio.</ns0:p><ns0:p>Although the proposed method can be applied to any time series data, this work has originally been motivated by the detection of periodically expressed genes. In practice, listing such genes is a key step for a better understanding of the circadian clock mechanism at the gene level. The effectiveness of the method is illustrated on such data in the last section.</ns0:p><ns0:p>The results we obtained are consistent with the literature but they also feature some new genes with a strong periodic component. This suggests that the approach described here is not only theoretically elegant but also efficient in practice.</ns0:p><ns0:p>As a final remark, we would like to stress that the proposed method is fully compatible with all the features of Gaussian processes, from the combination of one-dimensional periodic kernels to obtain periodic kernels in higher dimension to the use of sparse methods when the number of observation becomes large. By implementing our new method within the GPy package for Gaussian process inference we have access to these generalisations along with effective methods for parameter estimation. 
An interesting future direction would be to incorporate the proposed kernel into the 'Automated Statistician' project <ns0:ref type='bibr' target='#b13'>[Lloyd et al., 2014</ns0:ref><ns0:ref type='bibr' target='#b6'>, Duvenaud et al., 2013]</ns0:ref>, which searches over grammars of kernels.</ns0:p><ns0:p>APPENDIX</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2: Examples of decompositions of a kernel as a sum of a periodic and aperiodic sub-kernels. (A) Matérn 3/2 kernel k(., 5). (B) Periodic sub-kernel k p (., 5). (C) aperiodic sub-kernel k a (., 5). For these plots, one of the kernels variables is fixed to 5. The three graphs on each plot correspond to a different value of the lengthscale parameter . The input space is D = [0, 4π] and the cut-off frequency is q = 20. The Python code used to generate this figure is provided as jupyter notebook in Supplemental Information.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Decomposition of a Gaussian process fit. (A) full model m; (B) periodic portion m p and (C) aperiodic portion m a .Our decomposition allows for recognition of both periodic and aperiodic parts. In this case maximum likelihood estimation was used to determine the parameters of the kernel, we recovered (σ 2 p , p , σ 2 a , a ) = (52.96, 5.99, 1.18, 47.79). The Python code used to generate this figure is provided as jupyter notebook in Supplemental Information.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Distribution of the periodicity ratio S over all genes according to the Gaussian process models; the cut-off ratio determining whether genes are labelled as periodic is shown as a vertical dashed line.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Comparison of Estimated periods for the genes in P GP ∩ P COSOP T . The coefficient of determination of x → x (dashed line) is 0.69.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Confusion table associated to the predictions by COSOPT and the proposed Gaussian process approach (number of genes). Labelled periodic by both methods: 2127; periodic by COSOPT only: 1377; periodic by the Gaussian process approach only: 1377; periodic by neither: 17929.</ns0:figDesc><ns0:table /><ns0:note>1 See http://millar.bio.ed.ac.uk/data.htm.</ns0:note></ns0:figure>
</ns0:body>
" | "Response to reviews – Gaussian process models for periodicity detection
N. Durrande (durrande@gmail.com), J. Hensman, M. Rattray. N. Lawrence
We would first of all like to thank the editor and reviewers for the time they granted to this paper and
for their advice on how to improve this manuscript. We agree with every comment and remark they
made and we have proceeded with all the suggested changes (see highlighted red font in the document).
We will now address each of their remarks and detail how we modified the manuscript to dispel their
concerns.
Editor’s comments
The reviewer consensus is that the paper is well written and will provide new tools for
detecting periodicity in large data sets. The motivating example is relevant to cellular
biologists and will be interesting to a broader class of readers. The paper is ready for
publication once the following minor issues have been addressed.
[1.] Reviewer 1 has identified some typographical errors that need to be fixed. Reviewer 1
also asks that you discuss the sensitivity of your results to the 0.77 threshold.
These two points are addressed in detail in the answer to the first reviewer (see below).
[2.] In addition, it would be good to include a little more discussion of the right-hand
panel of Figure 5. The authors note that the disagreements on the left-hand side are near-misses for GP. But no real insight is given into why the methods disagree on the cases
in the right-hand panel. Why didn’t COSOPT find periodicity there? The paper says
the comparison allows us to focus on the differences between the methods, and that some
new genes with a strong periodic component have been identified. This is potentially very
interesting biologically. But there is no analysis or explanation of why those periodicities
were not found by other methods.
This discussion was indeed missing in the original submission. We added a paragraph on page 14 to
explain why these patterns cannot be identified as periodic by COSOPT:
“The reason COSOPT is not able to identify these signals as periodic is that it is based
on a single cosine function which makes it inadequate for fitting non sinusoidal periodic
functions. This is typically the case for gene expressions with sharp spikes as in panel (b)
but it can also be seen on the test functions of Fig. 1.”
[3.] Finally, there are some additional problems that need to be corrected beyond those
found by Reviewer 1:
1. Page 4: ”a process yp which samples are periodic functions” - This is ungrammatical.
Do you mean ”the samples of which are periodic functions”?
2. Page 4: matrix notations → matrix notation
These changes have been made in the text.
3. Page 7: I have never seen the term ”degenerated parameters.” What is that? Do you
mean parameter settings for which Hp or Ha is a degenerate Hilbert space? A degenerate
Hilbert space is a well-defined notion; degenerated parameters is non-standard. If this is
really what you mean and the term has a precise meaning, you need to define it.
There is indeed a need for clarification here. We replaced “parameters are not degenerated” by “parameters are not equal to zero or +∞”.
4. Page 7: Equation (10) is introduced as a decomposition of the best predictor as a sum of
sub-models ma and mp . Then you need to say what ma and mp are. Remind the reader that
a Gaussian process model is characterized through its mean and variance. State explicitly
that E[yp (x)|y(X) = F ] and E[ya (x)|y(X) = F ] are the means of the two sub-models. You
do this in Equation (11) for the variances, but you never explicitly define them. An alert reader
familiar with Gaussian processes can figure out that this is what you mean, but as written
it could be confusing for the uninitiated.
The paragraph has been fully rewritten to improve readability. It now reads:
The expression y = y_p + y_a of Eq. 9 allows us to introduce two sub-models corresponding to
conditional distributions: a periodic one y_p(x)|y(X) = F and an aperiodic one y_a(x)|y(X) = F.
These two distributions are Gaussian and their mean and variance are given by the usual
Gaussian process conditioning formulas

m_p(x) = E[y_p(x)|y(X) = F] = k_p(x, X) k(X, X)^{-1} F
m_a(x) = E[y_a(x)|y(X) = F] = k_a(x, X) k(X, X)^{-1} F,                                   (1)

v_p(x) = Var[y_p(x)|y(X) = F] = k_p(x, x) - k_p(x, X) k(X, X)^{-1} k_p(X, x)
v_a(x) = Var[y_a(x)|y(X) = F] = k_a(x, x) - k_a(x, X) k(X, X)^{-1} k_a(X, x).             (2)

The linearity of the expectation ensures that the sum of the sub-model means is equal to
the full model mean:

m(x) = E[y_p(x) + y_a(x)|y(X) = F] = E[y_p(x)|y(X) = F] + E[y_a(x)|y(X) = F] = m_p(x) + m_a(x)   (3)

so m_p and m_a can be interpreted as the decomposition of m into its periodic and aperiodic
components. However, there is no similar decomposition of the variance: v(x) ≠ v_p(x) + v_a(x)
since y_p and y_a are not independent given the observations.
We hope these changes will be helpful to readers not familiar with GPs.
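For completeness, a minimal NumPy sketch of these conditioning formulas (illustrative only; the kernel evaluation functions k, k_p and k_a, with k = k_p + k_a, are assumed to be supplied by the user) is:

import numpy as np

def sub_model_means(x_star, X, F, k, k_p, k_a, jitter=1e-8):
    # Evaluate m_p, m_a and the full mean m = m_p + m_a at the test points x_star.
    # k, k_p, k_a: callables returning Gram matrices between two sets of inputs.
    K = k(X, X) + jitter * np.eye(len(X))   # k(X, X), with a small jitter for numerical stability
    alpha = np.linalg.solve(K, F)           # k(X, X)^{-1} F
    m_p = k_p(x_star, X) @ alpha
    m_a = k_a(x_star, X) @ alpha
    return m_p, m_a, m_p + m_a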
5. independent knowing the observations → independent given the observations (or independent conditional on the observations) The terminology ”knowing” for conditioning also
appears elsewhere, and is non-standard.
6. Page 8: points they are evaluated at → points at which they are evaluated
7. Page 9: they belongs to → they belong to
8. Page 10: knowing the data-points → given the data points
9. Page 11: As advocated by Edwards et al. → As hypothesized by Edwards et al. (or argued, or studied, or discussed) Science proceeds by hypothesizing and evaluating empirically.
Advocacy is for lobbyists.
10. Page 12: cannot be compare to → cannot be compared to
11. Page 14: There is a space before the period at the end of the second-to-last sentence in
the first paragraph.
11. As often, the proposed method ... → The proposed method is typical in that it... ”As
often” is not standard English.
All these changes have been made according to the suggestions.
Reviewer 1 (Anonymous)
Basic reporting
This is a well-written manuscript, with very little to criticise. It is nicely self-contained,
and focused. The use of iPython/jupyter notebooks is to be commended, as well as the
availability of code and data.
[1.]Current references to relevant literature seem appropriate. I did wonder if it might be
worth making reference to the ”automatic statistician” project (of Ghahramani and others).
However, I am certain the authors are aware of this project, and am happy to leave this
to their discretion. I was also a little surprised that the authors did not cite more of their
own relevant work.
The reason we did not include a reference to this project is that we were not aware of it when the
first version of the paper was written in early 2013. A citation has been added in the conclusion of the
revised manuscript.
[2.]I spotted just a few typos/grammatical errors, listed below:
1. Page 5: ”which samples are periodic functions” and ”This can obtain by”.
2. Page 13: ”Although the results cannot be compare to any ground truth”
3. Page 15: ”We illustrated three fundamental feature of”
4. Consistency with ”arabidopsis” vs. ”Arabidopsis” (and italics). Perhaps not so important for a computer science audience, but it is nevertheless nice to be consistent. [Since
arabidopsis has become the common name for A. thaliana, I personally think neither capitalisation nor italics are required if the name is just given as arabidopsis – which looks to
be the case throughout this manuscript].
These errors have been corrected.
Experimental design
The experimental design appears sound, with the work conducted rigorously and to a high
technical standard. Reproducibility is ensured through open-source implementation, and the
use of iPython/jupyter notebooks.
I am certainly not an expert on reproducing kernel Hilbert spaces, but I did not spot any
obvious blunders.
Validity of the findings
The application of the proposed method to gene expression data is concise and to the point.
The comparison to COSOPT is welcomed, and the level of agreement between these methods
is reassuring.
[3.]I have just one query, which is minor but seems potentially important for future applications of the method. As far as I understood, the classification threshold for the A.
thaliana dataset was determined to ensure that the number of periodic genes was the same
as the number returned by COSOPT (i.e. 3504). But, in practice, how would I determine
an ”optimal” classification threshold for the proposed method, without running COSOPT?
For this particular example, could it be the case that the ”optimal” classification threshold
is quite a lot lower than 0.77, and actually all of the periodic genes reported by COSOPT
would be reported by the proposed method (if this lower threshold had been chosen)? That
is, could it be the case that the effective difference between the methods (in practice) is one
of having different sensitivity/specificity properties?
The issue of choosing the classification threshold is indeed an interesting point that is not tackled in
this article. In order to give more elements to the reader, we added a new figure (Fig. 4, also included
at the end of this document) showing the distribution of the periodicity ratios over all genes. We also
included the following paragraph in the manuscript:
“As can be seen in Fig. 4, this cut-off value does not appear to be of particular significance
according to the distribution of the Gaussian process models. On the other hand, the
distribution spike that can be seen at S = 1 corresponds to a gap between models that are
fully-periodic and others. We believe this gap is due to the maximum likelihood estimation
since the estimate of σ_a² is zero for all models in the bin S = 1. The other spike at S = 0
can be interpreted similarly and it corresponds to estimated σ_p² equal to zero.”
The question of the choice of the threshold is thus still open. For a given value of the parameters,
one can sample from the prior process and obtain an empirical distribution of the periodicity ratio
that can be used as a null distribution for a p-value analysis. However, it is not clear to us how to
rigorously compare periodicity ratios for models based on different parameters.
Reviewer 2 (Mark Girolami)
Basic reporting
This is a self contained paper describing how additive covariance functions for Gaussian
Process priors can be used to detect systematic periodicity in observed data. The motivating
example of circadian enzymatic control is highly relevant to contemporary cellular biology
and a good example of where this sort of statistical machinery proves to be useful.
Experimental design
This is all appropriate for the study undertaken.
Validity of the findings
In terms of the statistical methodology there is sufficient validation of the outcomes.
Comments for the author
A reasonably comprehensive and self-contained article that will introduce readers to the
Gaussian Process machinery of inference and how covariance functions may be constructed
to address a specific study, in this case detecting periodicities. Well written and clear to
read.
These comments did not imply any specific change in the manuscript.
[Omitted plot: histogram of the periodicity ratio S over all genes (x-axis: S ratio from 0.0 to 1.0; y-axis: density); the caption follows.]
Figure 1: Distribution of the periodicity ratio over all genes according to the Gaussian process models.
The cut-off ratio determining if genes are labelled as periodic or not is represented by a vertical dashed
line.
" | Here is a paper. Please give your review comments after reading it. |
74 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Selection and sorting the Cartesian sum, X + Y , are classic and important problems. Here, a new algorithm is presented, which generates the top k values of the form X i + Y j . The algorithm relies on layer-ordered heaps, partial orderings of exponentially sized layers. The algorithm relies only on median-of-medians and is simple to implement. Furthermore, it uses data structures contiguous in memory, and is cache efficient and fast in practice. The presented algorithm is demonstrated to be theoretically optimal.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Given two vectors of length n, X and Y , top-k on X +Y finds the k smallest values of the form X i +Y j .</ns0:p><ns0:p>Note that this problem definition is presented w.l.o.g.; X and Y need not share the same length. Top-k is important to practical applications, such as selecting the most abundant k isotopologue peaks from a compound <ns0:ref type='bibr' target='#b6'>(Kreitzberg et al., 2020b)</ns0:ref>. Top-k is ∈ Ω(n + k), because loading the vectors is ∈ Θ(n) and returning the minimal k values is ∈ Θ(k).</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Naive approach</ns0:head><ns0:p>Top-k can be solved trivially in O(n 2 log(n) + k) = O(n 2 log(n)) steps by generating and sorting all n 2 values of the form X i + Y j . By using median-of-medians <ns0:ref type='bibr' target='#b0'>(Blum et al., 1973)</ns0:ref>, this can be improved to O(n 2 ) steps by generating all n 2 values and performing k-selection on them.</ns0:p></ns0:div>
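For reference, a direct implementation of this naive baseline is sketched below (illustrative only; numpy.partition uses introselect rather than median-of-medians, but it plays the same role of a linear-time one-dimensional k-selection):

import numpy as np

def naive_top_k(X, Y, k):
    # Materialise all n^2 sums X_i + Y_j and k-select; O(n^2) time and memory.
    sums = (np.asarray(X)[:, None] + np.asarray(Y)[None, :]).ravel()
    return np.sort(np.partition(sums, k - 1)[:k])  # sorting the k survivors is optional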
<ns0:div><ns0:head n='1.2'>Existing, tree-based methods for top-k</ns0:head><ns0:p>In 1982, Frederickson & Johnson introduced a method reminiscent of median-of-medians <ns0:ref type='bibr' target='#b0'>(Blum et al., 1973)</ns0:ref>; their method selects only the k th minimum value from X +Y in O(n + min(n, k) log( k min(n,k) )) steps <ns0:ref type='bibr' target='#b3'>(Frederickson and Johnson, 1982)</ns0:ref>.</ns0:p><ns0:p>Frederickson subsequently published a second algorithm, which finds the k smallest elements from a min-heap in O(k), assuming the heap has already been built <ns0:ref type='bibr' target='#b2'>(Frederickson, 1993)</ns0:ref>. Combining this method with a combinatoric heap on X +Y (described below for the <ns0:ref type='bibr'>Kaplan et al. method)</ns0:ref> solves top-k in O(n + k). The tree data structure in Frederickson's method can be combined with a combinatoric heap to compute the k th smallest value from X +Y .</ns0:p><ns0:p>Kaplan et al. described an alternative method for selecting the k th smallest value <ns0:ref type='bibr' target='#b4'>(Kaplan et al., 2019)</ns0:ref>; that method explicitly used Chazelle's soft heaps <ns0:ref type='bibr' target='#b1'>(Chazelle, 2000)</ns0:ref>. By heapifying X and Y in linear time (i.e., guaranteeing w.l.o.g. that X i ≤ X 2i , X 2i+1 ), min i, j X i +Y j = X 1 +Y 1 . Likewise, X i +Y j ≤ X 2i +Y j , X 2i+1 +Y j , X i +Y 2 j , X i +Y 2 j+1 . The soft heap is initialized to contain tuple (X 1 +Y 1 , 1, 1). Then, as tuple (v, i, j) is popped from soft heap, lower-quality tuples are inserted into the soft heap. These lower-quality tuples of (i, j) are {(2i, 1), (2i + 1, 1), (i, 2), (i, 3)}, j = 1 {(i, 2 j), (i, 2 j + 1)}, j > 1.</ns0:p><ns0:p>(1)</ns0:p><ns0:p>In the matrix X i +Y j (which is not realized), this scheme progresses in row-major order, thereby avoiding a tuple being added multiple times.</ns0:p><ns0:p>To compute the k th smallest value from X +Y , the best k values are popped from the soft heap. Even though only the minimal k values are desired, 'corruption' in the soft heap means that the soft heap will not always pop the minimal value; however, as a result, soft heaps can run faster than the Ω(n log(n)) lower bound on comparison sorting. The free parameter ε ∈ (0, 1 2 ] bounds the number of corrupt elements in the soft heap (which may be promoted earlier in the queue than they should be) as ≤ t • ε, where t is the number of insertions into the soft heap thus far. Thus, instead of popping k items (and inserting their lower-quality dependents as described in equation 1), the total number of pops p can be found: The maximal size of the soft heap after p pops is ≤ 3p (because each pop removes one element and inserts <ns0:ref type='bibr' target='#b0'>(Blum et al., 1973)</ns0:ref>. For constant ε, both pop and insertion operations to the soft heap are ∈ Õ(1), and thus the overall runtime of the algorithm is</ns0:p><ns0:formula xml:id='formula_0'>∈ O(n + k).</ns0:formula><ns0:p>Note that the Kaplan et al. method easily solves top-k in O(n + k) steps; this is because computing the k th smallest value from X +Y pops the minimal k values from the soft heap.</ns0:p></ns0:div>
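To make the proposal scheme of equation 1 concrete, the sketch below enumerates the minimal k sums using an ordinary binary heap in place of a soft heap, so it runs in O(n + k log k) rather than O(n + k); it is illustrative only and is not the Kaplan et al. implementation:

import heapq

def children(i, j, n_x, n_y):
    # lower-quality dependents of cell (i, j) (1-based indexing), per equation 1;
    # each cell is proposed exactly once, so no duplicate bookkeeping is needed
    cand = [(2 * i, 1), (2 * i + 1, 1), (i, 2), (i, 3)] if j == 1 else [(i, 2 * j), (i, 2 * j + 1)]
    return [(a, b) for (a, b) in cand if a <= n_x and b <= n_y]

def top_k_with_binary_heap(X, Y, k):
    # heapify X and Y (0-based heapq order; shifting to 1-based gives X[i] <= X[2i], X[2i+1])
    X, Y = list(X), list(Y)
    heapq.heapify(X)
    heapq.heapify(Y)
    X, Y = [None] + X, [None] + Y
    n_x, n_y = len(X) - 1, len(Y) - 1
    out, frontier = [], [(X[1] + Y[1], 1, 1)]
    while frontier and len(out) < k:
        v, i, j = heapq.heappop(frontier)
        out.append(v)
        for a, b in children(i, j, n_x, n_y):
            heapq.heappush(frontier, (X[a] + Y[b], a, b))
    return out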
<ns0:div><ns0:head n='1.3'>Layer-ordered heaps and a novel selection algorithm on X +Y</ns0:head><ns0:p>This paper uses layer-ordered heaps (LOHs) <ns0:ref type='bibr' target='#b5'>(Kreitzberg et al., 2020a)</ns0:ref> to produce an optimal selection algorithm on X +Y . LOHs are stricter than heaps but not as strict as sorting: Heaps guarantee only that</ns0:p><ns0:formula xml:id='formula_1'>X i ≤ X children(i)</ns0:formula><ns0:p>, but do not guarantee any ordering between one child of X i , a, and the child of the sibling of a. Sorting is stricter still, but sorting n values cannot be done faster than log 2 (n!) ∈ Ω(n log(n)). LOHs partition the array into several layers such that the values in a layer are ≤ to the values in subsequent layers: u+1) . The size of these layers starts with |X (1) | = 1 and grows exponentially such that lim u→∞ Although selections reminiscent of LOHs may have been used previously, formalization of rank α</ns0:p><ns0:formula xml:id='formula_2'>X (u) = X (u) 1 , X (u) 2 , . . . ≤ X (</ns0:formula><ns0:formula xml:id='formula_3'>|X (u+1) | |X (u) | = α ≥ 1 (note that α = 1 is</ns0:formula><ns0:p>LOHs has been necessary to demonstrate that for 1 ≪ α ≪ 2, a combination of LOHs and soft heaps allow <ns0:ref type='bibr'>et al., 2020a)</ns0:ref>. Furthermore, efficiently constructing an LOH of rank α is not trivial when α ≪ 2; after all, α → 1 results in layers of size</ns0:p><ns0:formula xml:id='formula_4'>generating the minimum k values from X 1 + X 2 + • • • + X m (where each X i has length n) in o(n • m + k • m) (Kreitzberg</ns0:formula><ns0:formula xml:id='formula_5'>|X (1) | = |X (2) | = • • • = 1, indicating a sorting, which</ns0:formula><ns0:p>implies a runtime ∈ Ω(n log(n)) <ns0:ref type='bibr' target='#b7'>(Pennington et al., 2020)</ns0:ref>.</ns0:p><ns0:p>A python implementation of a LOH is shown in listing 1.</ns0:p></ns0:div>
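As a compact illustration of LOHification (separate from Listing 1), the sketch below builds the layers by repeatedly peeling off the largest remaining layer with numpy.partition (introselect rather than median-of-medians, but the same linear-time role). Applied to the example above, lohify([8, 1, 6, 4, 5, 3, 2], alpha=2) produces layers of sizes 1, 2 and 4 with the contents described:

import numpy as np

def lohify(values, alpha=2.0):
    # Partition `values` into layers whose sizes grow roughly by a factor alpha and
    # whose elements are <= the elements of every later layer.
    values = np.asarray(values, dtype=float).copy()
    sizes, remaining, size = [], len(values), 1.0
    while remaining > 0:
        s = min(max(1, int(round(size))), remaining)
        sizes.append(s)
        remaining -= s
        size *= alpha
    layers, hi = [], len(values)
    for s in reversed(sizes):                            # peel the largest layer off first
        lo = hi - s
        if lo > 0:
            values[:hi] = np.partition(values[:hi], lo)  # move the s largest to the back
        layers.append(values[lo:hi].copy())
        hi = lo
    return layers[::-1]                                  # smallest layer first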
<ns0:div><ns0:head n='1.4'>Contribution in this manuscript</ns0:head><ns0:p>The new, optimal algorithm for solving top-k presented here makes extensive use of LOHs. It is simple to implement, does not rely on anything more complicated than linear time one-dimensional selection (i.e., it does not use soft heap). Due to its simplicity and contiguous memory access, it has a fast performance in practice.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Algorithm</ns0:head><ns0:p>The algorithm presented is broken into phases. An illustration of these phases is provided in figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.1'>Phase 0</ns0:head><ns0:p>The algorithm first LOHifies (i.e., constructs a layer order heap from) both X and Y . This is performed by using linear time one-dimensional selection to iteratively remove the largest remaining layer (i.e., the simplest LOH construction method, which is optimal when α ≫ 1).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.2'>Phase 1</ns0:head><ns0:p>Now layer products of the form</ns0:p><ns0:formula xml:id='formula_8'>X (u) +Y (v) = X (u) 1 +Y (v) 1 , X (u) 1 +Y (v) 2 , . . . , X (u) 2 +Y (v) 1 , . . . are considered,</ns0:formula><ns0:p>where X (u) and Y (v) are layers of their respective LOHs.</ns0:p><ns0:p>In phases 1-2, the algorithm initially considers only the minimum and maximum values in each layer product: ⌊(u, v)⌋ = (min(X (u) +Y (v) ), (u, v), 0), ⌈(u, v)⌉ = (max(X (u) +Y (v) ), (u, v), 1). It is unnecessary to compute the Cartesian product of values to build a layer product; instead, only the minimum or maximum values in X (u) and Y (v) are needed. Note that the final value in the tuple uses 0 to indicate that this is the minimum value in the layer product or 1 to indicate the maximum value in the layer product; this ensures that even layer products with homogeneous values satisfy ⌊(u, v)⌋ < ⌈(u, v)⌉. Scalar values can be compared to tuples:</ns0:p><ns0:formula xml:id='formula_9'>X i +Y j ≤ ⌈(u, v)⌉ = (max(X (u) +Y (v) ), (u, v), 1) ↔ X i +Y j ≤ max(X (u) +Y (v) ).</ns0:formula><ns0:p>Binary heap H is initialized to contain tuple ⌊(1, 1)⌋. A set of all tuples in H is maintained to prevent duplicates from being inserted into H (this set could be excluded by using the Kaplan et al. proposal scheme). The algorithm proceeds by popping the lexicographically minimum tuple from H. W.l.o.g., there is no guaranteed ordering of the form X (u) +Y (v) ≤ X (u+1) +Y (v) , because it may be that max(X (u) +</ns0:p><ns0:formula xml:id='formula_10'>Y (v) ) > min(X (u+1) +Y (v) ); however, lexicographically, ⌊(u, v)⌋ < ⌊(u + 1, v)⌋, ⌊(u, v + 1)⌋, ⌈(u, v)⌉; thus,</ns0:formula><ns0:p>the latter tuples need be inserted into H only after ⌊(u, v)⌋ has been popped from H. Note that for this reason and to break ties where layer products contain identical values, (u, v) are included in the tuple.</ns0:p><ns0:p>⌈(u, v)⌉ tuples do not insert any new tuples into H when they're popped.</ns0:p><ns0:p>Whenever a tuple of the form ⌈(u, v)⌉ is popped from H, the index (u, v) is appended to list q and the size of the layer product</ns0:p><ns0:formula xml:id='formula_11'>|X (u) +Y (v) | = |X (u) | • |Y (v)</ns0:formula><ns0:p>| is accumulated into integer s. This method proceeds until s ≥ k.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.3'>Phase 2</ns0:head><ns0:p>Any remaining tuple in H of the form (max(X (u ′ ) +Y (v ′ ) ), (u ′ , v ′ ), 1) has its index (u ′ , v ′ ) appended to list q. s ′ is the total number of elements in each of these (u ′ , v ′ ) layer products appended to q during phase 2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.4'>Phase 3</ns0:head><ns0:p>The values from every element in each layer product in q are generated. A linear time one-dimensional k-selection is performed on these values and returned.</ns0:p></ns0:div>
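The following is a simplified, illustrative sketch of phases 1-3 (it is not the Listing 2 or Listing 3 implementation from the Supplemental Information); numpy.partition stands in for median-of-medians, and the inputs are assumed to be the LOHified layers of X and Y as lists of NumPy arrays, for instance from a routine like the sketch in section 1.3:

import numpy as np

def select_cartesian_sum(layers_x, layers_y, k):
    # k smallest values of X_i + Y_j, given layer-ordered heaps of X and Y.
    mins_x = [l.min() for l in layers_x]; maxs_x = [l.max() for l in layers_x]
    mins_y = [l.min() for l in layers_y]; maxs_y = [l.max() for l in layers_y]
    corners = []
    for u in range(len(layers_x)):
        for v in range(len(layers_y)):
            corners.append((mins_x[u] + mins_y[v], u, v, False))  # min corner of X(u) + Y(v)
            corners.append((maxs_x[u] + maxs_y[v], u, v, True))   # max corner of X(u) + Y(v)
    corners.sort()                        # only ~log^2(n) corners, so sorting them stays o(n)
    candidates, covered = [], 0
    for value, u, v, is_max in corners:   # phases 1-2: ascending sweep over corners
        if covered >= k:
            break
        if is_max:
            covered += len(layers_x[u]) * len(layers_y[v])
        else:
            # every min corner precedes its own max corner, so each needed layer
            # product is generated exactly once, when its min corner is reached
            candidates.append((layers_x[u][:, None] + layers_y[v][None, :]).ravel())
    candidates = np.concatenate(candidates)
    return np.sort(np.partition(candidates, k - 1)[:k])   # phase 3: final k-selection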
<ns0:div><ns0:head n='2.2'>Proof of correctness</ns0:head><ns0:p>Lemma 2.4 proves that at termination all layer products found in q must contain the minimal k values in X +Y . Thus, by performing one-dimensional k-selection on those values in phase 3, the minimal k values in X +Y are found.</ns0:p><ns0:formula xml:id='formula_12'>Lemma 2.1. If ⌊(u, v)⌋ is popped from H, then both ⌊(u − 1, v)⌋ (if u > 1) and ⌊(u, v − 1)⌋ (if v > 1) must</ns0:formula><ns0:p>previously have been popped from H.</ns0:p><ns0:p>Proof. There is a chain of pops and insertions backwards from</ns0:p><ns0:formula xml:id='formula_13'>⌊(u, v)⌋ to ⌊(1, 1)⌋.</ns0:formula><ns0:p>When both u, v = 1, the lemma is true.</ns0:p><ns0:formula xml:id='formula_14'>W.l.o.g. if u = 1 this chain is of the form ⌊(1, 1)⌋, . . . , ⌊(1, 2)⌋, . . . ⌊(1, 3)⌋, . . . ⌊(u, v)⌋, proving the lemma for that case.</ns0:formula><ns0:p>Otherwise, both u, v > 1. Because insertions into H increment either row or column, something of</ns0:p><ns0:formula xml:id='formula_15'>the form ⌊(a, v − 1)⌋ with a ≤ u must be inserted into H before inserting ⌊(u, v)⌋. ⌊(a, v − 1)⌋ < ⌊(u, v)⌋, so ⌊(a, v − 1)⌋ must precede ⌊(u, v)⌋ in the chain of pops. If a = u, then ⌊(u, v − 1)⌋ is popped before ⌊(u, v)⌋. If a < u, then from the insertion of ⌊(a, v − 1)⌋ into H, until ⌊(u, v − 1)⌋ is popped, H must contain something of the form ⌊(a ′ , v − 1)⌋ : a ′ ≤ u, because popping ⌊(a ′ , v − 1)⌋ inserts ⌊(a ′ + 1, v − 1)⌋. ⌊(a ′ , v − 1)⌋ < ⌊(u, v)⌋ when a ′ ≤ u; therefore, ⌊(u, v)⌋ cannot be popped before any ⌊(a ′ , v − 1)⌋ currently in H.</ns0:formula><ns0:p>Because there are a finite number of these a ′ and they are not revisited, before ⌊(u, v)⌋ is popped, ⌊(u, v − 1)⌋ must be popped. This same process can be repeated with</ns0:p><ns0:formula xml:id='formula_16'>⌊(u − 1, b)⌋ : b ≤ v to show that ⌊(u − 1, v)⌋ must be popped before ⌊(u, v)⌋, proving the lemma for the final case. Lemma 2.2. If ⌈(u, v)⌉ is popped from H, then both ⌈(u − 1, v)⌉ (if u > 1) and ⌈(u, v − 1)⌉ (if v > 1) must previously have been popped from H. Proof. Inserting ⌈(u, v)⌉ requires previously popping ⌊(u, v)⌋. By lemma 2.1, this requires previously popping ⌊(u−1, v)⌋ (if u > 1) and ⌊(u, v−1)⌋ (if v > 1)</ns0:formula><ns0:p>. These pops will insert ⌈(u−1, v)⌉ and ⌈(u, v−1)⌉ respectively. Thus, ⌈(u − 1, v)⌉ and ⌈(u, v − 1)⌉, which are both < ⌈(u, v)⌉, are inserted before ⌈(u, v)⌉, and will therefore be popped before ⌈(u, v)⌉.</ns0:p><ns0:p>Lemma 2.3. All tuples will be visited in ascending order as they are popped from H.</ns0:p><ns0:formula xml:id='formula_17'>Proof. Let ⌊(u, v)⌋ be popped from H and let ⌊(a, b)⌋ < ⌊(u, v)⌋. Either w.l.o.g. a < u, b ≤ v, or w.l.o.g. a < u, b > v.</ns0:formula><ns0:p>In the former case, ⌊(a, b)⌋ will be popped before ⌊(u, v)⌋ by applying induction to lemma 2.1.</ns0:p><ns0:p>In the latter case, lemma 2.</ns0:p><ns0:formula xml:id='formula_18'>1 says that ⌊(a, v)⌋ is popped before ⌊(u, v)⌋. ⌊(a, v)⌋ < ⌊(a, v + 1)⌋ < ⌊(a, v + 1)⌋ < • • • < ⌊(a, b)⌋ < ⌊(u, v)⌋, meaning that ∀r ∈ [v, b], ⌊(a, r)⌋ < ⌊(u, v)⌋. After ⌊(a, v)⌋ is inserted (necessarily before it is popped), at least one such ⌊(a, r)⌋ must be in H until ⌊(a, b)⌋ is popped.</ns0:formula><ns0:p>Thus, all such ⌊(a, r)⌋ will be popped before ⌊(u, v)⌋.</ns0:p><ns0:p>Ordering on popping with ⌈(a, b)⌉ < ⌈(u, v)⌉ is shown in the same manner: For ⌈(u, v)⌉ to be in</ns0:p><ns0:formula xml:id='formula_19'>H, ⌊(u, v)⌋ must have previously been popped. 
As above, whenever ⌈(u, v)⌉ is in H, then ⌊(a, v)⌋ must have been popped, inserting ⌊(a, v + 1)⌋ into H. Each ⌊(a, r)⌋ popped inserts ⌊(a, r + 1)⌋, so at least one ⌊(a, r)⌋, r ∈ [v, b] must also be in H until ⌊(a, b)⌋ is popped. These ⌊(a, r)⌋ ≤ ⌊(a, b)⌋ < ⌈(a, b)⌉ < ⌈(u, v)⌉,</ns0:formula><ns0:p>and so ⌈(a, b)⌉ will be popped before ⌈(u, v)⌉.</ns0:p><ns0:p>Identical reasoning also shows that ⌊(a, b)⌋ will pop before</ns0:p><ns0:formula xml:id='formula_20'>⌈(u, v)⌉ if ⌊(a, b)⌋ < ⌈(u, v)⌉ or if ⌈(a, b)⌉ < ⌊(u, v)⌋.</ns0:formula><ns0:p>Thus, all tuples are popped in ascending order.</ns0:p><ns0:p>Lemma 2.4. At the end of phase 2, the layer products whose indices are found in q contain the minimal k values.</ns0:p><ns0:p>Proof. Let (u, v) be the layer product that first makes s ≥ k. There are at least k values of X +Y that are ≤ max(X (u) +Y (v) ); this means that τ = max(select(X +Y, k)) ≤ max(X (u) +Y (v) ). The quality of the elements in layer products in q at the end of phase 1 can only be improved by trading some value for a smaller value, and thus require a new value < max(X (u) +Y (v) ).</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>By lemma 2.3, tuples will be popped from H in ascending order; therefore, any layer product (u ′ , v ′ ) containing values < max(X (u) +Y (v) ) must have had ⌊(u ′ , v ′ )⌋ popped before ⌈(u, v)⌉. If ⌈(u ′ , v ′ )⌉ was also popped, then this layer product is already included in q and cannot improve it. Thus the only layers that need be considered further have had ⌊(u ′ , v ′ )⌋ popped but not ⌈(u ′ , v ′ )⌉ popped; these can be found by looking for all ⌈(u ′ , v ′ )⌉ that have been inserted into H but not yet popped.</ns0:p><ns0:p>Phase 2 appends to q all such remaining layer products of interest. Thus, at the end of phase 2, q contains all layer products that will be represented in the k-selection of X +Y .</ns0:p><ns0:p>A python implementation of this method is shown in listing 2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Runtime</ns0:head><ns0:p>Theorem 2.8 proves that the total runtime is ∈ O(n + k).</ns0:p><ns0:p>Lemma 2.5. Let (u ′ , v ′ ) be a layer product appended to q during phase 2. Either u</ns0:p><ns0:formula xml:id='formula_21'>′ = 1, v ′ = 1, or (u ′ − 1, v ′ − 1)</ns0:formula><ns0:p>was already appended to q in phase 1.</ns0:p><ns0:p>Proof. Let u ′ > 1 and v ′ > 1. By lemma 2.3, minimum and maximum layer products are popped in ascending order. By the layer ordering property of X and Y , max(X (u ′ −1) ) ≤ min(X (u ′ ) ) and max(Y</ns0:p><ns0:formula xml:id='formula_22'>(v ′ −1) ) ≤ min(Y (v ′ ) ). Thus, ⌈(u ′ − 1, v ′ − 1)⌉ < ⌊(u ′ , v ′ )⌋ and so ⌈(u ′ − 1, v ′ − 1)⌉ must be popped before ⌊(u ′ , v ′ )⌋.</ns0:formula><ns0:p>Lemma 2.6. s, the number of elements in all layer products appended to q in phase 1, is ∈ O(k).</ns0:p><ns0:p>Proof. (u, v) is the layer product whose inclusion during phase 1 in q achieves s ≥ k; therefore, s − |X (u) +</ns0:p><ns0:formula xml:id='formula_23'>Y (v) | < k. This happens when ⌈(u, v)⌉ is popped from H. If k = 1, popping ⌈(1, 1)⌉ ends phase 1 with s = 1 ∈ O(k). If k > 1, then at least one layer index is > 1: u > 1 or v > 1. W.l.o.g., let u > 1. By lemma 2.1, popping ⌈(u, v)⌉ from H requires previously popping ⌈(u − 1, v)⌉. |X (u) + Y (v) | = |X (u) | • |Y (v) | ≈ α • |X (u−1) | • |Y (v) | = α • |X (u−1) +Y (v) | (where ≈ indicates asymptotic behavior); therefore, |X (u) +Y (v) | ∈ O(|X (u−1) +Y (v) |). |X (u−1) +Y (v) | is already counted in s − |X (u) +Y (v) | < k, and so |X (u−1) +Y (v) | < k and |X (u) +Y (v) | ∈ O(k). s < k + |X (u) +Y (v) | ∈ O(k) and hence s ∈ O(k).</ns0:formula><ns0:p>Lemma 2.7. s ′ , the total number of elements in all layer products appended to q in phase 2, ∈ O(k).</ns0:p><ns0:p>Proof. Each layer product appended to q in phase 2 has had ⌊(u ′ , v ′ )⌋ popped in phase 1. By lemma 2.5,</ns0:p><ns0:formula xml:id='formula_24'>either u ′ = 1 or v ′ = 1 or ⌈(u ′ − 1, v ′ − 1)⌉ must have been popped before ⌊(u ′ , v ′ )⌋.</ns0:formula><ns0:p>First consider when u ′ > 1 and v ′ > 1. Each (u ′ , v ′ ) matches exactly one layer product (u ′ − 1, v ′ − 1).</ns0:p><ns0:p>Because ⌈(u ′ − 1, v ′ − 1)⌉ must have been popped before ⌊(u ′ , v ′ )⌋, then ⌈(u ′ − 1, v ′ − 1)⌉ was also popped during phase 1. s, the count of all elements whose layer products were inserted into q in phase 1, includes</ns0:p><ns0:formula xml:id='formula_25'>|X (u ′ −1) + Y (v ′ −1) | but does not include X (u ′ ) + Y (v ′ )</ns0:formula><ns0:p>(the latter is appended to q during phase 2). By exponential growth of layers in X and Y ,</ns0:p><ns0:formula xml:id='formula_26'>|X (u ′ ) +Y (v ′ ) | ≈ α 2 • |X (u ′ −1) +Y (v ′ −1) |. These |X (u ′ −1) +Y (v ′ −1) |</ns0:formula><ns0:p>values were included in s during phase 1, and thus the total number of elements in all such</ns0:p><ns0:formula xml:id='formula_27'>(u ′ − 1, v ′ − 1)</ns0:formula><ns0:p>layer products is ≤ s. Thus the sum of sizes of all layer products (u ′ , v ′ ) with u ′ > 1 and v ′ > 1 that are appended to q during phase 2 is asymptotically ≤ α 2 • s.</ns0:p><ns0:p>When either u ′ = 1 or v ′ = 1, the number of elements in all layer products must be ∈ O(n):</ns0:p><ns0:formula xml:id='formula_28'>∑ u ′ |X (u ′ ) + Y (1) | + ∑ v ′ |X (u ′ ) +Y (1) | < 2n; however, it is possible to show that contributions where u ′ = 1 or v ′ = 1 are ∈ O(k):</ns0:formula><ns0:p>W.l.o.g. for u ′ > 1, ⌊(u ′ , 1)⌋ is inserted into H only when ⌊(u ′ − 1, 1)⌋ is popped. 
Thus at most one ⌊(u ′ , 1)⌋ can exist in H at any time. Furthermore, popping ⌊(u ′ , 1)⌋ from H requires previously popping ⌈(u ′ − 1, 1)⌉ from H: layer ordering on X implies max(X (u ′ −1) ) ≤ min(X (u ′ ) ) and |Y (1) = 1| implies min(Y (1) ) = max(Y (1) ), and so ⌈(u ′ − 1, 1)⌉ = (max(X (u ′ −1) +Y (1) ), (u ′ − 1, 1), 1) < ⌊(u ′ , 1)⌋ = (min(X (u ′ ) + Y (1) ), (u ′ , 1), 0). Thus ⌈(u ′ − 1, 1)⌉ has been popped from H and counted in s. By the exponential growth of layers, the contribution of all such u ′ > 1, v ′ = 1 will be ≈≤ α • s, and so the</ns0:p><ns0:formula xml:id='formula_29'>contributions of u ′ > 1, v ′ = 1 or u ′ = 1, v ′ > 1 will be ≈≤ 2α • s.</ns0:formula><ns0:p>When u ′ = v ′ = 1, the layer product contains 1 element.</ns0:p><ns0:p>Therefore, s ′ , the total number of elements found in layer products appended to q during phase 2, has Proof. For any constant α > 1, LOHification of X and Y runs in linear time, and so phase 0 runs ∈ O(n).</ns0:p><ns0:formula xml:id='formula_30'>s ′ ≤ (α 2 + 2α) • s + 1. By lemma 2.6, s ∈ O(k),</ns0:formula><ns0:p>The total number of layers in each LOH is ≈ log α (n); therefore, the total number of layer products is</ns0:p><ns0:formula xml:id='formula_31'>≈ log 2 α (n).</ns0:formula><ns0:p>In the worst-case scenario, the heap insertions and pops (and corresponding set insertions and removals) will sort ≈ 2 log 2 α (n) elements, because each layer product may be inserted as both ⌊•⌋or⌈•⌉;</ns0:p><ns0:p>the worst-case runtime via comparison sort will be</ns0:p><ns0:formula xml:id='formula_32'>∈ O(log 2 α (n) log(log 2 α (n))) ⊂ o(n).</ns0:formula><ns0:p>The operations to maintain a set of indices in the heap have the same runtime per operation as those inserting/removing to a binary heap, and so can be amortized out. Thus, the runtimes of phases 1-2 are amortized out by the O(n) runtime of phase 0.</ns0:p><ns0:p>Lemma 2.6 shows that s ∈ O(k). Likewise, lemma 2.7 shows that s ′ ∈ O(k). The number of elements in all layer products in q during phase 3 is s + s ′ ∈ O(k). Thus, the number of elements on which the one-dimensional selection is performed will be ∈ O(k). Using a linear time one-dimensional selection algorithm, the runtime of the k-selection in phase 3 is ∈ O(k).</ns0:p><ns0:p>The total runtime of all phases ∈ O(n</ns0:p><ns0:formula xml:id='formula_33'>+ k + k + k) = O(n + k).</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3'>RESULTS</ns0:head><ns0:p>Runtimes of the naive O(n 2 log(n)) method (chosen for reference because it is the easiest method to implement and because of the fast runtime constant on python's built-in sorting routine), the soft heap-based method from Kaplan et al., and the LOH-based method in this paper are shown in table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>The proposed approach achieves a > 295× speedup over the naive approach and > 18× speedup over the soft heap approach. LOHs are more lightweight than soft heaps, including contiguous memory access patterns and far fewer pointer dereferences than soft heaps.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DISCUSSION</ns0:head><ns0:p>The algorithm can be thought of as 'zooming out' as it pans through the layer products, thereby passing the value threshold at which the k th best value X i +Y j occurs. It is somewhat reminiscent of skip lists <ns0:ref type='bibr' target='#b8'>(Pugh, 1990)</ns0:ref>; however, where a skip list begins coarse and progressively refines the search, this approach begins finely and becomes progressively coarser. The notion of retrieving the best k values while 'overshooting' the target by as little as possible results in some values that may be considered but which will not survive the final one-dimensional selection in phase 3. This is reminiscent of 'corruption' in Chazelle's soft heaps. Like soft heaps, this method eschews sorting in order to prevent a runtime ∈ Ω(n log(n)) or</ns0:p><ns0:p>∈ Ω(k log(k)). But unlike soft heaps, LOHs can be constructed easily using only an implementation of median-of-medians (or any other linear time one-dimensional selection algorithm).</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Phase 3 is the only part of the algorithm in which k appears in the runtime formula. This is significant because the layer products in q at the end of phase 2 could be returned in their compressed form (i.e., as the two layers to be combined). The total runtime of phases 0-2 is ∈ O(n). It may be possible to recursively perform X +Y selection on layer products X (u) +Y (v) to compute layer products constituting exactly the k values in the solution, still in factored Cartesian layer product form. Similarly, it may be possible to perform the one-dimensional selection without fully inflating every layer product into its constituent elements. For some applications, a compressed form may be acceptable, thereby making it plausible to remove the requirement that the runtime be ∈ Ω(k).</ns0:p><ns0:p>As noted in theorem 2.8, even fully sorting all of the minimum and maximum layer products would be ∈ o(n); sorting in this manner may be preferred in practice, because it simplifies the implementation (Listing 3) at the cost of incurring greater runtime in practice when k ≪ n 2 . Furthermore, listing 3 is unsuitable for online processing (i.e., where X and Y are extended on the fly or where several subsequent selections are performed), whereas listing 2 could be adapted to those uses.</ns0:p><ns0:p>Phase 0 (which performs LOHification) is the slowest part of the presented python implementation; it would benefit from having a practically faster implementation to perform LOHify.</ns0:p><ns0:p>The fast practical performance is partially due to the algorithm's simplicity and partially due to the contiguous nature of LOHs. Online data structures like soft heap are less easily suited to contiguous access, because they support efficient removal and therefore move pointers to memory rather than moving the contents of the memory.</ns0:p><ns0:p>The choice of α affects performance through the cost of LOHifying and the amount by which the number of generated values overshoots the k minimum values wanted: when α ≈ 1, LOHify effectively sorts X and Y , but generates few extra values; α ≫ 1, LOHify has a linear runtime, but generates more extra values, which need to be removed by the final k-selection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSION</ns0:head><ns0:p>LOHs can be constructed in linear time and used to produce a theoretically optimal algorithm for selecting the minimal k values from X +Y . The new optimal algorithm presented here is faster in practice than the existing soft heap-based optimal algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>ACKNOWLEDGEMENTS</ns0:head><ns0:p>Thanks to Patrick Kreitzberg, Kyle Lucke, and Jake Pennington for fruitful discussions and kindness.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>DECLARATIONS</ns0:head></ns0:div>
<ns0:div><ns0:head n='7.1'>Funding</ns0:head><ns0:p>This work was supported by grant number 1845465 from the National Science Foundation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.2'>Code availability</ns0:head><ns0:p>Python source code and L A T E X for this paper are available at https://doi.org/10.6084/m9. figshare.13708564 (MIT license, free for both academic and commercial use). C++ code (MIT license, free for both academic and commercial use) for the same algorithm can be found in https: //bitbucket.org/orserang/neutronstar/, a fast isotope calculator built with this selection method.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.3'>Conflicts of interest</ns0:head><ns0:p>The authors declare that they have no conflicts of interest.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>SUPPLEMENTAL INFORMATION</ns0:head></ns0:div>
<ns0:div><ns0:head n='8.1'>Python code</ns0:head><ns0:p>Listing 1. LayerOrderedHeap.py: A class for LOHifying, retrieving layers, and the minimum and maximum value in a layer. Listing 3. SimplifiedCartesianSumSelection.py: A simplified implementation of Listing 2. This implementation is slower when k ≪ n 2 ; however, it has the same asymptotic runtime for any n and k: Θ(n + k). </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>≤</ns0:head><ns0:label /><ns0:figDesc>4 elements according to equation 1); therefore, p − corruption ≥ p − 4p • ε, and thus p − 4p • ε ≥ k guarantees that p − corruption ≥ k. This leads to p = k 1−4ε , ε ≤ 1 4 . This guarantees that Θ(k) values, which must include the minimal k values, are popped. These values are post-processed to retrieve the minimal k values via linear time one-dimensional selection</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>equivalent to sorting because all layers have size 1). By assigning values in layer u children from layer u + 1, this can be seen as a more constrained form of the heap; however, unlike sorting, for any constant α > 1, LOHs can be constructed ∈ O(n) by performing iterative linear time one-dimensional selection, iteratively selecting and removing the largest layer until all layers have been partitioned. For example, 8, 1, 6, 4, 5, 3, 2 can be LOHified with α = 2 into an LOH with three layers (1 ≤ 3, 2 ≤ 8, 4, 6, 5) by first selecting the largest 4 values on the entire list (8, 4, 6, 5), removing them, and then selecting the largest 2 values from the remaining 3 values (3, 2).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Illustration of method for selecting the k = 14 minimal values from X +Y : Phase 0:X = {31,5, 11, 7, 33, 6, 39, 42, 20, 0, 9, 1, 41, 26, 8} and Y = {12, 26, 40, 9, 14, 49, 8, 2, 20, 1, 46, 43, 4, 5, 7} are both LOHified to axes in O(n) time. Note that the minimum and maximum values in a layer are placed at the first and last positions in the layer, respectively; otherwise values within layers are themselves unordered. Phase 1: The minimum and maximum corners of all layer products (grid, left panel) are visited together in ascending order until the area of the layer products whose max corners are visited exceeds k (inset, right panel), and the largest value visited is labeled as τ = 10. Phase 2: The layer products whose max corners have been visited (blue) has area s that exceeds k but has s ∈ O(k). Likewise, the layer products whose min corners have been visited but whose max corners have not been visited, and which therefore contain some elements < τ, have area s ′ ∈ O(k). Phase 3: Together, these layer products (red and blue) contain all values that may be in minimal k = 14. Since there are Θ(k) such values, they can be selected using median-of-medians in O(k) time.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>and thus s ′ ∈ O(k). Average runtimes (in seconds) on random uniform integer X and Y with |X| = |Y | = n. The layer-ordered heap implementation used α = 2 and resulted in s+s ′ k = 3.637 on average. Individual and total runtimes are rounded to three significant figures. Theorem 2.8. The total runtime of the algorithm is ∈ O(n + k).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>#</ns0:head><ns0:label /><ns0:figDesc>https://stackoverflow.com/questions/10806303/python-implementation-of-median-of-ls in enumerate(layer_sizes[::-1]): small_vals,large_vals = partition(array, len(array) -ls) = [ min(layer) for layer in self._layers ] self._max_in_layers = [ max(layer) for layer in self._layers ]</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>min corner, False for max corner return (self._loh_a.min(i) + self._loh_b.min(j), (i,j), False) def _max_tuple(self,i,j): # True for min corner, False for max corner return (self._loh_a.max(i) + self._loh_b.max(j), (i,j), True) def __init__(self, array_a, array_b):self._loh_a = LayerOrderedHeap(array_a) self._loh_b = LayerOrderedHeap(array_b) self._full_cartesian_product_size = len(array_a) * len(array_b) 10/11 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53628:1:2:NEW 5 Feb 2021) Manuscript to be reviewed Computer Science self._sorted_corners = sorted([self._min_tuple(i,j) for i in range(len(self._loh_a)) for j in range(len(self._loh_b))] + [self._max_tuple(i,j) for i in range( len(self._loh_a)) for j in range(len(self._loh_b)i,j), is_max = self._sorted_corners[index_in_sorted] new_candidates = [ v_a+v_b for v_a in self._loh_a[i] for v_b in self._loh_b[j] ] will be popped before corresponding max corner; # this gets a superset of what is needed (just as in phase 2) candidates.extend(new_candidates) index_in_sorted += 1 print( 'Ratio of total popped candidates to k: {}'.format(len(candidates) / k) ) k_small_vals, large_vals = partition(candidates, k) return k_small_vals</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>, 7} are both LOHified to axes in O(n) time. Note that the minimum and maximum values in a layer are placed at the first and last positions in the layer, respectively; otherwise values within layers are themselves unordered. Phase 1: The minimum and maximum corners of all layer products (grid, left panel) are visited together in ascending order until the area of the layer products whose max corners are visited exceeds k (inset, right panel), and the largest value visited is labeled as τ = 10. Phase 2: The layer products whose max corners have been visited (blue) has area s that exceeds k but has s ∈ O</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Dr. Rahul Shah,
Thank you for your consideration and for your and the reviewers’ thoughtful
comments.
Below, you will find our point-by-point response to the two remaining reviewer
comments. We hope our responses are satisfactory and look forward to hearing
from you soon!
Sincerely,
Oliver Serang
Reviewer 1 (Anonymous)
Basic reporting
Reviewer comment: The authors present a way to find top k smallest elements
from the Cartesian sum X+Y where X and Y are two given sets. For this, they
use layer-ordered heaps (LOH). In the results section they compare their method
with two other methods: 1) naive O(n² log(n)+k) method and 2) soft heap based
method by Kaplan et. al. They see a 295x and 18x speedup respectively. The
authors have also thoroughly analyzed the proof of correctness of the algorithm.
Additionally, they have provided the python code for the same.
In my opinion, the paper is well written (there are typos which are listed
below) and is organized properly. All the lemmas and theorems are clearly
defined. The manuscript also includes enough background on the topics that are
needed to understand the paper. But it would have been easily understandable
if an example had been provided. Additionally, the space complexity is not
analyzed properly.
Response: At the request of another reviewer, we have removed discussion of
space complexity.
Reviewer comment: Typos: (provided inside ” ”) ...their method selects only
the kth ”minimum” value from X +Y in...
It bounds the number of ”corrupt” elements in the soft heap....
this can be seen as a more constrained form of ”the” heap...
Due to its simplicity and contiguous memory access, it has ”a” fast performance
in...
used to indicate that this is the ”minimum” value in the layer product..
W.l.o.g., there is ”no” guaranteed ordering of the form...
The values from every element in each layer product in q ”are” generated...
Each (u0 , v 0 ) matches ”” exactly one layer product (u0 − 1, v 0 − 1)...
Table 1: N = 4000, K = 2000...17.2...0.422...0.0409 = 0.0268 + 0.0141
Response: Thank you! We have fixed these.
Experimental design
Reviewer comment: The research question is well defined but it is a new
optimal algorithm in addition to other optimal algorithms that already exist
(for eg. soft heap based approach). The novelty of the paper is the usage of
layer ordered heaps.
In the results section, the table is not explained in depth i.e. how different
methods compare and why they are giving such results. Also, it would have
been interesting to see such comparison for different values of alpha and then
compare the average speed up with other methods.
Response: The optimal α depends on the problem being solved. For example,
if n ≫ k, then the cost of the LOHify will dwarf the cost of retrieving the k
smallest elements, and so α ≫ 1 will result in the best performance. On the
other hand, if k ≫ n, the cost of retrieving k elements will dwarf the cost of
LOHifying, and so α ≪ 2 will perform best. For this reason, empirical analysis
on the effect of α on performance is beyond the scope of this work; however, we
have included a short explanation on the pros and cons of large and small α in
the discussion.
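For intuition, here is a rough, illustrative Python sketch of that trade-off. It is not taken from the manuscript: it assumes geometric layer sizes proportional to α**i and models LOHify as repeatedly partitioning off the largest remaining layer with a linear-time partition (as in the included code listing), so the LOHify work is the sum of the remaining array sizes; the remark on retrieval overshoot is qualitative only.

# Rough model of the alpha trade-off (our assumptions, not the paper's analysis).
def lohify_work(n, alpha):
    sizes, i = [], 0
    while sum(sizes) < n:          # geometric layer sizes alpha**i, scaled to sum to n
        sizes.append(alpha ** i)
        i += 1
    scale = n / sum(sizes)
    sizes = [s * scale for s in sizes]
    remaining, work = n, 0.0
    for s in reversed(sizes):      # partition off the largest remaining layer each pass
        work += remaining          # one linear-time partition over what is left
        remaining -= s
    return work

for alpha in (1.1, 1.5, 2.0, 4.0):
    print(alpha, round(lohify_work(10**6, alpha)))
# Larger alpha gives fewer, coarser layers and hence cheaper LOHify; smaller alpha gives
# finer layers, which cost more to LOHify but pop fewer excess candidates past k.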
Reviewer comment: I feel that the authors should provide some brief introduction to the naive algorithm stated in the results section.
Response: We have done this.
Validity of the findings
Reviewer comment: I think that the manuscript lacks a concluding paragraph
that wraps up everything.
Response: We have added this.
Reviewer 2 (Anonymous)
Basic reporting
Reviewer comment: The authors present a new algorithm for the problem of
selecting the top k values from the Cartesian sum X + Y. The algorithm is simple
and runs in optimal O(n + k) time by making use of Layered-Ordered Heaps.
An algorithm with the same time complexity previously existed, but the authors
demonstrate through experiments that the algorithm they present is faster in
practice. They additionally provide an implementation of their algorithm in
python.
Pros: I believe that the techniques being used here provide a very simple algorithm for solving this problem. Additionally, despite having the same
asymptotic time complexity, the new algorithm appears to be significantly faster
in practice.
Cons: Even though I believe that the algorithm is correct, I found many of
the statements made in the proofs are hard to follow and feel they require more
justification. Also, there are minor issues I take with some of the notation.
The overall structure of the proof appears to be a good way to go about
proving the correctness and time complexity of the algorithm. That is, the
lemmas seem to be the right ones to prove the overall result. However, the
proofs of the lemmas could use additional work. I have described below the
main portions of the individual proofs that could use further justification, along
with some additional minor notational issues and typos.
Once these revisions are made and the proofs are satisfactorily clear, I feel
that the presented algorithm is interesting enough that the work should be
accepted.
Response: Thank you for your thoughtful review!
Reviewer comment: Line 56: limit should be on u, not i.
Response: We have made this change.
Reviewer comment: Line 80: comma before X2(u) + Y1(v)
Response: We have made this change.
Reviewer comment: Line 83 and following paragraph: I think ”false” and
”true” within the tuple are unnecessary and just confused me at first. It seems
0 and 1 could be used immediately. Also, before the same line where the ordering
on tuples is introduced, it should be stated that lexicographic ordering is being
used. Finally, I’m not sure why (u,v) itself needs to be in the tuple at all.
Response: The index (u, v) is needed to insert its inferior indices (u + 1, v)
and (u, v + 1) and to break ties (thus guaranteeing strict < rather than ≤)
where different layer products contain identical values. We have mentioned this
explicitly to clarify.
Reviewer comment: Proof of Lemma 2.1 : Line 112: ”structures of pops of
the form ... ” Could this be made more explicit? Is there a better word than
”structures”, and can some justification be given? This line seems critical, after
that I agree with the remaining proof.
Response: This proof has been made more explicit.
Reviewer comment: Proof of Lemma 2.3 ; The mention of maximum and
minimum tuples in the statement of the Lemma made me think that the proof
was to prove all maximum tuples are in ascending order and all minimum tuples
are in ascending order, but what the proof is actually trying to show is that all
tuples are popped in ascending order.
Response: We have clarified this.
Reviewer comment: Line 128: ”meaning that for all v >= r <= b”. I am
confused by this notation.
Response: Thank you, this has been changed.
Reviewer comment: Overall, I believe this second paragraph could use much
more justification. How does ”at least one such floor(a,r) must be in H” go to
”all floor(a,r) must be popped before floor(u,v)”?
Response: This has been explained more thoroughly; also, it coincides with
the requested update to proof of Lemma 2.3.
Reviewer comment: Similarly, I feel more justification is needed in the third
paragraph. How does ”at least one ...” lead to floor(a,b) being popped.
Response: In a similar light to the previous suggestion, this is now better
explained.
Reviewer comment: Line 141: the select notation is introduced for the first
time in this way.
Response: We understand, but feel it can be easily understood from context.
Reviewer comment: Line 179: Approximately less-or-equal-to should be made
more rigorous.
Response: This does not mean approximately, but asymptotically ≤. We have
added this clarification.
Reviewer comment: Lines 180 and line 190, ”area”? Perhaps just say product.
Response: We have clarified this.
Reviewer comment: Line 199: ”on the hull” should be ”on the heap”
Response: We have made this change.
Reviewer comment: Section 2.4 seems unnecessary.
Response: We have removed this.
Reviewer comment: Very minor: I prefer sentences beginning with a word
rather than mathematical notation, as was done in a few places.
Response: We have done this where it can easily be done.
Experimental design
no comment
Validity of the findings
no comment
Reviewer 3 (Anonymous)
Basic reporting
Reviewer comment: In the introduction section, I would suggest the author
add more practical applications for selecting the top-k problem before explaining
the existing solutions to enhance the importance of the work.
Although the structure of the article is in an acceptable format, It would be
better if the author provides some examples or figures for illustrations of the
construction part or even the definitions to help readers understanding. For
instance, in section 1.2 I would suggest adding an example including a figure to
illustrate layer-order heaps and show the partitioning for some alpha.
Response: We have added an example.
Reviewer comment: I would suggest adding a pseudo-code including all the
important steps from phases so that readers can see all the data structures (with
their initializations) and the main algorithms without distracting by details and
proofs.
Response: Thanks for the suggestion, but we disagree; the python code is
already nearly the same as pseudocode, and it can be run (which is often a
problem when implementing a paper from pseudocode).
Experimental design
no comment
Validity of the findings
no comment
Comments for the Author
Reviewer comment: In this paper, the author presents an optimal algorithm
to output the top-k values of Xi +Yj based on layer-ordered heaps. The solution
provided in the paper is not complicated and it is easy to follow as it only
uses linear time one-dimensional selection. In addition, the algorithm has fast
performance since it uses data structures contiguous in memory and it is cache
efficient.
This paper is well written and has been organized properly by providing
appropriate sections. The problem is well stated, and the solution is compared
properly with the related algorithms in the last section of the paper. The python
implementations of all phases of the algorithm have been provided properly. The
presented algorithm achieves 295X speed-up over the naive approach and 18X
speed-up over the soft heap approach.
As far as being comprehensible, the algorithm and proofs provided are reliable and error-free. There exist sufficient proofs for the correctness of the
algorithm as well as for the time and space complexities of each phase.
Response: Thank you for your thoughtful review!
" | Here is a paper. Please give your review comments after reading it. |
75 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. In the collaborative business environment, blockchain coupled with smart contract removes the reliance on a central system and offers data integrity which is crucial when the transacting parties rely on the shared data. The acceptance of such blockchain-based systems is necessary for the continued use of the services. Despite many extensive studies evaluating the performance of blockchainbased systems, few have focused on users' acceptance of real-life applications.</ns0:p><ns0:p>Objective. The main objective of this research is to evaluate the user acceptance of a real-life blockchain-based system (BBS) by observing various latent variables affecting the development of users' attitudes and intention to use the system. It also aims to uncover the dimensions and role of trust, security and privacy alongside the primary Technology Acceptance Model (TAM)-based predictors and their causal relationship with the users' behavior to adopt such BBS.</ns0:p><ns0:p>Methods. We tested the augmented TAM with Trust Model on a BBS that comprises two subsystems: a Shopping Cart System (SCS), a system oriented towards end-users and a Data Sharing System (DSS), a system oriented towards system administrators. We set research questions and hypotheses, and conducted online surveys by requesting each participant to respond to the questionnaire after using the respective system. The main study comprises two separate sub-studies: the first study was performed on SCS and the second on DSS. Furthermore, each study data comprises initial pre-test and post-test data scores. We analyzed the research model with partial least square structural equation modelling.</ns0:p><ns0:p>Results. The empirical study validates our research model and supports most of the research hypotheses. Based on our findings, we deduce that TAM-based predictors and trust constructs cannot be applied uniformly to BBS. Depending on the specifics of the BBS, the relationships between perceived trust antecedents and attitudes towards the system might change. For SCS, trust is the strongest determinant of attitudes towards system, while DSS has perceived privacy as the strongest determinant of attitudes towards system. Quality of system shows the strongest total effect on intention to use SCS, while perceived usefulness has the strongest total effect on intention to use DSS. Trust has a positive significant effect on users' attitudes towards both BSS, while security does not have any significant effect on users' attitudes toward BBS. In SCS, privacy positively affects trust, but security has no significant effect on trust, whereas, in DSS, both privacy and security have significant effects on trust. In both BBS, trust has a moderating effect on privacy that correlates with attitudes towards BBS, whereas security does not have any mediating role between privacy and attitudes towards BBS. Hence, we recommend that while developing BBS, particular attention should be paid to increasing user trust and perceived privacy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Blockchain technology has influenced significantly the financial world with its first application in the form of cryptocurrencies such as bitcoin <ns0:ref type='bibr' target='#b62'>(Nakamoto, 2008)</ns0:ref>. After a decade long development phase, it has now exhibited high potential for a broader diffusion across many other industries such as healthcare, agriculture, tourism and research fields <ns0:ref type='bibr' target='#b8'>(Bullock & Bannigan, 2016;</ns0:ref><ns0:ref type='bibr' target='#b28'>Feng Tian, 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>McGhin, Choo, Liu, & He, 2019;</ns0:ref><ns0:ref type='bibr' target='#b79'>Shrestha & Vassileva, 2016</ns0:ref><ns0:ref type='bibr' target='#b83'>, 2018b)</ns0:ref>. Blockchain technology is likely to disrupt many of the traditional centralized business models because of its being decentralized, immutable, tamper-proof and transparent processes <ns0:ref type='bibr' target='#b81'>(Shrestha & Vassileva, 2018a;</ns0:ref><ns0:ref type='bibr' target='#b90'>Swan, 2015)</ns0:ref>. Many blockchain systems also support smart contracts that encode the business logic into an autonomous self-executing piece of a program and are also deployed on the blockchain. A Smart contract stores the rules which negotiate the terms of the contract, automatically verifies the contract and executes the agreed terms whenever it is triggered by the target collaborator or by the responsible process from another smart contract. Blockchain coupled with smart contract technology removes the reliance on the central system between the collaborators and the transacting parties <ns0:ref type='bibr' target='#b77'>(Shrestha, Deters, & Vassileva, 2017)</ns0:ref>. However, blockchain is not a silver bullet that can be incorporated into any business use case. It is particularly important to identify and conduct careful analysis and evaluation of different factors affecting the collaborative business model that is built on the top of blockchain and smart contracts technologies. Furthermore, <ns0:ref type='bibr' target='#b67'>(Prashanth Joshi, Han, & Wang, 2018)</ns0:ref> argued in their comprehensive survey that numerous privacy and security-related issues have risen while adopting blockchain-based applications. <ns0:ref type='bibr' target='#b48'>(Kshetri, 2017)</ns0:ref>, based on their findings, suggested that although blockchain supports peer-to-peer security, the decentralized application itself is vulnerable to security breaches and privacy infringements.</ns0:p><ns0:p>As suggested by <ns0:ref type='bibr' target='#b19'>(Cunningham, 1967)</ns0:ref>, the evaluation process is crucial in studying the user perception of the adoption of new information technology services. The Technology Acceptance Model (TAM) proposed by <ns0:ref type='bibr' target='#b20'>(Davis, 1989)</ns0:ref> has been used widely in the literature to examine whether users understand the underlying technology and can competently use the services <ns0:ref type='bibr' target='#b36'>(Granić & Marangunić, 2019)</ns0:ref>. In many studies, researchers extend TAM by adding external constructs depending upon the contexts to explain the critical relationship between customers and their adoption of the new technology <ns0:ref type='bibr' target='#b58'>(Melas, Zampetakis, Dimopoulou, & Moustakis, 2011)</ns0:ref>. 
With the rapid development of the use cases of blockchain in recent years, a few studies have already been conducted considering the user acceptance of an abstract blockchain-based system <ns0:ref type='bibr' target='#b32'>(Folkinshteyn & Lennon, 2016;</ns0:ref><ns0:ref type='bibr' target='#b44'>Kern, 2018;</ns0:ref><ns0:ref type='bibr' target='#b71'>Shin, 2019;</ns0:ref><ns0:ref type='bibr' target='#b84'>Shrestha & Vassileva, 2019a)</ns0:ref>. Although numerous extensive systematic studies have been conducted on evaluating the performance of blockchain-based systems <ns0:ref type='bibr' target='#b87'>(Shrestha, Vassileva, & Deters, 2020)</ns0:ref>, to the best of our knowledge, no study has been conducted in the context of users' acceptance of real-life blockchain-based applications except for bitcoin as financial technology <ns0:ref type='bibr' target='#b32'>(Folkinshteyn & Lennon, 2016)</ns0:ref>. Previous works have evaluated user acceptance of the blockchain-based prototype system using an extended Technology Acceptance Model (TAM) in <ns0:ref type='bibr' target='#b44'>(Kern, 2018;</ns0:ref><ns0:ref type='bibr' target='#b84'>Shrestha & Vassileva, 2019a</ns0:ref>) and the trust model in <ns0:ref type='bibr' target='#b71'>(Shin, 2019)</ns0:ref>. The previous studies suggest that the blockchain-based system will be accepted if it is perceived as trustworthy, convenient and useful <ns0:ref type='bibr' target='#b44'>(Kern, 2018;</ns0:ref><ns0:ref type='bibr' target='#b71'>Shin, 2019)</ns0:ref>.</ns0:p><ns0:p>The major contribution of this study is that it expands the previous work by conducting a new user study on a real-life blockchain-based system (BBS), described in <ns0:ref type='bibr' target='#b78'>(Shrestha, Joshi, & Vassileva, 2020)</ns0:ref>. This study presents the augmented TAM by incorporating additional constructs-Trust, Perceived Security and Perceived Privacy in technology adoption study and presents the total effect and mediation analyses. The findings are informative and potentially useful for designing new blockchain-based systems.</ns0:p><ns0:p>The BBS of our study is the general-purpose blockchain-based system that provides a solution to four important problems: private payment, ensuring privacy and user control, and incentives for sharing. This BBS was constructed for the online shopping cart which also allows customers to connect to the seller directly and share personal data without losing control and ownership of it. This BBS has two subsystems-a customer-specific shopping cart system (SCS), and a companyspecific data sharing system (DSS). SCS allows customers to set their data sharing preferences and deploy them via smart contracts, which gives customers full transparency over who accesses their data, when and for what purpose, specifies the purposes of data sharing, which kinds of data can be shared, which applications or companies can access their data and provide an incentive to them for sharing their data in terms of micropayment as stated in the contract. Similarly, DSS allows companies to check data integrity, get tamper-proof records and proof of existence of every transaction while sharing data in the consortium blockchain network. Therefore, the BBS used in the study is a very novel decentralized application that covers the aspects of both the customer and company. 
An in-depth analysis of the Trust model factors and the TAM indicators that most strongly affect user acceptance of the BBS is therefore crucial, as it provides an opportunity for a broad debate and perspective on potential uses of blockchain and smart contract technologies in the eCommerce domain, as well as in other important industries such as healthcare, agriculture, tourism and research fields.</ns0:p><ns0:p>Therefore, our current study is based on the user evaluation of the blockchain-based SCS and DSS, before and after the selected participants used those sub-systems, using the validated constructs of the TAM and the Trust model. This new augmented model incorporates both the classical TAM, with perceived ease of use, perceived usefulness and quality of system, and the Trust model, with security and privacy variables, and it can be applied to evaluate the acceptance of general blockchain- and smart contract-based systems. The present study, using partial least square structural equation modeling on the augmented TAM, hypothesizes and validates various causal relationships to observe the statistical significance between the constructs of interest and the intention to use the BBS. Section V focuses on the brief analysis of the results and section VI provides the discussion. The limitation of the study is presented in section VII. Finally, section VIII concludes the paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>Background</ns0:head><ns0:p>This section provides background information about the Blockchain-Based System used in the study described in this paper, the Technology Acceptance Model, and the models of Privacy, Security and Trust used to predict software systems' adoption by users.</ns0:p></ns0:div>
<ns0:div><ns0:head>Blockchain-Based System (BBS)</ns0:head><ns0:p>The term BBS for a general blockchain-based system was initially used in <ns0:ref type='bibr' target='#b43'>(Jun, 2018)</ns0:ref> without any detailed explanation. BBS in our study represents the blockchain-based service that we have developed with an engineering-oriented approach to address trust-aware business processes in an e-commerce domain, in the context of an online shopping cart system <ns0:ref type='bibr' target='#b78'>(Shrestha, Joshi, et al., 2020)</ns0:ref>. The requirements for the BBS are:  To enable companies to increase trust in their products and supply chains. This BBS has a 3-tier architecture <ns0:ref type='bibr' target='#b29'>(Fernandez, Fonoage, VanHilst, & Marta, 2008)</ns0:ref> employing Spring Boot 1 and React 2 as the main building technologies. The system uses permissioned MultiChain as a solution to both on-chain and off-chain data storage, encryption, hashing and tracking of data, together with Ethereum. Ethereum is used for access control and enabling transactions with ethers that allow users to shop online with all the transactions stored in the blockchain and get incentives for permitting to share their data as they specify in the smart contracts. Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> presents the interaction among the customer (data provider) and other ecommerce companies/apps (data consumers) of the BBS. The system comprises two subsystems: Shopping Cart System (SCS) and Data Sharing System (DSS). SCS is used in the online shopping cart enterprise. It has a payment mechanism supporting cryptocurrency, ether and manages the mutual agreement between customers and enterprise through smart contracts. SCS automatically registers the immutable timestamped information about the transactions that acts as proof of existence and can be useful to settle any disputes between the stakeholders in the future. Moreover, SCS deploys smart contracts that allow customers to provide their data sharing preferences on a template form without needing them to write the code for the smart contracts. The smart contracts support users in the following ways <ns0:ref type='bibr' target='#b86'>(Shrestha & Vassileva, 2019b)</ns0:ref>:</ns0:p><ns0:p> Give users full transparency over who accesses their data, when and for what purpose.</ns0:p><ns0:p> Allow users to specify the purposes of data sharing, which kinds of data can be shared, and which applications or companies can access the data.</ns0:p><ns0:p> Provide an incentive to users for sharing their data (in terms of payment for the use of the data by applications, as specified by the contracts). DSS is used for sharing user data among the companies, that provide the shopping cart system to the customers. DSS allows enterprises to form a consortium blockchain network in the MultiChain environment so that user data are only shared with the particular node, that has been given the data access permission, as defined in the smart contracts when deployed by customers on SCS. DSS offers tamper-proof encrypted data storage, publication and provenance mechanisms with a transparency of the event log mechanism in collaborative processes where different enterprises use published/shared data.</ns0:p></ns0:div>
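To make the data-sharing rules above concrete, the following minimal Python sketch mimics what such a smart contract enforces. It is a simplified, hypothetical stand-in (the class, field and account names are ours), not the BBS implementation, which encodes these rules in Ethereum smart contracts and shares the data itself through a permissioned MultiChain network.

import hashlib, time

class DataSharingContract:
    def __init__(self, owner, allowed_consumers, allowed_kinds, purposes, price_wei):
        self.owner = owner
        self.allowed_consumers = set(allowed_consumers)   # who may access the data
        self.allowed_kinds = set(allowed_kinds)           # which kinds of data
        self.purposes = set(purposes)                     # for what purpose
        self.price_wei = price_wei                        # incentive per granted access
        self.event_log = []                               # append-only access trail
        self.owner_balance = 0

    def request_access(self, consumer, data_kind, purpose):
        granted = (consumer in self.allowed_consumers
                   and data_kind in self.allowed_kinds
                   and purpose in self.purposes)
        event = {"ts": time.time(), "consumer": consumer,
                 "kind": data_kind, "purpose": purpose, "granted": granted}
        # hash-chain the log entries to mimic tamper-evident, timestamped records
        prev = self.event_log[-1]["digest"] if self.event_log else ""
        event["digest"] = hashlib.sha256((prev + repr(sorted(event.items()))).encode()).hexdigest()
        self.event_log.append(event)
        if granted:
            self.owner_balance += self.price_wei          # micropayment to the data owner
        return granted

contract = DataSharingContract("alice", {"acme-shop"}, {"purchase_history"},
                               {"recommendation"}, price_wei=10_000)
print(contract.request_access("acme-shop", "purchase_history", "recommendation"))  # True
print(contract.request_access("other-app", "purchase_history", "marketing"))       # False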
<ns0:div><ns0:head>Augmented Technology Acceptance Model</ns0:head><ns0:p>The classical Technology Acceptance Model (TAM) as shown in Fig. <ns0:ref type='figure'>2</ns0:ref> was based on the Theory of Reasoned Action <ns0:ref type='bibr' target='#b31'>(Fishbein & Ajzen, 1975)</ns0:ref> in social psychology, which claims that behavioral intention is a strong indicator of actual behavior. The TAM has been used as a conceptual framework in many studies of the potential users' behavioral intention to use a particular technology. The behavioral intention is defined as 'the degree to which a person has formulated conscious plans to perform or not perform some specified future behavior' <ns0:ref type='bibr' target='#b95'>(Warshaw & Davis, 1985)</ns0:ref>. The classical TAM focuses on using technology, where perceived ease of use (PEOU) and perceived usefulness (PU) are two design attributes or antecedent to influence user acceptance behavior. PEOU is defined as the degree to which a person believes that using a particular system would be free of effort. PU is the degree to which a person believes that using a particular system would enhance his or her job performance. TAM hypothesizes that the actual use of the system is determined by behavioral intention to use (ITU), which is the degree to which a person has behavioral intention to adopt the technology. ITU is in turn influenced by the user′s attitude towards use, perceived usefulness and perceived ease of use of the system. Attitude towards use is the degree of belief to which a person uses the system as guided by valuations <ns0:ref type='bibr' target='#b71'>(Shin, 2019;</ns0:ref><ns0:ref type='bibr' target='#b74'>Shin, 2017)</ns0:ref>.</ns0:p><ns0:p>TAM is widely used to understand how users come to accept and use information technology. However, there is no existing literature on using TAM in the context of real-life blockchains and smart contracts-based applications, indicating a significant gap in the knowledge. To fill this research gap in the existing literature, this study applies the augmented TAM with trust model to the BBS that we implemented, with participants who actually used the system before answering the survey questionnaires. Our study also uncovers the individual mediating effects of trust, security and perceived usefulness.</ns0:p><ns0:p>In classical TAM, the main design constructs such as perceived ease of use and perceived usefulness have shown significant influence on the behavioral intention of the user to adopt the information systems <ns0:ref type='bibr' target='#b20'>(Davis, 1989)</ns0:ref>, and the latest study by <ns0:ref type='bibr' target='#b71'>(Shin, 2019)</ns0:ref> shows the necessity of considering the Trust-Security-Privacy factors in the decision model of the blockchain-basedsolution adoption. So, we adopted the partial least square structural equation modeling (PLS-SEM) analyses on the augmented TAM as it is a useful technique to estimate complex causeeffect relationship models with latent variables and we aimed to model the latent constructs under conditions of non-normality and small sample sizes <ns0:ref type='bibr' target='#b50'>(Kwong & Wong, 2013)</ns0:ref>.</ns0:p><ns0:p>Many researchers often extend TAM by adding external constructs because classical TAM often does not capture many key factors specific to the context of the technology <ns0:ref type='bibr' target='#b58'>(Melas et al., 2011)</ns0:ref>. 
Quality of system (QOS) <ns0:ref type='bibr' target='#b47'>(Koh, Prybutok, & Ryan, 2010)</ns0:ref>, trust (T) <ns0:ref type='bibr' target='#b98'>(Wu & Chen, 2005)</ns0:ref>, behavioral control <ns0:ref type='bibr' target='#b3'>(Bhattacherjee, 2000)</ns0:ref> are some of the constructs that have been added as influential variables to user acceptance of the information technology and are therefore inevitable for evaluating a novel system, BBS as in this current study. Although in the software engineering domain, security and privacy are regarded as part of QOS, in this study, we have presented perceived security and perceived privacy as separate constructs. <ns0:ref type='bibr' target='#b23'>(DeLone & McLean, 1992)</ns0:ref> refers to QOS as the technical details of the system interface and system's quality that produces output response such that the technology attributes singularly or jointly influence user satisfaction. Hence, it is assumed that the QOS affects user satisfaction and that directly or indirectly through PU, affects users' intention to use the system <ns0:ref type='bibr' target='#b23'>(DeLone & McLean, 1992;</ns0:ref><ns0:ref type='bibr' target='#b84'>Shrestha & Vassileva, 2019a)</ns0:ref>.</ns0:p><ns0:p>Moreover, perceived privacy and perceived security have critical roles in the acceptance of the technologies as the prior research suggest they have a significant effect on users' attitudes that positively influence their intention to use the technologies <ns0:ref type='bibr' target='#b0'>(Amin & Ramayah, 2010;</ns0:ref><ns0:ref type='bibr' target='#b70'>Roca, García, & de la Vega, 2009;</ns0:ref><ns0:ref type='bibr' target='#b72'>Shin, 2010)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multidimensionality of Privacy</ns0:head><ns0:p>Privacy is defined as the right to be let alone <ns0:ref type='bibr' target='#b94'>(Warren & Brandeis, 1890)</ns0:ref>. Furthermore, privacy has been considered as the right to prevent the disclosure of personal information to others <ns0:ref type='bibr' target='#b97'>(Westin, 1968)</ns0:ref>. Later, privacy has been known to be not just unidimensional <ns0:ref type='bibr' target='#b10'>(Burgoon et al., 1989;</ns0:ref><ns0:ref type='bibr' target='#b22'>DeCew, 1997)</ns0:ref>) as it includes informational privacy along with accessibility privacy, physical privacy and expressive privacy.</ns0:p><ns0:p> Informational privacy -'how, when, and to what extent information about the self will be released to another person' <ns0:ref type='bibr' target='#b10'>(Burgoon et al., 1989;</ns0:ref><ns0:ref type='bibr' target='#b22'>DeCew, 1997)</ns0:ref>, e.g. the user is asked for too much personal information while using online services.</ns0:p><ns0:p> Accessibility privacy-'acquisition or attempted acquisition of information that involves gaining access to an individual' (DeCew, 1997), e.g. the user's contact (address, phone or email) information might be left in the old system. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p> Physical privacy-'the degree to which a person is physically accessible to others' <ns0:ref type='bibr' target='#b10'>(Burgoon et al., 1989)</ns0:ref> e.g. viewing user screen in an unauthorized way.</ns0:p><ns0:p> Expressive privacy-'protects a realm for expressing one's self-identity or personhood through speech or activity' <ns0:ref type='bibr' target='#b22'>(DeCew, 1997)</ns0:ref>. It restricts extrinsic social control over choices and improves intrinsic control over self-expression, e.g., user data may be inappropriately forwarded to others. <ns0:ref type='bibr' target='#b40'>(Introna & Pouloudi, 1999)</ns0:ref> developed a framework of principles for the first time to study privacy concerns while exploring the interrelations of interests and values for various stakeholders. The study has identified that different users have distinct levels of concern about their privacy. <ns0:ref type='bibr' target='#b89'>(Smith, Milberg, & Burke, 1996)</ns0:ref> developed a scale for the concern for privacy that measured unidimensional aspects of privacy such as collection, errors, secondary use, and unauthorized access to information factors. <ns0:ref type='bibr' target='#b55'>(Malhotra, Kim, & Agarwal, 2004</ns0:ref>) also presented a model to consider multiple aspects of privacy such as identifying attitudes towards the collection of personally identifiable information, control over personal information and awareness of privacy practices of companies gathering personal information. However, all these studies just focused on the informational privacy, so the scales to measure privacy were also based on a unidimensional approach and were not even validated. Furthermore, the issue regarding the benefit to giving up privacy such as offering personalization, enhanced security etc. was not addressed by those studies.</ns0:p><ns0:p>Hence, to address the multidimensionality of privacy, it is particularly important to consider privacy-related behaviors while studying privacy concerns and user attitudes towards privacy in BBS. The constructs presented in a study by <ns0:ref type='bibr' target='#b6'>(Buchanan, Paine, Joinson, & Reips, 2007)</ns0:ref> are validated and considered both privacy concerns and user behavior models. The behavioral items include general caution and technical protection of privacy. Attitudinal item includes privacy concern. The authors found that privacy concern correlates significantly with a general caution, but not significantly with the technical protection factor. Furthermore, perceived privacy, which is the attitudinal privacy or privacy concern undoubtedly plays a critical role in user accepting technologies <ns0:ref type='bibr' target='#b39'>(Hoffman, Novak, & Peralta, 1999;</ns0:ref><ns0:ref type='bibr' target='#b66'>Poon, 2008)</ns0:ref>). It sheds light on the possibility of unauthorized use and access to the personal and financial information of the users by the companies that they are intending to use the service of <ns0:ref type='bibr' target='#b26'>(Dwyer, Hiltz, & Passerini, 2007)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Perceived Security</ns0:head><ns0:p>Perceived security is the degree to which a user believes that the online service has no predisposition to risk <ns0:ref type='bibr' target='#b99'>(Yenisey, Ozok, & Salvendy, 2005)</ns0:ref>. The protected financial and personal information may get compromised by theft and fraudulent activities leading to vulnerability on the internet. Because of this, a sense of security becomes a major concern for customers handing out their details on the network <ns0:ref type='bibr' target='#b34'>(Gefen, 2000;</ns0:ref><ns0:ref type='bibr' target='#b75'>Shrestha, 2014;</ns0:ref><ns0:ref type='bibr' target='#b93'>Wang, Lee, & Wang, 1998)</ns0:ref>. Perceived security here does not only mean technical security but the user's subjective feeling of being secured in the network <ns0:ref type='bibr' target='#b70'>(Roca et al., 2009)</ns0:ref>. Authors <ns0:ref type='bibr' target='#b52'>(Linck, Pousttchi, & Wiedemann, 2006)</ns0:ref> have argued that a lack of subjective security in the user's mind will create hesitation to use systems.</ns0:p></ns0:div>
<ns0:div><ns0:head>Trust as Mediating Factor</ns0:head><ns0:p>Trust is an important contributing factor for users to do a certain task that can make them vulnerable and yet hope the service provider on the other end to fully comply with the set of protocols to complete a transaction <ns0:ref type='bibr' target='#b26'>(Dwyer et al., 2007)</ns0:ref> and eventually develop a new relationship <ns0:ref type='bibr' target='#b17'>(Coppola, Hiltz, & Rotter, 2004;</ns0:ref><ns0:ref type='bibr' target='#b41'>Jarvenpaa & Leidner, 1999;</ns0:ref><ns0:ref type='bibr' target='#b64'>Piccoli & Ives, 2003)</ns0:ref>. In a virtual environment, as the users do not have any control over the outcome of their actions, trust becomes one of the prime factors for them to ground some firm belief in the reliability to engage with the other party <ns0:ref type='bibr' target='#b39'>(Hoffman et al., 1999)</ns0:ref>. In e-commerce, when information is disclosed, users tend to trust more the service provider <ns0:ref type='bibr' target='#b60'>(Metzger, 2004)</ns0:ref> resulting in users being free of doubts and are more likely to engage with the other party <ns0:ref type='bibr' target='#b39'>(Hoffman et al., 1999)</ns0:ref>. Research has shown that trust has a positive significant impact on attitude and intentions to use systems <ns0:ref type='bibr' target='#b63'>(Papadopoulou, 2007)</ns0:ref>. With greater trust, users question less the authenticity of online services.</ns0:p><ns0:p>The user acceptance behavioral model, as presented by <ns0:ref type='bibr' target='#b69'>(Rios, Fernandez-Gago, & Lopez, 2017;</ns0:ref><ns0:ref type='bibr' target='#b72'>Shin, 2010)</ns0:ref> for theoretical social network services, is also useful for conceptualizing the role of perceived security, perceived privacy (privacy concern from attitudinal privacy) on user trust. Their findings revealed that perceived security has a moderating effect on perceived privacy that correlates significantly with trust the user can have on the system.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Numerous studies have been conducted to examine the factors that determine the acceptance of information technology in the context of an extended TAM and Trust model. We cover a crosssection of those studies that are related to our work.</ns0:p><ns0:p>To the best of our knowledge, <ns0:ref type='bibr' target='#b32'>(Folkinshteyn & Lennon, 2016)</ns0:ref> conducted a very first user study with TAM in the context of the adoption of bitcoin as financial technology. Their findings revealed both positive and negative factors associated with the acceptance of bitcoin, the first real-life application of blockchain technology. They have also argued that the cryptocurrency offers borderless and efficient transactions with significant positive factors in PEOU and PU, giving users full control over their currency, however it is also extremely volatile with not being lenient of security breaches or errors <ns0:ref type='bibr' target='#b32'>(Folkinshteyn & Lennon, 2016)</ns0:ref>. So, it has both risks and benefits that affect the overall adoption of the cryptocurrency. Their findings also suggested exploring other aspects beyond TAM variables to consider the underlying risk and trustworthiness constructs associated with the blockchain-based applications. Previous research by <ns0:ref type='bibr' target='#b44'>(Kern, 2018;</ns0:ref><ns0:ref type='bibr' target='#b71'>Shin, 2019)</ns0:ref> on an abstract blockchain-based application model suggested that the blockchain-based system can be accepted if it has enough trust to sustain and is perceived as convenient and useful in the highly competitive market. Almost all of the existing research so far is limited to the blockchain-based prototype system using an extended TAM <ns0:ref type='bibr' target='#b44'>(Kern, 2018;</ns0:ref><ns0:ref type='bibr' target='#b84'>Shrestha & Vassileva, 2019a)</ns0:ref> and Trust model <ns0:ref type='bibr' target='#b71'>(Shin, 2019)</ns0:ref>. Our current study extends the research contribution of the prior study <ns0:ref type='bibr' target='#b84'>(Shrestha & Vassileva, 2019a)</ns0:ref> by conducting a new user study on the real-life blockchain-based system, BBS <ns0:ref type='bibr'>(Shrestha et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Gefen et al. have previously explored a mixed model with TAM and Trust model to study the adoption of the on-line shopping setting <ns0:ref type='bibr' target='#b35'>(Gefen, Karahanna, & Straub, 2003)</ns0:ref>. Their model presented the use of the on-line system into both system attributes such as perceived usefulness and perceived ease of use and trust in e-vendors. Their model resulted in the integrative indication of the TAM and Trust constructs as good predictors for the output response, which was the behavioral intention to use the online shopping system. Therefore, the current study adopts a similar model and presents it as augmented TAM which comprises an extended TAM and Trust model.</ns0:p><ns0:p>As online activities such as online shopping generate a plethora of real-time transactions of all kinds of assets and information, they are prone to security and privacy-related risks <ns0:ref type='bibr' target='#b70'>(Roca et al., 2009)</ns0:ref>. A privacy issue mostly occurs with unwarranted access to the users' personal data, but that does not necessarily involve security breaches, which can happen with poor access control mechanisms in the system allowing malicious actors to control the system. 
However, both breaches are critical issues and they often exist together on the online services where users typically feel hesitant to provide private information over the internet <ns0:ref type='bibr' target='#b39'>(Hoffman et al., 1999)</ns0:ref>. <ns0:ref type='bibr' target='#b72'>(Shin, 2010)</ns0:ref> previously explored the statistical significance of security and privacy in the acceptance of social networking sites. Later, <ns0:ref type='bibr' target='#b71'>(Shin, 2019)</ns0:ref> presented the role and dimension of digital trust in the emerging blockchain context, where <ns0:ref type='bibr' target='#b88'>(Siegel & Sarma, 2019)</ns0:ref> has argued that it has not been investigated how privacy/security factors affect user's behavioral cognitive process of accepting the blockchain-based systems. This study, in addition to previous TAM validated constructs, explores the users' perception towards the security and privacy aspect of the BBS and their influence on intention to use the BBS by using the moderating effects of trust on attitudes towards system. Besides, the current research aims to answer the following research questions when exploring the relationship between different indicators of the augmented TAM with the trust model: <ns0:ref type='table' target='#tab_1'>2020:10:53488:1:1:NEW 25 Mar 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p> RQ5: Is the influence of ease of use/quality of system on intention to use BBS mediated by perceived usefulness?</ns0:p></ns0:div>
<ns0:div><ns0:head>Research Model and Hypotheses</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> presents the structural model with the main constructs and their associated structural paths. Fourteen research hypotheses are thus constructed for our research model based on the findings of the literature review presented in the previous section.</ns0:p><ns0:p>Perceived Usefulness and Perceived Ease of Use <ns0:ref type='bibr' target='#b20'>(Davis, 1989;</ns0:ref><ns0:ref type='bibr' target='#b21'>Davis, Bagozzi, & Warshaw, 1992)</ns0:ref> H1: Perceived ease of use significantly influences the perceived usefulness of BBS. H2: Perceived ease of use significantly influences the intention to use BBS. H3: Perceived usefulness significantly influences the intention to use BBS. <ns0:ref type='bibr' target='#b47'>(Koh et al., 2010)</ns0:ref> H4: Quality of system significantly influences the perceived usefulness of BBS. H5: Quality of system significantly influences the intention to use BBS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quality of System</ns0:head><ns0:p>Attitude Towards BBS <ns0:ref type='bibr' target='#b74'>(Shin, 2017)</ns0:ref> H6. Attitude towards BBS significantly influences the intention to use BBS.</ns0:p><ns0:p>Trust <ns0:ref type='bibr' target='#b25'>(Dennis, Robert, Curtis, Kowalczyk, & Hasty, 2012)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>(Jian, Bisantz, & Drury, 2000)</ns0:ref> H7. Trust positively affects users' attitudes toward BBS.</ns0:p><ns0:p>Perceived Privacy <ns0:ref type='bibr' target='#b6'>(Buchanan et al., 2007)</ns0:ref> H8. Perceived privacy has a positive effect on the users' trust in BBS. H9. Perceived privacy has a positive effect on the users' attitudes toward BBS. H10. Perceived privacy positively or negatively affects users' perceived security. H11: Privacy concern positively affects users' behavior on general caution. H12: Privacy concern positively affects users' behavior on technical protection.</ns0:p><ns0:p>Perceived Security <ns0:ref type='bibr' target='#b72'>(Shin, 2010)</ns0:ref> H13. Perceived security positively affects users' trust in BBS. H14. Perceived security positively affects users' attitude toward BBS.</ns0:p><ns0:p>The main study comprises two separate sub-studies: the first study was performed on the SCS and the second on the DSS. Furthermore, each study data comprises pre-test and post-test data scores. The pre-test defines the data collected from participants before they use the system, whereas post-test data is collected after participants use the system.</ns0:p><ns0:p>The pretest study can be considered as the study associated with the prototype model. Since, the present study follows the previous research work from <ns0:ref type='bibr' target='#b84'>(Shrestha & Vassileva, 2019a)</ns0:ref>, the pretests for the current study do not include the constructs from classical TAM as they were already evaluated in the previous study. So, the pretests of the current study do not present data for hypotheses H1 -H6. The post-tests for both SCS and DSS do not have behavioral privacygeneral caution and -technical protection constructs as they are only evaluated once, during the pre-test. So, the post-test data do not test hypotheses H11 -H12.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>The present study was approved with delegated review by the University of Saskatchewan Behavioural Research Ethics Board (Beh-REB). The approval with reference number Beh # ID2106 was given for behavioural application/amendment form, consent form and survey questionnaire. We first conducted a pilot study with 14 participants from the Multi-User Adaptive Distributed Mobile and Ubiquitous Computing (MADMUC) Lab and quantitative research experts at the University of Saskatchewan to evaluate the feasibility, duration and improve upon the study design of our research approach. The participants in the pilot study provided feedback with their opinion of the survey in general. Based on the pilot test outcomes and the review of quantitative research experts, the final survey questionnaires were modified and restructured, and then the research model was empirically tested by collecting survey data. The design of the research instrument, sample organizations and sample demographics are described below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Research Instrument Design</ns0:head><ns0:p>We conducted online surveys through SurveyMonkey by requesting each participant to respond to the questionnaire on different constructs. The survey instrument is based on constructs validated in prior studies by <ns0:ref type='bibr' target='#b6'>(Buchanan et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b20'>Davis, 1989;</ns0:ref><ns0:ref type='bibr' target='#b21'>Davis et al., 1992;</ns0:ref><ns0:ref type='bibr' target='#b25'>Dennis et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b42'>Jian et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b47'>Koh et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b72'>D. H. Shin, 2010</ns0:ref><ns0:ref type='bibr' target='#b74'>D. H. Shin, , 2017) )</ns0:ref> and adapted in the context of our research model. The instrument consists of 6 items for perceived ease of use, 6 items for perceived usefulness, 4 items for quality of system, 3 items for perceived enjoyment, 4 items for intention to use, 3 items for perceived security, 9 items for trust, 4 items for attitudinal privacy (perceived privacy), 4 items for behavioral privacy-general caution, 4 items for behavioral privacy-technical protection and 3 items for attitude towards BBS. For our later analysis, we did not consider data related to perceived enjoyment. All the respective items (questions) in the constructs are provided as supplemental files. We measured the responses to the items on a 7scale Likert scale from 1 = strongly disagree to 7 = strongly agree.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sample Organizations</ns0:head><ns0:p>We recruited participants through the website announcement on the University of Saskatchewan's PAWS homepage and on the social networking site, LinkedIn. Participation was entirely voluntary. The participants had to read and accept the consent form to participate in the study. No real identities and email addresses were collected during the data-gathering phase in the surveys. The consent for participation was obtained via an implied consent form. By completing and submitting the questionnaire, participants' free and informed consent was implied and indicated that they understood the conditions of participation in the study spelled out in the consent form. To contextualize the surveys for SCS, we provided participants at the beginning of the pre-test survey questionnaire (presented as the supplemental file in Article S1) with a video about a brief description of blockchain technology and BBS. The inclusion criteria for the SCS survey was that any individual with knowledge about the internet could participate. After participants completed the pre-test survey, we presented them with another video about using the SCS and hosted a remote session allowing them to use the SCS for fifteen minutes. We did not record but noted down their comments and confusion during their interaction with the system. Thereafter, we presented them with a post-test survey questionnaire (presented as a supplemental file in Article S2) to measure different constructs of our Augmented TAM with Trust model. Similarly, we conducted the pre-test and post-test surveys for the DSS part as well. The post-test survey questionnaire for DSS is presented as a supplemental file in Article S3. Each participant in the DSS survey was also asked to use the DSS remotely for fifteen minutes. The inclusion criteria for the DSS survey was that the participants should be from a technical (computer science or engineering) background because the DSS includes technical aspects that only the software developer or system administrator could understand better. Most of the participants completing DSS surveys also took part in the SCS surveys.</ns0:p></ns0:div>
<ns0:div><ns0:head>Participants Demographics</ns0:head><ns0:p>A total of 66 participants took part in the SCS study and 53 participated in the DSS study. However, upon cleaning, 63 valid responses for SCS and 50 for DSS were left for the analysis. We used a partial least square nonparametric bootstrapping procedure to test the statistical significance with 5000 subsamples (Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, 2013) so that the resampling process would create subsamples with observations randomly drawn from the original set of data. For the study, we based our survey by collecting data from the participants who understood at least something about the blockchain and smart contract technologies after watching the video that we prepared on blockchain technology and BBS. The mean score suggests that for SCS, 79% of participants have basic knowledge and 19% have advanced knowledge of blockchain technology; whereas for DSS, 68% of participants have basic knowledge and 28% have advanced knowledge of blockchain technology. Table <ns0:ref type='table'>1</ns0:ref> highlights the demographics of the participants.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>We used SPSS version 26 to process the collected data with descriptive statistics. We analyzed the research model with structural equation modelling using smartPLS (Partial Least Squares). PLS is a well-established technique for estimating path coefficients in structural models and has PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53488:1:1:NEW 25 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science been widely used in research studies to model latent constructs under conditions of nonnormality and small to medium sample sizes <ns0:ref type='bibr' target='#b50'>(Kwong & Wong, 2013)</ns0:ref>. The structural equation model (SEM) as suggested by (Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, 2013) includes the testing of the measurement models (exploratory factor analysis, internal consistency, convergent validity, divergent validity, Dillon-Goldstein's rho) and the structural models (regression analysis). We started by fitting the measurement models to the data and later we tested the underlying structural models.</ns0:p><ns0:p>We applied the path weighting structural model scheme in smartPLS <ns0:ref type='bibr' target='#b50'>(Kwong & Wong, 2013)</ns0:ref>, which provides the highest R 2 value for endogenous or dependent latent variables. The purpose of PLS regression is to combine features from principal component analysis (PCA) and multiple regression <ns0:ref type='bibr' target='#b70'>(Roca et al., 2009)</ns0:ref>. PLS-SEM is applicable for all kinds of PLS path model specifications and estimations. We first used 300 maximum iterations for calculating the PLS results and 7 stop criterion values (Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, 2013) so that the PLS algorithm could stop when the change in the outer weights between two consecutive iterations was smaller than 7 stop criterion value. We then used a nonparametric bootstrapping procedure to test the statistical significance of various PLS-SEM results that include path coefficients and R 2 values. Bootstrapping is a resampling technique with replacement from the sample data to generate empirical sampling distribution. In our case, we used 5000 subsamples and a two-tailed test type with a 0.1 significance level (Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, 2013).</ns0:p></ns0:div>
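As an illustration of this bootstrapping step, the sketch below is ours (synthetic data, with a simple correlation standing in for the full PLS inner-model estimate): a single path coefficient is re-estimated on 5000 resamples drawn with replacement, and a two-tailed decision at the 0.1 level is read off the empirical distribution. smartPLS performs the analogous procedure over the complete path model.

import numpy as np

rng = np.random.default_rng(0)
n = 63                                                  # e.g., the valid SCS responses
trust = rng.normal(size=n)
attitude = 0.5 * trust + rng.normal(scale=0.8, size=n)  # synthetic data for the demo

def path_coefficient(x, y):
    # standardized simple-regression slope, a stand-in for the PLS path estimate
    return np.corrcoef(x, y)[0, 1]

estimate = path_coefficient(trust, attitude)
boot = np.empty(5000)
for b in range(5000):                                   # 5000 subsamples with replacement
    idx = rng.integers(0, n, size=n)
    boot[b] = path_coefficient(trust[idx], attitude[idx])

se = boot.std(ddof=1)
t_stat = estimate / se
lo, hi = np.percentile(boot, [5, 95])                   # two-tailed at the 0.1 level
print(f"beta={estimate:.3f}, t={t_stat:.2f}, 90% CI=({lo:.3f}, {hi:.3f})")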
<ns0:div><ns0:head>Descriptive Statistic</ns0:head><ns0:p>We had a 7-point Likert scale for the responses to the items, so we categorized the scale in terms of percentage value to analyze the average score for each item and the overall impression of the constructs. We collected scores for all the items in the perceived ease of use, perceived usefulness, quality of system, trust, security, privacy, attitudes, and intention to use constructs of our model. The scores obtained for the selected constructs indicate that user perceptions of the benefits of using BBS should be maintained, with improvements made to reach a higher score category. The preliminary descriptive statistic of the obtained data is shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, which indicates that the average results of the constructs are above 71.43% (i.e., a mean above 5 on the 7-point scale), so they qualified for the quite high category <ns0:ref type='bibr' target='#b84'>(Shrestha & Vassileva, 2019a)</ns0:ref>. The comparatively lower pre-test scores indicate that participants developed confidence and trust towards the overall usefulness, usability, attitudes and intention to use the BBS after they used the SCS and DSS. Furthermore, the higher PEOU, PU and QOS scores for SCS over DSS signify that participants found the SCS easier to use than the DSS participants found the DSS. However, all the selected constructs in our study provided a significant impression in the context of both BBS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Measurement Validation</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We checked the measurement model with the exploratory factor analysis by testing the convergent validity, reliability of measures and discriminant validity.</ns0:p><ns0:p>For Exploratory Factor Analysis, we first checked the factor loadings of individual items, as shown in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, to see whether the items in each variable loaded highly on its own construct over the other respective constructs. According to <ns0:ref type='bibr' target='#b16'>(Chin, Peterson, & Brown, 2008)</ns0:ref>, factor loadings exceeding 0.60 can be considered as significant. In our study, all the indicators in the measurement models had a factor loading of value greater than 0.60 except for Item 4 in the construct Behavioral Privacy-Technical Protection (BP-TP4). Since the square of factor loading is directly translated as item's reliability, the item BP-TP4, 'I regularly clear my browser's history' with a very low loading value of 0.39 indicated that its communality value would be only 0.15, and thus should be avoided in the model. Although we used the validated constructs, our exploratory analysis detected that the item BP-TP4 had a weak influence on the Behavioral Privacy construct.</ns0:p><ns0:p>For the Convergent Validity of each construct measure, we calculated the Average Variance Extracted (AVE) and Composite Reliability (CR) from the factor loading. AVE for each construct should exceed the recommended level of 0.50 so that over 50% of the variances observed in the items were accounted for by the hypothesized constructs, and CR should also be above 0.75 to publish results <ns0:ref type='bibr' target='#b38'>(Hair, Black, Babin, & Anderson, 2014)</ns0:ref>. In our study, the AVE reported in Table 3 exceeds 0.50 for all the constructs except for Beh Privacy-Technical Protection (BP-TP). However, CR for each construct was above 0.75 (acceptable), confirming that it measures the construct validity of the model. Since the BP-TP had the item BP-TP4 of very low factor loading along with an AVE value of 0.469, it suggests that the factor BP-TP did not bring significant variance for the variables (items/questions) to converge into a single construct which means BP-TP items are a less-than-effective measure of the latent construct. We also justify this with the exceptionally low rho_A value for the construct BP-TP.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> shows the calculated rho_A value (Dillon-Goldstein's rho) for checking the internal consistency to justify the reliability of each measure. The rho_A evaluates the within-scale consistency of the responses to the items of the measures of constructs and is a better reliability measure than Cronbach's alpha in SEM <ns0:ref type='bibr' target='#b24'>(Demo, Neiva, Nunes, & Rozzett, 2012)</ns0:ref>. In our study, as recommended, rho_A for each construct was greater than 0.70 except for BP-TP which had a 0.28 rho value. Therefore, this also supports our decision of removing the behavioral privacy constructs from the post-tests for both SCS and DSS. We assumed that using the BBS simply does not influence the user's behavioral perception of privacy. So, we were interested to see if there is any significant effect on the attitudinal aspect of privacy.</ns0:p><ns0:p>For assessing the Discriminant Validity of measures, we calculated the square root of the AVE (along the diagonals) of each construct as shown in Table <ns0:ref type='table'>4</ns0:ref>. 
To lean towards discriminant validity, <ns0:ref type='bibr' target='#b33'>(Fornell & Larcker, 1981)</ns0:ref> recommended having low correlations between the measure of interest and the measures of other constructs. In our model, we observed those diagonal values for each construct exceeded other corresponding values, which are the intercorrelations of the given construct with the other remaining constructs. This pointed out that the measures of each construct which was theoretically supposed to be not overlapping with measures of other variables are in fact, unrelated in our model.</ns0:p></ns0:div>
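The reliability quantities used above can be written down compactly. The following Python sketch computes AVE, composite reliability and the Fornell-Larcker diagonal (the square root of AVE) from standardized factor loadings. The loading values are hypothetical placeholders rather than the study's data, and the reported results were obtained with the authors' own PLS-SEM tooling, so this is only an illustration of the formulas.

import numpy as np

# Hypothetical standardized loadings for two constructs (not the study's data).
loadings = {
    'TRUST': np.array([0.82, 0.79, 0.85, 0.77]),
    'BP_TP': np.array([0.72, 0.68, 0.65, 0.39]),   # weak fourth item, cf. BP-TP4
}

def ave(lams):
    # Average Variance Extracted: mean of the squared loadings (item communalities).
    return float(np.mean(lams ** 2))

def composite_reliability(lams):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # with the error variance of each item taken as 1 - loading^2.
    num = np.sum(lams) ** 2
    return float(num / (num + np.sum(1.0 - lams ** 2)))

for name, lams in loadings.items():
    print(name,
          'AVE=%.3f' % ave(lams),                    # should exceed 0.50
          'CR=%.3f' % composite_reliability(lams),   # should exceed 0.75
          'sqrt(AVE)=%.3f' % (ave(lams) ** 0.5))     # Fornell-Larcker diagonal entry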
<ns0:div><ns0:head>Partial Least Square Path Modeling</ns0:head><ns0:p>To begin our Structural Equation Modeling (SEM) analysis, we built the models for the general population in the context of the pre-test (prototype model) and two subsystems SCS and DSC. We characterized the models by looking into coefficients of determination (R 2 's), path coefficients (β's) and corresponding P-value. R 2 determines the variance of a given construct explained by antecedents, β captures the strength of the relationship between the selected constructs and P-value determines the statistical significance of the models <ns0:ref type='bibr' target='#b84'>(Shrestha & Vassileva, 2019a)</ns0:ref>. According to Chin's guideline <ns0:ref type='bibr' target='#b15'>(Chin, Marcelin, & Newsted, 2003)</ns0:ref>, a path coefficient should be equal to or greater than 0.2 to be considered relevant. A model is statistically somewhat significant (*p) when p-value < 0.1, statistically quite significant (**p) when p-value < 0.01 and statistically highly significant (***p) when p-value < 0.001. Table <ns0:ref type='table'>5</ns0:ref>, Table <ns0:ref type='table'>6</ns0:ref> and Table <ns0:ref type='table'>7</ns0:ref> each show the standardized path coefficient (β), t-statistics, p-value and R 2 across selected constructs for pre-test, SCS and DSS, respectively. The indirect and total effects of one construct over another construct in the presence of mediating constructs were also computed alongside.</ns0:p></ns0:div>
<ns0:div><ns0:head>Validation of Hypotheses</ns0:head><ns0:p>For pre-test in the context of prototype model, the model presented in Fig. <ns0:ref type='figure'>5</ns0:ref> shows causal relationship between perceived attitudinal privacy, behavioral privacy-technical protection, behavioral privacy-general caution, perceived security, trust and attitude towards BBS constructs. Considering the direct effects, attitudinal privacy (privacy concern) had very high significant effects on security (β = 0.64; P < 0.001) and trust (β = 0.313; P < 0.001), but an insignificant effect on attitudes towards system (β = 0.176; P > 0.05). In addition, attitudinal privacy also positively affected behavioral privacy-general caution (β = 0.465; P < 0.001) but had an insignificant effect on behavioral privacy-technical protection (β = 0.068; P > 0.1). The effect of security on trust was also highly significant (β = 0.529; P < 0.001), but insignificant on attitudes towards BBS (β = -0.104; P > 0.1). Finally, trust had a high significant positive effect on attitudes towards BBS (β = 0.724; P < 0.001). Thus, hypotheses H7, H8, H10, H11 and H13 were supported, but H9, H12, and H14 were rejected in the context of pre-test. Moreover, trust, privacy and security explain 59.8% of variance in attitudes towards BBS (R 2 = 0.598), security and privacy explain 58.8% of variance in trust (R 2 = 0.588), privacy explains 40.6% of variance in security (R 2 = 0.406), whereas attitudinal privacy explains very low, 21.6% of variance on behavioral privacy-general caution (R 2 = 0.216) and 0.5% on behavioral privacy-technical protection. R 2 value higher than 0.26 indicates a substantial model <ns0:ref type='bibr' target='#b61'>(Muller & Cohen, 1989)</ns0:ref>.</ns0:p><ns0:p>For post-test study in the context of SCS, the model presented in Fig. <ns0:ref type='figure'>6</ns0:ref> shows causal relationship between perceived ease of use, perceived usefulness, quality of system, security, privacy, trust, attitude towards SCS and intention to use SCS constructs. Considering the direct effect, perceived ease of use had quite significant effect on perceived usefulness (β = 0.356; P < 0.01) but insignificant effect on intention to use (β = 0.058; P > 0.1); therefore, H1 was supported and H2 was rejected. Perceived usefulness had relevant but somewhat significant effect on intention to use (β = 0.284; P < 0.1); thus, H3 was also supported. Quality of system had positive significant effect on perceived usefulness (β = 0.509; P < 0.001) and somewhat significant effect on intention to use SCS (β = 0.338; P < 0.1); therefore, H4 and H5 were supported. Attitude towards SCS had relevant but somewhat significant effect on intention to use (β = 0.25; P < 0.1); therefore, H6 was supported. The effect of trust was highly significant on attitude towards SCS (β = 0.534; P < 0.001); therefore, H7 was supported. Perceived privacy had positive significant effects on trust (β = 0.609; P < 0.001), attitudes towards SCS (β = 0.325; P < 0.01) and perceived security (β = 0.654; P < 0.001); therefore, H8, H9 and H10 were supported. Perceived security had insignificant effect on trust (β = 0.212; P > 0.1) and attitudes towards SCS (β = -0.165; P > 0.1); therefore, H13 and H14 were rejected. In the following, the explained variances include perceived usefulness (R 2 = 0.571), security (R 2 = 0.428), trust (R 2 = 0.585), attitude towards SCS (R 2 = 0.5) and intention to use (R 2 = 0.612). 
Therefore, R 2 value higher than 0.26 indicated a substantial model for SCS <ns0:ref type='bibr' target='#b61'>(Muller & Cohen, 1989)</ns0:ref>.</ns0:p><ns0:p>Similarly, for post-test study in the context of DSS, the model presented in Fig. <ns0:ref type='figure'>7</ns0:ref> shows causal relationship between perceived ease of use, perceived usefulness, quality of system, security, privacy, trust, attitude towards DSS and intention to use DSS constructs. Considering the direct effect, perceived ease of use had significant effect on perceived usefulness (β = 0.488; P < 0.001) but insignificant effect on intention to use (β = -0.173; P > 0.1); therefore, H1 was supported and H2 was rejected. Perceived usefulness had relevant but somewhat significant effect on intention to use (β = 0.495; P < 0.1); thus, H3 was also supported. Quality of system had positive significant effect on perceived usefulness (β = 0.427; P < 0.01), but insignificant effect on intention to use DSS (β = -0.009; P > 0.1); therefore, H4 was supported and H5 was rejected. Attitude towards DSS had relevant and positive significant effect on intention to use (β = 0.554; P < 0.001); therefore, H6 was supported. The effect of trust was highly significant on attitude towards DSS (β = 0.637; P < 0.001); therefore, H7 was supported. Perceived privacy had positive significant effects on trust (β = 0.495; P < 0.001) and perceived security (β = 0.82; P < 0.001), but insignificant effect on attitudes towards DSS (β = 0.097; P > 0.1); therefore, H8 and H10 were supported but H9 was rejected. Perceived security had significant effect on trust (β = 0.369; P < 0.01) but insignificant effect on attitudes towards DSS (β = -0.012; P > 0.1); therefore, H13 was supported but H14 was rejected. In the following, the explained variances include perceived usefulness (R 2 = 0.708), security (R 2 = 0.679), trust (R 2 = 0.683), attitude towards SCS (R 2 = 0.5) and intention to use (R 2 = 0.678). Therefore, R 2 value higher than 0.26 indicated a substantial model for DSS <ns0:ref type='bibr' target='#b61'>(Muller & Cohen, 1989)</ns0:ref>. Table <ns0:ref type='table'>8</ns0:ref> summarizes the validation of our study's hypotheses.</ns0:p></ns0:div>
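The labelling convention applied to each hypothesis above (a path is treated as relevant when β is at least 0.2, and flagged *, ** or *** for p < 0.1, p < 0.01 and p < 0.001) can be captured in a small helper. This is a sketch of the reporting rule only; the coefficients and p-values themselves come from the bootstrapped PLS estimation, and the example values are rounded from the pre-test results reported above (the second p-value is a placeholder above the reported threshold).

def label_path(beta, p):
    # Chin's guideline: a path coefficient is considered relevant when |beta| >= 0.2.
    relevance = 'relevant' if abs(beta) >= 0.2 else 'not relevant'
    if p < 0.001:
        significance = '*** highly significant'
    elif p < 0.01:
        significance = '** quite significant'
    elif p < 0.1:
        significance = '* somewhat significant'
    else:
        significance = 'not significant'
    return relevance + ', ' + significance

print(label_path(beta=0.724, p=0.0001))  # trust -> attitude towards BBS (pre-test)
print(label_path(beta=0.176, p=0.20))    # privacy -> attitude, reported as insignificant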
<ns0:div><ns0:head>Total Effect Analysis</ns0:head><ns0:p>To address the first research question, we present the total effect of antecedents from the trust model on attitudes towards BBS as shown in Fig. <ns0:ref type='figure'>8</ns0:ref>. In the pre-test model, trust had the strongest total effect on attitudes towards BBS (β = 0.732; P < 0.001), followed by privacy on attitudes towards BBS (β = 0.586; P > 0.001) and security on attitudes towards BBS (β = 0.283; P < 0.1), which was marginally significant. In the SCS model, privacy had the strongest influence on attitudes towards SCS (β = 0.62; P < 0.001), followed by trust on attitudes towards SCS (β = 0.538; P < 0.001), while security had no significant total effect on attitudes towards SCS (β = -0.049; P > 0.1). Finally, the total effect statistic for the DSS model was similar to that of the pretest model, with respect to first two strongest design constructs which were trust (β = 0.637; P < 0.001), followed by privacy (β = 0.596; P > 0.001). Security turned out to have no significant effect on attitude towards DSS (β = 0.223; P > 0.1).</ns0:p><ns0:p>To address the second research question, we present the total effect of the perceived design constructs on intention to use from SCS and DSS model as shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. In SCS model, quality of system had the strongest total effect on intention to use SCS (β = 0.49; P < 0.001). Perceived usefulness had a weak total effect on intention to use SCS (β = 0.297; P < 0.1), while privacy had no significant total effect on intention to use SCS (β = 0.156; P > 0.1), followed by perceived ease of use on intention to use SCS (β = 0.153; P > 0.1) and security on intention to use SCS (β = -0.01; P > 0.1). In the context of DSS, Perceived usefulness had the strongest total effect on intention to use DSS (β = 0.495; P < 0.01), followed by trust on intention to use DSS (β = 0.353; P < 0.001) and privacy on intention to use DSS (β = 0.33; P < 0.001). Quality of system had no significant total effect on intention to use DSS (β = 0.202; P > 0.1), followed by security on intention to use DSS (β = 0.124; P > 0.1) and perceived ease of use on intention to use DSS (β = 0.069; P > 0.1).</ns0:p></ns0:div>
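The total effects reported above are the direct path coefficient plus the indirect effects obtained by multiplying coefficients along each mediated path. The sketch below redoes that arithmetic for privacy on attitudes towards SCS, using the rounded direct coefficients reported earlier; the assumed set of mediated paths (through trust, through security, and through security and then trust) is inferred from the model description, so the result should be read as an approximate check rather than the estimation itself.

# Rounded direct path coefficients from the SCS model reported above.
beta = {
    ('privacy', 'attitude'): 0.325,
    ('privacy', 'trust'): 0.609,
    ('privacy', 'security'): 0.654,
    ('trust', 'attitude'): 0.534,
    ('security', 'attitude'): -0.165,
    ('security', 'trust'): 0.212,
}

def total_effect_privacy_on_attitude(b):
    direct = b[('privacy', 'attitude')]
    via_trust = b[('privacy', 'trust')] * b[('trust', 'attitude')]
    via_security = b[('privacy', 'security')] * b[('security', 'attitude')]
    via_security_trust = (b[('privacy', 'security')] * b[('security', 'trust')]
                          * b[('trust', 'attitude')])
    indirect = via_trust + via_security + via_security_trust
    return direct, indirect, direct + indirect

direct, indirect, total = total_effect_privacy_on_attitude(beta)
print(direct, indirect, round(total, 2))   # total comes out close to the reported 0.62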
<ns0:div><ns0:head>Mediation Analysis</ns0:head><ns0:p>To address our third, fourth and fifth research questions, we carried out the indirect effect analysis. We first investigated the mediating effect of security and/or trust over the relationship between privacy and attitudes towards BBS, then investigated the mediating effect of trust over the relationship between security and attitudes towards BBS, and finally investigated a similar mediating effect of perceived usefulness over the relationship between ease of use/quality of system on intention to use BBS. According to <ns0:ref type='bibr' target='#b1'>(Baron & Kenny, 1986;</ns0:ref><ns0:ref type='bibr' target='#b38'>Hair et al., 2014)</ns0:ref>, there is no need to check for the indirect effect if the direct effect is insignificant in the model. So, for the pre-test model as presented in Table <ns0:ref type='table'>5</ns0:ref>, we found the observed indirect effects for the selected predictors in the presence of mediating variables in the pre-test model. In the presence of mediating effect of both trust and security, the effect of privacy on attitude towards BBS slightly decreased from (β = 0.584; T = 6.868; P < 0.001 while excluding both trust and security) to (β = 0.412; T = 4.466; P < 0.001) with the variance accounted for (VAF) value of 0.703. The VAF is calculated as the ratio of the indirect path coefficient to the total path coefficient. With 70.3% VAF, trust and security had a partial mediation effect between privacy and attitude towards BBS. While analyzing the individual mediating effects between privacy and attitudes towards BBS, trust alone had positive significant effect (β = 0.226; T = 3.151; P < 0.01), but security alone had no significant effect (β = -0.068; T = 0.852; P > 0.1). So, our finding suggested that only trust played a crucial mediating role while security had no significant effect between privacy and attitudes toward BBS. Similarly, in the presence of mediating effect of trust, the effect of security on attitude towards BBS slightly decreased from (β = 0.538; T = 6.14; P < 0.001 while excluding Trust) to (β = 0.391; T = 4.348; P < 0.001) with the variance accounted for (VAF) value of 1.382. With 138% VAF, trust had a perfect mediation effect between security and attitude towards BBS.</ns0:p><ns0:p>For SCS model, as presented in Table <ns0:ref type='table'>6</ns0:ref>, in the presence of mediating effect of trust and security, the effect of privacy on attitude towards SCS slightly decreased from (β = 0.619; T = 8.504; P < 0.001 while excluding both trust and security) to (β = 0.293; T = 2.23; P < 0.1) with the variance accounted for (VAF) value of 0.473. With 47.3% VAF, trust and security had a partial mediation effect between privacy and attitude towards SCS. While analyzing the individual mediating effects between privacy and attitudes towards SCS, trust alone had positive significant effect (β = 0.326; T = 2.683; P < 0.01), but security alone had no significant effect (β = -0.108; T = 1.466; P > 0.1). So, our finding suggested that only trust played a crucial mediating role while security had no significant effect between privacy and attitudes toward SCS. In the same SCS model, with the presence of mediating effect of trust, the effect of security on attitude towards SCS became insignificant from (β = 0.384; T = 3.304; P < 0.001 while excluding trust) to (β = 0.122; T = 1.191; P > 0.1) with the variance accounted for (VAF) value of -2.489. 
Therefore, trust had no mediating effect between security and attitude towards SCS since after adding trust predictor as a mediator, the indirect effect on attitude towards SCS became non-significant while the direct effect was also insignificant. Furthermore, in the same SCS model, with the presence of mediating effect of perceived usefulness, the effect of quality of system on intention to use SCS became insignificant from (β = 0.705; T = 11.127; P < 0.001 while excluding usefulness) to (β = 0.152; T = 1.557; P > 0.1) with the variance accounted for (VAF) value of 0.31. Therefore, perceived usefulness had no significant mediating effect between quality of system and intention to use SCS since after adding perceived usefulness predictor as a mediator, the indirect effect on intention to use SCS became non-significant while the direct effect was still significant. In addition, with the presence of mediating effect of perceived usefulness, the ease of use on intention to use SCS became insignificant from (β = 0.559; T = 7.035; P < 0.1 while excluding usefulness) to (β = 0.108; T = 1.363; P > 0.1) with the variance accounted for (VAF) value of 0.706. Therefore, perceived usefulness had no significant mediating effect between ease of use and intention to use SCS since after adding perceived usefulness predictor as a mediator, the indirect effect on intention to use SCS became non-significant while the direct effect was also non-significant.</ns0:p><ns0:p>Finally, in the DSS model, as presented in Table <ns0:ref type='table'>7</ns0:ref>, in the presence of mediating effect of trust and security, the effect of privacy on attitude towards DSS slightly decreased from (β = 0.6; T = 6.134; P < 0.001 while excluding both trust and security) to (β = 0.5; T = 2.319; P < 0.1) with the variance accounted for (VAF) value of 0.839. With 83.9% VAF, trust and security had a partial mediation effect between privacy and attitude towards DSS. While analyzing the individual mediating effects between privacy and attitudes towards DSS, trust alone had positive significant effect (β = 0.316; T = 2.726; P < 0.01), but security alone had no significant effect (β = -0.01; T = 0.053; P > 0.1). So, our finding suggested that only trust played a crucial mediating role while security had no significant effect between privacy and attitudes toward DSS. In the same DSS model, no mediation effect was observed for trust between security and attitudes towards DSS. Furthermore, in the same DSS model, with the presence of mediating effect of perceived usefulness, the effect of quality of system on intention to use DSS reduced from (β = 0.66; T = 7.82; P < 0.001 while excluding usefulness) to (β = 0.212; T = 1.846; P < 0.1) with the variance accounted for (VAF) value of 1.05. Therefore, perceived usefulness had a complete significant mediating effect between quality of system and intention to use DSD since after adding perceived usefulness predictor as a mediator, the indirect effect on intention to use DSD became significant while the direct effect was insignificant. This suggested that the indirect significant path between quality of system and intention to use DSD was contributed by perceived usefulness predictor construct. In addition, with the presence of mediating effect of perceived usefulness, the effect of ease of use on intention to use reduced from (β = 0.588; T = 6.385; P < 0.001 while excluding usefulness) to (β = 0.242; T = 1.946; P < 0.1) with the variance accounted for (VAF) value of 3.507. 
Therefore, perceived usefulness had a complete mediating effect between ease of use and intention to use DSD since after adding perceived usefulness predictor as a mediator, the indirect effect on intention to use DSD became significant while the direct effect was non-significant. This suggested that the indirect significant path between perceived ease of use and intention to use DSD was contributed by perceived usefulness predictor construct.</ns0:p></ns0:div>
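The mediation decisions above rest on the variance accounted for (VAF), the ratio of the indirect effect to the total effect. The following sketch shows that bookkeeping with placeholder numbers; the 20%/80% cut-offs for no/partial/full mediation follow the common convention that matches how the VAF values are interpreted above.

def vaf(indirect, direct):
    # Variance accounted for: indirect effect over total (direct + indirect) effect.
    total = direct + indirect
    return indirect / total if total != 0 else float('nan')

def classify_mediation(indirect, direct):
    v = vaf(indirect, direct)
    if v < 0.2:
        kind = 'no mediation'
    elif v <= 0.8:
        kind = 'partial mediation'
    else:
        kind = 'full (complete) mediation'
    return round(v, 3), kind

# Placeholder values for a privacy -> attitude path with trust/security as mediators.
print(classify_mediation(indirect=0.29, direct=0.12))   # ~0.707 -> partial mediation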
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The purpose of this study is to evaluate the user acceptance of a working blockchain-based system by observing the attributes affecting the development of users' attitudes and intention to use the system. We achieved the goal of our research by testing the augmented TAM with a Trust model on our application (BBS) that is built using blockchain technology. The empirical study validates our research model and supports most of the research hypotheses that were set considering the aim of this study. We also identified different issues influencing users' attitudes and intentions to adopt BBS by considering observed facts from the causal relationships and their implications. According to <ns0:ref type='bibr' target='#b35'>(Gefen et al., 2003)</ns0:ref>, extending TAM with trust model is well justified for its effectiveness in improving the predictive power of the explored issues associated with the acceptance of online services. BBS can be considered as a set of online services, so applying the TAM augmented with trust, as we did in our study, is justified.</ns0:p><ns0:p>The major contribution of our study to the existing literature of blockchain and distributed ledger technologies is to uncover the dimensions and role of trust alongside primary TAM-based design predictors and their causal relationship with users' attitudes and behavioral intention to accept such technologies.</ns0:p><ns0:p>According to the results from our research, TAM-based predictors or trust constructs cannot be applied uniformly to BBS. Depending on the specifics of the BBS, the relationships between perceived trust, perceived security, perceived privacy and attitudes towards the system might change. In this study, there was a customer-specific system: SCS and a company-specific system: DSS. Every participant who completed the post-survey for DSS also completed the postsurvey for SCS with 66 participants completing the post-survey for SCS and 53 participants completing the post-survey for DSS. There was no major difference between the users of each system that could lead to the difference between the responses of the two surveys. With the SCS, the user engaged with the point of view of a customer, whereas the DSS had the user engaged the system as an enterprise's system administrator.</ns0:p><ns0:p>A previous study by <ns0:ref type='bibr' target='#b6'>(Buchanan et al., 2007)</ns0:ref> suggests that attitudinal privacy, in the privacy model, correlates significantly with behavioral privacy-general caution but not significantly with the technical protection factor. The findings of our current research also indicate comparable results. Users who are concerned with their data privacy tend to be more cautious and careful about protecting it, however, if the users are technically competent, they have already used tools to protect their privacy such as clearing the browser's cache and history, using spyware etc., so they become less concerned about their privacy infringement.</ns0:p><ns0:p>Based on our research findings, perceived ease of use does not impact behavioral intention to use the actual BBS unlike in our previous study on the blockchain-based prototype model, where ease of use was significant in the initial stage <ns0:ref type='bibr' target='#b84'>(Shrestha & Vassileva, 2019a)</ns0:ref>. This is because users perceive BBS, a user-friendly web application, easier to learn and operate. 
Based on representative literature such as <ns0:ref type='bibr' target='#b53'>(Liu, Chen, Sun, Wible, & Kuo, 2010)</ns0:ref>, UI design is the most significant item that affects perceived ease of use. Users, instead of being more concerned about learning to use the system, are concerned about the usefulness and overall performance of the BBS. Previous studies by <ns0:ref type='bibr' target='#b92'>((Venkatesh, Morris, Davis, & Davis, 2003)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>(Chan & Lu, 2004)</ns0:ref>; <ns0:ref type='bibr' target='#b65'>(Pikkarainen, Pikkarainen, Karjaluoto, & Pahnila, 2004)</ns0:ref>; <ns0:ref type='bibr' target='#b70'>(Roca et al., 2009)</ns0:ref>) confirm that usability (ease of use) remains non-significant to develop an intention to use the system.</ns0:p><ns0:p>According to the results of our study, we deduce perceived ease of use and quality of BBS as significant predictors of the usefulness construct. When users find BBS easier to use and believe they can be skillful in using it, they will consider the system as more useful to improve their performance and productivity. This is also confirmed by previous studies <ns0:ref type='bibr' target='#b35'>(Gefen et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b53'>Liu et al., 2010</ns0:ref>). In our system, SCS allows customers to set their data sharing preferences and receive incentives for sharing their data as per the smart contracts, while DSS guarantees companies that the customer data they access have integrity and confirm provenance. So, users of each system, who feel more satisfied with these features, develop a higher understanding of its perceived usefulness. Eventually, with positive feelings about the usefulness of the BBS, users develop a stronger behavioral intention to accept the system. Since the quality of system has an insignificant direct effect on the intention to use the system for DSS, its effect through perceived usefulness is found out to be a significant positive effect in our study which is per the suggestions made by <ns0:ref type='bibr' target='#b23'>(DeLone & McLean, 1992)</ns0:ref>.</ns0:p><ns0:p>Moreover, the empirical results of our study also confirm a significant positive effect of the users' attitudes on their intention to use the BBS and suggest that the most important antecedent of attitudes towards using BBS is trust which is also supported by the previous studies <ns0:ref type='bibr' target='#b4'>(Bhattacherjee, 2002)</ns0:ref>, which confirms that the trust predictor significantly influences the user's decision to adopt the online services. Therefore, familiarity with the significance of the underlying blockchain technology and the honesty of the companies to keep its promises of protecting privacy, securing information and incentivizing customers for sharing their data bring a higher level of trust and stimulate positive attitudes of customers towards using the SCS. Similarly, trusting the blockchain technology for its integrity and dependability significantly improves the company's attitudes towards adopting the DSS.</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b72'>(Shin, 2010)</ns0:ref>, trust has a moderating effect on perceived security and perceived privacy when it comes to adopting social networking sites. Perceived security has a mediating effect on perceived privacy that correlates to trust <ns0:ref type='bibr' target='#b69'>(Rios et al., 2017)</ns0:ref>. 
The findings from our study suggest that perceived security has a direct effect on trust in the context of the prototype model and DSS. Outside of this, there is no significant relationship between security and other constructs. Perceived privacy has a direct effect on user trust and perceived security, which reinforces the findings by <ns0:ref type='bibr' target='#b69'>(Rios et al., 2017)</ns0:ref> that perceived security and perceived privacy are related. Based on our findings, the direct effect of perceived privacy on users' attitudes towards BBS is only significant for SCS and is moderated by trust in all pre-test, SCS and DSS models.</ns0:p><ns0:p>Our findings suggest that the influence of perceived privacy and perceived security depends strongly on which blockchain-based system users interact with. When answering the initial pre-test survey, participants have no system to base their ideas on. So, security, privacy, trust and BBS become abstract concepts. As abstract concepts, participants believe privacy affects security, security and privacy affect trust, and trust affects their intention to use the system. However, they are not aware of any direct effect of privacy and security on their choice to use the system.</ns0:p><ns0:p>In our study, we see that after using the SCS, there is a significant effect of perceived privacy on the user's attitude towards BBS. Yet, the pre-test and DSS survey results show that participants feel perceived privacy does not positively affect their attitudes towards BBS. Perceived privacy's effect on users' attitudes towards BBS is only significant with a customer-specific BBS like SCS but not significant with a company-specific BBS like DSS. However, trust has either a partial or complete mediating role in all kinds of BBS, which is consistent with prior research <ns0:ref type='bibr' target='#b72'>(Shin, 2010)</ns0:ref>.</ns0:p><ns0:p>Based on the initial pre-test survey results, we deduce that participants feel security protection mechanisms are an important indicator of trust in the system. However, after using the SCS, we learn that perceived security tends to be an insignificant predictor of trust. For the DSS, the effect of perceived security on trust is once again significant. It may be because, after experiencing the real-life blockchain-based system, respondents using the SCS become aware of the underlying security infrastructure of blockchain and smart contracts, but once they learn that the business process models deployed via smart contracts are committed on a public blockchain, they may care more about privacy and think less about the underlying security. Because they are less concerned about security and instead want control over their data, the relative significance of perceived privacy to trust in SCS is higher for these users. On the other hand, respondents experiencing DSS to access customer data may not care much about privacy since they are already putting their information through transparent processes for customers and other enterprises. Instead, they may care more about secure transactions, mitigating anomalies and malicious behavior in their consortium network, and cyber-resilient smart contracts.
Therefore, perceived security may significantly affect trust in an abstract context, but with a specific context, it may be significant for a model like SCS and may not be significant for a model like DSS.</ns0:p><ns0:p>Prior research on the effect of perceived security and perceived privacy on user trust are mixed. <ns0:ref type='bibr' target='#b71'>(Shin, 2019)</ns0:ref> found a significant moderating effect of security on trust, but participants had no real interaction with a system. Studies on non-blockchain online services had comparable results. <ns0:ref type='bibr' target='#b56'>(McCole, Ramsey, & Williams, 2010)</ns0:ref> found that perceived privacy and perceived security moderates the effect of trust. <ns0:ref type='bibr' target='#b27'>(Eastlick, Lotz, & Warrington, 2006)</ns0:ref> empirically showed that the relationship between privacy concerns and trust was the third strongest of all relationships studied. <ns0:ref type='bibr' target='#b14'>(Chellappa & Pavlou, 2002)</ns0:ref> argued that perceived security is a stronger predictor of trust. All four of these studies were abstract and did not have participants engage with a real system before answering their survey. These results support our initial pre-test results. Without interacting with any system, participants often consider privacy, security, and trust to be strongly related.</ns0:p><ns0:p>In previous studies, where participants engaged with online services such as online shopping, perceived security had a stronger effect. Both <ns0:ref type='bibr' target='#b2'>(Belanger, Hiller, & Smith, 2002)</ns0:ref> and <ns0:ref type='bibr' target='#b45'>(Kim, Steinfield, & Lai, 2008)</ns0:ref> found that perceived security had a stronger effect than perceived privacy on consumer behavior. <ns0:ref type='bibr' target='#b70'>(Roca et al., 2009)</ns0:ref> found that perceived privacy did not influence trust, but they did not consider the influence of security factors moderating privacy concerns in their model based on extended TAM. These do not align with our findings from when participants used the SCS. Our study found security has no significant relationship to trust in SCS, while privacy significantly affects trust and attitudes towards BBS.</ns0:p><ns0:p>The discrepancy between results from abstract studies and studies with concrete systems shows how important it is to focus on the latter. Although the abstract studies show there was a strong relationship between trust, privacy, and security, the studies with actual eCommerce systems have mixed and inconclusive results. Furthermore, studies on eCommerce systems focus on the customer. Few relevant studies focus on the company's trust and its intention to use the technology. Therefore, we cannot find other results to compare to the current study's finding that for DSS perceived security positively affects trust in BBS, and trust completely mediates the influence of privacy on attitudes towards adopting the BBS. Also, based on our pre-test and posttest results, there is no mediating effect of security over the perceived privacy on the users' attitudes towards BBS. 
Further study is needed with specific types of BBS to see if there are more BBS types other than customer-specific and company-specific and to better understand which trust construct is significant for each type of system.</ns0:p><ns0:p>Our study also brings a methodological contribution to the literature with the use of partial least square structural equation modelling (PLS-SEM) to analyze the user acceptance of the concrete blockchain-based application. PLS is component-based and can model the latent constructs under conditions for smaller sample sizes by maximizing the explained variance of dependent indicators and use multiple regressions to observe the effect of predictors on the response variables <ns0:ref type='bibr' target='#b16'>(Chin et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b37'>Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, 2013)</ns0:ref>. Furthermore, this study contributes to the methodology by adopting Dillon-Goldstein's rho, for estimating internal consistency reliability, which is suggested as an always better choice than conservative Cronbach's alpha in the presence of skew items and smaller samples <ns0:ref type='bibr' target='#b24'>(Demo et al., 2012)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>The main limitation of our study is that our findings are based on smaller targeted population size and only on two specific types of BBS. Therefore, the results may not generalize to the broader population and to any type of BBS. Further study may consider using a larger sample with specific types of BBS; to explore BBS types other than customer-specific and companyspecific, and to better understand which trust antecedent is significant for each type of system. Moreover, an obvious limitation comes from using the same participants for both systems. With most respondents who participated in the DSS study also completed the SCS study, they also satisfy the inclusion criteria of DSS while doing the SCS study. The DSS study had participants only with a technology background. Also by taking on separate roles, participants may have experienced different motivations that skewed their survey results, so further study is needed to draw any conclusions about the role users take and what factors influence their desire to use the specific BBS. Likewise, the results showed that almost 79% of participants for SCS had a basic knowledge of blockchain technology while 19% had advanced knowledge and some of the participants belonged to academia. To address this, we need to consider an underlying effect of participants' background on their behavioral intention to use BBS. Therefore, this study offers an opportunity for future exploration of BBS to consider multigroup analysis based on participants' demography and background knowledge when analyzing the endogenous and exogenous variables, which will further explain the user acceptance of the BBS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, we presented the augmented TAM with trust model on our real-life blockchainbased system (BBS), which comprises two subsystems: Shopping Cart System (SCS) and Data Sharing System (DSS). The main contribution of our study to the body of knowledge is that, to the best of our knowledge, this study is the first to examine the augmented TAM with trust model using real-life concrete blockchain-based applications. The empirical study validated our research model and supported most of the research hypotheses that we set based on our research. Our findings suggested that TAM-based predictors and trust constructs cannot be applied uniformly to BBS. Depending on the specifics of the BBS, the relationships between perceived trust, perceived security, perceived privacy and attitudes towards the system might change. In SCS trust was the strongest determinant of attitudes towards the system, but in DSS, privacy was the strongest determinant of attitudes towards the system. Quality of system had the strongest total effect on intention to use SCS, while perceived usefulness had the strongest total effect on intention to use DSS. Trust significantly influenced the users' attitudes towards both types of BBS, while security did not have any effect on users' attitudes toward BBS. In SCS, privacy positively affected trust, but security had no significant effect on trust, whereas, in DSS, both privacy and security significantly influenced trust. In both BBS, trust had a moderating effect on privacy that correlated directly with attitudes towards BBS, whereas security had no mediating effect between privacy and attitudes towards BBS. Hence, we recommend that while implementing and upgrading blockchain-based solutions, the decision-makers should carefully consider the trust patterns and address the associated privacy challenges of the users. Designers and decision-makers for the industries should know that the effect of trust antecedents is contextdependent whether it is customer or company-oriented. For the development of customeroriented BBS, the effect of a privacy-aware system to influence users' attitudes toward BBS is relevant. For the development of a company-oriented BBS, additional security measures must also be carefully addressed to significantly influence users' trust in BBS, which in turn positively leads to a higher intention to adopt the system. In future work, we plan to investigate multigroup analysis based on participants' background knowledge when analyzing the latent variables and perform the qualitative analysis based upon the respondents' feedback, which will further explain the user acceptance of the BBS. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>To offer direct payment with native Ethereum tokens thereby enabling privacy and confidentiality. To create proof of the existence of every transaction. To give the users full transparency over who accesses their data, when and for what purpose. To enable companies to share customers' data among others in the consortium network. To provide incentives to customers in real-time for sharing their data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>RQ1: Which of the design attributes is/are the strongest antecedents of the attitudes towards BBS?  RQ2: Which of the design attributes is/are the strongest antecedents of the intention to use BBS?  RQ3: Is the influence of privacy on attitudes towards BBS mediated by both security and/or trust?  RQ4: Is the influence of security on attitudes towards BBS mediated by trust?</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,165.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,285.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,301.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,300.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,314.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,314.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,525.00,318.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,335.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Exploratory Factor Analysis</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "March 19, 2021
Dear Editor,
We thank the editor and the reviewers for the valuable and constructive comments on the manuscript. We have edited the manuscript to address all the concerns. The specific responses to the editor’s and reviewers’ comments are listed below:
Comment 1: how this work adds to their previous work (Shrestha & Vassileva, 2019a)? What are the new ideas in this paper that are 'useful/reusable in general' for the broader community, that did not already exist in the previous paper?
Response: The current study evaluates user acceptance of a real-life blockchain-based solution, in contrast to the previous studies, which were based on prototype models. This study also augments the TAM with additional constructs (Trust, Perceived Security and Perceived Privacy) for technology adoption and presents the total effect and mediation analyses. The findings are informative and potentially useful for designing new blockchain-based systems.
Comment 2: Does the path weighting structural modeling approach has been well validated? How to separate correlational vs causal relationships between the TAM constructs?
Response: We have used the validated constructs for our study. Before conducting the main study, we did a pilot study with 14 researchers to evaluate the suitability of adapting the already validated questionnaire.
Correlation in this study is a relationship between two or more variables; it does not necessarily imply cause and effect, but it does have predictive value. Based on previous research conducted with the classical TAM, we formulated hypotheses to express causal relationships, such as that perceived usefulness and perceived ease of use significantly influence user acceptance behavior. The TAM hypothesizes that actual use of the system is determined by behavioral intention to use, which is in turn influenced by the user's attitude toward using the system and by the perceived usefulness and perceived ease of use of the system.
Comment 3: Justification of applying path structural modeling scheme to validate decision model of technology adoption.
Response: In the TAM, UX design attributes such as perceived ease of use and perceived usefulness have been found to be strong determinants of the user's behavioral intention to adopt information systems (Davis, 1989), and the recent study by (Shin, 2019) shows the importance of the Trust-Security-Privacy model in the user acceptance of blockchain-based systems. We therefore adopted PLS-SEM in our study because it is a useful technique for estimating complex cause-effect relationship models with latent variables, and we aimed to model the latent constructs under conditions of non-normality and small sample sizes.
Comment 4: Questionnaire related to Quality of the system and Attitudes towards using the system.
Response: We used the validated construct based on the survey carried out for Information technology by (Davis et. al, 1992) and (Koh et. al, 2010), which presented that the user would be satisfied with a good quality of information/system. (DeLone & McLean, 1992) refers to QOS as the technical details of the system interface and system’s quality that produces output response such that the technology attributes singularly or jointly influence user satisfaction. So, the items used relate to the satisfaction level and the degree to which the user can be dependable to using the system. Those studies also didn’t mention the security and privacy constructs. So, we used the separate items to represent privacy and security constructs as presented by (Buchanan et. al, 2007) and (Shin, 2010, 2019). We have dependability-related items to represent Trust construct as adopted from (Jian, Bisantz, & Drury, 2000). We will observe the correlation between privacy and security constructs with this system quality construct in our next study.
For the Attitude construct, we adapted the items from the validated questionnaire by (Shin, 2017, 2019); in our study they are:
• I believe that using the blockchain-based system would be beneficial for me.
• In my opinion, it would be desirable for me to use the blockchain-based system.
• It would be good for me to use the blockchain-based system.
We have updated the content of the manuscript to clearly define this construct.
Comment 5: Criteria applied to include/exclude data.
Response: The inclusion criterion for the SCS survey was that any individual with knowledge of the internet could participate. The inclusion criterion for the DSS survey was that participants should have a technical (computer science or engineering) background, because the DSS includes technical aspects that are best understood by software developers or system administrators.
Comment 6: How much can we trust the results from students, who usually do not have industrial experience, to answer company-related questions?
Response: We have both students and industry people who participated in the DSS (company-oriented) survey. One of the inclusion criteria was that the participants should be from a technical (computer science or engineering) background because the DSS includes technical aspects that only the software developer or system administrator could understand better. Also, the questions weren’t related to the business model. So, the students with sufficient system technical knowledge are expected to answer those questions but some research in the literature shows the student sample may or may not introduce bias in the final result. In our study, we haven’t conducted two separate studies based on the student sample and non-student sample. It is a really interesting topic, and we aim to conduct another study based upon different demographic groups. We have updated the manuscript to reflect this issue of possible bias as a limitation.
Comment 7: line 870, it is because … Without follow-up interviews to confirm the assumptions, the authors shall write “it may be because …”
Response: We have updated the manuscript accordingly.
Comment 8: related works mixed within the introduction, better to merge the referred literature within the later 'Related Work subsection.
Response: We have updated the manuscript accordingly.
Comment 9: it is better to have a list of contributions Authors claim after addressing the issues.
Response: We have updated the manuscript accordingly.
Comment 10: The contents style as in line 125, 127, 146, etc., preceded with '..our/current study..' looks bringing an impression that Authors have struggled to fit the survey type
Response: The term BBS had already been used in an article by (Jun, 2018) without any detailed explanation, so we cited that work and explained BBS in terms of the system used in this study. We have updated the manuscript to present BBS as a general-purpose blockchain-based service.
Comment 11: Improve the organization of the manuscript.
Response: We have reorganized the structure of the manuscript accordingly.
We believe that the manuscript is now suitable for publication in the PeerJ CS journal.
Sincerely,
Ajay Shrestha
Department of Computer Science
University of Saskatchewan
Saskatoon, SK, Canada
On behalf of all authors.
" | Here is a paper. Please give your review comments after reading it. |
76 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Semi-supervised learning combines supervised and unsupervised learning approaches to learn predictive models from both labeled and unlabeled data. It is most appropriate for problems where labeled examples are difficult to obtain but unlabeled examples are readily available (e.g., drug repurposing). Semi-supervised predictive clustering trees (SSL-PCTs) are a prominent method for semi-supervised learning that achieves good performance on various predictive modeling tasks, including structured output prediction tasks. The main issue, however, is that the learning time scales quadratically with the number of features. In contrast to axis-parallel trees, which only use individual features to split the data, oblique predictive clustering trees (SPYCTs) use linear combinations of features. This makes the splits more flexible and expressive and often leads to better predictive performance. With a carefully designed criterion function, we can use efficient optimization techniques to learn oblique splits. In this paper, we propose semi-supervised oblique predictive clustering trees (SSL-SPYCTs). We adjust the split learning to take unlabeled examples into account while remaining efficient. The main advantage over SSL-PCTs is that the proposed method scales linearly with the number of features. The experimental evaluation confirms the theoretical computational advantage and shows that SSL-SPYCTs often outperform SSL-PCTs and supervised PCTs both in single-tree setting and ensemble settings. We also show that SSL-SPYCTs are better at producing meaningful feature importance scores than supervised SPYCTs when the amount of labeled data is limited.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The most common tasks in machine learning are supervised and unsupervised learning. In supervised learning, we are presented with a set of examples described with their properties (i.e., descriptive variables or features) as well as with a target property (i.e., output variables, target variables, or labels). The goal of a supervised learning method is to learn a mapping from the descriptive values to the output values that generalizes well to examples that were not used for learning. In unsupervised learning, on the other hand, no output values are provided for the examples. Instead, unsupervised methods aim to extract some underlying structure of the examples (e.g., discover clusters of similar examples, learn low dimensional representations, etc.).</ns0:p><ns0:p>Semi-supervised learning combines these two approaches <ns0:ref type='bibr'>(Chapelle et al., 2006)</ns0:ref>. We are presented experiments, so labeling the examples (compounds) is expensive. On the other hand, millions of unlabeled compounds are present and described in online databases. Ideally, a semi-supervised method can use a handful of labeled compounds, combine them with the unlabeled compounds, and learn a model that can predict the effect of a compound on the disease progression, to facilitate the discovery of a novel drug.</ns0:p><ns0:p>The most common approaches to semi-supervised learning are wrapper methods <ns0:ref type='bibr' target='#b17'>(Van Engelen and Hoos, 2020)</ns0:ref>, such as self-training <ns0:ref type='bibr' target='#b5'>(Kang et al., 2016)</ns0:ref>, where a model iteratively labels the unlabeled examples and includes these pseudo-labels in the learning set in the next iteration. Alternatively, in cotraining <ns0:ref type='bibr' target='#b18'>(Zhou and Li, 2007)</ns0:ref> there are two models that iteratively label the data for each other. Typically, these two models are different or at least learn on different views of the data. Among the intrinsically semi-supervised methods (Van Engelen and Hoos, 2020), semi-supervised predictive clustering trees <ns0:ref type='bibr' target='#b8'>(Levatić, 2017)</ns0:ref> are a prominent method. They can be used to solve a variety of predictive tasks, including multi-target regression and (hierarchical) multi-label classification <ns0:ref type='bibr' target='#b8'>(Levatić, 2017;</ns0:ref><ns0:ref type='bibr' target='#b9'>Levatić et al., 2017</ns0:ref><ns0:ref type='bibr' target='#b10'>Levatić et al., , 2018;;</ns0:ref><ns0:ref type='bibr' target='#b11'>Levatić et al., 2020)</ns0:ref>. They achieve good predictive performance and, as a bonus, the learned models can be interpreted, either by inspecting the learned trees or calculating feature importances from ensembles of trees <ns0:ref type='bibr' target='#b13'>(Petković et al., 2020)</ns0:ref>. However, the method scales poorly with data dimensionalitythe model learning can take a very long time on datasets with many features or targets.</ns0:p><ns0:p>Standard decision/regression trees <ns0:ref type='bibr'>(Breiman et al., 1984)</ns0:ref> split data based on the features in a way that minimizes the impurity of the target in the resulting clusters (e.g., variance for regression, entropy for classification). In the end nodes (leaves), predictions for the target are made. 
Predictive clustering trees <ns0:ref type='bibr' target='#b1'>(Blockeel et al., 1998</ns0:ref><ns0:ref type='bibr' target='#b0'>, 2002)</ns0:ref> (PCTs) generalize standard trees by differentiating between three types of attributes: features, clustering attributes, and targets. Features are used to divide the examples; these are the attributes encountered in the split nodes. Clustering attributes are used to calculate the heuristic that guides the search of the best split at a given node, and targets are predicted in the leaves. The role of the targets in standard trees is therefore split between the clustering attributes and targets in PCTs. In theory, the clustering attributes can be selected independently of the features and the targets. However, the learned tree should make accurate predictions for the targets, so minimizing the impurity of the clustering attributes should help minimize the impurity of the targets. This attribute differentiation gives PCTs a lot of flexibility. They have been used for predicting various structured outputs <ns0:ref type='bibr' target='#b7'>(Kocev et al., 2013)</ns0:ref>, including multi-target regression, multi-label classification, and hierarchical multi-label classification. Embeddings of the targets have been used as clustering attributes in order to reduce the time complexity of tree learning <ns0:ref type='bibr' target='#b14'>(Stepišnik and Kocev, 2020a)</ns0:ref>. Semi-supervised PCTs use both targets and features as clustering attributes. This makes leaves homogeneous in both the input and the output space, which allows unlabeled examples to influence the learning process.</ns0:p><ns0:p>PCTs use individual features to split the data, which means the split hyperplanes in the input space are axis-parallel. SPYCTs <ns0:ref type='bibr' target='#b15'>(Stepišnik and Kocev, 2020b;</ns0:ref> <ns0:ref type='bibr' target='#b16'>Stepišnik and Kocev, 2020)</ns0:ref> [...] of clustering attributes, and can exploit sparse data to speed up computation. The recursive procedure used to grow a tree is:</ns0:p><ns0:formula xml:id='formula_0'>grow_tree(X_l, X_u, Y, c):
    w, b = get_split_hyperplane(X_l, X_u, Y, c)
    score = Xw + b
    rows1 = {i | score_i > 0}
    rows2 = {i | score_i <= 0}
    if acceptable_split(rows1, rows2) then
        left_subtree = grow_tree(X_l[rows1], X_u[rows1], Y[rows1], c)
        right_subtree = grow_tree(X_l[rows2], X_u[rows2], Y[rows2], c)
        return Node(w, b, left_subtree, right_subtree)
    else
        return Leaf(prototype(Y))</ns0:formula><ns0:p>In this paper, we propose SPYCTs for semi-supervised learning. We follow the same semi-supervised approach that regular PCTs do, which includes features in the heuristic function for evaluating the quality of a split. This makes the improved scaling of SPYCTs over PCTs especially beneficial, which is the main motivation for our proposal. We modify the oblique split learning objective functions of SPYCTs to account for missing target values. We evaluate the proposed approach on multiple benchmark datasets for different predictive modeling tasks.</ns0:p><ns0:p>In the remainder of the paper, we first describe the proposed semi-supervised methods and present the experimental setting for their evaluation. Next, we present and discuss the results of our experiments and, finally, conclude the paper by providing several take-home messages.</ns0:p></ns0:div>
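A minimal Python rendering of this recursive procedure is sketched below. It assumes hypothetical helpers get_split_hyperplane and acceptable_split (their behaviour is defined later in the paper and in the authors' implementation), and that the labeled rows come first in the stacked feature matrix, so this is an illustrative outline rather than the reference code.

import numpy as np

class Node:
    def __init__(self, w, b, left, right):
        self.w, self.b, self.left, self.right = w, b, left, right

class Leaf:
    def __init__(self, prototype):
        self.prototype = prototype               # column-wise mean of Y in the leaf

def grow_tree(X_l, X_u, Y, c):
    # Learn an oblique split from labeled and unlabeled features plus the targets.
    w, b = get_split_hyperplane(X_l, X_u, Y, c)  # hypothetical helper, defined elsewhere
    X = np.vstack([X_l, X_u])                    # labeled rows first
    score = X @ w + b
    go_left = score > 0
    if acceptable_split(go_left, ~go_left):      # hypothetical stopping test
        n_l = X_l.shape[0]
        left = grow_tree(X_l[go_left[:n_l]], X_u[go_left[n_l:]], Y[go_left[:n_l]], c)
        right = grow_tree(X_l[~go_left[:n_l]], X_u[~go_left[n_l:]], Y[~go_left[:n_l]], c)
        return Node(w, b, left, right)
    return Leaf(Y.mean(axis=0))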
<ns0:div><ns0:head>METHOD DESCRIPTION</ns0:head><ns0:p>In this section, we present our proposal for semi-supervised learning of SPYCTs (SSL-SPYCTs). We start by introducing the notation used in the manuscript. Let X l ∈ R L×D and X u ∈ R U×D be the matrices containing the D features of the L labeled and U unlabeled examples, respectively. Let Y ∈ R L×T be the matrix containing the T targets associated with the L labeled examples. And let X = [(X l ) T (X u ) T ] T ∈ R (L+U)×D be the matrix combining the features of both labeled and unlabeled examples. Finally, let p ∈ R D+T be the vector of clustering weights, used to put different priorities to different clustering attributes (features and targets) when learning a split.</ns0:p><ns0:p>There are two variants of SPYCTs that learn the split hyperplanes in different ways. Manuscript to be reviewed</ns0:p></ns0:div>
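To make the notation concrete, the following sketch builds the matrices used throughout this section from randomly generated data; the sizes L, U, D and T are arbitrary illustration values, not settings from the experiments.

import numpy as np

L, U, D, T = 50, 500, 20, 3          # labeled/unlabeled examples, features, targets
rng = np.random.default_rng(0)

X_l = rng.normal(size=(L, D))        # features of labeled examples
X_u = rng.normal(size=(U, D))        # features of unlabeled examples
Y = rng.normal(size=(L, T))          # targets of labeled examples
X = np.vstack([X_l, X_u])            # all features, shape (L + U, D)
p = np.ones(D + T)                   # clustering weights over features and targets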
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>on both features and targets, the cluster centroids consist of feature and target parts, i.e., </ns0:p><ns0:formula xml:id='formula_2'>c 0 = X l i,: Y i,: ∈ R D+T , c 1 = X l j,: Y j,: ∈ R D+T . Next,</ns0:formula><ns0:formula xml:id='formula_3'>d(i, j) = D ∑ k=1 p k (X j,k − c i k ) 2 + α T ∑ k=1 p D+k (Y j,k − c i D+k ) 2 ,</ns0:formula><ns0:p>where i ∈ {0, 1} is the cluster indicator, 1 ≤ j ≤ L + U is the example index, and α = 1 if the example is labeled (i.e., j ≤ L) and α = 0 if it is unlabeled. The examples are split into two clusters according to the closer centroid. In the case of ties in the distance, the examples are assigned (uniformly) randomly to a cluster.</ns0:p><ns0:p>Let s ∈ {0, 1} L+U be the vector indicating the cluster membership. The new centroids are then the means of the examples assigned to each cluster. The means of the target parts of the centroids are calculated only using the labeled examples, i.e.,</ns0:p><ns0:formula xml:id='formula_4'>c i j = ∑ L+U k=1 1[s k = i]X k, j ∑ L+U k=1 1[s k = i] , if 1 ≤ j ≤ D, c i j = ∑ L k=1 1[s k = i]Y k, j−D ∑ L k=1 1[s k = i] , if D < j ≤ D + T.</ns0:formula><ns0:p>This procedure is repeated for a specified number of iterations. After the final clusters are determined, a linear SVM is used to approximate this split based on the features. Specifically, the following optimization problem is solved:</ns0:p><ns0:formula xml:id='formula_5'>min w,b ||w|| 1 +C L+U ∑ k=1 max(0, 1 − s k (X k,: • w + b)) 2 ,</ns0:formula><ns0:p>where parameter C ∈ R determines the strength of regularization.</ns0:p></ns0:div>
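A compact Python sketch of this SVM variant is given below: a two-centroid clustering in the weighted feature-plus-target space, where the target term is only counted for labeled examples, followed by an L1-regularised squared-hinge linear SVM (scikit-learn's LinearSVC) that approximates the resulting partition using the features alone. The iteration count and C are placeholder values, degenerate single-cluster outcomes and distance ties are not handled, and the snippet is an illustration of the procedure rather than the authors' implementation.

import numpy as np
from sklearn.svm import LinearSVC

def svm_variant_split(X_l, X_u, Y, p, iters=10, C=1.0, seed=0):
    rng = np.random.default_rng(seed)
    L, D = X_l.shape
    X = np.vstack([X_l, X_u])                    # labeled rows first, shape (L+U, D)
    p_feat, p_targ = p[:D], p[D:]

    # Initial centroids: two randomly chosen labeled examples (feature and target parts).
    i, j = rng.choice(L, size=2, replace=False)
    cent_f = np.stack([X_l[i], X_l[j]])          # (2, D)
    cent_t = np.stack([Y[i], Y[j]])              # (2, T)

    for _ in range(iters):
        # Weighted squared distances; the target part only counts for labeled examples.
        d = ((X[:, None, :] - cent_f[None]) ** 2 * p_feat).sum(-1)        # (L+U, 2)
        d[:L] += ((Y[:, None, :] - cent_t[None]) ** 2 * p_targ).sum(-1)   # (L, 2)
        s = d.argmin(axis=1)                     # cluster indicator in {0, 1}
        for k in (0, 1):
            if np.any(s == k):
                cent_f[k] = X[s == k].mean(axis=0)
            if np.any(s[:L] == k):
                cent_t[k] = Y[s[:L] == k].mean(axis=0)

    # Approximate the final clustering with a sparse linear split on the features.
    svm = LinearSVC(penalty='l1', loss='squared_hinge', dual=False, C=C)
    svm.fit(X, s)
    return svm.coef_.ravel(), svm.intercept_[0]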
<ns0:div><ns0:head>Gradient variant</ns0:head><ns0:p>We start with randomly initialized weights (w) and bias (b) and calculate the fuzzy membership vector s = σ(Xw + b) ∈ [0, 1]^{L+U}. The value s_i tells us how much the corresponding example belongs to the 'positive' group, whereas the value 1 − s_i tells us how much it belongs to the 'negative' group. To calculate the impurity of a group, we calculate the weighted variance for every feature and every target. For the targets, only labeled examples are used in the calculation. Weighted variance of a vector v ∈ R^n with weights a ∈ R^n is defined as</ns0:p><ns0:formula xml:id='formula_6'>var(v, a) = \frac{\sum_{i}^{n} a_i (v_i - mean(v, a))^2}{A} = mean(v^2, a) - mean(v, a)^2,</ns0:formula><ns0:p>where A = \sum_{i}^{n} a_i is the sum of weights and mean(v, a) = \frac{1}{A} \sum_{i}^{n} a_i v_i is the weighted mean of v. The impurity of the positive group is then calculated as</ns0:p><ns0:formula xml:id='formula_7'>imp(s, p) = \sum_{k=1}^{D} p_k\, var(X_{:,k}, s) + \sum_{k=1}^{T} p_{D+k}\, var(Y_{:,k}, s).</ns0:formula><ns0:p>To get the impurity of the negative group imp(1 − s, p), we simply swap the fuzzy membership weights with 1 − s. The split fitness function we wish to optimize is then</ns0:p><ns0:formula xml:id='formula_8'>f(w, b) = S \cdot imp(s, p) + (L + U - S) \cdot imp(1 - s, p),</ns0:formula></ns0:div>
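Written out directly, the weighted variance, group impurity, and fitness function above look as follows in NumPy. In the actual method the fitness is minimized over w and b with Adam via automatic differentiation (the experimental setup mentions PyTorch); this plain sketch only evaluates the objective.

    import numpy as np

    def weighted_variance(v, a):
        # var(v, a) = mean(v^2, a) - mean(v, a)^2 with non-negative weights a
        A = a.sum()
        m = (a * v).sum() / A
        return (a * v * v).sum() / A - m * m

    def split_fitness(w, b, X, Y, p, L):
        # Fuzzy membership of every example in the "positive" group.
        s = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # sigma(Xw + b), shape (L+U,)
        D = X.shape[1]

        def impurity(weights):
            imp = sum(p[k] * weighted_variance(X[:, k], weights) for k in range(D))
            # targets contribute only through the labeled (first L) examples
            imp += sum(p[D + k] * weighted_variance(Y[:, k], weights[:L])
                       for k in range(Y.shape[1]))
            return imp

        S = s.sum()
        return S * impurity(s) + (len(s) - S) * impurity(1.0 - s)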
<ns0:div><ns0:p>The clustering weights are uniform for the targets for tasks of binary classification, multi-class classification, multi-label classification, regression, and multi-target regression. For hierarchical multi-label classification, the weights for target labels positioned lower in the hierarchy are smaller. This gives more importance to labels higher in the hierarchy when splitting the examples.</ns0:p><ns0:p>Features and clustering attributes are standardized to mean 0 and standard deviation 1 prior to learning each split. For the features, this is done to make split learning more stable. For the clustering attributes, this is performed before the application of the clustering weights, so that only clustering weights control the relative influences of the different clustering attributes on the objective function.</ns0:p><ns0:p>We also implement a parameter ω that determines the degree of supervision. The clustering weights corresponding to features (p_i for 1 ≤ i ≤ D) are scaled so that their sum is 1 − ω, and the clustering weights corresponding to targets (p_i for D < i ≤ D + T) are scaled so that their sum is ω. This enables us to determine the relative importance of features and targets when splitting the data. With the borderline values selected for ω (0 or 1), we get the extreme behavior in terms of the amount of supervision. Setting the value of ω to 0 means that the target impurity is ignored and tree construction is effectively unsupervised, i.e., without supervision. Alternatively, setting the value of ω to 1 means that feature impurity is ignored when learning splits, hence, the unlabeled examples do not affect the split selection. The tree construction in this case is fully supervised.</ns0:p><ns0:p>The splitting of the examples (i.e., the tree construction) stops when at least one of the following stopping criteria is reached. We can specify the minimum number of examples required in leaf nodes (at least one labeled example is always required, otherwise predictions cannot be made). We can also require a split to reduce the impurity by a specified amount or specify the maximum depth of the tree.</ns0:p><ns0:p>After the splitting stops, a leaf node is created. The prototype of the targets of the remaining examples is calculated and stored for use as the prediction for the examples reaching that leaf. Since the targets in SOP are represented as tuples/vectors, the prototypes are calculated as column-wise mean values of the targets (Y). They can be used directly as predictions (in regression problems), used to calculate the majority class (in binary and multi-class classification), or used to predict all labels with the mean above a certain threshold (in hierarchical and flat multi-label classification).</ns0:p><ns0:p>The time complexity of learning a split in standard PCTs is O(DN log N + NDK) <ns0:ref type='bibr' target='#b7'>(Kocev et al., 2013)</ns0:ref>, where K is the number of clustering attributes. For the SVM and gradient variants of SPYCTs, the time complexities are O(N(I_c K + I_o D)) and O(N I_o (D + K)), respectively (Stepišnik and Kocev, 2020), where I_o is the number of w, b optimization iterations and I_c is the number of clustering iterations (SVM variant). When learning SSL variants (SSL-PCTs and SSL-SPYCTs), clustering attributes consist of both features and targets, therefore K = D + T. This means that SSL-PCTs scale quadratically with the number of features, whereas both variants of SSL-SPYCTs scale linearly. SSL-SPYCTs are therefore much more computationally efficient, and can additionally take advantage of sparse data by performing calculations with sparse matrices.</ns0:p></ns0:div>
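A minimal sketch of how the ω parameter rescales a clustering weight vector p whose first D entries correspond to features and whose remaining entries correspond to targets.

    import numpy as np

    def scale_clustering_weights(p, D, omega):
        # Feature weights sum to 1 - omega, target weights sum to omega.
        p = np.asarray(p, dtype=float).copy()
        p[:D] *= (1.0 - omega) / p[:D].sum()
        p[D:] *= omega / p[D:].sum()
        return p

For example, scale_clustering_weights(np.ones(D + T), D, 0.75) puts three quarters of the total clustering weight on the targets and one quarter on the features.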
Our implementation of the proposed method is freely licensed and available for use and download at https://gitlab.com/TStepi/spyct.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL DESIGN</ns0:head><ns0:p>We evaluated our approach on 30 benchmark dataset for different predictive modeling tasks: binary classification (BC), multiclass classification (MCC), multi-label classification (MLC), and hierarchical multi-label classification (HMLC), single-target regression (STR) and multi-target regression (MTR).</ns0:p><ns0:p>The datasets are freely available and were obtained from the following repositories: openml 1 , mulan 2 , dtai-cs 3 and kt-ijs 4 . The selected datasets have diverse properties in terms of application domains, number of examples, number of features, and number of targets. Their properties and sources are presented in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>We focus on the comparison of our proposed SSL-SPYCT method with the original supervised method SPYCT and the semi-supervised learning of axis-parallel PCTs: the SSL-PCT <ns0:ref type='bibr' target='#b8'>(Levatić, 2017)</ns0:ref>. These two baselines are the most related supervised and semi-supervised methods of the proposed approach, respectively. For completeness, we also include supervised PCTs in the comparison. Note that SPYCTs and PCTs are the only available methods able to address all of the structured output prediction tasks in Table <ns0:ref type='table'>1</ns0:ref>. Details of the benchmark datasets used for the evaluation. The task column shows the predictive modeling task applicable to the datasets (BC is binary classification, MCC is multi-class classification, MLC is multi-label classification, HMLC is hierarchical multi-label classification, STR is single-target regression, MTR is multi-target regression), N is the number of examples, D is the number of features, and T is the number of targets (for MCC, it is the number of classes Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>This process is repeated 10 times so that each fold is used once as the test set. On the two MTR datasets that have fewer than 500 examples (atp1d and oes97), experiments with L = 500 are not performed.</ns0:p><ns0:p>To measure the predictive performance of the methods on STR and MTR datasets, we use the coefficient of determination</ns0:p><ns0:formula xml:id='formula_9'>R^2(y, \hat{y}) = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},</ns0:formula><ns0:p>where y is the vector of true target values, ȳ is their mean, and ŷ is the vector of predicted values. For MTR problems, we calculate the mean of R^2 scores per target. For BIN and MCC tasks, we use the F1 score, macro averaged in the MCC case.</ns0:p><ns0:p>Methods solving MLC and HMLC tasks typically return a score for each label and each example, a higher score meaning that an example is more likely to have that label. Let y ∈ {0, 1}^{n×l} be the matrix of label indicators and ŷ ∈ R^{n×l} the matrix of label scores returned by a method. We measured the performance of methods with the weighted label ranking average precision</ns0:p><ns0:formula xml:id='formula_10'>LRAP(y, \hat{y}) = \frac{1}{n} \sum_{i=0}^{n-1} \sum_{j : y_{ij} = 1} \frac{w_j}{W_i} \frac{L_{ij}}{R_{ij}},</ns0:formula><ns0:p>where L_{ij} = |{k : y_{ik} = 1 ∧ ŷ_{ik} ≥ ŷ_{ij}}| is the number of real labels assigned to example i that the method ranked higher than label j, R_{ij} = |{k : ŷ_{ik} ≥ ŷ_{ij}}| is the number of all labels ranked higher than label j, w_j is the weight we put on label j, and W_i is the sum of weights of all labels assigned to example i. For the MLC datasets, we put equal weights on all labels, whereas for the HMLC datasets, we weighted each label with 0.75^d, with d being the depth of the label in the hierarchy <ns0:ref type='bibr' target='#b7'>(Kocev et al., 2013)</ns0:ref>. For hierarchies that are directed acyclic graphs, the depth of a node is calculated as the average depth of its parent nodes plus one. The same weights are also used as the clustering weights for the targets for all methods.</ns0:p></ns0:div>
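Both measures can be computed directly from the definitions above; the NumPy sketch below assumes dense arrays and skips examples without any relevant labels. For the unweighted MLC case, scikit-learn's label_ranking_average_precision_score gives an equivalent reference point.

    import numpy as np

    def mean_r2(Y_true, Y_pred):
        # Coefficient of determination per target column, averaged (MTR); one column for STR.
        num = ((Y_true - Y_pred) ** 2).sum(axis=0)
        den = ((Y_true - Y_true.mean(axis=0)) ** 2).sum(axis=0)
        return float(np.mean(1.0 - num / den))

    def weighted_lrap(Y, S, label_weights):
        # Y: binary indicators (n x l), S: label scores (n x l), label_weights: length l.
        n = Y.shape[0]
        total = 0.0
        for i in range(n):
            true = np.flatnonzero(Y[i] == 1)
            if true.size == 0:
                continue                                   # undefined for label-free examples
            W_i = label_weights[true].sum()
            acc = 0.0
            for j in true:
                higher = S[i] >= S[i, j]
                L_ij = np.count_nonzero(higher[true])      # relevant labels ranked at least as high
                R_ij = np.count_nonzero(higher)            # all labels ranked at least as high
                acc += label_weights[j] * L_ij / R_ij
            total += acc / W_i
        return total / n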
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head></ns0:div>
<ns0:div><ns0:head>Predictive performance comparison</ns0:head><ns0:p>We first present the results obtained on the rf2 dataset in Figure <ns0:ref type='figure' target='#fig_7'>2</ns0:ref>. Here, the semi-supervised approach outperforms supervised learning for both SPYCT variants. This is the case in both single-tree and ensemble settings and for all considered numbers of labeled examples. These results demonstrate the potential of the proposed SSL methods.</ns0:p><ns0:p>For a high-level comparison of the predictive performance of the proposed SSL methods and the baselines, we use average ranking diagrams <ns0:ref type='bibr'>(Demsar, 2006)</ns0:ref>. The results are presented in Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>. The first observation is that SSL-SPYCT-GRAD achieves the best rank for all numbers of labeled examples in both single tree and ensemble settings. The only exception is single trees with 25 labeled examples, where it has the second-best rank, just slightly behind SSL-SPYCT-SVM. Additionally, SSL-SPYCT-SVM also ranks better than both its supervised variant and SSL-PCT for all values of L and both single tree and ensemble settings. For standard PCTs, the semi-supervised version performed better than the supervised version in a single tree setting with very few labeled examples (L = 25, 50); otherwise, their performances were similar. This is consistent with the previous studies <ns0:ref type='bibr' target='#b9'>(Levatić et al., 2017</ns0:ref><ns0:ref type='bibr' target='#b10'>, 2018;</ns0:ref><ns0:ref type='bibr' target='#b11'>Levatić et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Next, we dig deeper into the comparison of SSL-SPYCT variants to the supervised SPYCTs and SSL-PCTs. We performed pairwise comparisons among the competing pairs with sign tests <ns0:ref type='bibr'>(Demsar, 2006)</ns0:ref> on the number of wins. An algorithm 'wins' on a dataset if its performance, averaged over the 10 cross-validation folds, is better than the performance of its competitor. The maximum number of wins is therefore 30 (28 for L = 500). Table <ns0:ref type='table'>2</ns0:ref> and Table <ns0:ref type='table'>3</ns0:ref> present the results for single tree and ensemble settings, respectively.</ns0:p><ns0:p>The results show that in the single tree setting, SSL-SPYCTs tend to perform better than their supervised counterparts, though the difference is rarely statistically significant. When used in ensembles, the improvement of the SSL-SPYCT-SVM variant over its supervised counterpart is small. With the gradient variant, the improvement is greater, except for the largest number of labeled examples. Compared to SSL-PCTs, the improvements are generally greater. This holds for both single trees and especially ensembles, where the differences are almost always statistically significant. As the average ranking diagrams in Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref> already suggested, the gradient variant is especially successful. Overall, the results also show that SPYCTs are a more difficult baseline to beat than SSL-PCTs.</ns0:p></ns0:div>
<ns0:div><ns0:p>This is especially true in ensembles, where the studies of SSL-PCTs show that the improvement over supervised PCT ensembles is negligible <ns0:ref type='bibr' target='#b9'>(Levatić et al., 2017</ns0:ref><ns0:ref type='bibr' target='#b10'>, 2018;</ns0:ref><ns0:ref type='bibr' target='#b11'>Levatić et al., 2020)</ns0:ref>. On the other hand, our results show SSL-SPYCT-GRAD can improve even the ensemble performance. Another</ns0:p></ns0:div>
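A small sketch of the win counting and the two-sided sign test used for the pairwise comparisons, assuming the per-dataset scores have already been averaged over the cross-validation folds; discarding ties, as done here, is one common convention rather than a detail stated in the text.

    import numpy as np
    from scipy.stats import binom

    def sign_test(scores_a, scores_b):
        # scores_*: one fold-averaged score per dataset for each of the two methods.
        diffs = np.asarray(scores_a) - np.asarray(scores_b)
        wins, losses = int((diffs > 0).sum()), int((diffs < 0).sum())
        n = wins + losses                           # ties are discarded
        k = min(wins, losses)
        p_value = min(1.0, 2.0 * binom.cdf(k, n, 0.5))   # two-sided binomial sign test
        return wins, losses, p_value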
<ns0:div><ns0:head>Learning time comparison</ns0:head></ns0:div>
<ns0:div><ns0:p>To compare the learning times of the proposed SSL methods and SSL-PCTs, we selected one large dataset for each predictive task. We focused on the large datasets where the differences highlight the scalability of the methods with respect to the numbers of features and targets. We compare learning times of tree ensembles, as they also serve as a (more reliable) comparison for learning times of single trees.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref> shows the learning times on the selected datasets. The results confirm our theoretical analysis and show that the proposed SSL-SPYCTs are learned significantly faster than SSL-PCTs. The differences are especially large on datasets with many features and/or targets (e.g., ara interpro GO). The learning times are most similar on the gesture dataset, which has only 32 features, so the theoretical advantage of SSL-SPYCTs is less accentuated. Notwithstanding, the proposed methods are faster also on this dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head>Investigating the ω parameter</ns0:head><ns0:p>The ω parameter controls the amount of influence of the unlabeled examples on the learning process. examples is not obviously beneficial. The fact that ensembles tend to select larger ω values (especially the SVM variant) also explains why the differences in predictive performance between supervised and semi-supervised variants are smaller in ensembles compared to single trees. We also investigated whether the selected ω values were influenced by the predictive modeling task (regression vs. classification, single target vs. multiple targets), but we found no noticeable differences between the ω distributions.</ns0:p></ns0:div>
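The dynamic selection of ω can be sketched as a grid search with internal 3-fold cross-validation over the same candidate set as in the experiments. The fit and score callables are placeholders for training and evaluating an SSL-SPYCT model, and sharing all unlabeled examples across the internal folds is a simplification of the actual protocol.

    import numpy as np
    from sklearn.model_selection import KFold

    def select_omega(X_l, X_u, Y, fit, score,
                     grid=(0.0, 0.25, 0.5, 0.75, 1.0), folds=3):
        # fit(X_l, X_u, Y, omega) -> model; score(model, X, Y) -> float (higher is better)
        best_omega, best_score = grid[0], -np.inf
        kf = KFold(n_splits=folds, shuffle=True, random_state=0)
        for omega in grid:
            fold_scores = []
            for train_idx, val_idx in kf.split(X_l):
                model = fit(X_l[train_idx], X_u, Y[train_idx], omega)
                fold_scores.append(score(model, X_l[val_idx], Y[val_idx]))
            if np.mean(fold_scores) > best_score:
                best_omega, best_score = omega, float(np.mean(fold_scores))
        return best_omega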
<ns0:div><ns0:head>Investigating feature importances</ns0:head><ns0:p>We can extract feature importance scores from learned SPYCT trees <ns0:ref type='bibr' target='#b16'>(Stepišnik and Kocev, 2020)</ns0:ref>. The importances are calculated based on absolute values of weights assigned to individual features in all the split nodes in a tree (or ensemble of trees). For a single oblique PCT, they are calculated as follows:</ns0:p><ns0:formula xml:id='formula_11'>imp(T) = \sum_{s \in T} \frac{s_n}{N} \frac{|s_w|}{\|s_w\|_1},</ns0:formula><ns0:p>where s iterates over split nodes in tree T, s_w is the weight vector defining the split hyperplane, s_n is the number of learning examples that were present in the node, and N is the total number of learning examples. The contributions of each node to the final feature importance scores are weighted according to the number of examples that were used to learn the split. This puts more emphasis on weights higher in the tree, which affect more examples. To get feature importance scores of an ensemble, we simply average feature importances of individual trees in the ensemble. These scores tell us how much the model relies on individual features and can also be used to identify important features for a given task.</ns0:p><ns0:p>We investigated if SSL-SPYCTs are more successful at identifying important features compared to supervised SPYCTs in problems with limited labeled data. To do this, we followed the setup from <ns0:ref type='bibr' target='#b16'>Stepišnik and Kocev (2020)</ns0:ref> and added random features (noise) to the datasets.</ns0:p><ns0:p>For each original feature, we added a random one so that the total number of features was doubled. The values of the added features were independently sampled from a standard normal distribution. Then we learned SPYCTs and SSL-SPYCTs and compared the extracted feature importances.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref> presents the results on the qsar-197 dataset. For convenience, we also show the predictive performances of SPYCT and SSL-SPYCT methods. A good feature importance scoring would put the scores of random features (orange) to zero, whereas some real features (blue) would have noticeably higher scores. Low scores of many real features are not concerning, as datasets often include features that are not very useful for predicting the target. This example shows that SSL-SPYCTs can be better at identifying useful features than supervised SPYCTs. The difference here is greater with the gradient variant, especially with 50-250 labeled examples. This is also reflected in the predictive performance of the methods.</ns0:p><ns0:p>In general, the quality of feature importance scores obtained from a model was correlated with the model's predictive performance. This is expected and means that the conclusions here are similar. In terms of feature importance scores, SSL-SPYCTs are often similar to supervised SPYCTs, but there are several examples (e.g., Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref>) where they are significantly better and worth the extra effort.</ns0:p></ns0:div>
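The importance scores defined above can be accumulated directly from the split nodes of a tree; the (w, n) node representation and the noise-feature helper below are illustrative choices rather than the packaged implementation.

    import numpy as np

    def feature_importances(nodes, N, D):
        # nodes: iterable of (w, n) pairs, one per split node, where w is the split
        # weight vector and n the number of learning examples that reached the node.
        imp = np.zeros(D)
        for w, n in nodes:
            w = np.abs(np.asarray(w, dtype=float))
            imp += (n / N) * w / w.sum()
        return imp

    def ensemble_importances(trees_nodes, N, D):
        # average the per-tree scores over the ensemble
        return np.mean([feature_importances(nodes, N, D) for nodes in trees_nodes], axis=0)

    def add_noise_features(X, seed=0):
        # sanity check from the text: double the feature count with standard-normal noise
        rng = np.random.default_rng(seed)
        return np.hstack([X, rng.standard_normal(X.shape)])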
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this paper, we propose semi-supervised learning of oblique predictive clustering trees. We follow the approach of standard semi-supervised predictive clustering trees and adapt both SVM and gradient variants of SPYCTs and make them capable of learning from unlabeled examples. The main motivation for the proposed methods was the improved computational scaling of SPYCTs compared to PCTs, which is highlighted in the proposed SSL approach, where features are also taken into account when evaluating the splits.</ns0:p><ns0:p>We experimentally evaluated the proposed methods on 30 benchmark datasets for various predictive modeling tasks in both single tree and ensemble settings. The experiments confirmed the substantial theoretical computational advantage the proposed SSL-SPYCT methods have over standard SSL-PCTs.</ns0:p><ns0:p>The results also showed that the proposed methods often achieve better predictive performance than both supervised SPYCTs and SSL-PCTs. The performance edge was preserved even in ensemble settings, where SSL-PCTs typically did not outperform supervised PCTs. Finally, we demonstrated that SSL-SPYCTs can be significantly better at obtaining meaningful feature importance scores.</ns0:p><ns0:p>The main drawback of SSL-SPYCTs (which is shared with SSL-PCTs) is the requirement to determine the ω parameter dynamically with internal cross-validation. This increases the learning time compared to supervised learning but prevents occasions where introducing unlabeled examples into the learning process hurts the predictive performance. We investigated the selected values for ω and found that higher values tend to be selected when there is more labeled data available, and by ensembles compared to single trees. But the selected values were still very varied, which confirms the need for dynamic selection of ω.</ns0:p><ns0:p>For future work, we plan to investigate SPYCTs in boosting ensembles for both supervised and semi-supervised learning. Variants of gradient boosting <ns0:ref type='bibr' target='#b4'>(Friedman, 2001)</ns0:ref> have proven especially successful in many applications recently. We will also try improving the interpretability of the learned models with Shapley additive explanations (SHAP, <ns0:ref type='bibr' target='#b12'>Lundberg et al. (2020)</ns0:ref>). Because our method is tree-based we might be able to calculate the Shapley values efficiently, similarly to how they are calculated for axis-parallel tree methods.</ns0:p></ns0:div><ns0:div><ns0:head>REFERENCES</ns0:head><ns0:p>Breiman, L., Friedman, J., Stone, C. J., and Olshen, R. A. (1984). Classification and Regression Trees. CRC Press.</ns0:p><ns0:p>Chapelle, O., Schölkopf, B., and Zien, A. (2006). Semi-supervised Learning. MIT Press.</ns0:p><ns0:p>Demsar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1-30.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A toy dataset (A) with drawn decision boundaries learned by the axis-parallel (red, dashed) and oblique (blue, solid) decision trees (B, C).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>are a redesign of standard PCTs and use linear combinations of features to achieve oblique splits of the data -the split hyperplanes are arbitrary. The potential advantage of oblique splits compared to axis-parallel splits is presented in Figure 1. SPYCTs offer state-of-the-art predictive performance, scale better with the number 2/14 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54485:1:1:NEW 23 Mar 2021) Manuscript to be reviewed Computer Science Algorithm 1 Learning a SSL-SPYCT: The inputs are features X l ∈ R L×D and X u ∈ R U×D of labeled and unlabeled examples, targets Y ∈ R L×T of the labeled examples, and a vector c ∈ R D+T of clustering weights. 1: procedure GROW TREE(X l , X u , Y , c) 2:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>where s = σ (Xw + b) and S = ∑ i s i . The terms S and L +U − S represent the sizes of the positive and negative subsets and are added to guide the split search towards balanced splits. The final optimization problem for learning the split hyperplane is minw,b ||w|| 1 2 +C f (w, b),where C again controls the strength of regularization. The objective function is differentiable, and we can efficiently solve the problem using the Adam<ns0:ref type='bibr' target='#b6'>(Kingma and Ba, 2014)</ns0:ref> gradient descent optimization method.4/14PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>179</ns0:head><ns0:label /><ns0:figDesc>for both variants, and the SVM variant uses at most 10 clustering iterations. The strength of regularization 180 (C) is set to 10. For the gradient variant, the Adam optimizer uses parameters β 1 = 0.9, β 2 = 0.999, and 181 ε = 10 − 8. These are the default values from the PyTorch 5 library.182For semi-supervised methods, we select the ω parameter with 3-fold internal cross-validation on the 183 training set. We select the best value from the set {0, 0.25, 0.5, 0.75, 1}. We investigate the influence of184 the number of labeled examples L on the performance of the semi-supervised methods. We set L to the 185 following numbers of available labeled examples: {25, 50, 100, 250, 500}. We evaluate the methods with 186 a slightly modified 10-fold cross-validation corresponding to inductive evaluation setting. First, a dataset 187 is divided into 10 folds. One fold is used as the test set. From the other 9 folds, L examples are randomly 188 selected as labeled examples, and the rest are used as unlabeled examples. This process is repeated 10 189 5 https://pytorch.org/docs/1.1.0/_modules/torch/optim/adam.html 6/14 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54485:1:1:NEW 23 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Comparison of supervised and semi-supervised variants of SPYCT-SVM and SPYCT-GRAD methods (columns) in both single tree and ensemble settings (rows) on the rf2 dataset with different numbers of labeled examples (L).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>234Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Average ranking diagrams comparing the predictive performance of the proposed SSL-SPYCT-SVM and SSL-SPYCT-GRAD methods and the baselines with different numbers of labeled examples.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Comparison of learning times of the SSL algorithms on a selection of large benchmark datasets.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5 Figure 5 .</ns0:head><ns0:label>55</ns0:label><ns0:figDesc>Figure 5 shows the distributions of the ω values selected with the internal 3-fold cross-validation. We can see that the selected values varied greatly, sometimes different values were chosen even for different folds of the same dataset. This confirms the need to determine ω with internal cross-validation for each dataset separately. Additionally, we notice that larger ω values tend to be selected with more labeled examples and by ensembles compared to single trees. With larger numbers of labeled examples, it makes sense that the model can rely more heavily on the labeled part of the data and unlabeled examples are not as</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Comparison of feature importance scores of real and random features (scaled to [0, 1] interval) for the qsar-197 dataset with different numbers of labeled examples.The first column shows the SPYCT-SVM method, the second column the SPYCT-GRAD method. The first row shows importance scores obtained with the supervised method, the second row shows the importance scores obtained with the unsupervised method, and the third row presents the predictive performance of both supervised and semi-supervised methods.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>dataset</ns0:cell><ns0:cell>source</ns0:cell><ns0:cell>task</ns0:cell><ns0:cell>N</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>T</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>bioresponse</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>BC</ns0:cell><ns0:cell cols='2'>3751 1776</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mushroom</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>BC</ns0:cell><ns0:cell>8124</ns0:cell><ns0:cell>22</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>phoneme</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>BC</ns0:cell><ns0:cell>5404</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>spambase</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>BC</ns0:cell><ns0:cell>4601</ns0:cell><ns0:cell>57</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>speeddating</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>BC</ns0:cell><ns0:cell>8378</ns0:cell><ns0:cell>120</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>cardiotocography openml</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>2126</ns0:cell><ns0:cell>35</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>gesture</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>9873</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>isolet</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>7797</ns0:cell><ns0:cell>617</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>mfeat-pixel</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>240</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>plants-texture</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>1599</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>bibtex</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MLC</ns0:cell><ns0:cell cols='2'>7395 1836</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>birds</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MLC</ns0:cell><ns0:cell>645</ns0:cell><ns0:cell>260</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>bookmarks</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell cols='3'>MLC 87856 2150</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>delicious</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell cols='2'>MLC 16105</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>scene</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MLC</ns0:cell><ns0:cell>2407</ns0:cell><ns0:cell>294</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>ara interpro GO</ns0:cell><ns0:cell cols='4'>dtai-cs HMLC 11763 2815</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>diatoms</ns0:cell><ns0:cell cols='2'>kt-ijs HMLC</ns0:cell><ns0:cell>3119</ns0:cell><ns0:cell>371</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>enron</ns0:cell><ns0:cell cols='2'>kt-ijs HMLC</ns0:cell><ns0:cell cols='2'>1648 1001</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>imclef07d</ns0:cell><ns0:cell cols='3'>kt-ijs HMLC 11006</ns0:cell><ns0:cell>80</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>yeast seq FUN</ns0:cell><ns0:cell cols='2'>dtai-cs HMLC</ns0:cell><ns0:cell>3932</ns0:cell><ns0:cell>478</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>cpmp-2015</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>STR</ns0:cell><ns0:cell>2108</ns0:cell><ns0:cell>23</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pol</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell cols='2'>STR 
15000</ns0:cell><ns0:cell>48</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>qsar-197</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>STR</ns0:cell><ns0:cell cols='2'>1243 1024</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>qsar-12261</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>STR</ns0:cell><ns0:cell cols='2'>1842 1024</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>satellite image</ns0:cell><ns0:cell>openml</ns0:cell><ns0:cell>STR</ns0:cell><ns0:cell>6435</ns0:cell><ns0:cell>36</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>atp1d</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MTR</ns0:cell><ns0:cell>337</ns0:cell><ns0:cell>411</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>enb</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MTR</ns0:cell><ns0:cell>768</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>oes97</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MTR</ns0:cell><ns0:cell>334</ns0:cell><ns0:cell>263</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>rf2</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MTR</ns0:cell><ns0:cell>9125</ns0:cell><ns0:cell>576</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>scm1d</ns0:cell><ns0:cell>mulan</ns0:cell><ns0:cell>MTR</ns0:cell><ns0:cell>9803</ns0:cell><ns0:cell>280</ns0:cell></ns0:row><ns0:row><ns0:cell>175</ns0:cell><ns0:cell cols='6'>a uniform manner. We evaluate the methods in single tree setting and in bagging ensembles (Breiman,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1996) of 50 trees.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>176For SPYCTs we use the same configuration as it was used in<ns0:ref type='bibr' target='#b16'>Stepišnik and Kocev (2020)</ns0:ref>. Tree depth 177 is not limited, leaves only need to have 1 (labeled) example, and splits are accepted if they reduce impurity 178 by at least 5% in at least one of the subsets. The maximum number of optimization iterations is set to 100</ns0:note></ns0:figure>
<ns0:note place='foot' n='1'>https://www.openml.org 2 http://mulan.sourceforge.net/datasets.html 3 https://dtai.cs.kuleuven.be/clus/hmc-ens/ 4 http://kt.ijs.si/DragiKocev/PhD/resources/doku.php?id=hmc_classification5/14PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54485:1:1:NEW 23 Mar 2021)</ns0:note>
</ns0:body>
" | "Reviewer 1
Basic reporting
- The proposal is in overall good as well as the organization of the paper. I greatly appreciate the fact of making the code of the tool publicly available through git repository.
ANSWER: Thank you kindly for the comment. We support open science.
- Please, provide vectorial images (eps, pdf, ...) when possible. Furthermore, for most of the figures (such as figures 4 and 5), authors should include a legend for lines/colors to improve their readability.
ANSWER: We now generated most figures in vector formats, the exceptions are feature importance figures because they contain too many points. We also added a legend to Figure 4. Figure 5 does not need a legend as the colors only illustrate the different values for L which are marked on the x axis.
- Please, be consistent with the notation. In table 2 methods are called 'SPYCT-GRAD-SSL', while in Table 3 they are called 'SSL-SPYCT-GRAD'
ANSWER: We updated the notation and made it consistent.
Experimental design
- Authors repeat the experiments 10 times with each configuration. It would be desirable to repeat at least 30 times for each different configuration, in order to obtain more consistent results in statistical tests.
ANSWER: Actually, we perform 10 repetitions for each of the 30 datasets. Each configuration is therefore executed 300 times. However, the results are first averaged on a per-dataset basis, hence, the statistical tests are performed on 30 data points. We clarified this in the manuscript when discussing the results.
- In Figure 3 as well as in Tables 2-3, authors include the results for all algorithms. These results are for all datasets, each of them averaged among their 10 (hopefully 30 in the future) executions? Please, make it clearer.
ANSWER: This is addressed with the answer to the previous comment.
- I encourage authors to make raw results publicly available to ease the reproducibility of results in any kind of supplementary material (as a website or in the gitlab repository). At least those averaged results among all executions, not necessary the results of each single execution.
ANSWER: We agree. We added the raw results from the experiments in the supplementary files.
- When studying the learning time, authors only report the results on big datasets. I think it would be a good approach to illustrate the results in a figure (as they do), but results for all datasets should be reported and studied. If the proposed method do not perform faster in smaller datasets but does in bigger, it should be stated (or if it run faster in all/most cases). Besides, I would encourage to include the times of single trees too.
ANSWER: We compare learning times on the larger datasets because the actual algorithmic differences can be clearly seen on those datasets. On smaller datasets, where execution times are very short, the measured times are both noisy and depend heavily on implementation. Learning times for individual trees are simply 50 times lower than learning times of ensembles of 50 trees - but the measurements are noisier. We provide all learning times in the supplementary file.
Validity of the findings
- The validity of the results is great; the proposed method perform significantly better than state-of-the-art in some cases, but never significantly worse. Besides, their proposed method run much faster than state-of-the-art.
Comments for the Author
- The paper is in overall good, but some improvements should be made in the experimental setting to finally accept the paper.
ANSWER: Thank you for the comments and suggestions! We implemented all of the suggestions in the revised manuscript.
Reviewer 2
Basic reporting
In general, the paper is well written and related literature is well cited. Figures and tables are adequate.
For reporting the results, the manuscript only contains ranking diagrams and the number of wins per method. To allow a better understanding of how good your method is, I would like to suggest the inclusion of the numerical values themselves (perhaps in an appendix or supplemental material).
ANSWER: We agree. We provide the raw results in the supplementary file.
Small remarks: figure 4 is missing a key and the symbol 'w' is used with different meanings (e.g. rule 121 versus rule 129).
ANSWER: We added a legend to Figure 4 and replaced one of the ‘w’s with omega.
Experimental design
The evaluation metrics for multi-target regression (MTR) and hierarchical multi-label classification (HMLC) are somewhat unconventional. What is the advantage of using the coefficient of determination for MTR as opposed to the more common average relative root mean squared error? For HMLC, why exactly was the LRAP measure chosen, has it been used before?
ANSWER: The coefficient of determination is a monotone transformation of the RRMSE (per target, R2 = 1 - RRMSE^2), so it effectively measures the same thing. We simply used it so that a higher value means better performance across all tasks. Regarding LRAP, we decided to use it because of its two main properties: 1) it is threshold independent and 2) it computes for each relevant label the percentage of relevant labels among all labels that are ranked before it. Moreover, LRAP can be used for both MLC and HMLC. Previous usages include: https://tuprints.ulb.tu-darmstadt.de/3226/7/loza12diss.pdf, https://www.sciencedirect.com/science/article/pii/S0031320312001203, https://www.aaai.org/ocs/index.php/AAAI/AAAI13/paper/download/6170/6833.
The current evaluation for feature importance is rather unclear. It relies on adding noisy features and comparing their scores against real features, nonetheless this raises several comments.
a) What exactly do you mean by score?
ANSWER: The importance score of a given feature is the weighted average of the absolute values of the split weights for that feature across the entire tree (or ensemble of trees). The rationale behind this is that the more important/relevant features will have scores noticeably greater than zero, whereas the noisy features should have scores close to zero. We expanded this section with more details on how it is calculated.
b) How different coefficients value affect your score? I assume that the same feature may have very different coefficients depending on the split, can they cancel each other out?
ANSWER: See above. The coefficients do not cancel out as absolute values are used.
c) How are these random features generated?
ANSWER: They are independent and all sampled from a standard normal distribution. We added this detail to the manuscript.
As reported in Kocev 2013, targets with different ranges can affect the heuristic values differently. This is solved by normalizing the targets using their variance beforehand. Does your implementation take that into account?
ANSWER: It does. This is handled with the standardization of the clustering attributes prior to learning each split and before clustering weights are applied. We expanded on this in the revised manuscript (lines 129-132).
Validity of the findings
My main concern regarding the validity of the results is the transductive experimental set-up for the proposed approach: if the unlabeled training set is used as test set, it seems to me somewhat an unfair comparison to the supervised setting. In my opinion, either the predictive performance should be compared on an independent test set for both SSL and supervised settings (i.e., inductive) or the learning task should be clearly defined in the method section (e.g., when introducing the notation in the first paragraph).
ANSWER: We repeated the experiments in an inductive setting and updated the manuscript with the results. The advantage of SSL-SPYCTs compared to supervised SPYCTs is smaller in single tree setting and greater in ensemble setting. Their comparison to SSL-PCTs remains favorable. The advantage in performance in ensemble setting increased notably.
In the ensemble setting, bagging is used. Indeed, with bagging you will have a time complexity quadratic in the number of features. However, more common ensembles like random forests will also reduce time complexity. How does bagging with oblique splits correspond to random forests with default splits in learning time (and predictive performance)?
ANSWER: Random forest ensembles do help a bit, but they only reduce the number of features considered for a split. All features are still included among the clustering attributes. This means that time complexity reduces from D*D to D*f(D) where f(D) is a RF parameter.
However, this change is orthogonal to the core change between SPYCTs and PCTs - use of oblique splits. SPYCTs can also learn individual splits on subsets of features, like standard random forests do. In fact, we explored this in our previous work that introduces supervised SPYCTs.
For reproducibility purposes, parameter values should be included. Please find a below a list of parameters that I have failed to find in the manuscript:
- C -> SVM and gradient variant;
- initial w, initial b and Adam parameters -> gradient variant;
- Stopping criteria to grow trees -> I could only find the following quote
'The splitting of the examples (i.e., the tree construction) stops when at least one of the following stopping criteria is reached. We can specify the minimum number of examples required in leaf nodes (at least one labeled example is always required otherwise predictions cannot be made). We can also require a split to reduce the impurity by a specified amount or specify the maximum depth of the tree'
ANSWER: We provide all of the details regarding these parameters in the experimental setup section of the revised manuscript.
- Label weights for the ara_interpro_GO dataset: -> I could only find the excerpt:
'we weighted each label with 0.75^d, where d is the depth of the label',
nonetheless a label can have multiple values for d since it may have several parent nodes. How do you define the weights here?
ANSWER: For DAG hierarchies, average depth of the parent nodes (+1) is used. We added this information to the revised manuscript.
I appreciate the effort of providing the source code to your method, nonetheless, to make it even further reproducible, I would suggest including the data partitions as well.
ANSWER: We provide the code used to split the datasets in the supplementary files.
Comments for the Author
The novelty of this work is rather limited. The proposed method is a direct combination of two already proposed methods in the literature by the same research group, namely oblique predictive clustering trees and semi-supervised predictive clustering trees. However, as I understand it, novelty should not play a role in the evaluation for this journal.
ANSWER: While it is true that oblique predictive clustering trees were proposed by the same authors, and SSL predictive clustering trees were proposed by the same research group, the approach presented here has different properties than previously existing SSL-PCT. The novel part in this work pertains to the fact that it is not clear in what way and by how much the unlabeled examples influence the learning of oblique hyperplanes for splitting the examples. The answer to this issue is not immediately obvious, and it cannot be answered as a transfer/inherited property of the oblique PCTs from standard PCTs. Furthermore, the proposed SSL SPYCTs learn the predictive models in a more efficient manner compared to SSL PCTs thus making their use in (especially large) domains practically possible. This is even more pronounced with the ensembles of SSL-SPYCTs and SSL-PCTs.
What is the size of trees generated by your method? Since your method allows multiple features per split, I assume that fewer splits should be enough to build a model, nonetheless that should come with a trade-off in interpretability and performance. For instance, how does one oblique split with 10 features in its hyperplane compare with a decision tree with 10 nodes on its path? I recommend the authors to provide a comparison between the number of features on average involved in performing a prediction.
ANSWER: Tree sizes are included among the raw results in the supplementary files. However, a direct comparison of tree sizes is tricky, as sizes heavily depend on the level of supervision selected with internal cross-validation. The lower levels of supervision typically result in larger trees for both SPYCTs and standard PCTs.
Finally, I have some concerns about the motivation for the paper.
What are exactly the advantages of employing a linear of combination of features rather than a single one? In the manuscript, I found the excerpt: 'The potential advantage of oblique splits compared to axis-parallel splits is presented in Figure 1. What do the authors mean exactly? I can notice that linear combinations may result in smoother splits, but how is that superior? Would you mind elaborating more on that?
ANSWER: The main advantage of oblique splits is that we can learn them more efficiently with the proposed methods. Additionally, the results show that oblique trees also generalize better. Intuitively, the reason for this is that oblique splits are more expressive and can fit the data better - the split hyperplanes are arbitrary, in contrast to the hyperplanes of standard splits, which must be parallel to one of the coordinate axes.
" | Here is a paper. Please give your review comments after reading it. |
77 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings: pitch-only, one-reference, and multiple-references, which we designed to provide information about distance from an origin. We assess the effects of these sonifications on users' performances when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users' accuracy with a trade-off for task completion times, and that the multiple-references mapping is particularly effective when dealing with points that are positioned at the midrange of a given axis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Graphs are a prevalent means for presenting information. Their advantages over other forms of representation, such as text, for visual data display and analysis have been thoroughly demonstrated (e.g. <ns0:ref type='bibr' target='#b16'>Larkin and Simon, 1987;</ns0:ref><ns0:ref type='bibr' target='#b28'>Tufte and Graves-Morris, 1983)</ns0:ref>. The ability to construct graphs is also critical in helping users not only visualise complex concepts, but also promote learning by doing <ns0:ref type='bibr' target='#b3'>(Bransford et al., 1999)</ns0:ref>. However, there are many situations where visual displays can be inadequate for accessing information. For example, when users engage in multiple tasks that compete for visual attention, or for individuals who experience a situational or permanent visual impairment <ns0:ref type='bibr' target='#b15'>(Kramer, 1994)</ns0:ref>. Sonification has been the focus of increasing research as a means for providing non-visual access to data, including graphs, by displaying data using non-speech sounds. In particular, developments in the field have found that the auditory system is well suited to detect patterns in data sets similar to those represented by visual graphs, allowing listeners to perceive and actively engage with data structures and properties <ns0:ref type='bibr' target='#b15'>(Kramer, 1994;</ns0:ref><ns0:ref type='bibr' target='#b10'>Flowers and Hauer, 2005)</ns0:ref>. In this context, we are interested in exploring support for non-visual point estimation tasks since they form an integral part of editing and interpreting graphed data <ns0:ref type='bibr' target='#b28'>(Tufte and Graves-Morris, 1983)</ns0:ref>. In particular, we are interested in exploring how adding contextual information to data sonification can improve support for such tasks. Previous research has suggested that adding contextual information such as reference markers can improve interaction with sonified graphs (e.g. <ns0:ref type='bibr' target='#b27'>Smith and Walker, 2005)</ns0:ref>. This paper presents an experiment that contributes to quantifying and analysing the extent of such benefits for non-visual point estimation tasks by contrasting three pitch-based sonification mappings; pitch-only, one-reference, and multiple-references. The contrasting of these sonification techniques is explored in comparison to visual and speech-based displays. The reported results have implications for ongoing research into alternative information displays in general, and auditory displays in particular. Investigating factors affecting performance will lead to better design of auditory graphs and other alternative displays in ways that enhance user performance and flexibility, and allow the improvement of system interfaces wherever such displays are required.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND 2.1 Auditory Graphs</ns0:head><ns0:p>In their basic form, auditory graphs are produced by mapping data values on the visual X and Y axes to auditory dimensions, such as frequency and amplitude <ns0:ref type='bibr' target='#b6'>(Brown et al., 2003)</ns0:ref>. This basic sonification technique has been successfully used to provide non-visual access to a variety of visual graph-based representations including line graphs <ns0:ref type='bibr' target='#b18'>(Mansur et al., 1985)</ns0:ref>, seismograms <ns0:ref type='bibr' target='#b14'>(Hayward, 1992)</ns0:ref> and time series data <ns0:ref type='bibr' target='#b9'>(Flowers and Hauer, 1995)</ns0:ref>. <ns0:ref type='bibr' target='#b18'>Mansur et al. (1985)</ns0:ref> pioneered the technique when they developed sound graphs, which mapped data values on the y-axis of a line graph to continuous pitch and the x-axis to temporal presentation. They found that, after a small amount of training, users were able to identify key patterns in the underlying data such as linearity and symmetry on 79% to 95% of the trials. Similarly, <ns0:ref type='bibr' target='#b9'>Flowers and Hauer (Flowers and Hauer, 1995)</ns0:ref> conducted a series of studies in which they examined non-visual presentation of statistical data by combining various dimensions of sound. For instance, they used pitch to represent the y-axis of a polygon and loudness for the values on the x-axis and found that auditory scatter plots are as efficient as visual representations in conveying the sign and magnitude of correlations.</ns0:p><ns0:p>The issue of how to map the dimensions of sound to the data being represented is at the core of auditory graph design. For instance, whether to increase or decrease a perceptual dimension such as pitch in response to changes in the underlying data. <ns0:ref type='bibr' target='#b6'>Brown et al. (2003)</ns0:ref> examined such issues and produced guidelines for auditory graph design grounded in research into the sonification of line graphs, in addition to guidelines specific to the sonification of graphs containing two or three data series. Walker and Mauney (2010) explored preferred data-to-display mappings, polarities, and scaling functions to relate data values to underlying sound parameters for both sighted and visually impaired listeners and found general agreement about polarities obtained with the two listener populations in most studied cases. In terms of compatibility with other presentation modalities, <ns0:ref type='bibr' target='#b24'>Nees and Walker (2007)</ns0:ref> argued that pitch mappings allow for the emergence of patterns in data and showed that perceptual grouping of tones could act much like the primary display advantage of visual graphs, which lies in their ability to efficiently communicate unnoticed patterns. Early studies of auditory graphs have also found them to be comparable in efficacy to tactile displays <ns0:ref type='bibr' target='#b18'>(Mansur et al., 1985)</ns0:ref>, with tactile displays yielding slightly more accurate responses and auditory graphs resulting in faster reaction times. <ns0:ref type='bibr' target='#b2'>Bonebright et al. (2001)</ns0:ref> determined that, in general, users are able to match an auditory graph to a visual line graph or scatter plot of the same data. <ns0:ref type='bibr' target='#b6'>Brown et al. 
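As a concrete illustration of the basic y-to-pitch mapping discussed above, the sketch below renders a data series as a sequence of sine tones by mapping values linearly onto a MIDI-style pitch range; the particular range, tone duration, and amplitude envelope are arbitrary choices and not those used in the cited studies.

    import numpy as np

    def sonify(values, sr=44100, tone_dur=0.2, low_midi=48, high_midi=84):
        # Map each data value (y-axis) linearly to a MIDI note number, then to Hz.
        v = np.asarray(values, dtype=float)
        span = v.max() - v.min() or 1.0
        midi = low_midi + (v - v.min()) / span * (high_midi - low_midi)
        freqs = 440.0 * 2 ** ((midi - 69) / 12)
        t = np.linspace(0, tone_dur, int(sr * tone_dur), endpoint=False)
        # x-axis is rendered as time: one short windowed tone per data point
        tones = [np.sin(2 * np.pi * f * t) * np.hanning(t.size) for f in freqs]
        return np.concatenate(tones)   # write to a WAV file to listen
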
(2003)</ns0:ref> also found that people could produce a visual rendition of a graph that was over 80% accurate (on average) after hearing an auditory presentation, and <ns0:ref type='bibr' target='#b13'>Harrar and Stockman (2007)</ns0:ref> found that a continuous display of auditory graphs produced more accurate visual renderings when compared against the use of discrete tones.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Point Estimation in Auditory Graphs</ns0:head><ns0:p>However, researchers soon realised that there is more to designing effective auditory graphs than merely dealing with the issues of data-to-sound mappings. Whilst presenting quantitative data, visual graphs also present a rich set of information that helps improve the readability and comprehension of such data. In visual information display, additional information such as axes, labels and tick marks increases readability and aids perception by enabling more effective top-down processing <ns0:ref type='bibr' target='#b26'>(Smith and Walker, 2002)</ns0:ref>. A visual graph without context cues (e.g. no axes) provides no way to estimate values at any point. It is these kinds of characteristics that give visual graphs advantages over other means of information presentation, such as linear textual forms <ns0:ref type='bibr' target='#b16'>(Larkin and Simon, 1987)</ns0:ref>.</ns0:p><ns0:p>A common method for adding x-axis context to a sonification is to use a series of clicks or percussive sounds. <ns0:ref type='bibr' target='#b2'>Bonebright et al. (2001)</ns0:ref> investigated the use of rhythmic markers in the form of click sounds and explored whether students could match auditory representations with the correct visual graphs. Graph reading tasks, such as point estimation, which form the focus of this paper, can be greatly effected by the lack of context and reference information. For instance, <ns0:ref type='bibr' target='#b25'>Nees and Walker (2008)</ns0:ref> examined the role of data density (i.e. the number of discrete data points presented per second) and trend reversals for both point-estimation and trend-identification tasks with auditory graphs. For the point estimation tasks, they found that users' performance declined with increased data density and trend reversals. <ns0:ref type='bibr' target='#b27'>Smith and Walker (2005)</ns0:ref> investigated how adding a variety of contextual information can improve non-visual point estimation tasks in such cases. They explored the use of click sounds to represent context on the x-axis and the addition of reference markers that provide scaling cues on the y-axis and found that the addition of auditory context enhances the interpretation of auditory graphs. This line of research has shown that representing data through auditory graphs will be more effective if context information is included and properly designed. Further studies are needed to investigate possible methods for implementing context in order to allow users of sonifications to go beyond the tasks of trend analysis and also to be able to perform point estimation tasks effectively. Moreover, previous studies have mostly focused on passive listening. For example, to explore financial data, users listened to whole graph sonifications before estimating values at certain points of interest <ns0:ref type='bibr' target='#b26'>(Smith and Walker, 2002)</ns0:ref>. The experiment presented in this paper explores whether such benefits extend to interactive sonification, i.e. where users have active control over the audio output as they explore data on an axis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Pointing in haptic and tactile interaction</ns0:head><ns0:p>Other research has focused on haptic and tactile displays as a means for target acquisition at the user interface, although not specifically for point estimation tasks. Pointing, as a gesture for indicating direction or focus, and proprioception are indeed a natural fit for exploring haptic interaction at the user interface. Using Fitts' model <ns0:ref type='bibr' target='#b8'>(Fitts, 1954)</ns0:ref>, <ns0:ref type='bibr' target='#b0'>Ahmaniemi and Lantz (2009)</ns0:ref> explored the use of tactile feedback to support pointing in augmented reality applications and found that the width and distance of a target had a significant effect on pointing and probing times. However, their findings also showed how Fitts' law is not an adequate model for point estimation because it does not account for the strategy employed by users when searching for targets. Focusing on accessibility, <ns0:ref type='bibr' target='#b17'>Li et al. (2010)</ns0:ref> explored the use of proprioception to support visually impaired users in rearranging icons in a virtual space through a mobile interactive prototype that leverages accelerometer and gyroscope data. <ns0:ref type='bibr' target='#b7'>Fiannaca et al. (2013)</ns0:ref> also used a mobile device to explore how proprioception coupled with haptic feedback can support interaction with invisible objects. Their technique uses haptic feedback to position the user's arm and hand to point out the location of a virtual object. <ns0:ref type='bibr' target='#b12'>Gustafson et al. (2010)</ns0:ref> investigated imaginary interfaces, which are screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback, combining both touch and gestural interaction. Their evaluations of using this technique for drawing and pointing to locations showed that users' visual short-term memory can, in part, replace the feedback conventionally displayed on a screen.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Non-visual graph editing</ns0:head><ns0:p>Although research on non-visual access to graphs is steadily growing, relatively little work has investigated strategies for actively constructing and editing such graphs through non-visual means. There are of course manual solutions for doing so, using physical artefacts such as pins and cardboard <ns0:ref type='bibr' target='#b4'>(Brookshire, 2006)</ns0:ref>, but these can be inadequate for handling complex graphics and do not allow for flexible storage, editing and reproduction. <ns0:ref type='bibr' target='#b19'>McGookin et al. (2010)</ns0:ref> examined how some of these issues could be addressed through tangible user interface design and developed the Tangible Graph Builder to allow for constructing and browsing of chart-based data. Most computer-based solutions to non-visual graph editing combine audio and haptic technologies. For instance, <ns0:ref type='bibr' target='#b20'>McGookin and Brewster (2007)</ns0:ref> developed an audio-haptic application for constructing bar graphs and <ns0:ref type='bibr' target='#b1'>Bernareggi et al. (2008)</ns0:ref> developed an interactive system to create, edit and explore graph structures through direct manipulation operations using audio-haptic interaction, supported by visual feedback. More recently, <ns0:ref type='bibr' target='#b23'>Metatla et al. (2012b)</ns0:ref> developed a cross-modal diagram editor to support collaboration between visually-impaired and sighted coworkers using virtual haptic and non-speech audio techniques. They also explored how connected graphs can be edited using audio-only interaction <ns0:ref type='bibr' target='#b22'>(Metatla et al., 2012a)</ns0:ref>. Evaluations of these applications show that users could effectively construct and manipulate graphical representations non-visually. There is, however, little research that looks at supporting interactive editing of auditory graphs based on sonification or that looks at supporting point estimation tasks through active as opposed to passive listening. This paper contributes to addressing this gap in the literature by exploring the impact of different types of sonifications of reference markers on point estimation tasks. We therefore designed an experiment to investigate what effect using sonification to add contextual reference markers has on the accuracy of estimating point positions. We focused our investigation on conveying information that could help when estimating the position of a point in terms of its distance from an origin point.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Apparatus</ns0:head><ns0:p>We designed a simple user interface to support the task of editing the position of a point, focusing on the part where users need to estimate the position of a point when placing it at a desired location on an axis (see Figure <ns0:ref type='figure'>1</ns0:ref>). The interface allows users to manipulate the position of a point using the keyboard up and down arrow keys on an axis containing a total of 30 positions (ranging from -15 to 15, the value 0 being the middle position). Audio was delivered through Shure SRH240A closed stereo headphones.</ns0:p><ns0:p>Sonifications of feedback about the position of a point and references that mark how far it is from an origin are discussed below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.1'>Pitch-Only Mapping</ns0:head><ns0:p>In the first design, we sonified the position of a point on an axis by mapping the pitch of a sine tone to the point's Y coordinate following a positive polarity. That is, the tone's pitch changes in accordance with the point's movements on the axis; moving the point up increases the pitch, moving it down decreases it.</ns0:p><ns0:p>We used an exponential function to map the position of the point to frequencies in the range of 120Hz (for position -15) to 5000Hz (for position 15). The range and mapping were chosen to fit within the human hearing range; with an exponential distribution, subsequent frequencies differ by a constant factor rather than by a constant term, which has been found to be superior to linear mappings <ns0:ref type='bibr' target='#b21'>(Meijer, 1992)</ns0:ref>. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the pitch values for each point on this scale. Interaction with this sonification was designed such that the point moves when users press and hold a cursor key and not in response to single keystrokes.</ns0:p><ns0:p>Pressing and holding a cursor key would therefore trigger a continuous progression of the sonified points being traversed as the point moves up or down on the axis.</ns0:p></ns0:div>
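The exact formula of this mapping is not given above, but an exponential interpolation between the two stated endpoints reproduces the values of Table 1 once truncated to whole hertz. The following minimal Python sketch is an illustrative reconstruction, not the authors' implementation:

```python
F_MIN, F_MAX = 120.0, 5000.0   # endpoint frequencies stated above (Hz)
P_MIN, P_MAX = -15, 15         # axis positions

def position_to_pitch(position):
    """Exponential interpolation between F_MIN and F_MAX with positive polarity:
    each step up the axis multiplies the frequency by a constant factor."""
    t = (position - P_MIN) / (P_MAX - P_MIN)   # normalise position to [0, 1]
    return F_MIN * (F_MAX / F_MIN) ** t

if __name__ == "__main__":
    for p in (-15, -7, 0, 7, 15):
        # e.g. position 0 -> 774.6 Hz and position 7 -> 1849.2 Hz, which match
        # the corresponding Table 1 entries once truncated to whole hertz.
        print(p, f"{position_to_pitch(p):.1f}")
```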
<ns0:div><ns0:head n='3.1.2'>One-Reference Mapping</ns0:head><ns0:p>In the second design, we used the same pitch mapping described above and added one tone to convey a reference to an origin point. In this case, the reference tone represented the middle point on the scale (position 0 at a pitch frequency of 774Hz lasting 100 milliseconds). We designed this such that the user hears pitch changes that correspond to the movement of the point when they press and hold a cursor key, and hears the reference tone with a static pitch on key release. Comparing the two pitches (on key pressed and on key released) is meant to provide a sense of distance between the current position on the axis and the origin point based on pitch difference; the larger the difference in pitch between the two points the further away from the origin the point is located.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.3'>Multiple-References Mapping</ns0:head><ns0:p>In the third design, we again used the same pitch mapping as described above. But, instead of hearing only one reference point on key release, the user hears multiple successive reference tones with varying pitches that correspond to all the points between the current position and the origin reference. Previous research has shown that the threshold for determining the order of temporally presented tones is from 20 to 100 milliseconds <ns0:ref type='bibr' target='#b11'>(Fraisse, 1978)</ns0:ref>. To create a succession of tones, our reference tones lasted 50 milliseconds and were interleaved by a delay also of 50 milliseconds. In this case, the position of a point in relation to an origin can be estimated by judging both the pitch difference at that point compared to the subsequent points, and the length of the sum of successive tones that separate it from the origin. A longer distance yields a longer succession of tones. Points located below the origin trigger an ascending set of tones, while those above the origin trigger a descending set of tones. For example, on reaching position 7, users hear a descending succession of tones made up of all the pitches of points 6, 5, 4, 3, 2, 1 and 0, the origin.</ns0:p></ns0:div>
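As an illustrative sketch (again not the authors' implementation), the key-release feedback for the three mappings could be generated as follows, reusing position_to_pitch from the previous sketch; the handling of the edge case at the origin and of the trailing gap is an assumption:

```python
# Key-release feedback for the three mappings, as lists of (frequency_hz, duration_ms)
# tones; None stands for a silent gap.
ORIGIN = 0  # reference origin at the middle of the scale (774 Hz)

def key_release_feedback(position, mapping):
    if mapping == "pitch_only":
        return []                                    # no reference feedback
    if mapping == "one_reference":
        return [(position_to_pitch(ORIGIN), 100)]    # single static origin tone
    if mapping == "multiple_references":
        # All points between the current position and the origin, moving towards it:
        # descending pitches above the origin, ascending pitches below it.
        step = -1 if position > ORIGIN else 1
        tones = []
        for p in range(position + step, ORIGIN + step, step):
            tones.append((position_to_pitch(p), 50))  # 50 ms reference tone
            tones.append((None, 50))                  # 50 ms silent gap
        return tones
    raise ValueError(mapping)

# Example: at position 7 the listener hears a descending run over points 6..0.
print(key_release_feedback(7, "multiple_references"))
```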
<ns0:div><ns0:head n='3.2'>Experimental Design</ns0:head><ns0:p>We manipulated sonification and display type as independent variables in a between-subjects experimental design. Participants were divided into three groups; each group performed a set of point estimation tasks using one of the three sonification designs (between-subjects). To provide baseline comparisons, all participants performed two further sets of point estimation tasks under two control conditions (within-subjects): because our participants were sighted we chose to include a visual control condition; because speech provides more accurate position information compared to non-speech output, we chose to include a speech-only control condition. No sound was displayed in the visual control condition and participants could see the points as they moved them on the axis. In the speech-only control condition, participants received spoken feedback about the position value of a point. We used the Windows Text-to-Speech Engine (TTS) to speak the position values at the rate of 0.3 Words Per Second (WPS). The spoken numbers were also organised on a scale of 30 (from -15 to 15, the value 0 being the middle position). The order of conditions for each participant was balanced using a Latin Square design to compensate for any effects within trials. Each participant performed 22 trials per condition, totalling 66 trials per participant; thus giving 1320 trials per condition and a total of 3960 points for the whole experiment.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. Point estimation task used in the experiment; participants were first asked to remember the location of a target position (A) and then to reposition a second point (B) from a randomly generated starting point on the y-axis back to the previously memorised target position.</ns0:p></ns0:div>
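The counterbalancing and trial counts described above can be reconstructed in a small sketch; the way Latin square rows are assigned to successive participants is an assumption, while the totals follow directly from the stated numbers:

```python
N_GROUPS = 3                 # pitch-only, one-reference, multiple-references (between subjects)
PARTICIPANTS_PER_GROUP = 20  # 60 participants in total
TRIALS_PER_CONDITION = 22
CONDITIONS = ["visual", "speech", "sonification"]  # within-subjects conditions per participant

# 3x3 Latin square rows used to balance condition order; assigning rows by
# rotating through participants is an assumption of this sketch.
LATIN_SQUARE = [
    ["visual", "speech", "sonification"],
    ["speech", "sonification", "visual"],
    ["sonification", "visual", "speech"],
]

def condition_order(participant_index):
    return LATIN_SQUARE[participant_index % len(LATIN_SQUARE)]

n_participants = N_GROUPS * PARTICIPANTS_PER_GROUP               # 60
trials_per_participant = TRIALS_PER_CONDITION * len(CONDITIONS)  # 66
print(trials_per_participant,                                    # 66 trials per participant
      TRIALS_PER_CONDITION * n_participants,                     # 1320 trials per condition
      trials_per_participant * n_participants)                   # 3960 trials in total
```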
<ns0:div><ns0:head n='3.2.1'>Point Estimation Task</ns0:head><ns0:p>The task to be completed in each trial was to move a point on the y-axis to a target position and to do so as accurately as possible. The task involved:</ns0:p><ns0:p>• Looking at a target position (in all conditions)</ns0:p><ns0:p>• Estimating its position based on its visual position on the axis (Figure <ns0:ref type='figure'>1</ns0:ref>)</ns0:p><ns0:p>• Using the keyboard arrow keys to move the test point to the estimated position (by relying on the visual, speech or sonification display)</ns0:p><ns0:p>In each trial, participants were first presented with a visual representation of the target position and were asked to memorise it. When the participants indicated they were ready to proceed, the system generated a random starting point on the y-axis from which the participants were required to use the cursor keys to move to the previously memorised target position (see Figure <ns0:ref type='figure'>1</ns0:ref>). Participants pressed a 'next' button to move to the next trial. In the non-visual conditions (Speech, Pitch, OneRef and MultRefs), participants could see the initial target position (without the reference tone(s)) but were presented with a blank screen hiding the randomly generated starting point and the axis when performing the second part of the task (i.e. they were unable to see the visual display). In this case, they had to rely on the spoken display or the sonifications to estimate the position of the point as they moved it to the target position. No accuracy feedback was provided between trials. Point positions were randomly generated by a computer program designed to ensure comprehensive coverage of the distribution of points along the axis across the 22 trials.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Experimental procedure for one participant.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.2'>Participants</ns0:head><ns0:p>We recruited 60 sighted participants to take part in this experiment (29 men and 31 women). The mean age was 26 (SD = 6.49). Participants were recruited through various means, including mailing lists and leaflet distribution. They were a mixture of university staff (both academic and non-academic), undergraduate and postgraduate students, and members of the public. All participants received a cash incentive for their participation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.3'>Procedure</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows the steps of the experimental procedure for each participant 1 . Upon arrival, participants were provided with an overview of the experiment and were asked to complete an initial questionnaire that asked them about demographic details, their musical training (in terms of years of practice) and their experience with non-visual interaction; they were also tested to establish whether or not they had perfect pitch perception. A total of 25 participants rated their musical training as beginner, 17 as intermediate and 6 as expert; 12 had no prior musical training. Participants had no prior experience with non-visual interaction, and only one participant had perfect pitch perception. Participants were then randomly assigned to one of the three groups with the exception that care was taken to ensure that the different musical abilities were broadly equally distributed between the groups. Participants were then asked to complete 22 trials per condition (visual, speech, and one of the non-speech sonification conditions). Before the trials began, participants were trained on the particular display they were going to use and were allowed to spend as much time as they wished to get familiar with the interfaces. In particular, participants were introduced to the different sonification mappings used and instructed to spend as much time as they needed until they felt familiar with the mappings used. Once familiar with the interfaces, participants then performed 4 trials similar to the actual trials used in the testing phases. Training typically lasted from 2 to 10 minutes per condition. We conducted informal interviews with the participants at the end of all the trials in order to discuss their impressions, preferences and experience. An entire session lasted between 30 minutes and 1 hour per participant.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.4'>Dependent Variables</ns0:head><ns0:p>The dependent variables were point estimation errors and target selection time. Point estimation errors were measured as the difference between estimated points' positions and the target positions. Target selection time was measured as the duration from the first keystroke pressed when moving a point to the instance the 'next' button press was logged.</ns0:p></ns0:div>
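For illustration, the two dependent variables could be derived from a trial log as follows; the log field names are hypothetical, and computing the error as an absolute difference is one plausible reading of the definition above rather than a statement of the authors' exact procedure:

```python
def point_estimation_error(estimated_position, target_position):
    # Difference between estimated and target positions; taking the absolute
    # value is an assumption of this sketch.
    return abs(estimated_position - target_position)

def target_selection_time(first_keystroke_ms, next_button_ms):
    # Duration from the first keystroke to the logged 'next' button press.
    return next_button_ms - first_keystroke_ms

# Hypothetical trial record, for illustration only.
trial = {"target": 7, "estimated": 5, "first_key_ms": 12400, "next_ms": 19150}
print(point_estimation_error(trial["estimated"], trial["target"]),      # 2
      target_selection_time(trial["first_key_ms"], trial["next_ms"]))   # 6750
```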
<ns0:div><ns0:head n='3.3'>Hypotheses</ns0:head><ns0:p>The main hypotheses of the experiment were:</ns0:p><ns0:p>H1: Participants will make significantly more point estimation errors when using the pitch-only sonification mapping compared to the one-reference and the multiple-references mappings. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>H2: Participants will make significantly more point estimation errors when using the one-reference sonification mapping compared to the multiple-references mapping.</ns0:p><ns0:p>H3: Participants will be significantly slower at point estimation tasks when using the multiplereferences sonification mapping compared to the pitch-only and the one-reference mappings.</ns0:p><ns0:p>H4: The one-reference and multiple-references mappings will yield better performances for estimating points near the origin.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>We used single-factor repeated measures ANOVAs with display type as a factor (3 levels: visual, speech, and sonification) and a confidence level of α = 0.05 to analyse data within groups against control conditions. We used Student t-tests when a statistically significant effect was detected to reveal differences between pairs. To analyse data across the three sonification conditions, we used single-factor independent measures ANOVAs with sonification type as a factor (3 levels: pitch-only, one-reference, and multiple-references) and a confidence level of α = 0.05. We used Tukey tests (HSD, 95% confidence level) and Bonferroni corrections when a statistically significant difference was found to reveal differences between sonification conditions.</ns0:p></ns0:div>
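A minimal sketch of this analysis pipeline in Python, assuming a long-format table of per-participant mean scores with illustrative column names (participant, group, display, error); it uses standard statsmodels and scipy routines rather than the authors' own analysis scripts:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def within_group_analysis(df_group: pd.DataFrame):
    """Repeated-measures ANOVA over display type (visual, speech, sonification),
    followed by pairwise paired t-tests; Bonferroni correction would then be
    applied to the pairwise p-values where appropriate."""
    anova = AnovaRM(df_group, depvar="error", subject="participant",
                    within=["display"]).fit()
    print(anova)
    wide = df_group.pivot(index="participant", columns="display", values="error")
    for a, b in [("visual", "speech"), ("visual", "sonification"),
                 ("speech", "sonification")]:
        print(a, "vs", b, stats.ttest_rel(wide[a], wide[b]))

def across_groups_analysis(df_sonification: pd.DataFrame):
    """One-way independent-measures ANOVA over sonification type,
    with Tukey HSD post hoc comparisons."""
    groups = [g["error"].values for _, g in df_sonification.groupby("group")]
    print(stats.f_oneway(*groups))
    print(pairwise_tukeyhsd(df_sonification["error"], df_sonification["group"], alpha=0.05))
```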
<ns0:div><ns0:head n='4.1'>Point Estimation Errors Within Groups</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows the mean point estimation error for each sonification condition as compared to the visual and speech control conditions in each group. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1'>Group 1: Pitch-Only Mapping</ns0:head><ns0:p>The ANOVA test for point estimation errors for Group 1 showed a significant main effect for display type (F(2, 38) = 66.589, p < 0.001, η 2 = 0.748). Pairwise Student t-tests showed that participants made significantly less errors when using the visual display (M = 0.67, S D = 0.07) compared to the pitch-only mapping condition (M = 2.77, S D = 0.2) (t = −2.566, p = 0.019), and when using a speech display (M = 0.96, S D = 0.1) compared to the pitch-only mapping condition (t = −8.547, p < 0.001). Differences between the visual and the speech control conditions were also statistically significant (t = −8.626, p < 0.001).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.2'>Group 2: One-Reference Mapping</ns0:head><ns0:p>Similarly, the ANOVA test for point estimation errors for Group 2 showed a significant main effect for display type (F(2, 38) = 45.901, p < 0.001, η 2 = 0.705). Pairwise Student t-tests showed that participants made significantly less errors when using the visual display (M = 0.52, S D = 0.16) compared to the one-reference mapping condition (M = 1.96, S D = 0.93) (t = −2.806, p = 0.011), and when using the speech display (M = 0.67, S D = 0.22) compared to the one-reference mapping condition (t = −6.784, p < 0.001). Differences between the visual and the speech control conditions were also statistically significant (t = −6.947, p < 0.001).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.3'>Group 3: Multiple-References Mapping</ns0:head><ns0:p>The ANOVA test for point estimation errors for Group 3 also showed a significant main effect for display type (F(2, 38) = 7.425, p < 0.002, η 2 = 0.586). Pairwise Student t-tests showed that participants made significantly less errors when using the visual display (M = 0.85, S D = 1.15) compared to the multiple-references mapping condition (M = 1.77, S D = 1.03) (t = −2.518, p = 0.021), and when using the speech display (M = 0.74, S D = 0.33) compared to the multiple-references mapping condition (t = −4.508, p < 0.001). Differences between the visual and the speech control conditions were not statistically significant in this case (t = −0.401, p = 0.693).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Point Estimation Errors Across Groups</ns0:head><ns0:p>The ANOVA test for point estimation errors across groups showed a significant main effect for sonification type (F(2, 57) = 5.908, p = 0.005, η 2 = 0.127). Post hoc pairwise comparisons using Tukey HSD showed that participants who used the pitch-only mapping made significantly more point estimation errors (M = 2.77, S D = 0.95) when compared to participants who used the one-reference mapping (M = 1.96, S D = 0.93) (p = 0.03) and when compared to participants who used the multiple-references mapping (M = 1.77, S D = 1.33) (p = 0.006). There was no significant difference between the one-reference and the multiple-references mappings (p = 0.806) (see Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). These results support hypothesis H1 and reject hypothesis H2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Target Selection Time Within Groups</ns0:head><ns0:p>The ANOVA tests showed that, for all three groups, there was a significant main effect of display type on target selection time (Group 1: F(2, 38) = 33.224, p < 0.001, η 2 = 0.674; Group 2: F(2, 38) = 73.601, p < 0.001, η 2 = 0.911; Group 3: F(2, 38) = 59.936, p < 0.001, η 2 = 0.732). Pairwise Student t-tests showed that participants in all three groups were significantly faster at estimating the position of points on the visual control condition when compared to both the speech condition and the corresponding sonification condition. Participants in all three groups were also significantly faster in the speech control conditions when compared to the sonification conditions (see Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Target Selection Time Across Groups</ns0:head><ns0:p>The ANOVA test for target selection time across groups showed a significant main effect for sonification type (F(2, 57) = 6.577, p = 0.003, η 2 = 0.233). Post hoc pairwise comparisons using Tukey HSD showed no significant effect between participants who used the pitch-only mapping and the one-reference mapping (p = 0.975). However, there was a significant difference in target selection times between participants who used the multiple-references mapping and the pitch-only mapping (p = 0.006) and between the multiple-references mapping and the one-reference mapping (p = 0.01). As shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>, participants who used the multiple-references mapping were significantly slower than those who used the other two sonification mappings. These results support hypothesis H3.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Point Estimation Errors Within Ranges of Target Positions</ns0:head><ns0:p>To test hypothesis H4, we examined differences in point estimation errors across groups within a variety of target ranges. We explored a number of strategies for dividing the scale and grouping target positions as shown in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>. In the first strategy, we divided the scale into four equal segments at positions 0, 7 and -7. In a second strategy, we divided the scale into six equal segments at positions 0, 5, -5, 10 and -10. In the third strategy, we gradually increased the range of target positions considered in the analysis starting from -1 and 1 all the way to -14 and 14. We ran a two-way mixed ANOVA to examine the effects of interactions between range (within-subjects variable) and type of sonification (between-subjects variable) on participants' performance on point estimation tasks. The aim was to examine whether, and where on the scale, a given type of sonification would be most effective. Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> summarises the results we obtained from analysing point estimation errors using the third grouping strategy, which was particularly effective at revealing differences between the sonification conditions. In particular, analyses showed distinct differences in performances across four ranges of target positions: Range 1 encompassing target points between positions -3 and 3; Range 2 for target points between -5 and 5; Range 3 for target positions between -11 and 11; and Range 4 for target points between -14 and 14. Target points located within the range of -1 and 1 were excluded from the analyses because this range did not contain enough data points to reliably run the statistical tests. There was no significant main effect of range (F(3, 171) = 1.972, p = 0.159) and no significant range × type of sonification interaction (F(6, 171) = 1.963, p = 0.131). There was a significant main effect of sonification type (F(2, 57) = 3.23, p = 0.047, η 2 = 0.102). Results of simple effects were as follows:</ns0:p></ns0:div>
<ns0:div><ns0:p>• Range:</ns0:p><ns0:p>1. Pitch sonification: There was a significant main effect of range (F(3, 55) = 4.339, p = 0.008, η 2 = 0.191); in particular, participants made significantly more errors in Range 1 compared to Ranges 2, 3 and 4 (p = 0.001, p = 0.029, p = 0.009, respectively). They also made significantly more errors in Range 3 compared to Range 4 (p = 0.017).</ns0:p><ns0:p>2. One-Reference sonification: There was a significant main effect of range (F(3, 55) = 3.766, p = 0.016, η 2 = 0.17), with participants making significantly more errors in Range 3 compared to Range 4 (p = 0.018).</ns0:p><ns0:p>3. Multiple-References: There was a significant main effect of range (F(3, 55) = 3.223, p = 0.029, η 2 = 0.15), also with participants making significantly more errors in Range 3 compared to Range 4 (p = 0.01).</ns0:p><ns0:p>• Sonification type:</ns0:p><ns0:p>1. Range 1: There was a significant main effect of sonification type (F(2, 57) = 3.96, p = 0.025, η 2 = 0.122). In this range, participants who used the pitch-only sonification made significantly more errors than those who used the one-reference (p = 0.027) and the multiple-references sonifications (p = 0.013).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Range 2:</ns0:head><ns0:p>There was no significant main effect of sonification type on participants' performances in Range 2 (F(2, 57) = 1.156, p = 0.218).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Range 3:</ns0:head><ns0:p>There was no significant main effect of sonification type on participants' performances in Range 3 (F(2, 57), p = 0.093); however, participants using the pitch-only sonification made significantly more errors than those who used the multiple-references sonification (p = 0.035).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Range 4:</ns0:head><ns0:p>There was a significant main effect of sonification type on participants' performances in this range (F(2, 57) = 3.99, p = 0.024, η 2 = 0.052). Participants who used the pitch-only sonification made significantly more errors than those who used the one-reference (p = 0.06) and the multiple-references (p = 0.008) sonifications.</ns0:p><ns0:p>Overall, the above results provide partial support for hypotheses H2 and H4. They show that participants' performance using the multiple-references mapping was fairly consistent across the scale, outperforming the pitch-only mapping in all but one range of target points (Range 2). On the other hand, performance using the one-reference mapping was less consistent, failing to outperform the pitch-only mapping within ranges 2 and 3 of target points.</ns0:p></ns0:div>
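The grouping strategies described in Section 4.5 can be sketched as follows; the boundary conventions and the treatment of the four reported ranges as nested symmetric intervals are assumptions made for illustration only:

```python
def strategy_segments(target, cut_points):
    """Assign a target position to one of the segments delimited by the cut points
    (the boundary convention, cut points closing the lower segment, is an assumption)."""
    for i, cut in enumerate(sorted(cut_points)):
        if target <= cut:
            return i
    return len(cut_points)

# Strategy 1: four segments cut at -7, 0 and 7.
# Strategy 2: six segments cut at -10, -5, 0, 5 and 10.
seg4 = strategy_segments(6, [-7, 0, 7])            # -> segment index 2
seg6 = strategy_segments(6, [-10, -5, 0, 5, 10])   # -> segment index 4

# Strategy 3: gradually widening symmetric ranges, excluding targets within -1..1
# (too few data points, as noted in the text); the four reported analysis ranges
# are treated here as nested intervals, which is an assumption of the sketch.
def in_symmetric_range(target, half_width):
    return 1 < abs(target) <= half_width

ranges = {1: 3, 2: 5, 3: 11, 4: 14}
membership = {r: in_symmetric_range(6, k) for r, k in ranges.items()}
print(seg4, seg6, membership)
```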
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>The goal of the experiment presented in this paper was to contrast different ways of conveying reference markers using sonification and examine the effects that these have on users when performing non-visual point estimation tasks. The hypotheses of the experiment addressed the question of what effect the addition of reference tones has on users' performance when using different types of sonification-based displays. With regard to performances in the control conditions, the results from the experiment showed that participants' performance on point estimation tasks was significantly affected when using the sonification displays. Point estimation errors were lowest in the visual condition, which was anticipated given that our participants were sighted and had no prior experience with non-visual interaction. Errors were also significantly lower when participants used a speech-based display compared to sonification conditions, which we also anticipated because of the precise nature of spoken feedback.</ns0:p><ns0:p>In relation to the main question addressed, the results showed that there were differences between performances across the three sonification conditions, suggesting that the way reference markers are conveyed using these sonifications does affect target accuracy. Users made significantly more point estimation errors when using the pitch-only sonification mapping compared to the one-reference and the multiple-references mappings. Meanwhile, the mean error across the whole range of the scale investigated in this experiment was similar for one-reference and multiple-references mappings. This shows that the addition of reference markers is important and needs to be taken into consideration when designing non-visual interaction techniques. Similar findings were reported for adding context information when sonifying graphs <ns0:ref type='bibr' target='#b26'>(Smith and Walker, 2002;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nees and Walker, 2008)</ns0:ref> and scroll bars <ns0:ref type='bibr'>(Yalla and Walker, 2008)</ns0:ref>. However, previous research has only explored the addition of context in the case of passive listening. The experiment presented in this paper shows that such benefits extend to interactive sonification where users have active control over the audio output as they explore a data set from an unknown starting point.</ns0:p><ns0:p>There were also differences between the three sonification mappings when compared in terms of target selection times. The results showed that participants were significantly slower at estimating the position of points when using a multiple-references mapping. We had anticipated that participants would be faster when using the pitch-only and the one-reference mappings. This was because reference markers in the multiple-references mapping are presented by aggregating a succession of tones of 50 milliseconds each, which automatically results in a lengthier display compared to a single 100 milliseconds tone in the one-reference mapping. However, the informal interview discussions also revealed that participants tended to spend more time interpreting the sonified reference information they received through the multiple-references mapping. 
Combined with the proportion of point estimation errors highlighted above, these results explain participants' superior performances under this condition and suggest that there is a trade-off between speed and accuracy related to the amount of reference information to embed in a sonification of this kind. Designers should therefore take this trade-off into consideration when designing sonification of reference information.</ns0:p><ns0:p>For a more in-depth analysis, we examined how point estimation errors differed across various ranges of target positions. Participants who used the pitch-only mapping made significantly more point estimation errors across all target positions with the exception of the range between -5 and 5 on the scale. Analysis of performances using the one-reference and multiple-references mappings revealed more varied results. The mean error between these two sonification mappings was similar for target positions near the origin and near the extreme ends of the scale. However, the mean error between the one-reference and the pitch-only mappings was similar for target positions within the middle ranges of the scale (from -6 and 6 to -11 and 11). The performance of participants who used the one-reference mapping was therefore not as consistent as that of participants who used the multiple-references mapping, which consistently outperformed the pitch-only mapping across these ranges.</ns0:p><ns0:p>We referred back to the subjective data gathered from the informal interviews to further explore the reasons that might explain the above results. Participants seemed to have found it difficult to estimate target positions in the middle ranges of the scale due to the lack of accurate reference information. When close to the origin, it was easy to either count the number of tones in a succession of multiple reference tones or to judge pitch differences between the tone of a point's position and the origin tone. However, as the succession of tones and pitch differences increased, counting tones and comparing pitch differences became more difficult and less accurate. Multiple references contained too many points that were presented too quickly to be counted, and pitch differences were too far apart to be judged accurately. Thus, the analysis of participants' point estimation errors in these middle ranges seems to suggest that not only was there a threshold at which the information conveyed through sonified reference markers became less accurate -and hence less useful -but also that such a threshold was different for the multiple-references and the one-reference mappings. Multiple reference tones continued to give useful information throughout the middle ranges, while the one reference tone became less accurate as soon as the target position moved farther from the origin. This is also illustrated by the similarity in mean errors between the pitch-only and the one-reference mapping in target positions located in the middle of the scale.</ns0:p><ns0:p>Interestingly, participants also commented that they did not often rely on the sonified reference markers when targeting points near the extreme ends of the scale (from -12 and 12 to -14 and 14). This confirms that multiple reference tones, while useful across the middle ranges, still reached a threshold where the information they conveyed became redundant. These findings confirm those reported elsewhere in the literature <ns0:ref type='bibr' target='#b27'>(Smith and Walker, 2005)</ns0:ref>. 
Although they do not explain why participants who used the multiple-references mapping still outperformed those who used a pitch-only mapping in this range of target positions, they suggest a more dynamic redesign of the sonification mappings. For example, while adding reference information is important, being able to switch the point of origin from the middle of the scale (representing 0 in our case) to mark other areas of interest, such as the extreme values, might improve point estimation. This technique would be similar to the use of auditory beacons for audio-only navigation <ns0:ref type='bibr'>(Walker and Lindsay, 2006)</ns0:ref>, and our results show that such a technique might be successfully adapted to support orientation within sonification of graphs. This technique might only be applicable when target positions are known in advance, however, which means that providing reference to one or more static origins is still important for exploratory interaction.</ns0:p><ns0:p>Overall, the above findings contribute to research on non-visual interaction with graphs by extending relevant research with the investigation of active rather than passive point estimation tasks (e.g. <ns0:ref type='bibr' target='#b26'>Smith and Walker, 2002;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nees and Walker, 2008)</ns0:ref>, i.e. where users have direct control over the auditory display as they estimate the position of a given point. Also, existing work that investigated audio-haptic interaction with graphs did not explicitly address the question of support for point estimation tasks (e.g. <ns0:ref type='bibr' target='#b20'>McGookin and Brewster, 2007)</ns0:ref>. Given the nature of the task examined in the presented study, our findings can be used to support better designs of graphing and drawing applications, where point estimation forms an integral part of editing graphs and sketching, as well as more general applications involving pointing and target acquisition actions, such as interfaces for virtual and augmented reality <ns0:ref type='bibr' target='#b17'>(Li et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b7'>Fiannaca et al., 2013)</ns0:ref>, and for audio-haptic and tangible interactions with graphed data. There are limitations to our findings, however. Results may differ for scales larger than the one used in this experiment. In particular, it is likely that larger scales would require more references and context information embedded in the axis to ease navigation and orientation. Additionally, we have restricted our investigation to one-dimensional vertical movements. Results may differ if sonification of reference markers is used to support two-dimensional movements. Finally, it would also be interesting to examine different types of sonifications, e.g. using a musical scale, and how these compare to the exponential sonification used in the presented experiment, and to investigate the impact of physical muscle memory over the trials as a possible confounding variable.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The mean point estimation error within groups. Error bars represent the Standard Deviation (Group 1: Pitch-only, Group 2: One-Ref, Group 3: Multi-Refs).</ns0:figDesc><ns0:graphic coords='9,224.45,331.57,248.15,148.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The mean target selection time (milliseconds) within groups. Error bars represent the Standard Deviation (Group 1: Pitch-only, Group 2: One-Ref, Group 3: Multi-Refs).</ns0:figDesc><ns0:graphic coords='10,245.13,395.86,206.79,124.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The various groupings we explored to analyse point estimation errors within target ranges.</ns0:figDesc><ns0:graphic coords='11,265.81,159.50,165.42,186.37' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Effects of interactions between range and type of sonification.</ns0:figDesc><ns0:graphic coords='11,245.13,511.11,206.78,183.35' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Pitch values (in Hz) used to sonify the 30 points scale.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>Position Pitch Position Pitch Position Pitch Position Pitch</ns0:cell></ns0:row><ns0:row><ns0:cell>-15</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>-7</ns0:cell><ns0:cell>324</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>877</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>2371</ns0:cell></ns0:row><ns0:row><ns0:cell>-14</ns0:cell><ns0:cell>135</ns0:cell><ns0:cell>-6</ns0:cell><ns0:cell>367</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>993</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>2685</ns0:cell></ns0:row><ns0:row><ns0:cell>-13</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>-5</ns0:cell><ns0:cell>416</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>1124</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>3040</ns0:cell></ns0:row><ns0:row><ns0:cell>-12</ns0:cell><ns0:cell>174</ns0:cell><ns0:cell>-4</ns0:cell><ns0:cell>471</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>1273</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>3443</ns0:cell></ns0:row><ns0:row><ns0:cell>-11</ns0:cell><ns0:cell>197</ns0:cell><ns0:cell>-3</ns0:cell><ns0:cell>533</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1442</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>3899</ns0:cell></ns0:row><ns0:row><ns0:cell>-10</ns0:cell><ns0:cell>223</ns0:cell><ns0:cell>-2</ns0:cell><ns0:cell>604</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1633</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>4415</ns0:cell></ns0:row><ns0:row><ns0:cell>-9</ns0:cell><ns0:cell>253</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>684</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>1849</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>5000</ns0:cell></ns0:row><ns0:row><ns0:cell>-8</ns0:cell><ns0:cell>286</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>774</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2094</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='3'>EXPERIMENT. Previous studies have shown that the addition of context cues, such as tick marks and labels, can have added benefits for non-visual interaction with graphs (e.g.<ns0:ref type='bibr' target='#b26'>Smith and Walker, 2002;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nees and Walker, 2008)</ns0:ref>. Estimating the position of a point in space forms an integral part of reading and/or constructing graph-based representations (c.f.<ns0:ref type='bibr' target='#b16'>Larkin and Simon, 1987;</ns0:ref><ns0:ref type='bibr' target='#b28'>Tufte and Graves-Morris, 1983)</ns0:ref>.</ns0:note>
<ns0:note place='foot' n='6'>CONCLUSIONS. We presented an experiment that examined the effects of adding reference tones when designing sonifications to support non-visual point estimation tasks. Our results showed that adding context information in the form of reference markers is important for supporting navigation and exploration, and that care must be taken to account for thresholds of information redundancy. In our case, multiple references as a succession of tones was useful for providing a sense of distance from a target position but became redundant when conveying long distances. Similarly, a single static pitch reference was also useful, but harder to interpret for targets farther away from a static origin. We also found that using multiple reference tones supported more consistent superior performance on point estimation tasks particularly in the middle ranges of an axis, and recommended that sonification of reference information should be designed to account for a speed/accuracy trade-off and allow for dynamic control of reference direction to account for both known and unknown target points. These findings have implications for the design of auditory graphs and more generally for interfaces that require target acquisition and employ interactive sonification as a display technique.</ns0:note>
</ns0:body>
" | "Sonification of Reference Markers for Auditory Graphs:
Effects on Non-Visual Point Estimation Tasks
Responses to reviewers’ comments
We would like to thank the reviewers very much for their time, and their constructive
feedback and comments, which we have addressed as follows:
Reviewer 1’s comments
Comment: There is an additional ”Further Discussion” section which appears
to deviate from the standard form for Peerj
Response: We have removed the section in question and incorporated the content in a
single Discussion section to match the standard form of presentation.
Comment: There seems to be segments of the literature that are missing, in
particular work from the augmented reality sphere (which is relevant given
the device being used in the experiment). Examples include Air Pointing by
Cockburn, Gutwin et al., haptic target acquisition by Fiannca. These possibly
are relevant.
Response: We have added more literature review on the use of proprioception and tactile
interaction to support pointing actions in Section 2.3.
Comment: The introduction of speech as a control condition seems to come
out of no where. It is not clear why it was introduced.
Response: We have addressed this comment in two ways:
1. We brought forward the introduction of the control conditions (both visual and
speech) to the abstract and the introduction sections
2. We provided further justifications for the control conditions in the Experimental
Design Section 3.2
Additionally, in relation to this comment, we also took on board reviewer 2’s comment
on the description of our experimental design and changed this from “mixed between-within
subjects” to “between-subjects experimental design” and within-subjects baseline control
conditions.
Comment: By separating your variables out into one way ANOVAs you miss
the potential for (interesting) interaction effects between the different variables.
Response: We note that it is not possible to conduct such an analysis on the first part
of the data to test hypotheses H1, H2, and H3. This is because participants in each group
performed point estimation tasks under the same control conditions which means that no
factorial analysis is possible and the analysis has to proceed with individual ANOVAs. Indeed as mentioned by Reviewer 2 as a minor comment “Because each sonification condition
included two control conditions, the analysis also was forced to proceed with multiple individual ANOVAs for each group [..] I don’t think it substantively affected the interpretation
of any of the findings.”
Comment: The introduction of the ranges as a variable is interesting; however,
again you are separating each out where there may be interactions that are
happening with the other variables. As a result you are also over testing - each
test has the potential of finding an effect and you are compounding that with
each of the ranges. You should be able to integrate these ranges as emergent
variables into your analysis.
Response: We agree with Reviewer 1 that, in this case, it is possible to examine interactions between sonification type and range instead of separating out these variables into
one-way ANOVAs. We have therefore conducted a new factorial analysis to test hypothesis
H4 and presented those results in Section 4.5 “Point Estimation Errors Within Ranges of
Target Position”. This new analysis revealed similar results to the ones that we previously
reported with multiple ANOVAs, which meant that our discussions and general conclusions
remained the same.
Comment: There is a lack of discussion about limitations in the design, the possible confounds that may impact the results (such as physical muscle memory
over the trials).
Response: We have highlighted the current discussion about the limitations and added
the potential of physical muscle memory to be a confounding variable on the results over
the trials as suggested.
Reviewer 2’s comments
Comment: I was not totally clear on the task structure (section 2.2.1). I
think more detail should be provided. Was the target tone presented visually,
with sound, or both? Or did the presentation of the target tone match the
presentation format of the test condition (speech for speech, etc.)? Was a
target tone for the reference conditions presented with references or as a single
tone without references? As participants responded, did they always see the
visual display? If yes, could they have visually anchored the target and simply
used the visual slider to match its remembered position in space?
Response: We have added a more detailed description of the structure of the task in
section 3.2.1 to provide clarifications.
Comment: did the method of interaction with the keyboard possibly influence
results in addition to the auditory display, since the interaction method was
(necessarily) slightly different across conditions?
Response: We note that the physical controls were exactly the same across all conditions.
What would have been different is participants’ choice of strategies to solve the task when
using each type of display, which we have discussed in Section 5.
Comment: The task is described as a point estimation task, but, if I understand
correctly, the task required: 1) remembering the target tone; and 2) using the
keyboard to move the test tone until it matched the remembered target tone.
As such, the current description in 2.2.1 makes the task sound more like a
perceptual matching task - sort of like a method of adjustment approach - than
a point estimation task. Point estimation usually requires the participant to
generate and assign a value to the magnitude/quantity represented by the
sonification. I don’t see this as a critical flaw, but I think a clarification of the
task (and if necessary a reconsideration of whether it actually involved point
estimation or instead was more of a memory matching task) could strengthen
the clarity and precision of the manuscript.
Response: To clarify, the task involved:
1. Looking at a target position (which was presented visually and not as a tone under
all conditions)
2. Estimating its position
3. Using the keyboard to reproduce the estimated position using a visual, speech or
sonification display (no-reference, one-reference, multiple-references)
Thus, we agree with the reviewer that the task has an element of perceptual matching
(particularly in the visual control condition where the target point and test point were
presented using the same modality). However, this was not the case in the sonification
and speech conditions, because participants had to estimate the value of the point in one
modality and reproduce it in another. This is why we refer to this task as point estimation.
We have added this clarification in the description of the experimental task in section 3.2.1.
We also note that we have pointed out the difference between what we called passive and active
point estimation at the end of section 2.3
Comment: I think the Discussion would be strengthened if the manuscript
linked the study task to specific potential applications of this type of task.
Response: We have added a general description of potential applications in the Discussion section
Comment: Effect sizes would be nice to report for significant results.
Response: We have added effect sizes for significant results throughout the paper
Comment: In 2.2.4, the DV was the absolute value of the difference, correct?
This should be clarified. Correspondingly, the Y axis in Figure 2 shows negative values that should be removed from the figure if no negative values were
possible.
Response: We have removed the negative values for the graph
Comment: Yalla and Walker used a design that was similar to the manuscript’s
one-reference mapping. They called it a ‘double-tone’ auditory scroll bar. It
might be worth consulting: Yalla, P., & Walker, B. N. (2008). Advanced
auditory menus: Design and evaluation of auditory scroll bars. Proceedings
of the Tenth International ACM SIGACCESS Conference on Computers and
Accessibility (ASSETS2008), Halifax, Canada (13-15 October, 2008), pp. 105-112.
Response: We have consulted this paper and found it to be very relevant indeed. We
have added a reference to it in the Discussion section
Comment: I wanted more explanation/extrapolation after the single sentence
that was at the end of 4.1.4.
Response: This part of the paper has been changed in response to reviewer 1’s comments
regarding the use of mixed ANOVA to examine interaction between variables. The ending
of the corresponding section now provides the following explanation:
“Overall, the above results provide support for hypothesis H4 as well as partial
support for hypothesis H2. They show that participants’ performances using
the multiple-references mapping was fairly consistent across the scale, outperforming the pitch-only mapping in all but one range of target points. On the
other hand, performances using the one-reference were less consistent, failing
to outperform the pitch-only mapping within ranges 2 and 3 of target points.”
" | Here is a paper. Please give your review comments after reading it. |
78 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Psychosocial risks, also present in educational processes, are stress factors particularly critical in state-schools, affecting the efficacy, stress, and job satisfaction of teachers. This study proposes an intelligent algorithm to improve the prediction of psychosocial risk, as a tool for the generation of health and risk prevention assistance programs. Methods. The proposed approach, Physical Surface Tension-Neural Net (PST-NN), applied the theory of surface tension in liquids to an artificial neural network (ANN), in order to model four risk levels (low, medium, high and very high psychosocial risk). The model was trained and tested using the results of tests for measurement of the psychosocial risk levels of 5,443 teachers. Psychosocial factors, as well as physiological and musculoskeletal symptom factors, were included as inputs to the model. The classification efficiency of the PST-NN approach was evaluated by using the sensitivity, specificity, accuracy and ROC curve metrics, and compared against other techniques such as the Decision Tree model, Naïve Bayes, ANN, Support Vector Machines, Robust Linear Regression and the Logistic Regression Model. Results. The modification of the ANN model, by the adaptation of a layer that includes concepts related to the theory of physical surface tension, improved the separation of the subjects according to the risk level group, as a function of the mass and perimeter outputs. Indeed, the PST-NN model showed better performance in classifying the psychosocial risk level of state-school teachers than the linear, probabilistic and logistic models included in this study, obtaining an average accuracy value of 97.31%.</ns0:p><ns0:p>Conclusions. The introduction of physical models, such as the physical surface tension,</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Psychosocial risks are stress factors that can alter and unbalance a person's resources and abilities to manage and respond to a flow of work activity, negatively affecting physical and psychological health <ns0:ref type='bibr'>(Sauter & Murphy, 1984)</ns0:ref>. Among initial prediction models that enable identification of risks associated with work-related stress <ns0:ref type='bibr'>(Karasek, 1979)</ns0:ref> and workplace variables, some are based on mental stress resulting from workplace demands and decision-making. Workplace variables may cause the worker to feel their effort is inadequate, in accordance with the compensation obtained therefrom, and contribute to the development of work-related stress <ns0:ref type='bibr'>(Siegrist, 1996)</ns0:ref>.</ns0:p><ns0:p>This situation is particularly critical for state-school teachers, where work-related stress is present in educational processes <ns0:ref type='bibr'>(Kinman, 2001)</ns0:ref>. A previous study <ns0:ref type='bibr'>(Collie, Shapka & Perry, 2012)</ns0:ref> shows how teachers' perception of their work environment influences levels of teaching efficacy, stress, and job satisfaction. The study demonstrates that the teachers' perceptions of students' motivation and behavior have the highest risk level. Workplace variables directly impact the perception of well-being among participants. Stress is negatively associated with the teaching efficacy variable. Additionally, stress directly influences the sense of job satisfaction, workload, and teaching efficiency. Indeed, prediction of psychosocial risk levels in state-school teachers is fundamentally important as a tool for the generation of health and risk prevention assistance programs.</ns0:p><ns0:p>Similar studies, focused on populations other than teachers, have used machine learning techniques such as Dynamic Bayesian Networks (DBN), Logistic Regression, Support Vector Machine, and Naïve Bayes classifiers <ns0:ref type='bibr'>(Wenhui Liao et al., 2005;</ns0:ref><ns0:ref type='bibr'>Subhani et al., 2017)</ns0:ref>, to recognize patterns associated with workplace stress and to detect mental stress at two or multiple levels. Variables such as facial expressions, eye movements, physiological conditions, behavioral data from activities in which users interact with the computer, and performance measurements have been considered in those previous studies. A high level of stress is associated with symptoms including rapid heartbeat, rapid breathing, increased sweating, cold skin, cold hands and feet, nausea, and tense muscles, among others. Accuracies of 94.6% for two-level identification of stress and 83.4% for multiple-level identification have been reported.</ns0:p><ns0:p>Artificial neural networks (ANN) are a classification technique that in recent years has regained importance thanks to technological improvements such as deep learning <ns0:ref type='bibr'>(Saidatul et al., 2011;</ns0:ref><ns0:ref type='bibr'>Sali et al., 2013)</ns0:ref>. Among the crucial components of deep learning are the neural network activation functions, which are mathematical equations that determine the output of the ANN model, its accuracy, and also its computational efficiency. 
Different linear and nonlinear activation functions have been proposed in the literature <ns0:ref type='bibr'>(Tzeng, Hsieh & Lin, 2004)</ns0:ref>, each one with its advantages and disadvantages, but reporting a better performance when nonlinear mathematical equations are included. The present work introduces a novel approach based on a modification in the activation function of the neural network, based on the theory of surface tension, in order to optimize the convergence of solutions in the classification processes. Indeed, the neural network calculates the desired output, using the surface tension function instead of the sigmoid function. In terms of mass and perimeter, these two surface tension equation parameters intervene to replace the network sigmoid function, with the aim to reduce data variability and dimensionality, and to improve the response in the classification and resulting error.</ns0:p><ns0:p>In the present study, the development of an new approach of neural network, based on Physical Surface Tension <ns0:ref type='bibr'>(Jasper, 1972)</ns0:ref> to model and predict psychosocial risk levels among Colombian state-school teachers, is presented. The Physical Surface Tension-Neural Net (PST-NN) approach is applied to psychosocial factors, musculoskeletal and physiological variables, present in academic environments in state-schools, in order to recognize their patterns, and thereby predict the type of risk to which a new individual may be exposed in such a work environment.</ns0:p><ns0:p>The next part of the document is organized as follows: first, the database, the preprocessing of the data, the definition of the new PST-NN approach, and the applied statistical tests, are described in Materials & Methods section; then, the Results section contains information about the training and test of the PST-NN approach, and its comparison with other published techniques; finally, the results are discussed and concluded in Discussion and Conclusions sections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>In this study, the results of tests for measurement of the psychosocial risk levels of 5,443 teachers, in five Colombian state-schools in cities of the same area, were analyzed. The data were obtained over a period of one and a half years. The dataset is a self-administered survey applied by a labor psychologist, and the study was approved by the public health ethics committee of Universidad Nacional de Colombia, Manizales campus (Acta 01, SFIA-0038-17, legal document Mz. ACIOL-009-17, January 18, 2017). The dataset can be consulted at https://zenodo.org/record/1298610 <ns0:ref type='bibr'>(Mosquera, Castrillón Gómez & Parra-Osorio, 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Database and Data Pre-processing</ns0:head><ns0:p>The dataset contains information about the following variables: i) psychosocial; ii) physiological, and; iii) variables associated with pain and musculoskeletal disorders. Psychosocial risk factors may be separated into two main classes: those which have negative effects on health, and those which contribute positively to the worker's well-being. Although both are present in all work environments, the present study considered those which negatively affect health in academic public-schools organizations <ns0:ref type='bibr'>(El-Batawi, 1988;</ns0:ref><ns0:ref type='bibr'>Bruhn & Frick, 2011;</ns0:ref><ns0:ref type='bibr'>Lippel & Quinlan, 2011;</ns0:ref><ns0:ref type='bibr'>Weissbrodt & Giauque, 2017;</ns0:ref><ns0:ref type='bibr'>Dediu, Leka & Jain, 2018)</ns0:ref>.</ns0:p><ns0:p>Among the risk factor variables associated with work environment analysis, there was a total of 131 input variables: X ij = (psychosocial factors, j = 1, …, 123), P ij = (physiological factors, j =1, … 3) and M ij = (musculoskeletal symptoms, j = 1, … 5), where, i is the subject under study. Output variables were identified as the level of risk in which the person may be characterized E ij = Class [low risk (E 1 ), medium risk (E 2 ), high risk (E 3 ), and very high PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>risk (E 4 )]. Surface electromyography was performed to corroborate the musculoskeletal problems reported by participants with medium, high, and very high risk levels, and confirmed in their clinical history. Electromyography data were collected with a BITalino (r)evolution Plugged kit (PLUX Wireless Biosignals S.A., Lisbon, Portugal) and validated by a medical specialist to establish whether the participants actually had musculoskeletal problems.</ns0:p><ns0:p>Redundant psychosocial factors (X ij ) were filtered by rank importance of predictors using the ReliefF algorithm, Equation (1) <ns0:ref type='bibr'>(Robnik-Šikonja & Kononenko, 2003)</ns0:ref>, with the goal of identifying noisy variables in the dataset using the Chebyshev metric. ReliefF located important predictors through the 10 nearest neighbours and grouped the 123 X ij independent factors. Predictors were listed by ranking, and those of greatest importance were selected; the weights were returned in the same order as the predictors. Distances between factor pairs were then measured again at these weights, and the factor with the lowest total distance was chosen, which yielded 12 X ij factors per group. Physiological (P ij ) and musculoskeletal symptom variables (M ij ) were then added. The algorithm penalises predictors that produce different values for neighbours within the same risk level group (X ir , where r = 1, …, 4 represents the risk level: low (X i1 ), medium (X i2 ), high (X i3 ), and very high risk (X i4 )), and rewards those that produce different values for neighbours in different groups. ReliefF initially sets all predictor weights to 0 and then iteratively selects a random instance; its k nearest neighbours in each class are identified, and the weight of every attribute is updated <ns0:ref type='bibr'>(Robnik-Šikonja & Kononenko, 2003, p. 26)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>W[A] := W[A] - \sum_{j=1}^{k} \frac{diff(A, R_i, H_j)}{m \cdot k} + \sum_{C \neq class(R_i)} \left[ \frac{P(C)}{1 - P(class(R_i))} \sum_{j=1}^{k} \frac{diff(A, R_i, M_j(C))}{m \cdot k} \right] \quad (1)</ns0:formula><ns0:p>Where R_i is a randomly selected instance; H_j are its k nearest hits (k-nn with the same class); M_j(C) are its k nearest misses (k-nn from a different class C); W[A] is the quality estimation of attribute A; P(C)/(1 - P(class(R_i))) weights the miss contribution by the prior probabilities of the miss classes; and m is the number of times the process is repeated.</ns0:p></ns0:div>
<ns0:div><ns0:p>In total, 20 input variables E ij = X ij + P ij + M ij were selected (Tables <ns0:ref type='table' target='#tab_0'>1 -3</ns0:ref>): twelve psychosocial variables X ij (j = 1, …, 12); three physiological variables P ij (j = 1, …, 3); and five variables associated with musculoskeletal symptoms M ij (j = 1, …, 5). These variables were normalized in accordance with Equation 2.<ns0:formula xml:id='formula_2'>E_{normalized} = \frac{E - E_{min}}{E_{max} - E_{min}} \quad (2)</ns0:formula><ns0:p>Where E corresponds to the variable to be normalized, E_max is the maximum value of each variable, E_min is the minimum value, and E_normalized is the resulting normalized variable used as input to the network.</ns0:p></ns0:div>
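To make the pre-processing step concrete, below is a minimal Python/NumPy sketch of the min-max normalization of Equation 2 and of a simplified ReliefF weight update in the spirit of Equation 1. It is illustrative only: the toy data, the Euclidean neighbour search (the paper mentions the Chebyshev metric), and all variable names are assumptions, not the authors' Matlab implementation.

```python
import numpy as np

def min_max_normalize(X):
    """Column-wise min-max normalization as in Equation 2."""
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min + 1e-12)

def relieff_weights(X, y, k=10, n_iter=200, seed=None):
    """Simplified ReliefF attribute weights (Equation 1).
    Assumes X is already normalized, so diff() is the absolute difference."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes, priors = np.unique(y, return_counts=True)
    priors = priors / n
    W = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)                     # random instance R_i
        xi, ci = X[i], y[i]
        for c, p_c in zip(classes, priors):
            idx = np.where(y == c)[0]
            idx = idx[idx != i]
            # k nearest neighbours of R_i inside class c (Euclidean here)
            near = idx[np.argsort(np.linalg.norm(X[idx] - xi, axis=1))[:k]]
            diff = np.abs(X[near] - xi).sum(axis=0) / (n_iter * k)
            if c == ci:                         # hits lower the weight
                W -= diff
            else:                               # misses raise it, prior-weighted
                W += (p_c / (1.0 - priors[classes == ci][0])) * diff
    return W

# toy usage: keep the 20 highest-weighted of 131 attributes
X = min_max_normalize(np.random.rand(300, 131))
y = np.random.randint(1, 5, size=300)           # four risk levels
top20 = np.argsort(relieff_weights(X, y))[::-1][:20]
```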
<ns0:div><ns0:head>Basis of the surface tension-neural net algorithm (PST-NN)</ns0:head><ns0:p>The approach was based on the theory of liquid surface tension <ns0:ref type='bibr'>(Macleod, 1923;</ns0:ref><ns0:ref type='bibr'>Jasper, 1972;</ns0:ref><ns0:ref type='bibr'>Tyson & Miller, 1977)</ns0:ref>, given by equation 3. Liquid surface tension is defined as the amount of energy necessary to increase surface area of a liquid per unit of area. Surface tension (a manifestation of liquid intermolecular forces) is the force that tangentially acts, per unit of longitude, on the border of a free surface of a liquid in equilibrium, and which tends to contract the said surface <ns0:ref type='bibr' target='#b0'>(Adamson & Gast, 1967a)</ns0:ref>. The cohesive forces between liquid molecules are responsible for a phenomenon known as surface tension <ns0:ref type='bibr'>(Fowkes, 1962;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adamson & Gast, 1967b;</ns0:ref><ns0:ref type='bibr'>Lida & Guthrie, 1993;</ns0:ref><ns0:ref type='bibr'>Law, Zhao & StrojnisÌtva., 2016;</ns0:ref><ns0:ref type='bibr'>Almeida et al., 2016)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'> = 𝐹 2𝐿 (3)</ns0:formula><ns0:p>Where, is the surface tension that measures the force per unit length (in the model  is the classification risk level), F is the force required to stop the side from starting to  slide, L the length of the movable side, and the reason for the 1/2 is that the film has two surfaces <ns0:ref type='bibr'>(Macleod, 1923)</ns0:ref>. In this model, the multiplication of the perimeter of an object by the surface tension of a liquid yields the force that a liquid exerts on its surface, on an object, in order to prevent said tension from breaking. As such, if the weight of an object is greater than the force exerted by the liquid on its surface, the object tends to sink.</ns0:p><ns0:p>The theory of surface tension addresses cohesion between molecules in a liquid, and their energetic relationship with the exterior, generally a gas. When submitted to a force that breaks the energetic state of molecular cohesion, the surface of a liquid causes the object producing internal force in the liquid to sink. This proposal sought to emulate the surface tension theory in the psychosocial analysis of risk factors present in work environments and their degrees of risk, from the viewpoint of improving a machine learning model. It used and adapted the said theory to improve risk classification and modify the necessary parameters of a neural network (the number of layers, nodes, weights, and thresholds) to reduce data dimensionality, and increase precision.</ns0:p></ns0:div>
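As a bridging step between Equation 3 and the surface-tension layer described in the following subsection, the sinking criterion can be written explicitly; this only restates, in LaTeX, relations that appear later in the layer definition (Equations 14 to 17):

```latex
% Sinking criterion implied by Equation 3 and the weight of the object:
\[
  m\,g \;>\; T_k \cdot Per
  \quad\Longrightarrow\quad
  \text{the surface tension of liquid } k \text{ is broken,}
\]
% which motivates the bounded, monotone output used in the final layer:
\[
  O_k \;=\; 1 - e^{-\frac{m\,g}{T_k\,Per}} \;\in\; (0,1).
\]
```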
<ns0:div><ns0:head>Implementation of the PST-NN algorithm</ns0:head><ns0:p>The 20 input variables E ij became two physical variables, perimeter and mass, throughout an artificial neural network with four layers. Three of these layers constitute the architecture of a standard neural network, with the difference that, the last level contains a new neural network model based on physical surface tension <ns0:ref type='bibr' target='#b1'>(Adamson & Gast, 1967b)</ns0:ref>. Eighty neurons were used in layers one and two, due to the fact that substantial changes were not registered using more neurons in these layers . Additionally, just two neurons were used for layer 3, in order to annex the new proposed surface tension layer. The architecture of the artificial neural classification network is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>. This included the three standard neuron layers, as well as a fourth layer with a novel design.</ns0:p><ns0:p>For the initialization of the neural network parameters, the Nguyen-Widrom algorithm was used <ns0:ref type='bibr'>(Pavelka & Prochazka, 2004;</ns0:ref><ns0:ref type='bibr'>Andayani et al., 2017)</ns0:ref>, in which random parameters were generated. However, the advantage of this was that the parameters distribute the active neural regions much more uniformly in layers, which improved neural network training, as it presented a lower learning rate error from the beginning.</ns0:p><ns0:p>Layer 1 output calculation: The 20 input variables of a specific individual from the training set, a vector called , went through an initial layer of 80 neurons. Each neuron 𝑬 had 20 parameters, called weights, which multiplied each input variable of vector . A 𝑬 parameter called bias was added to this multiplication. It was associated with each 𝒃 neuron, which results in the partial output of Layer 1. This procedure is described throughout the following equation:</ns0:p><ns0:formula xml:id='formula_4'>𝒚 𝒌 𝟏 = ( 𝟐𝟎 ∑ 𝒊 = 𝟏 𝑬 𝒊 * 𝒘 𝒌,𝒊 𝟏 ) + 𝒃 𝒌 𝟏 𝒇𝒐𝒓 𝒌 = 𝟏 𝒕𝒐 𝟖𝟎 (4) 𝒚 𝟏 = {𝒚 𝟏 𝟏 ,𝒚 𝟐 𝟏 … 𝒚 𝟖𝟎 𝟏 } (5)</ns0:formula><ns0:p>Where is the i variable of the individual chosen from the training set, is the k 𝑬 𝒊 𝒘 𝒌,𝒊 𝟏 neuron's weight in Layer 1, which is multiplied by variable i, is neuron k's bias in Layer 𝒃 𝒌 𝟏 1, which is added to the total, and is the result of each k neuron. These 80 results 𝒚 𝒌 𝟏 were represented by vector, and went through a hyperbolic tangent transfer 𝒚 𝟏 𝒚 𝟏 function, as this is a continuous transfer function, and is recommended for patternrecognition processes <ns0:ref type='bibr'>(Harrington, 1993)</ns0:ref>. Layer 1 output is described in the following equation</ns0:p><ns0:formula xml:id='formula_5'>𝒀 𝒌 𝟏 = 𝟐 𝟏 + 𝒆 -𝟐 * 𝒚 𝒌 𝟏 -𝟏 𝒇𝒐𝒓 𝒌 = 𝟏 𝒕𝒐 𝟖𝟎 (6) 𝒀 𝟏 = {𝒀 𝟏 𝟏 ,𝒀 𝟐 𝟏 … 𝒀 𝟖𝟎 𝟏 } (7)</ns0:formula><ns0:p>Where, e is the exponential function and is the final output for Layer 1 and is 𝒀 𝟏 composed of 80 outputs, one for each neuron.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_6'>2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
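Layer 1 can be sketched as follows in Python/NumPy. This is an illustration, not the authors' Matlab code: the weights here are plain random draws rather than being initialized with the Nguyen-Widrow scheme. Layers 2 and 3, described next, repeat the same pattern with sizes 80 to 80 and 80 to 2.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((80, 20))   # one row of 20 weights per neuron
b1 = rng.standard_normal(80)         # one bias per neuron

def layer1(E):
    """Layer 1 output for one subject E (20 normalized inputs), Eqs. 4-7."""
    y1 = W1 @ E + b1                              # Equations 4-5
    return 2.0 / (1.0 + np.exp(-2.0 * y1)) - 1.0  # hyperbolic tangent, Eqs. 6-7

Y1 = layer1(np.random.rand(20))      # vector of 80 outputs
```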
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Layer 2 output calculation: The 80 outputs from Layer 1, , became the inputs for 𝒀 𝟏 Layer 2, which had the same number of neurons as Layer 1. As such, in accordance with the procedure performed in Layer 1, the following equations were obtained:</ns0:p><ns0:formula xml:id='formula_6'>𝑦 𝑘 2 = ( 80 ∑ 𝑖 = 1 𝑌 𝑖 1 * 𝑤 𝑘,𝑖 2 ) + 𝑏 𝑘 2 𝑓𝑜𝑟 𝑘 = 1 𝑡𝑜 80 (8) re is the output of neuron i from Layer 1, is the weight of neuron 𝑌 𝑖 1 𝑤 𝑘,𝑖 2 k, associated with the output of neuron i in Layer 1, is neuron k's bias in 𝑏 𝑘 2</ns0:formula><ns0:p>Layer 2, and includes the 80 responses of each neuron, prior to passing 𝑦 2 through the transfer function. In order to obtain the final output for Layer 2 (𝑌 2 ) the hyperbolic transfer function was applied:</ns0:p><ns0:formula xml:id='formula_7'>𝑌 𝑘 2 = 2 1 + 𝑒 -2 * 𝑦 𝑘 2 -1 𝑓𝑜𝑟 𝑘 = 1 𝑡𝑜 80 (10) 𝑌 2 = {𝑌 1 2 ,𝑌 2 2 … 𝑌 80 2 } (11)</ns0:formula><ns0:p>Layer 3 output calculation: The 80 outputs for Layer 2 were the inputs of Layer 3, which contains two neurons:</ns0:p><ns0:formula xml:id='formula_8'>𝒀 𝒌 𝟑 = ( 𝟖𝟎 ∑ 𝒊 = 𝟏 𝒀 𝒊 𝟐 * 𝒘 𝒌,𝒊 𝟑 ) + 𝒃 𝒌 𝟑 𝒇𝒐𝒓 𝒌 = 𝟏 𝒕𝒐 𝟐 (12) 𝒀 𝟑 = {𝒀 𝟏 𝟑 ,𝒀 𝟐 𝟑 } = {𝒎,𝑷𝒆𝒓}<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>Where is the output of neuron i in Layer 2, is the weight of neuron k in Layer 𝑌 𝑖 2 𝑤 𝑘,𝑖 3 3, which multiplies the output of neuron i in Layer 2, and is the final output of each of 𝑌 𝑘 3 the two neurons represented in vector . In the approach of Physical Surface Tension 𝑌 3 Neural Net (PST-NN), these two output variables were then considered mass (m) and perimeter (Per), respectively, which went into a final layer called the surface tension layer. This was composed of four neurons, one neuron for each risk level. Each of these contributed to a balance of power defined by the following equation: , the perimeter is the output of the second neuron, 𝑷𝒆𝒓 from Layer 3; and is the value of the surface tension in neuron k, which were 𝑻 𝒌 associated to the surface tensions of four liquids: Ethanol (22.1), Ethylene glycol (47.7), Water (72.8), and Mercury (425.41) (Surface tension value (mN/m) at 20ºC) <ns0:ref type='bibr'>(Jasper, 1972)</ns0:ref>. The four liquids shown above were used, as they are common, relatively wellknown, and present different surface tensions. Here, the main idea was the relationship that exists between the four surface tensions and the different weights of objects that can break the surface tension of the liquid. For our model, the surface tension of each liquid was similar to each level of psychosocial risk, where the lowest risk level corresponded to the surface tension of the ethanol, and the very high-risk level was equivalent to the surface tension of the mercury. In this sense, when a person has, according to the psychosocial evaluation, a high-risk level, the parameters in the new surface tension neuron will be equivalent to having traveled the surface tension of ethanol, of ethylene glycol, to finally break the surface tension of the Water. Theoretically, at this point the liquid tension will be broken and the classification of the patient under study will be high risk.</ns0:p><ns0:formula xml:id='formula_9'>𝑶 𝒌 = 𝟏 -𝒆 𝑭 𝟐𝑳 𝒇𝒐𝒓 𝒌 = 𝟏 𝒕𝒐 𝟒<ns0:label>(</ns0:label></ns0:formula><ns0:p>The transfer function was used, owing to its behavior. 
Note that:</ns0:p><ns0:formula xml:id='formula_10'>\lim_{x \to \infty} (1 - e^{-x}) = 1 \quad (18) \qquad \lim_{x \to 0} (1 - e^{-x}) = 0 \quad (19)</ns0:formula><ns0:p>Thus, when the force exerted by the weight was greater than the force exerted by the liquid, the surface tension was broken (see Fig. <ns0:ref type='figure'>2</ns0:ref>). When this occurs, O k tends to one, and when it does not, O k tends to zero.</ns0:p></ns0:div>
<ns0:div><ns0:p>The correct outputs for the four types of risk must be as shown below (Equation 20):</ns0:p></ns0:div>
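A minimal sketch of the proposed surface-tension layer (Equations 14 to 20) follows: the mass and perimeter produced by Layer 3 are evaluated against one 'liquid' per risk level, and the target patterns encode how many surface tensions are broken. The toy input values and the 0.5 decision threshold are assumptions made only for illustration.

```python
import numpy as np

G = 9.8                                       # gravity constant (m/s^2)
# ethanol, ethylene glycol, water, mercury: surface tension (mN/m) at 20 C
T = np.array([22.1, 47.7, 72.8, 425.41])

def surface_tension_layer(mass, perimeter):
    """O_k = 1 - exp(-m*g / (T_k * Per)) for the four liquids (Eq. 15)."""
    return 1.0 - np.exp(-(mass * G) / (T * perimeter))

# target patterns of Equation 20: risk r breaks the first r surface tensions
TARGETS = {1: [1, 0, 0, 0], 2: [1, 1, 0, 0], 3: [1, 1, 1, 0], 4: [1, 1, 1, 1]}

O = surface_tension_layer(mass=50.0, perimeter=0.8)   # toy mass and perimeter
predicted_risk = int(np.sum(O > 0.5))                 # count of 'broken' tensions (assumed decision rule)
```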
<ns0:div><ns0:head>Computation of the error backpropagation</ns0:head><ns0:p>The four outputs contained in were compared to the response , which the neuron 𝑶 𝑬 𝒊𝒋 network should have yielded, thus calculating the mean squared error:</ns0:p><ns0:formula xml:id='formula_11'>𝑒𝑟𝑟𝑜𝑟𝑐𝑚 = 4 ∑ 𝑘 = 1 (𝑂 𝑘 -𝐸 𝑘 ) 2 2 (21)</ns0:formula><ns0:p>The following steps calculated the influence of each parameter on neuron network error, through error backpropagation, throughout partial derivatives. The equation below was derived from :</ns0:p><ns0:p>𝑂 𝑘</ns0:p><ns0:p>The derivative of the error, regarding neural network output:</ns0:p><ns0:formula xml:id='formula_12'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑂 𝑘 = 4 ∑ 𝑘 = 1 (𝑂 𝑘 -𝐸 𝑘 )<ns0:label>(22)</ns0:label></ns0:formula><ns0:p>The derivative of the error, regarding layer 3 output: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_13'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 1 3 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑚 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑂 𝑘 * ∂𝑂 𝑘 ∂𝑚<ns0:label>(</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_14'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑦 2 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 (33) ∂𝑌 2 ∂𝑦 2 = 1 -(𝑌 2 ) 𝟐 (34) ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 𝑤 𝑘,𝑖 2 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 * ∂𝑦 2 𝑤 𝑘,𝑖 2 𝑓𝑜𝑟 𝑘,𝑖 = 1 𝑡𝑜 80<ns0:label>(35)</ns0:label></ns0:formula><ns0:p>Derivative of error, according to layer 2 bias:</ns0:p><ns0:formula xml:id='formula_15'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 𝑏 𝑘 2 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 * ∂𝑦 2 𝑏 𝑘 2 (36) ∂𝑦 2 𝑏 𝑘 2 = 1<ns0:label>(37)</ns0:label></ns0:formula><ns0:p>Derivative of error, according to layer 1 output:</ns0:p><ns0:formula xml:id='formula_16'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 1 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 * ∂𝑦 2 ∂𝑌 1<ns0:label>(38)</ns0:label></ns0:formula><ns0:p>Derivative of error, according to layer 1 weights: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_17'>∂𝑦 2 ∂𝑌 1 = 𝑤 𝑘,𝑖 3 𝑓𝑜𝑟 𝑖,𝑘 = 1 𝑡𝑜 80 (39) ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑦 1 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 * ∂𝑦 2 ∂𝑌 1 * ∂𝑌 1 ∂𝑦 1 (40) ∂𝑌 1 ∂𝑦 1 = 1 -(𝑌 1 ) 2<ns0:label>(</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_18'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑤 𝑘,𝑖 1 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 * ∂𝑦 2 ∂𝑌 1 * ∂𝑌 1 ∂𝑦 1 * ∂𝑦 1 ∂𝑤 𝑘,𝑖 1 (42) ∂𝑦 1 ∂𝑤 𝑘,𝑖 1 = 𝐸 𝑖 𝑓𝑜𝑟 𝑘 = 1 𝑡𝑜 80; 𝑖 = 1 𝑡𝑜 20<ns0:label>(43)</ns0:label></ns0:formula><ns0:p>Derivative of error, according to layer 1 bias:</ns0:p><ns0:formula xml:id='formula_19'>∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑏 1 = ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑌 2 * ∂𝑌 2 ∂𝑦 2 * ∂𝑦 2 ∂𝑌 1 * ∂𝑌 1 ∂𝑦 1 * ∂𝑦 1 ∂𝑏 1 (44) ∂𝑦 1 ∂𝑏 1 = 1 (45) ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑝𝑎𝑟𝑎𝑚𝑒𝑡𝑒𝑟𝑠 = { ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑏 1 , ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑤 1 ,… ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑏 2 , ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑤 2 ,… ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑏 3 , ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑤 3 } (46)</ns0:formula><ns0:p>The new parameters in iteration n+1 were calculated throughout the conjugate gradient method:</ns0:p><ns0:p>𝑝𝑎𝑟𝑎𝑚𝑒𝑡𝑒𝑟𝑠(𝑛 + 1) = 𝑝𝑎𝑟𝑎𝑚𝑒𝑡𝑒𝑟𝑠(𝑛) + 𝜂(𝑛) * 𝑑(𝑛)</ns0:p><ns0:p>Where, 𝜂(𝑛) * 𝑑(𝑛)</ns0:p><ns0:p>Depends on the ∂𝑒𝑟𝑟𝑜𝑟𝑐𝑚 ∂𝑝𝑎𝑟𝑎𝑚𝑒𝑡𝑒𝑟𝑠 𝑣𝑎𝑙𝑢𝑒𝑠.</ns0:p><ns0:p>(49)</ns0:p><ns0:p>This procedure was repeated, beginning at step in equation 4 for the remaining training data, thus completing the first iteration. Later, iterations were performed repeatedly until there was an artificial neural network convergence, according with the following three stop criteria: a) Minimum performance gradient, the value of this minimum gradient is 10 -6 . This tolerance was assigned for adequate neuron network learning; b) Performance, in order to measure neural network performance, the mean squared error was employed. The value to be achieved is zero, so as to avoid presenting neural output errors; c) Number of Iterations, the training was stopped if 300 iterations were reached. 
A high number of iterations was chosen because, ideally, training stops on one of the error criteria before this limit is reached.</ns0:p><ns0:p>The code, developed in Matlab V9.4, can be consulted here: https://codeocean.com/capsule/6532855/tree/v1 (Mosquera, Castrillón Gómez & Parra-Osorio, 2019).</ns0:p></ns0:div>
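A schematic training loop with the three stop criteria is sketched below. It uses a plain gradient step for brevity, whereas the paper uses error backpropagation with the conjugate gradient method; the quadratic toy problem only exercises the stopping logic.

```python
import numpy as np

def train(params, loss_fn, grad_fn, lr=0.01, max_iter=300,
          grad_tol=1e-6, target_loss=0.0):
    """Generic loop with the paper's three stop criteria: minimum
    performance gradient, target mean squared error, and 300 iterations."""
    for _ in range(max_iter):                    # c) iteration limit
        g = grad_fn(params)
        if np.linalg.norm(g) < grad_tol:         # a) minimum performance gradient
            break
        params = params - lr * g                 # plain gradient step (paper: conjugate gradient)
        if loss_fn(params) <= target_loss:       # b) performance (MSE) reached
            break
    return params

# toy quadratic example just to exercise the loop
loss = lambda p: float(np.sum((p - 3.0) ** 2))
grad = lambda p: 2.0 * (p - 3.0)
p_opt = train(np.zeros(5), loss, grad)
```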
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>The data set was divided into training (80%) and test (20%) groups (train/test split) as published in <ns0:ref type='bibr'>(Vabalas et al., 2019)</ns0:ref>. For the evaluation of the algorithm the following metrics were used (Rose, 2018): a) Sensitivity, which provides the probability that, given a positive observation, the neural network will classify it as positive (50); b) Specificity, which provides the probability that, given a negative observation, the neural network will classify it as negative (51); c) Accuracy, which gives the total neural network accuracy percentage (52) and, d) the ROC curve by plotting the sensitivity (true-positive rate) against the falsepositive rate (1 − specificity) at various threshold settings. Different authors in other studies as have been used the sensitivity, specificity, and, AUC, for the performance statistics within the independent dataset <ns0:ref type='bibr'>(Le, Ho & Ou, 2017;</ns0:ref><ns0:ref type='bibr'>Do, Le & Le, 2020;</ns0:ref><ns0:ref type='bibr'>Le et al., 2020)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_22'>𝑆𝑒𝑛𝑠𝑖𝑡𝑖𝑣𝑖𝑡𝑦 = 𝑇𝑃 𝑇𝑃 + 𝐹𝑁<ns0:label>(50)</ns0:label></ns0:formula><ns0:formula xml:id='formula_23'>𝑆𝑝𝑒𝑐𝑖𝑓𝑖𝑐𝑖𝑡𝑦 = 𝑇𝑁 𝑇𝑁 + 𝐹𝑃 (51) 𝐴𝑐𝑐𝑢𝑟𝑎𝑐𝑦 = 𝑇𝑃 + 𝑇𝑁 𝑇𝑃 + 𝑇𝑁 + 𝐹𝑃 + 𝐹𝑁<ns0:label>(52)</ns0:label></ns0:formula><ns0:p>Where TP, TN, FP and FN denote the number of true positives, true negatives, false positives and false negatives, respectively. In order to analyze the stability of the system in the results obtained, a variance analysis, using equation ( <ns0:ref type='formula'>53</ns0:ref>) was performed, to establish whether there were significant differences in the results. In this analysis, representing the response to the variables, T i , was the effect caused by nth treatment, and εi, the nth experimental error. The information collected must comply with independence and normality requirements. The variance analysis was performed under a confidence interval of 99.5% <ns0:ref type='bibr'>(Rodriguez, 2007)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_24'>𝑌 𝑖 = 𝜇 + 𝑇 𝑖 + 𝜀 𝑖 (53)</ns0:formula><ns0:p>The efficiency of the PST-NN approach was compared with previous published techniques <ns0:ref type='bibr'>(Mosquera, Parra-Osorio & Castrillón, 2016;</ns0:ref><ns0:ref type='bibr'>Mosquera, Castrillón & Parra, 2018a,b;</ns0:ref><ns0:ref type='bibr'>Mosquera, Castrillón & Parra-Osorio, 2019)</ns0:ref>, which were applied over the original data included in the present work. Accuracy was the metric used to make the comparison between PST-NN and Decision Tree J48, Naïve Bayes, Artificial Neural Network, Support Vector Machine Linear, Hill Climbing-Support Vector Machine, K-Nearest Neighbors-Support Vector Machine, Robust Linear Regression, and Logistic Regression Models.</ns0:p></ns0:div>
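The evaluation protocol (80/20 train/test split and the metrics of Equations 50 to 52) could be reproduced along the following lines; the data and predictions here are random placeholders, and scikit-learn is assumed to be available for the split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 80/20 split as in the paper (X, y stand in for the normalized inputs and risk labels)
X, y = np.random.rand(500, 20), np.random.randint(1, 5, 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

def one_vs_rest_metrics(y_true, y_pred, level):
    """Sensitivity, specificity and accuracy (Eqs. 50-52) for one risk level."""
    t, p = (y_true == level), (y_pred == level)
    tp, tn = np.sum(t & p), np.sum(~t & ~p)
    fp, fn = np.sum(~t & p), np.sum(t & ~p)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

y_pred = np.random.randint(1, 5, len(y_te))      # stand-in for the model output
for level in (1, 2, 3, 4):
    sens, spec, acc = one_vs_rest_metrics(y_te, y_pred, level)
    print(f"risk {level}: sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
```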
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Adjustment of the PST-NN approach</ns0:head><ns0:p>The 20 input variables (psychosocial, physiological, and musculoskeletal symptoms) belonging to the 5,443 subjects were used to train and test the Physical Surface Tension Neural Net (PST-NN), according to the level of risk that characterizes each person (low, medium, high, and very high risk).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the mean squared error obtained during the training and testing process of the PST-NN approach, as a function of the number of iterations used in the adjustment of the neural network parameters. The blue line, corresponding to the training group, shows how the mean squared error decreases rapidly over roughly the first 100 iterations, reaching a plateau for higher numbers of iterations. This plateau indicates that the neural net model has reached its parameter optimization and, therefore, any additional increase in the number of iterations does not significantly improve the parameter adjustment. Concretely, in this study and for the following results, 108 iterations were used in the adjustment of the PST-NN parameters. The mean squared error curve corresponding to the testing group (red line) shows a behavior similar to that of the training group. The following results are reported only for the test set.</ns0:p><ns0:p>In relation to the layer that represents the surface tension model in the PST-NN approach (Fig. <ns0:ref type='figure'>1</ns0:ref>), Figure <ns0:ref type='figure'>4</ns0:ref> shows the perimeter and mass outputs for each subject in the test group, according to the risk level. The outputs were plotted in an XY graph, where the mass output corresponds to the X axis and the perimeter to the Y axis. As a result, it is possible to see that the points are grouped in specific areas as a function of the risk level. In this sense, the types of risk may additionally be interpreted in physical form. Indeed, the highest risk in the graph corresponds to the red crosses, which present mass values relatively larger than the rest, along with relatively smaller perimeters, which cause the surface tension of the four liquids to break. The lowest risk (represented in blue with asterisks) has relatively high perimeters and relatively low masses, which cause those subjects to remain on the surface of certain liquids.</ns0:p><ns0:p>The square root of the mass/perimeter relationship is represented in Fig. <ns0:ref type='figure'>5</ns0:ref>. This transformation of the relationship between mass and perimeter was applied only for improved visualization of the separation between the risk levels. The figure shows that the lowest value of the square root of the mass/perimeter relationship corresponds to the lowest risk level and the highest value to the highest risk level.</ns0:p></ns0:div>
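The visual analyses of Figs. 4 and 5 can be outlined as follows; the mass, perimeter, and risk arrays are stand-ins for the per-subject outputs of the trained network, and matplotlib is assumed for plotting.

```python
import numpy as np
import matplotlib.pyplot as plt

# stand-ins for the per-subject outputs of the surface-tension layer
mass = np.abs(np.random.randn(200)) * 10 + 1
perimeter = np.abs(np.random.randn(200)) + 0.5
risk = np.random.randint(1, 5, 200)

# Fig. 4: mass vs perimeter, one marker per risk level
for level, marker in zip((1, 2, 3, 4), ('*', 'o', 's', '+')):
    sel = risk == level
    plt.scatter(mass[sel], perimeter[sel], marker=marker, label=f"risk {level}")
plt.xlabel("mass"); plt.ylabel("perimeter"); plt.legend(); plt.show()

# Fig. 5: square root of the mass/perimeter relationship per risk level
score = np.sqrt(mass / perimeter)
for level in (1, 2, 3, 4):
    print(f"risk {level}: mean sqrt(m/Per) = {score[risk == level].mean():.2f}")
```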
<ns0:div><ns0:head>Classification performance of the PST-NN approach</ns0:head><ns0:p>The specific confusion matrix for the test set (Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>) showed the performance of the PST-NN algorithm, as a function of the TP, TN, FP and FN. The number of subjects in each target risk group was 116, 117, 347, and 521 for risk levels 1, 2, 3, and 4, respectively. The number of subjects classified by the algorithm in each risk group was 109 (Risk Level 1), 113 (Risk Level 2), 339 (Risk Level 3), and 540 (Risk Level 4).</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> included the values of sensitivity, specificity, accuracy and AUC for each of the risk levels in the test set. The highest sensitivity value was 97.5% (Risk level 4) and the lowest sensitivity value was 77.8% (Risk level 2), indicating that Risk Level 2 was the most difficult type of risk to classify. On the contrary, the best specificity value was obtained in Risk level 1 (98.2%) and the lowest was in Risk level 3 (96.0%). In relation to the accuracy, Risk Level 2 had the lowest value, indicating that the surface tension neural network would correctly classify an individual, with a probability of the 82.7%, to belong or not to this risk level (it includes true positive and true negative cases). The risk levels with the greatest accuracy values were Risk level 1 followed by Risk level 4, with values of 98.85% and 97.37, respectively. Complementary, Figure <ns0:ref type='figure'>6</ns0:ref> showed the receiver operating characteristic curves (ROC curve) for each risk level for the test set. Risk level 4 had the best classification with AUC value of 0.984 (Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>), while Risk level 2 was the one that presents the most confusion on classification (AUC = 0.883).</ns0:p><ns0:p>Finally, the performance of the PST-NN approach was compared in terms of accuracy against the results of linear, probabilistic, and logistic models, previously published (see Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>). The proposed PST-NN method had the best accuracy value (97.31%), followed by Support Vector Machines (92.86%), Hill-Climbing-Support Vector Machines (92.86%), and Artificial Neural Networks (92.83%). The lowest accuracy values were obtained with the Robust Linear Regression (53.47%), and Logistic Regression (53.65%) techniques. The statistical stability analysis, based on the ANOVA method, showed statistically significant differences between PST-NN and the other techniques, in relation to the accuracy values, with p-value<0.05.</ns0:p></ns0:div>
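The ANOVA-based stability comparison could be set up as in the sketch below; the per-run accuracy values are placeholders (the paper's Table 6 reports only the averages), and SciPy's one-way ANOVA is used for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

# accuracy values from repeated runs of each classifier (placeholder numbers)
acc_pst_nn = np.array([0.973, 0.970, 0.975, 0.972, 0.974])
acc_svm    = np.array([0.929, 0.926, 0.931, 0.928, 0.927])
acc_ann    = np.array([0.928, 0.925, 0.930, 0.929, 0.926])

f_stat, p_value = f_oneway(acc_pst_nn, acc_svm, acc_ann)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant differences
```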
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this study, the Physical Surface Tension-Neural Net (PST-NN) approach was developed and applied to model and predict psychosocial risk levels among Colombian state-school teachers. The fundamental point of the structure of this model was the improvement of the neural model by the adaptation of a layer that includes concepts related to the theory of physical surface tension. Indeed, the psychosocial risk level was associated with the probability that a 'surface' can be broken as a function of the psychosocial, physiological, and musculoskeletal factors impact. For each risk level, a different value of the physical surface tension was set in analogy with the surface tensions of four common liquids (Ethanol, Ethylene glycol, Water, and Mercury). This attempts to benefit from the characteristics of neural networks and increase precision via innovation (theory of physical surface tension), in the form of neural network modification. It is expected that this combination enables the elimination of linear model deficiencies and the development of an approach to the real world, with fewer shortcomings. This technique presented an important advantage, due it allowed the dimensionality in the input variables to be reduced. In this study, the 20 input variables in the first layer of the neural network were reduced to 2 variables (mass and perimeter) in the surface tension layer, in order to facilitate the classification process. In this layer, the surface tension equation intervened to replace the network sigmoid function, which reduced data variability and dimensionality, improving the response in the classification and resulting error. The results reported in Fig. <ns0:ref type='figure'>4</ns0:ref> and 5 supported this behavior, so it was possible to see a clear grouping of the subjects according to the risk level group, as a function of the mass and perimeter outputs. This was according to the surface tension theory by which a low mass and high perimeter reduce the probability of breaking the surface, and on the contrary, a high mass with a low perimeter increases that probability.</ns0:p><ns0:p>The neural network models possess high flexibility and fewer parameters compared with parametric models <ns0:ref type='bibr'>(Darvishi et al., 2017)</ns0:ref>. Results in Fig. <ns0:ref type='figure'>3</ns0:ref> showed that the neural model iteration process quickly catches up to the number of iterations necessary to establish the model and provide objective, precise results. However, in supervised machine learning, overfitting could be present, indicating that model does not generalize well from observed data to unseen data <ns0:ref type='bibr'>(Ying, 2019)</ns0:ref>. Because of that, the model performs better on training set that in testing set. In order to reduce the effect of overfitting during the adjustment process of the PST-NN parameters, the train/test split methodology <ns0:ref type='bibr'>(Vabalas et al., 2019)</ns0:ref>, besides to the control in the number of iterations during the neural network training, and the normalization and reduction in dimensionality of the input data, were used. <ns0:ref type='bibr'>(Ying, 2019)</ns0:ref>. However, the number of subjects in each risk level group was not uniformly distributed, being the Risk level 4 the group with more subjects, and Risk level 1 and 2 the groups with less subjects. 
This situation could generate that the PST-NN model tends to memorize in more detail the data belonging to Risk level 4, and in less detail the data of Risk level 1 and 2.</ns0:p><ns0:p>The application of the PST-NN approach to the data belonging to Colombian stateschool teachers, showed an average accuracy value of 97.31% in the psychosocial risk classification, including all the risk level groups and all the subjects in the database. The confusion matrix results (Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>) and ROC curve (Fig. <ns0:ref type='figure'>6</ns0:ref>) demonstrated that the PST-NN model was highly efficient, in terms of psychosocial risk classification, as compared to other experiments and models <ns0:ref type='bibr'>(Larrabee et al., 2003;</ns0:ref><ns0:ref type='bibr'>Baradaran, Ghadami & Malihi, 2008;</ns0:ref><ns0:ref type='bibr'>Aliabadi, 2015;</ns0:ref><ns0:ref type='bibr'>Farhadian, Aliabadi & Darvishi, 2015;</ns0:ref><ns0:ref type='bibr'>Yigit & Shourabizadeh, 2017;</ns0:ref><ns0:ref type='bibr'>Jebelli, Khalili & Lee, 2019)</ns0:ref>. The level of precision and low error percentage of PST-NN approach demonstrated the ease adaptation of the mathematical structure to the input variables, generating a model that can be used to perform preventive interventions in occupational health by way of prediction, based on psychosocial, physiological, and musculoskeletal factors.</ns0:p><ns0:p>Psychosocial, physiological, and musculoskeletal factors fundamentally involve nonlinear relationships. While neural networks are linear models that provide adequate approaches for the classification problem, the introduction of a physical concept to the neural model, such as the physical surface tension theory, adapted better to the type of data present in organizational and psychosocial climate evaluations. As such, the PST-PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>NN model, by way of the transformation and neural suitability procedure, may discover improved solutions. Alternatively, other authors <ns0:ref type='bibr'>(Tzeng, Hsieh & Lin, 2004;</ns0:ref><ns0:ref type='bibr'>Hong et al., 2005;</ns0:ref><ns0:ref type='bibr'>Azadeh et al., 2015;</ns0:ref><ns0:ref type='bibr'>Jebelli, Khalili & Lee, 2019)</ns0:ref> have avoided the non-linear relationships transforming the data in four linear variables: a positive relationship, negative relationship, no relationship, and non-linear relationship, in studies to analyze the performance and personnel turnover data. However, the results showed values of classification and prediction that could be improved.</ns0:p><ns0:p>The performance of the PST-NN approach, for psychosocial risk level prediction, showed better average accuracy value (97.31%) than the results of support vector machine linear models, neural networks, probabilistic models, linear and logistic regression models, and decision tree models, previously published (Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>). Particularly, the ANN model, corresponding to a perceptron neural network without the modification proposed in this study, only reached an average accuracy value of 92.83%, suggesting that the modification introduced by the PST-NN approach could significatively improve the classification performance. The use of regression techniques showed that the misclassification probability was high, with accuracy values of 53.47% and 53.65% for the Robust Linear Regression and Logistic Regression, respectively. This suggest that linear models are not well fitted to the type of data that were used in the present study.</ns0:p><ns0:p>The results of previous experiments support the opinion that the strategy of combining different methods (physical surface tension theory and artificial neural networks) may improve predictive performance. Similar strategies have been applied previously to work safety and health classification problems, for work stress, psychosocial factor, and mental stress factor prediction <ns0:ref type='bibr'>(Jackofsky, Ferris & Breckenridge, 1986;</ns0:ref><ns0:ref type='bibr'>Somers, 1999;</ns0:ref><ns0:ref type='bibr'>Kursad Zorlu, 2012;</ns0:ref><ns0:ref type='bibr'>Sriramprakash, Prasanna & Murthy, 2017;</ns0:ref><ns0:ref type='bibr'>Subhani et al., 2017;</ns0:ref><ns0:ref type='bibr'>Xia, Malik & Subhani, 2018;</ns0:ref><ns0:ref type='bibr'>Lotfan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>In all industries and organizations, analysis of the psychosocial risk level is very important. Studies have shown the direct relationship between psychosocial risks and the gross domestic products of nations <ns0:ref type='bibr'>(Dollard et al., 2007)</ns0:ref>. The implementation of artificial intelligence techniques can contribute to the development of this field of research, which could be called psychosocial analytics. It´s vital the development of these types of tools in global occupational and public health. Colombia's leadership in Ibero-America in the development of tools which contribute to the occupational health and safety field is highlighted by this kind of work.</ns0:p><ns0:p>As a limitation, is important to point that the performance of the prediction model depends on both the quality and quantity of the dataset, as well as the optimal structure design. 
Indeed, for the PST-NN model developed in this study, performance will be affected by psychosocial factor management, which depends, among other things, on the teacher population and on whether the data are collected by region, by similar geographical areas, or by annexes. When this is not the case, model behavior is affected, and high error rates, low precision levels, and significant statistical dispersion of the data are generated. Thus, to predict performance and implement prevention programs for workers, data should be grouped from culturally, politically, socially, and economically similar regions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A novel approach, the Physical Surface Tension-Neural Net (PST-NN), was proposed in this study to classify psychosocial risk levels among Colombian state-school teachers. Psychosocial, physiological, and musculoskeletal input variables were used to train and test the PST-NN, as a function of four risk level groups (low, medium, high, and very high risk).</ns0:p><ns0:p>The proposed method obtained better classification results than models such as Decision Tree, Naïve Bayes, Artificial Neural Networks, Support Vector Machines, Hill-Climbing-Support Vector Machines, k-Nearest Neighbor-Support Vector Machine, Robust Linear Regression, and Logistic Regression. Indeed, the PST-NN had an average accuracy value of 97.31%, including all the risk level groups and all the subjects in the database.</ns0:p><ns0:p>The results obtained in the prediction of the model demonstrated that the proposed PST-NN approach is applicable for the identification of the psychosocial risk level among Colombian state-school teachers, with high levels of accuracy, and it may contribute as a tool in the generation of guidelines in public health plans, defined by the government. The possibility that a job provides an individual to apply, learn, and develop their abilities and knowledge. Remuneration granted to the worker to compensate their effort at work. This remuneration includes recognition, payment, and access to wellness services and possibilities for growth. Heart rate is the speed of the heartbeat measured by the number of contractions (beats) of the heart per minute (bpm). Electrodermal activity (P 2 ) Property of the human body that causes continuous variation in the electrical characteristics of the skin. Skin conductance can be a measure of emotional and sympathetic responses. Electromyography (P 3 ) Is an electrodiagnostic medicine technique for evaluating and recording the electrical activity produced by skeletal muscles.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 3. Musculoskeletal symptoms Physiological variables (Related to work absenteeism and psychosocial factors) Symptoms Description</ns0:head><ns0:p>Headache & (M 1 ) Cervical pain</ns0:p><ns0:p>A headache in general is a sign of stress or emotional distress, and can be associate to migraine or high blood pressure, anxiety or depression. Some patients experience headache for 2 hours or less. (Headache Classification Committee of the International Headache Society (IHS), 2013).</ns0:p><ns0:p>Migraine (M 2 )</ns0:p><ns0:p>Migraines can be associate to by a severe headache that often appears on one side of the head. They tend to affect people aged 15 to 55 years. Symptoms include hyperactivity, hypoactivity, depression, fatigue and neck stiffness and/or severe pain (Headache Classification Committee of the International Headache Society (IHS), 2013).</ns0:p><ns0:p>Shoulder pain (M 3 )</ns0:p><ns0:p>The pain is elicited or aggravated by movement. Pain and stiffness usually restrict the use of the Superior limbs and thereby limit daily activities during work (van der Heijden, 1999).</ns0:p><ns0:p>Arm pain (M 4 )</ns0:p><ns0:p>Arm pain is caused by repetitive movements at work, usually the symptoms are described as pain, discomfort, or stiffness that occurs anywhere from your shoulders to your fingers.</ns0:p><ns0:p>Back pain (M 5 ) Back pain at work usually can affect people of any age, heavy lifting, repetitive movements and sitting at a desk all day can produce a injury.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>14) Where, PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021) Manuscript to be reviewed Computer Science 𝑶 𝒌 = 𝟏 -𝒆 -𝒎 * 𝒈 𝑻 𝒌 * 𝑷𝒆𝒓 𝒇𝒐𝒓 𝒌 = 𝟏 𝒕𝒐 𝟒 (15) 𝑶 = {𝑶 𝟏 ,𝑶 𝟐 ,𝑶 𝟑 ,𝑶 𝟒 } (16) With: 𝑻 𝒌 = {𝟐𝟐.𝟏; 𝟒𝟕.𝟕; 𝟕𝟐.𝟖; 𝟒𝟐𝟓.𝟒𝟏} (17) Where is the mass that corresponds to the output of the first neuron from Layer 3, 𝒎 is the value of the gravity constant ( (The multiplication of mass times gravity m*g 𝒈 𝟗.𝟖 𝒎 𝒔 𝟐 ) yields the weight of an object);</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021) Manuscript to be reviewed Computer Science { 𝑹𝒊𝒔𝒌 𝟏, 𝑶 = {𝟏,𝟎,𝟎,𝟎} 𝑹𝒊𝒔𝒌 𝟐, 𝑶 = {𝟏,𝟏,𝟎,𝟎} 𝑹𝒊𝒔𝒌 𝟑, 𝑶 = {𝟏,𝟏,𝟏,𝟎} 𝑹𝒊𝒔𝒌 𝟒, 𝑶 = {𝟏,𝟏,𝟏,𝟏}(20)Risk 4 breaks through all surface tensions, while Risk 1 only breaks through the first surface tension.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>of error, according to layer 2 weights: PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>41) PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>, lighting, temperature, ventilation), chemical, or biological (viruses, bacteria, fungi, or animals) conditions, workstation design, cleanliness (order and sanitation), physical loads, and industrial security.Emotional demands(D 2 ) Emotional demands Require worker ability to: a) Understand the situations and feelings of others, and b) exercise self-control over their own emotions or feelings, in order to avoid affecting work performance. Quantitative demands(D 3 ) Demands relative to the amount of work to be performed and the time available to do so. Influence of work on the non-work environment(D 4 ) Work demands on individuals' time and effort which impact their after-work activities, personal, or family life. Mental workload demands(D 5 )These refer to the cognitive processing demands required for a task, and which involve superior mental attention, memory, or information analysis processes to generate a response. The mental load is determined by the information characteristics (quantity, complexity, and detail), as well as the time available to process made on an individual, in terms of duration and work hours, including times for pauses or periodic breaks.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,525.00,334.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,525.00,333.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,178.87,525.00,183.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,154.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>. Variables for intralaboral psychosocial risk factors. Adapted from:(Villalobos et al., 2010) </ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Psychosocial risk variables</ns0:cell></ns0:row><ns0:row><ns0:cell>Factor</ns0:cell><ns0:cell>Stressor</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Attributes of immediate superiors' management, as related to task</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Leadership</ns0:cell><ns0:cell>planning and assignment, attainment of results, conflict resolution,</ns0:cell></ns0:row><ns0:row><ns0:cell>Leadership</ns0:cell><ns0:cell>characteristics(L 1 )</ns0:cell><ns0:cell>participation, motivation, support, interaction, and communication with</ns0:cell></ns0:row><ns0:row><ns0:cell>and social</ns0:cell><ns0:cell /><ns0:cell>employees.</ns0:cell></ns0:row><ns0:row><ns0:cell>relations at</ns0:cell><ns0:cell /><ns0:cell>Information that a worker receives regarding the way in which they do</ns0:cell></ns0:row><ns0:row><ns0:cell>work (L)</ns0:cell><ns0:cell>Performance</ns0:cell><ns0:cell>their job. This information allows the identification of strengths and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>feedback(L 2 )</ns0:cell><ns0:cell>weaknesses, as well as action for performance maintenance or</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>improvement.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Clarity in the functions and role(C 1 )</ns0:cell><ns0:cell>Definition and communication of the role that the worker is expected to play within the organization, specifically as relates to work objectives, functions, results, degree of autonomy, and the impact of said role within the company.</ns0:cell></ns0:row><ns0:row><ns0:cell>Control over work (C)</ns0:cell><ns0:cell>staff training(C 2 ) Skills and</ns0:cell><ns0:cell>Induction activities, training, and instruction provided by the organization, so as to develop and strengthen worker knowledge and abilities.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>knowledge</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>opportunities for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>its use and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>development(C 3 )</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Physiological variablesPhysiological variablesHeart rate (P 1 )</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The Confusion matrix for Physical Surface Tension-Neural Net model for the prediction of psychosocial risk level. For test set (20%)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The Confusion matrix for Physical Surface Tension-Neural Net model for the prediction of psychosocial risk level. For test set (20%).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Confusion Matrix Test</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Risk 1</ns0:cell><ns0:cell>Risk 2</ns0:cell><ns0:cell>Risk 3</ns0:cell><ns0:cell>Risk 4</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Risk</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>98.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>9.7%</ns0:cell><ns0:cell>0.1%</ns0:cell><ns0:cell>0.1%</ns0:cell><ns0:cell>0%</ns0:cell><ns0:cell>1.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Risk</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>80.5%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell>0.3%</ns0:cell><ns0:cell>8.3%</ns0:cell><ns0:cell>1.7%</ns0:cell><ns0:cell>0.0%</ns0:cell><ns0:cell>19.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Output Class</ns0:cell><ns0:cell>Risk 3</ns0:cell><ns0:cell>4 0.4%</ns0:cell><ns0:cell>13 1.2%</ns0:cell><ns0:cell>309 28.1%</ns0:cell><ns0:cell>13 1.2%</ns0:cell><ns0:cell>91.2% 8.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Risk</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>508</ns0:cell><ns0:cell>94.1%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>0.2%</ns0:cell><ns0:cell>1.1%</ns0:cell><ns0:cell>1.6%</ns0:cell><ns0:cell>46.1%</ns0:cell><ns0:cell>5.9%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>92.2%</ns0:cell><ns0:cell>77.8%</ns0:cell><ns0:cell>89.0%</ns0:cell><ns0:cell>97.5%</ns0:cell><ns0:cell>92.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>7.8%</ns0:cell><ns0:cell>22.2%</ns0:cell><ns0:cell>11.0%</ns0:cell><ns0:cell>2.5%</ns0:cell><ns0:cell>7.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='4'>Risk 1 Risk 2 Risk 3 Risk 4</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Target Class</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Statistical measures for the classification test (20%) for the four risk levels</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Statistical measures for the classification test (20%) for the four risk levels.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Risk Levels</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Statistical measure</ns0:cell><ns0:cell>Risk level 1</ns0:cell><ns0:cell>Risk level 2</ns0:cell><ns0:cell>Risk level 3</ns0:cell><ns0:cell>Risk level 4</ns0:cell></ns0:row><ns0:row><ns0:cell>Sensitivity</ns0:cell><ns0:cell>92.2%</ns0:cell><ns0:cell>77.8%</ns0:cell><ns0:cell>89.0%</ns0:cell><ns0:cell>97.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Specificity</ns0:cell><ns0:cell>98.2%</ns0:cell><ns0:cell>96.8%</ns0:cell><ns0:cell>96.0%</ns0:cell><ns0:cell>96.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell>98.2%</ns0:cell><ns0:cell>82.7%</ns0:cell><ns0:cell>96.0%</ns0:cell><ns0:cell>97.3%</ns0:cell></ns0:row><ns0:row><ns0:cell>AUC</ns0:cell><ns0:cell>0.961</ns0:cell><ns0:cell>0.883</ns0:cell><ns0:cell>0.971</ns0:cell><ns0:cell>0.984</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Results applying different classification techniques in psychosocial factors dataset. *Accuracy</ns0:figDesc><ns0:table><ns0:row><ns0:cell>id</ns0:cell><ns0:cell>Algorithm</ns0:cell><ns0:cell>%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>classification</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>average*</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>J48</ns0:cell><ns0:cell>91.29</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Naïve Bayes</ns0:cell><ns0:cell>89.71</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>ANN</ns0:cell><ns0:cell>92.83</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>92.86</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>HC-SVM</ns0:cell><ns0:cell>92.86</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>SVM-RBF</ns0:cell><ns0:cell>89.26</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>KNN-SVM</ns0:cell><ns0:cell>86.66</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Robust Linear Regression</ns0:cell><ns0:cell>53.47</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Logistic Regression</ns0:cell><ns0:cell>53.65</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Proposed Method: PST-NN</ns0:cell><ns0:cell>97.31</ns0:cell></ns0:row></ns0:table><ns0:note>.</ns0:note></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:06:50165:2:0:NEW 20 Feb 2021) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "
Department of Industrial Engineering
Rodolfo Mosquera Navarro
Tel. 57 (2) 488 22 22 ext. 5058
Email: rmosqueran@usbcali.edu.co
Address: Cra. 122 # 6 - 65, Cali, Colombia. February 22nd, 2020
Dear editors:
We thank the reviewers for their generous comments on the manuscript and have edited it to address their concerns.
We believe that the manuscript now complies with the recommendations of the reviewers and PeerJ.
Dr. Rodolfo Mosquera N
Associate Professor of Industrial Engineering
On behalf of all authors.
Reviewer 1
In my opinion, many of the comments have remained unchanged. Particularly in method and material, the authors should be modified the subsections the order as below:
- Description of the samples (teachers)
- Description of model variables.
- Database and Data Pre-processing
- Description of Model construction process:
¬ - Step 1: …
- Step 2 …
- Step 3 …
- Step 4
- Comparison techniques
R:// Thanks for the comment. We are sorry about this situation. However, many of the changes were in fact included in the previous version using the Track Changes feature of Microsoft Word. In this new version of the manuscript, we again include the modifications according to the reviewers' suggestions, improving the different sections of the paper.
Concretely, in this new version, the Materials & Methods section was organized as you suggest. We include subheadings such as Database and Data Pre-processing, Basis of the surface tension-neural net algorithm (PST-NN), Implementation of the PST-NN algorithm, Computation of the error backpropagation, and Statistical analysis, in which we detail the sample, the variables, the preprocessing, the description of the proposed approach, and more.
We hope this new structure be clearer and more understandable.
Pg 4 ln 129 – 414.
Reviewer 2 (Anonymous)
Comments for the Author
Thanks for addressing my previous comments. Although some efforts have been done, the authors have not addressed well in most of my comments. Therefore, I do not suggest to accept this manuscript for publication. Some comments are still not answered well as follows:
R:// Thanks for the comment. We understand your position and thank you for this second opportunity. On this occasion, we hope to address your comments more thoroughly.
1. Literature review is long and some of them are not related to the study. The authors should optimize and re-organize the literature review.
R:// Thanks for the comment. The new version of our manuscript has a shorter and improved 'Introduction' section, with a literature review more closely aligned with the topic of our work.
Pg 3 ln 69 – 126.
2. Why did the authors not have section 2, but they have sub-sections 2.1, 2.2., ...?
R:// Thanks for the comment. The structure of the 'Materials & Methods' section was reorganized. We included subheadings such as Database and Data Pre-processing, Basis of the surface tension-neural net algorithm (PST-NN), Implementation of the PST-NN algorithm, Computation of the error backpropagation, and Statistical analysis, in which we detail the samples, variables, preprocessing, the description of the proposed approach, and more. We removed the numbers in the headings and subheadings, in accordance with the PeerJ journal template.
Pg 4 ln 129 – 414.
3. Measurement metrics (such as sensitivity, specificity, accuracy) have been used in previously published works such as PMID: 28643394, PMID: 32942564, and PMID: 32613242. Therefore, the authors should refer more works in this description to attract broader readership.
R:// Thanks for the comment. We now reference more works related to the sensitivity, specificity, accuracy, and AUC metrics. Specifically, we included the following references: (Le, Ho & Ou, 2017; Do, Le & Le, 2020; Le et al., 2020).
Pg 13 ln 382 – 391.
4. Why did the authors run the tests 6 times with different ratios of train/val/test? ==> Table 7 did not show any reason that the authors used different ratios of different test times.
R:// Thanks for the comment. Initially, this was done to explore the behavior of the algorithm, during the training and test procedure, for different splits of the database. In order to avoid distracting the reader and to facilitate the comparison of the proposed algorithm with other previously published techniques, we decided to delete this table and to report only the 80/20 (train/test) split.
5. The authors proposed a lot of things in methodology section, but very few information in results section. It did not show clearly the findings and impacts of the results. ==> I even don't understand the answer from the authors in this question. (We believe that the evaluator confuses the methods of previous studies with this one and the results. The results of the proposed method shown are those sufficient and necessary to evaluate a new classification method, regarding both stability and overfitting. However, as the evaluator suggested, the ROC metric was added to guarantee the adequate performance of the algorithm.)
R:// Thanks for the comment. As indicated before, we revised the structure of the 'Materials & Methods' section. In that section, we established the theoretical basis of the surface tension-neural net algorithm (PST-NN) and the procedure to implement it. Besides that, we indicated that the PST-NN approach was compared with previously published techniques (Mosquera, Parra-Osorio & Castrillón, 2016; Mosquera Navarro et al., 2018; Mosquera, Castrillón & Parra, 2018; Mosquera, Parra-Osorio & Castrillón, 2018; Mosquera, Castrillón Gómez & Parra-Osorio, 2019b), which were applied to the original data included in the present work. Detailed information on these previously published techniques was deleted, in order to avoid distracting the reader and to reduce the length of the manuscript.
Also, the 'Results' section was reorganized, including subheadings as 'Adjustment of the PST-NN approach' and 'Classification performance of the PST-NN approach', in order to show more clearly the findings and impacts of the results.
For example, it is indicated that: 'In relation with the layer that represents the surface tension model in the PST-NN approach (Fig.1), Figure 4 shows the results of the perimeter and mass outputs for each subject in the test group, according with the risk level. The outputs are plotted in a XY graph, where the mass output corresponds to the X axes and the perimeter to the Y axes. As result, it is possible to see that the points are grouped in specific areas as a function of the risks level. In this sense, the types of risk may additionally be interpreted in physical form. Indeed, the highest risk in the graph corresponds to the red crosses, which present mass values which are relatively larger than the rest, along with relatively smaller perimeters, which cause the surface tension of the four liquids to break. The lowest risk (represented in blue with asterisks) presents relatively high perimeters and relatively low masses, which cause them to remain on the surface of certain liquids.'
Pg 14 ln 420-481
6. In Fig. 6, the best performance might come at epoch of 100, why did the authors need to run to 1000 epochs?
R:// Thanks for the comment. We agree with you. We ran 1000 epochs because the app that we developed to automate the computation was configured with that value. Running 1000 epochs is unnecessary in this work, since the best performance is reached at around epoch 100.
Therefore, we decided to modify Fig. 6 (Fig. 3 in the new version of this manuscript), reducing the length of the X-axis to 300 epochs, and indicating in the body of the manuscript the following: 'The trend of the blue line, corresponding to the training group, shows how the mean squared error rapidly decreases over roughly the first 100 iterations, reaching a plateau for higher numbers of iterations. This plateau indicates that the neural net model has reached its parameter optimum and, therefore, any additional increase in the number of iterations does not significantly improve the parameter adjustment. Concretely, in this study and for the following results, 108 iterations were considered in the adjustment of the PST-NN parameters. The curve of the mean squared error corresponding to the testing group (red line) shows a similar behavior to that of the training group. Indeed, the following results are reported only for the test set.'
Pg 14 ln 425.
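As a sketch of how such a cut-off can be read from the error curve (the tolerance and patience values below are illustrative choices, not the ones used in the manuscript), one can stop once the training MSE no longer improves appreciably:

```python
def pick_stopping_iteration(mse_per_iteration, tol=1e-3, patience=10):
    """Return the iteration after which the MSE curve has plateaued.

    The curve is considered flat once the error has not improved by more
    than `tol` for `patience` consecutive iterations.
    """
    best = float("inf")
    stale = 0
    for i, mse in enumerate(mse_per_iteration, start=1):
        if best - mse > tol:
            best = mse
            stale = 0
        else:
            stale += 1
        if stale >= patience:
            return i - patience
    return len(mse_per_iteration)

# Synthetic, rapidly decreasing error curve; the plateau is detected near iteration 100.
curve = [1.0 / (1 + 0.1 * i) for i in range(300)]
print(pick_stopping_iteration(curve))
```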
7. The authors mentioned 'To prevent overfitting, the preprocessed dataset was split in 5-fold cross validation', but I think it is not a solution to prevent overfitting. This is a normal evaluation method for all machine learning problems. I'd like to ask whether the authors applied some other techniques to avoid overfitting in their models. According to the results, I think the models contained a lot of overfitting.
R:// Thanks for the comment. As the reviewer mentions, overfitting is a general problem in supervised machine learning processes. In order to point out this problem in our work and describe the techniques that we used to limit its effect, we included the following paragraph in the 'Discussion' section:
'However, in supervised machine learning, overfitting could be present, indicating that the model does not generalize well from observed data to unseen data (Xue Ying, 2019). Because of that, the model performs better on the training set than on the testing set. In order to reduce the effect of overfitting during the adjustment of the PST-NN parameters, we used the train/test split methodology (Vabalas et al., 2019), together with control of the number of iterations during the neural network training and the normalization and dimensionality reduction of the input data (Xue Ying 2019 J. Phys.: Conf. Ser. 1168 022022). However, the number of subjects in each risk level group is not uniformly distributed, Risk level 4 being the group with the most subjects and Risk levels 1 and 2 the groups with the fewest. This situation could cause the PST-NN model to memorize in more detail the data belonging to Risk level 4, and in less detail the data of Risk levels 1 and 2.'
Pg 16 ln 508-530.
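For illustration only, a minimal scikit-learn sketch of the generic anti-overfitting steps mentioned above (hold-out split, normalization, dimensionality reduction and a capped number of training iterations). This is not the PST-NN model itself (the surface-tension layer is specific to the paper); the feature matrix, labels, component count and layer size are placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))      # placeholder psychosocial-factor features
y = rng.integers(1, 5, size=300)    # placeholder risk levels 1-4

# 80/20 hold-out split, stratified by risk level.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),                     # normalization of the inputs
    PCA(n_components=10),                 # dimensionality reduction (illustrative count)
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=108, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```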
" | Here is a paper. Please give your review comments after reading it. |
79 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cloud computing has emerged as a cutting-edge technology which is widely used by both private and public institutions, since it eliminates the capital expense of buying, maintaining, and setting up both hardware and software. Clients pay for the services they use, under the so-called Service Level Agreements (SLAs), which are the contracts that establish the terms and costs of the services. In this paper, we propose the CloudCost UML profile, which allows the modeling of cloud architectures and the users' behavior when they interact with the cloud to request resources. We then investigate how to increase the profits of cloud infrastructures by using price schemes. For this purpose, we distinguish between two types of users in the SLAs: regular and high-priority users. Regular users do not require a continuous service, so they can wait to be attended to. In contrast, highpriority users require a constant and immediate service, so they pay a greater price for their services. In addition, a computer-aided design tool, called MSCC (Modeling SLAs Cost Cloud), has been implemented to support the CloudCost profile, which enables the creation of specific cloud scenarios, as well as their edition and validation. Finally, we present a complete case study to illustrate the applicability of the CloudCost profile, thus making it possible to draw conclusions about how to increase the profits of the cloud infrastructures studied by adjusting the different cloud parameters and the resource configuration.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The importance of the cloud has increased enormously over the last few years, and currently it dominates the Information Technology (IT) markets. According to a recent study by the Synergy Research Group <ns0:ref type='bibr' target='#b28'>(SRG, 2021)</ns0:ref>, the cloud market keeps expanding geographically in all regions of the world. Although the public cloud is currently controlled by a few top providers, new opportunities for new participating companies and business models have emerged.</ns0:p><ns0:p>Therefore, it is essential for cloud providers to be more competitive and to have the ability to manage their resources more effectively. With this goal in mind, the modeling and analysis of cloud systems can be a powerful tool for cloud providers to manage their resources, increase their profits, and be more competitive. Thus, we have defined a UML 2.5 parameterized profile, named CloudCost, to model the architecture and interactions of a cloud system based on Service Level Agreements (SLAs) <ns0:ref type='bibr' target='#b15'>(Khan, 2016)</ns0:ref>. The Unified Modeling Language (UML) <ns0:ref type='bibr'>(OMG, 2017)</ns0:ref> is one of the most widely recognized and used modeling languages in this field. UML is a standard of the Object Management Group (OMG) <ns0:ref type='bibr' target='#b20'>(OMG, 2016)</ns0:ref>, and is designed to be a general-purpose, developmental, modeling language that provides a standard method to visualize the final design of a system.</ns0:p><ns0:p>Our profile consists of a UML component and a sequence diagram that models the relationships and associations between the system components, the flow of actions, and the interactions between the roles in the system. The main advantage of the use of a parameterized profile is that it makes it possible to specify a wide spectrum of cloud configurations helping cloud service providers to maximize their profit and be PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55335:1:1:NEW 22 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science more competitive. The main parameters considered are those related to the cost of the different VMs offered and the SLAs signed by the users, such as discounts offered for user subscriptions when their requests cannot be immediately attended to, offers made by the cloud provider to resume execution in the event that it did not finish within the estimated time and compensations due to resource unavailability, in the case of high-priority users.</ns0:p><ns0:p>A previous version of the profile was presented in <ns0:ref type='bibr' target='#b5'>(Bernal et al., 2019b)</ns0:ref>. In that work, the cloud infrastructure and the interactions between the users and the cloud provided were modeled without considering any cost-per-use strategy or different user types (SLAs). Thus, the present work is an extension that includes cost-related parameters and two different types of users, namely regular users and high-priority users, depending on the kind of SLA they sign. Regular users do not require a continuous service, so they can wait to be attended to, while high-priority users pay for a continuous service, so they need an immediate answer to their requests.</ns0:p><ns0:p>The proposed UML profile captures the main elements of the cloud infrastructure and the client interactions, which is reflected in a methodological way to model different scenarios and then launch the corresponding simulations on a cloud simulator. 
As testbed, we use the simulator Simcan2Cloud <ns0:ref type='bibr' target='#b4'>(Bernal et al., 2019a)</ns0:ref>, which makes it possible to load the cloud scenarios created by the MSCC tool and simulate the execution of the workloads generated, which consist of a large number of users. It is worth noting that these simulations can be executed on a personal computer, and therefore no special features are required of the platform to support these executions. The results provided by the simulator allow us to carry out the performance evaluation and the profit analysis of our cloud models.</ns0:p><ns0:p>The paper is structured as follows. The motivation behind, and the main contributions of the paper are explained in Section 2. A complete description of the related work is given in Section 3, and Section 4 details the methodology used. Section 5 presents the CloudCost UML Profile. Section 6 describes the complete MSCC (Modeling SLAs Cost Cloud) modeling tool that we have implemented to create, edit, and validate sequence and component models of cloud systems based on SLAs. Section 7 examines the profile and draws some conclusions about how to increase the cloud profit for the cloud studied by adjusting the different parameters and resource configuration. And finally, Section 8 contains the conclusions and future lines of work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MOTIVATION AND CONTRIBUTION</ns0:head><ns0:p>In this section, we present the motivation and main contributions of this work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Motivation</ns0:head><ns0:p>Most of the existing works about the modeling and analysis of cloud systems are focused on the cloud infrastructure and the performance evaluation from a user's viewpoint. Our goal, however, is to put the focus on the interactions between the users and the cloud service provider in order to analyze the profits obtained by the latter. The results obtained in this work could then be useful to increase these profits by setting the appropriate cloud configuration for an expected workload. Thus, the CloudCost UML profile allows us to model both the cloud infrastructure and the users' interactions with the cloud service provider in order to analyze the profits obtained under different workloads.</ns0:p><ns0:p>For this purpose, the CloudCost UML profile includes the pay-per-use model in cloud systems, by considering two different types of users, namely regular and high-priority users, who sign the so-called Service Level Agreements (SLAs) to establish the specific conditions and prices to access and use the cloud. Regular users request a number of virtual machines from the cloud provider, with some specific characteristics, but they can wait to be attended to when these services are not available. In contrast, high-priority users should obtain the services they request immediately, so they usually pay a greater price, and must be compensated when these services cannot be provided.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Contribution</ns0:head><ns0:p>This paper extends the CloudCost UML profile and the MSCC modeling tool -presented in <ns0:ref type='bibr' target='#b5'>(Bernal et al., 2019b)</ns0:ref> -by including the new features related to the analysis of profits, namely, the user types, costs per resource, discounts, offers, and compensations. This new version also includes the SLAs, with both types of user, and the users' behavior in terms of their interactions with the cloud service provider for both types of user. Hence, the modeling tool has been extended as well, so that we can easily create and edit parameterized cloud models, so as to consider different cloud infrastructures and different pricing schemes.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In addition, these UML models can be validated, and then we can generate the input configuration files required to carry out the performance evaluation using the Simcan2Cloud simulator <ns0:ref type='bibr' target='#b4'>(Bernal et al., 2019a)</ns0:ref>.</ns0:p><ns0:p>To summarize, we can highlight the following main contributions of this paper:</ns0:p><ns0:p>• the definition of a new parameterized UML profile -called CloudCost -for modeling cloud systems with costs, considering the new characteristics of the cost-per-use business model,</ns0:p><ns0:p>• the validation of the parameters assigned in the cloud models using OCL rules,</ns0:p><ns0:p>• the extension of the modeling and validation tool (MSCC), so as to allow us to easily design the cloud infrastructure and the user interactions with SLAs and cost-per-use, and then validate the models,</ns0:p><ns0:p>• a complete case study that illustrates the whole process, from the design of a cloud system to the performance evaluation and profit analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>RELATED WORK</ns0:head><ns0:p>In recent years, there has been a growing interest among the scientific community in cloud services, cloud computing models, SLAs and pricing schemes <ns0:ref type='bibr' target='#b8'>(Chi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b7'>Chen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Soni and Hasan, 2017;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cong et al., 2018)</ns0:ref>. This fact is mainly due to the potential expansion of this computational paradigm and the significance of increasing the cloud service provider's profitability. <ns0:ref type='bibr' target='#b6'>Bokhari et al. (2016)</ns0:ref> present a comparative study to evaluate the cloud models with the purpose of helping clients to determine what kind of service they need and the risks associated with each model. Sala-Zárate and Colombo-Mendoza (2012) present a review of different cloud service providers in the market to help developers, users, and enterprises to select the one which meets their needs. There are other works that focus on a specific cloud model. For instance, <ns0:ref type='bibr' target='#b25'>Sharma and Sood (2011)</ns0:ref> present an SaaSoriented work, in which an architecture for defining cloud software services using a platform-independent model (PIM) is introduced. This model is then transformed into one or more platform-specific models (PSMs). The purpose of this paper by Sharma and Sood is to emphasize the benefits of MDA-based software development in developing software applications in the cloud independently of the specific technologies used. An IaaS-oriented work is presented by <ns0:ref type='bibr' target='#b12'>Ghosh et al. (2013)</ns0:ref>, in which they model a specific class of IaaS cloud to offer services with machines divided into three pools with different values for two parameters: provisioning delay and power consumption. They propose a multi-level interacting stochastic model, in which the model solution is obtained iteratively over individual submodel solutions.</ns0:p><ns0:p>From the results obtained in the paper, they state that the workloads and the system characteristics had an impact on two performance measures: mean response delay and job rejection probability. <ns0:ref type='bibr' target='#b18'>Naseri and Jafari Navimipour (2019)</ns0:ref> propose a hybrid method for efficient cloud service composition.</ns0:p><ns0:p>An agent-based method is also used to compose services by identifying the QoS parameters. Then a particle swarm optimization algorithm is employed to select the best services. They perform several experiments on a simulator implemented in Matlab and analyze the results by considering the number of combined resources, waiting time and the value of a fitness function. The whole process requires a significant time to find a solution. <ns0:ref type='bibr' target='#b29'>Zanbouri and Jafari Navimipour (2020)</ns0:ref> propose a honeybee mating optimization algorithm for cloud service composition. A trust-based clustering algorithm is used to address the trust challenge. The proposed method is simulated repeatedly with a real workload and a random workload to evaluate its efficiency. 
It works well for small-scale problems, but its performance with regards to computation time is worse for large-scale problems.</ns0:p><ns0:p>The above works focus on improving the use of cloud services from the user's point of view, while our work aims at increasing the provider's profits without negatively affecting the services offered.</ns0:p><ns0:p>Regarding cloud modeling, different UML profiles have been proposed for modeling multiple aspects of a cloud system. <ns0:ref type='bibr' target='#b14'>Kamali et al. (2014)</ns0:ref> present a UML profile to model the deployment of a system in the cloud. It allows the modeling of instances and the infrastructure offered as a service. However, the physical infrastructure, the interactions of the users with the cloud provider and the cost per usage are not considered in that work. <ns0:ref type='bibr' target='#b3'>Bergmayr et al. (2014)</ns0:ref> propose the Cloud Application Modeling Language (CAML), a UML-based modeling language that considers pricing features, modeling and deployment of cloud topologies. CAML provides dedicated UML profiles in which the cost of the cloud resources is However, the cost of the services is not considered. <ns0:ref type='bibr' target='#b30'>Zhou et al. (2019)</ns0:ref> propose a model based on smart contracts to detect and register SLA violations in a trustworthy way using witnesses. They use blockchain to automate the SLA lifecycle and ensure fairness between roles. They deploy the implemented model on a blockchain test net to test all the functionalities. Both provider and customer must reward witnesses for the monitoring service and therefore this entails a cost. This proposal only takes into account the CPU usage and RAM memory, butnetwork features are not considered. <ns0:ref type='bibr' target='#b16'>Li et al. (2019)</ns0:ref> propose a host overloading/underloading detection algorithm based on a linear regression prediction model to forecast CPU utilization. The goal is to minimize the power consumption and SLA violations by using an SLAaware and energy-efficient virtual machine consolidation algorithm. They perform several experiments with a real and a random workload on the CloudSim simulator. The authors focus on reducing the energy consumption of the cloud data centers. However, our work tries to increase the income of the cloud service provider through the study of pricing schemes.</ns0:p><ns0:p>Regarding the research works that take into account pricing schemes, we can mention the work by <ns0:ref type='bibr' target='#b7'>Chen et al. (2018)</ns0:ref>, who conducted a comparative study analyzing two pricing schemes offered to cloud users by some of the biggest cloud service providers: the reservation-based scheme and the utilizationbased scheme. The former is also called the R-scheme and is frequently used by Microsoft and Amazon.</ns0:p><ns0:p>The latter is also called the U-scheme and is commonly adopted by Google. <ns0:ref type='bibr' target='#b10'>Cong et al. (2018)</ns0:ref> present a work focused on maximizing the cloud service provider's profits. Their approach analyzes and varies the pricing schemes without violating the established SLA. For this purpose, the authors provide a dynamic pricing model based on the concept of user-perceived value, which captures the real supply and demand relationships in the cloud service market. 
<ns0:ref type='bibr' target='#b27'>Soni and Hasan (2017)</ns0:ref> present a pricing scheme comparison based on several characteristics, such as fairness, merits, and demerits. In this study, the authors include a discussion related to both service and deployment models. All of these studies analyze several pricing schemes offered to the users, with the main objective of maximizing the cloud service provider profit, while offering better prices to the users for some services. However, the main goal of our work, beyond the study of pricing schemes, is to analyze and increase the profitability of the cloud service provider while maintaining a balance between the cost of the infrastructure and the user's demands. <ns0:ref type='bibr' target='#b13'>Herzfeldt et al. (2018)</ns0:ref> discuss different guidelines for the profitable design of cloud services. They carried out 14 interviews with cloud service provider experts, in which they addressed the relationship between value facilitation, that is, the capability to accumulate resources for future customer demands, and profitability for the cloud service provider. In the present work, we adopt the second perspective, that is, the study of the profitability for the cloud service provider. However, we should point out that our approach is quite different from the above works. We model both the cloud infrastructure and the user interactions with the cloud service provider, with the goal of analyzing how they affect the global incomes for the cloud service provider.</ns0:p><ns0:p>For the purpose of showing the main differences between this paper and the existing works, as well as presenting the main novelties of our proposal, we have conducted a comparison between some of the most relevant approaches analyzed in this section and our work (see Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). The first column of the table shows the authors of the proposal. The next two columns, namely textitPricing scheme and CP profits, concern aspects related to SLAs. Specifically, the former indicates whether the proposal analyzed provides some type of pricing scheme. As can be seen, all the papers reviewed except <ns0:ref type='bibr' target='#b14'>Kamali et al. (2014)</ns0:ref> provide it. The latter shows whether the approaches are aimed at enhancing the profits of the cloud service provider. In this case, <ns0:ref type='bibr' target='#b27'>Soni and Hasan (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b10'>Cong et al. (2018)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>Herzfeldt et al. (2018)</ns0:ref>, and our work provide this feature. The following five columns focus on Cloud modeling aspects. Services indicates whether the proposal is able to model cloud services. This feature is actually supported by all the works in the table. Infrastructure denotes whether the work provides the mechanisms to model a cloud computing infrastructure. In this case, besides the current work, only <ns0:ref type='bibr' target='#b14'>Kamali et al. (2014)</ns0:ref> Manuscript to be reviewed Computer Science service provider. This feature is only supported by our work. Next, the column labeled CP shows whether the cloud service provider's behavior is modeled with a high level of detail. In this case, only <ns0:ref type='bibr' target='#b10'>Cong et al. (2018)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>Herzfeldt et al. (2018)</ns0:ref> and our proposal include this feature. Finally, the last column denotes whether experiments were conducted in the study. 
With regard to experimental studies, <ns0:ref type='bibr' target='#b14'>Kamali et al. (2014)</ns0:ref> deploy a bank management's system in a cloud computing environment based on their profile. <ns0:ref type='bibr' target='#b7'>Chen et al. (2018)</ns0:ref> conduct a numerical study to examine the impact of the pricing model parameters on the service providers' optimal pricing decisions and social welfare. <ns0:ref type='bibr' target='#b10'>Cong et al. (2018)</ns0:ref> conduct simulation experiments with Matlab to test the effectiveness of the proposed scheme based on the user-perceived value. Finally, in our work, we perform simulations with Simcan2Cloud to draw conclusions about how to improve the management of resources to increase the provider's profits, without negatively affecting the services offered. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>METHODOLOGY</ns0:head><ns0:p>In this section, we describe the methodology used to model and analyze the performance of cloud system infrastructures by considering SLAs for two different types of user (regular and high-priority). Figure <ns0:ref type='figure'>1</ns0:ref> shows the different phases of this methodology, which are described as follows:</ns0:p><ns0:p>1. System Modeling. The CloudCost UML profile is defined to model both the cloud infrastructure and the interactions between the cloud service provider and the users when they access a cloud to request resources. This profile consists of sequence and component diagrams. As the behavior of regular and high-priority users is different, we consider two parameterized sequence diagrams, one for each type of user. These diagrams show the interactions of the users with the cloud service provider, thus defining the behavior of each type of user. In addition, the number of virtual machines required by the users and their specifications are established by setting up the corresponding parameters in these diagrams. Furthermore, a component diagram is used to model the infrastructure of the cloud system. The specific cloud infrastructure configuration is then established by setting the corresponding parameters in the component diagram.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Model Validation. Each cloud model generated from the profile is validated to check certain</ns0:head><ns0:p>properties they must fulfill. For instance, the costs must be greater than or equal to 0, the users must sign one SLA, etc. A set of OCL rules are defined for this purpose, and as a result of this validation we obtain the possible errors or warnings in the model. If there are errors in the model, they must be fixed, so we return again to Phase 1 to correct the model, and then we must validate it again (Phase 2).</ns0:p><ns0:p>3. Performance Evaluation. Once the model has been validated, the configuration files are generated for the cloud simulator (Simcan2Cloud <ns0:ref type='bibr' target='#b4'>(Bernal et al., 2019a)</ns0:ref>). Simulations are then executed, providing us with the performance metrics, namely the number of regular/high-priority users that were served, the number of them that left the system without being served, the waiting times for the users, etc.</ns0:p><ns0:p>The analysis of these results allows us to draw relevant conclusions about the most appropriate cloud infrastructure for a specific workload.</ns0:p></ns0:div>
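Read as a driver loop, the three phases can be sketched as follows; the dictionary scenario and the stand-in validate/simulate callables are purely illustrative and do not correspond to the MSCC or Simcan2Cloud interfaces:

```python
def run_phases(scenario, validate, simulate, max_rounds=3):
    """Illustrative driver: model, validate (loop while errors remain), then evaluate."""
    for _ in range(max_rounds):
        errors = validate(scenario)             # Phase 2: OCL-like checks
        if not errors:
            return simulate(scenario)           # Phase 3: run the simulator, collect metrics
        scenario = dict(scenario, fixed=True)   # Phase 1 revisited: correct the model
    raise RuntimeError("model still invalid after several rounds")

# Toy stand-ins for the MSCC validation and the Simcan2Cloud run (assumptions).
metrics = run_phases(
    {"users": 2000, "machines": 64, "fixed": False},
    validate=lambda s: [] if s["fixed"] else ["negative cost"],
    simulate=lambda s: {"served": s["users"], "income": 0.0})
print(metrics)
```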
<ns0:div><ns0:p>[Figure 1 labels: Model; CloudCost Profile: Cloud Infrastructure (Component Diagrams), Interactions between Cloud and Users (Sequence Diagram), SLAs (High-priority and Regular Users); Model Validation (OCL Rules); Validated]</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CLOUDCOST PROFILE</ns0:head><ns0:p>In this section we define the CloudCost profile, which is an extension of the Model4Cloud profile that we introduced in <ns0:ref type='bibr' target='#b5'>(Bernal et al., 2019b)</ns0:ref>, including costs and SLAs for two types of user (regular and high-priority).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>CloudCost profile</ns0:head><ns0:p>Users are classified into two types, namely regular and high-priority, and they request certain VM resources, according to the catalog offered by the cloud provider. Regular users do not require an immediate answer to their requests, so they can wait to be attended to, and thus the price they pay varies depending not only on the VM features they have requested, but also on the conditions in which they are finally provided with them. In contrast, high-priority users expect an immediate answer to their requests, in some cases on a 24/7 basis, so they are able to pay for extra resources (if required) for their services to be immediately executed. It should be very unlikely for a high-priority user request not to be met, and a compensation must be offered in this case. We consider that a cloud infrastructure consists of one or several data centers, each of which consists of a set of nodes grouped in racks that are interconnected through a communication network. Each rack contains a collection of nodes with the same hardware features, that is, CPU, memory, and storage. All this infrastructure is managed by a cloud service provider that offers a catalog of VMs with assigned Service Level Agreements (SLAs), which include the service costs to rent these resources. These VMs are mapped to physical machines by using a specific resource allocation policy.</ns0:p></ns0:div>
<ns0:div><ns0:head>6/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_0'>2020:11:55335:1:1:NEW 22 Mar 2021)</ns0:ref> Manuscript to be reviewed Some of these machines are always available, ready to serve a user request, but there are some reserved physical machines that will only be used for high-priority user requests. Thus, the cloud service provider reserves some machines to be only used when high-priority user requests cannot be met by the set of normal (non-reserved and always in execution) machines. In such a case, one of the reserved machines with enough resources to fulfill the user demands must be activated to satisfy the request. It is a critical decision for the cloud service provider to fix the number and features of the available machines to attend to the regular user requests, as well as to define the ratio of reserved machines that will attend to the incoming requests from high-priority users when they cannot be served by the non-reserved machines. In this decision, the cloud service provider must take into account both the total number of available physical resources and the workload generated by the users, in order to attend to the largest number of users. A computing rack consists of a set of computing machines (ComputingMachine), and finally a storage rack consists of a set of storage machines (StorageMachine).</ns0:p></ns0:div>
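The elements described above can be pictured with a few plain data classes; the names and fields below are an illustrative reading of the profile, not its actual stereotype definitions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComputingMachine:
    cpu_cores: int
    memory_gb: int
    storage_gb: int
    reserved: bool = False      # reserved machines only serve high-priority requests

@dataclass
class Rack:
    machines: List[ComputingMachine] = field(default_factory=list)
    network_bandwidth_gbps: float = 1.0

@dataclass
class DataCenter:
    racks: List[Rack] = field(default_factory=list)

@dataclass
class VMOffer:
    name: str                   # e.g. "VM_small" (illustrative)
    cores: int
    memory_gb: int
    disk_gb: int
    base_cost_per_hour: float   # cost when the request is served immediately

@dataclass
class CloudProvider:
    data_centers: List[DataCenter]
    catalog: List[VMOffer]      # VMs offered through the SLAs

# Tiny example: one rack with four normal machines and one reserved machine.
catalog = [VMOffer("VM_small", cores=1, memory_gb=2, disk_gb=20, base_cost_per_hour=0.5)]
rack = Rack(machines=[ComputingMachine(8, 32, 500) for _ in range(4)]
                     + [ComputingMachine(8, 32, 500, reserved=True)])
provider = CloudProvider(data_centers=[DataCenter(racks=[rack])], catalog=catalog)
```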
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Many components of a data center normally have the same characteristics, as they are usually purchased in large quantities. Therefore, we have defined the relationships between components as associations between stereotypes (see Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>), so that each component can be referenced from different places and reused. We can see in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> that a CloudInfrastructure consists of a cloud service provider, which manages a number of data centers. The DataCenterElement data type represents a collection of data centers with the same configuration.</ns0:p><ns0:p>In the same way, the RackElement data type has been modeled to represent a collection of racks with the same configuration. As an illustration, in the rack specification, we must define the number of boards, the number of machines per board and the network bandwidth for the communication between machines.</ns0:p><ns0:p>Furthermore, computing machines can be either non-reserved or reserved, as indicated above.</ns0:p><ns0:p>The following step to define the profile is to define the stereotypes for the interactions between the users and the cloud provider (see Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>). Users request virtual machines (VM stereotype) with their associated SLAs (SLA stereotype), and the execution of applications on the virtual machines (Application Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As an illustration, we can see the different SLAs offered by the cloud service provider with the cost of the VMs and signed by the users (User stereotype). A VM request consists of the following elements: number of cores, computing units (CUs) for the CPU cores, disk size and memory size. Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> shows the sequence diagram for the regular user's behavior. In this diagram, we capture the interactions of a regular user with the cloud service provider when renting a VM. First, the user requests the list of VMs offered (requestOfferedVms message) by the cloud service provider in order to know which of them fits his needs best. In response, the cloud service provider sends the list of available VMs to the user. In this message, the cloud service provider indicates the attributes of each VM: CPUs, storage, memory and base cost per hour defined in the SLA. This base cost is the amount to be paid for one hour of the VM under normal conditions when a request can be immediately attended to. However, if no VM is currently available to provide the service (with the user's required features), the user will be informed and receive a discount for the delay.</ns0:p><ns0:p>Thus, with the list of VMs the user requests (request message) one of these VMs for a time (renting-Time). Afterwards, the cloud service provider responds with the IP, type, and cost of the allocated rental VM. When the request can be immediately satisfied, the type will be normal, otherwise, the type returned will be deferred. In both cases, the corresponding cost is also returned. Obviously, the deferred price will be lower than the normal price, because the user has to wait for the service to become available.</ns0:p><ns0:p>After receiving a normal type answer, the user can execute the applications on the VM provided (see Figure <ns0:ref type='figure'>7</ns0:ref>). Two cases can now arise: either the execution of the applications finishes on time, so the user receives an ok message from the cloud service provider and the interaction terminates, or the renting time expires before the applications have been completely executed. In this case, the cloud provider offers the user an extension to the renting time with the base price per hour plus a surcharge (offer), i.e. the user can pay for this extra time in order to complete the execution, or the user can decline and stop the interactions (see Figure <ns0:ref type='figure'>8</ns0:ref>).</ns0:p><ns0:p>Finally, when no VM with the required features becomes available, the user receives the deferred message (see Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>). As mentioned above, the price, in this case, will be lower, so the VM renting price will have a discount applied to the normal cost. The user can now decide to wait for the required VM to become available or leave. If the user decides to wait, he subscribes to the VM characteristics (see Figure <ns0:ref type='figure'>9</ns0:ref>) for a specific time (maxSubTime), with the intention of being notified when a VM with these features is available, and then, upon receiving this notification message, the user starts the execution of his applications. 
Note that when the subscription time expires and no VM has been available for this period, the user leaves without being able to execute the applications.</ns0:p><ns0:p>B.-UML profile for high-priority users.</ns0:p><ns0:p>The sequence diagram for high-priority users is shown in Figure <ns0:ref type='figure'>10</ns0:ref>. The SLA for these users states that they should obtain the requested services immediately, and if no VM matching their needs is available at that moment, the cloud service provider must start up a VM in order to provide the service. In the unlikely event that the cloud service provider cannot start up a VM with the requested features, the user must be compensated for the damages caused. This case would only occur when a VM with the requested features in the pool of reserved machines cannot be allocated, which would be caused by an unexpected number of high-priority user requests. This would be a consequence of a misconfiguration of the cloud, and would probably require the addition of new racks in order to be able to deploy some additional VMs while keeping the system well balanced.</ns0:p><ns0:p>Like the regular users, high-priority users request (message requestOfferedVms) the list of VMs from the cloud service provider. The cloud service provider replies with the list of VMs (message response),</ns0:p><ns0:p>indicating the corresponding costs per hour for each one of them. The user then requests one for a certain period of time (argument rentingTime in message request). If the requested VM is available, the user executes (see Figure <ns0:ref type='figure'>7</ns0:ref>) his applications, paying the amount indicated. Otherwise, if there is no available VM with the requested features, the cloud service provider should start up one of the reserved machines with the VM matching the user's requirements. In this case, the user must pay a surcharge, which is included in the cost indicated in the response message. As mentioned above, should the cloud service</ns0:p><ns0:p>provider not be able to start up a VM matching the user's requirements, compensation must be provided.</ns0:p><ns0:p>In this case, the interactions stop (no vm in Figure <ns0:ref type='figure'>10</ns0:ref>), and the user receives an economic compensation, as indicated in the SLA.</ns0:p></ns0:div>
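The cost side of these two protocols can be summarized with two small helper functions. The formulas below (a flat discount for deferred service, a per-hour surcharge for reserved machines and for resumed executions, and a fixed compensation) are an illustrative reading of the SLA terms, not the exact pricing model of the paper:

```python
def regular_user_payment(base_cost, hours, deferred, discount,
                         extended_hours=0, offer_surcharge=0.0):
    """Amount paid by a regular user under the SLA terms sketched above."""
    hourly = base_cost * (1 - discount) if deferred else base_cost
    payment = hourly * hours
    if extended_hours:  # the user accepted the provider's offer to resume execution
        payment += (base_cost + offer_surcharge) * extended_hours
    return payment

def priority_user_payment(base_cost, hours, served, on_reserved,
                          surcharge, compensation):
    """Amount paid by (or owed to) a high-priority user."""
    if not served:      # no reserved VM could be started: the provider compensates
        return -compensation
    hourly = base_cost + surcharge if on_reserved else base_cost
    return hourly * hours

# A deferred regular request with a 10% discount plus a 1-hour paid extension.
print(regular_user_payment(base_cost=0.5, hours=2, deferred=True,
                           discount=0.10, extended_hours=1, offer_surcharge=0.05))
# A high-priority request served on a reserved machine with a 0.2/hour surcharge.
print(priority_user_payment(base_cost=0.5, hours=2, served=True,
                            on_reserved=True, surcharge=0.2, compensation=3.0))
```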
<ns0:div><ns0:head n='5.2'>Validation of CloudCost Profile Models</ns0:head><ns0:p>The relationships between stereotypes and their properties define constraints by themselves (see Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>). For instance, a cloud service provider must offer at least one SLA, where the SLA must at least have the cost of a VM, but this SLA could include more than one VM. However, some constraints cannot be defined and checked through stereotype relationships. Thus, they have to be explicitly checked to ensure the model's correctness. Our proposed MSCC (Modeling SLAs Cost Cloud) tool also makes it possible to validate the model by defining a set of Object Constraint Language (OCL) rules <ns0:ref type='bibr' target='#b19'>(OMG, 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>11/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55335:1:1:NEW 22 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Table 2.</ns0:note><ns0:p>OCL rules for the model validation process I. Notification Type Error MSCC Recommendation Please, one of the following actions must be performed to resolve the error: 1. A user must request another VM that is in the SLA. 2. A user must sign another SLA including that VM type. 3. The cloud service provider must include that type in this SLA signed by the user! Rule 4</ns0:p><ns0:p>SLA base cost must be greater than or equal to zero Rule Description</ns0:p><ns0:p>The defined base cost value must be greater or equal to zero. SLA inc-priority cost must be greater than or equal to zero Rule Description</ns0:p></ns0:div>
<ns0:div><ns0:head>Rule OCL Code</ns0:head><ns0:p>The defined inc-priority cost for high-priority users must be greater or equal to zero.</ns0:p><ns0:p>Rule OCL Code</ns0:p><ns0:formula xml:id='formula_0'>c o n t e x t SLA i n v s e l f . v m c o s t −> f o r A l l ( v m c o s t | v m c o s t . i n c p r i o r i t y >= 0 )</ns0:formula><ns0:p>Notification Type Error MSCC Recommendation Please, enter a positive value for the inc-cost cost value! Rule 6 SLA discount must be greater than or equal to zero Rule Description</ns0:p><ns0:p>The discount offered to regular users must be greater or equal to zero. Notification Type Error MSCC Recommendation Please, enter a positive value for the discount value! OCL is a declarative language designed to specify detailed aspects of a system designed using UML 358 models, and it is now part of the UML standard. OCL is considered a formal specification constraint Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Rule OCL Code</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. OCL rules for the model validation process II.</ns0:p></ns0:div>
<ns0:div><ns0:head>Rule 7 SLA compensation must be greater than or equal to zero Rule Description</ns0:head><ns0:p>The compensation cost for high-priority users must be greater or equal to zero. <ns0:ref type='table'>4</ns0:ref>. OCL rules for the model validation process III.</ns0:p></ns0:div>
<ns0:div><ns0:head>Rule OCL Code</ns0:head></ns0:div>
<ns0:div><ns0:head>Rule 9</ns0:head><ns0:p>PriorityUser has been modeled but no machine has been reserved Rule Description</ns0:p><ns0:p>There are no reserved VMs for the high-priority user. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Rule OCL Code</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>that the costs defined in the SLA have positive values. OCL also makes it possible define restrictions in the model's behavior to show recommendations to the users in order to parameterize the model. These recommendations appear as warnings in the validation process. Specifically, we have defined two possible warnings, which could be launched during the validation process (see rules 9 and 10 in Table <ns0:ref type='table'>4</ns0:ref>). Rule 9 launches a warning when there are high-priority users making requests, but there are no machines reserved for them. Rule 10 checks whether the resume time for a VM is set too long compared with the renting time, since we consider that the user will probably want to rent the VM for longer than in the first request.</ns0:p><ns0:p>As an example of validation, let us consider the situation presented in Figure11, which shows a fragment of an interaction diagram between the user and the cloud service provider. <ns0:ref type='figure'>11</ns0:ref>, we can see that the MSCC tool launches a warning when the user sets the resume time to 10 hours. The renting time in the initial request was 2 hours, but the user applications did not finish their execution in that time, so the user decides to resume for a further 10 hours. As a consequence, the last rule in Table <ns0:ref type='table'>4</ns0:ref> is violated, and the MSCC tool shows a warning message. In order to address this warning, the user should initially rent the VM for a longer period (renting Time) and reduce the resumption time (resume Time).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>12</ns0:ref> shows the validation of a component diagram. In this case, the user has established a negative value for the base cost of a VM of VM nano type. As a consequence, the fourth rule in Table <ns0:ref type='table'>2</ns0:ref> launches a violation, and the MSCC tool shows an error. Then, to address this warning the user should set a positive value for all the VM costs.</ns0:p></ns0:div>
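In the same spirit as the OCL rules above, the following is a hedged sketch of equivalent checks over a plain dictionary model; the dictionary layout is an assumption for illustration, not the MSCC model format:

```python
def validate_model(model):
    """Simplified error/warning checks mirroring the spirit of the rules above."""
    errors, warnings = [], []
    for sla in model["slas"]:
        for vm, cost in sla["vm_costs"].items():
            if min(cost["base"], cost["discount"], cost["compensation"]) < 0:
                errors.append(f"{sla['name']}/{vm}: all costs must be >= 0")
    for user in model["users"]:
        sla = next(s for s in model["slas"] if s["name"] == user["sla"])
        if user["vm"] not in sla["vm_costs"]:
            errors.append(f"{user['name']}: requested VM not covered by the signed SLA")
        if user.get("resume_time", 0) > user["renting_time"]:
            warnings.append(f"{user['name']}: resume time longer than the initial renting time")
    if any(u["priority"] for u in model["users"]) and model["reserved_machines"] == 0:
        warnings.append("high-priority users are modeled but no machine is reserved")
    return errors, warnings

# Example mirroring the two situations discussed above: a negative base cost for
# VM_nano and a 10-hour resume time after an initial 2-hour renting time.
model = {
    "slas": [{"name": "SLA_1",
              "vm_costs": {"VM_nano": {"base": -0.1, "discount": 0.1, "compensation": 2.0}}}],
    "users": [{"name": "UserA", "sla": "SLA_1", "vm": "VM_nano",
               "renting_time": 2, "resume_time": 10, "priority": False}],
    "reserved_machines": 0,
}
print(validate_model(model))
```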
<ns0:div><ns0:head n='6'>MSCC DESIGN TOOL</ns0:head><ns0:p>This section presents the MSCC (Modeling SLAs Cost Cloud) computer-aided design tool 1 . This tool focuses on the modeling of cloud systems, considering SLAs to define different user types, and the resources that can be provided for a given cost. Therefore, as mentioned above, this tool allows the user to parameterize the CloudCost profile to establish the value of certain parameters, such as the VMs requested by the user, the waiting time for regular users, and the compensation for high-priority users, and then to validate the models.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_4'>13</ns0:ref> shows a screenshot from the MSCC tool, which is an extension of the tool presented in a previous work <ns0:ref type='bibr' target='#b5'>(Bernal et al., 2019b)</ns0:ref>, and has been adapted to the new UML profile, as described in Section 5. It has been implemented using Papyrus <ns0:ref type='bibr' target='#b11'>(Gérard et al., 2010)</ns0:ref>, which is an open-source</ns0:p><ns0:p>Eclipse-based tool that provides an integrated environment for creating and editing UML models. The plug-ins used in the previous version have been adapted in order to include the new functionalities. These new features are the following:</ns0:p><ns0:p>• es.uclm.uml2cloud.profile: This plug-in includes the new UML profile, which takes into account both types of user, namely regular and high-priority, and the resource costs, in order to parameterize different cloud configurations and user interactions.</ns0:p><ns0:p>• es.uclm.uml2cloud.validation: This plug-in implements the constraints mentioned in Section 5.2 in order to validate the model.</ns0:p><ns0:p>• es.uclm.uml2cloud.customization: With this plug-in, the property views and the tool palettes have been extended to suit the new stereotypes. </ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>CASE STUDY</ns0:head><ns0:p>This section provides a case study that shows the applicability of both CloudCost, our proposed UML profile for representing the users' behavior in cloud environments, and MSCC, a tool for modeling cloud infrastructures. In essence, we are interested in analyzing the overall cloud income for processing the requests of a large number of users (workload) when different data-centers -supporting the cloud -are used. The workloads are generated using, as their basis, two different user roles: regular users and high-priority users. The experiments in this study were run on the Simcan2Cloud simulator <ns0:ref type='bibr' target='#b4'>(Bernal et al., 2019a)</ns0:ref>. In summary, the process for carrying out the experiments consists of the following steps: 1) modeling five cloud environments using MSCC; 2) generating the configuration files representing these clouds for the Simcan2Cloud simulator; 3) encoding the behavior of the users represented in Section 5 into Simcan2Cloud; and 4) simulating the processing of each workload in the five cloud environments modeled.</ns0:p><ns0:p>In order to clearly present this case study, the rest of this section is structured as follows. Firstly, we describe -in Section 7.1 -how each part of the cloud environment, that is, the underlying cloud architecture and the workloads, were modeled. Next, in Section 7.2, we analyze these models by simulating different cloud scenarios. Finally, we draw conclusions from the results obtained in Section 7.3.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.1'>Experimental settings</ns0:head><ns0:p>In order to conduct the experimental study, we generated five different cloud configuration models, by using a data-center with 64, 128, 256, 384, and 448 physical machines. Figure <ns0:ref type='figure' target='#fig_5'>14</ns0:ref> Manuscript to be reviewed Computer Science components of the data center are interconnected using an Ethernet Gigabit network. The cloud service provider offers seven different configurations of VMs, and a data-intensive application to be executed on the VMs. We also include the parameters required to model the costs for all the VMs included in the SLA signed by the user. In addition, we designed different synthetic workloads by parameterizing the diagrams of the CloudCost profile. Specifically, we created four different workloads containing 2000, 5000, 7500, and 10000 users. In these workloads, the percentage of high-priority users ranged from 0% to 40% of the total number of users.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>15</ns0:ref> shows the sequence diagram, which defines the parameterized behavior of the regular usernamely User A in the diagram -when interacting with the cloud service provider. In this case, User A requests a small VM for 2 hours and executes an intensive data application on it. The renting time of the initial request is 2 hours, but in the event of the user applications not completing their execution in that time, the user can decide to resume the execution for 1 hour more. However, it may happen that there are no available VMs with the required features to attend to the initial user request. In such a case, the user can then decide to wait for VMs meeting their requirements to become available, or the user can leave. If the user decides to wait, he subscribes for 24 hours, indicating the features required for the VM, with the intention of being notified when a VM with these features becomes available, and then, upon receiving the notification message, the execution of the applications starts. If the subscription time expires and no VM has become available in this period, the user leaves without being able to execute his applications.</ns0:p></ns0:div>
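A synthetic workload of the kind described above can be generated along the following lines; the field names and the 20% high-priority ratio are illustrative, and this is not the Simcan2Cloud workload format:

```python
import random

def generate_workload(n_users, priority_ratio, seed=0):
    """Synthetic workload in the spirit of the case study (fields are illustrative)."""
    rng = random.Random(seed)
    users = []
    for i in range(n_users):
        users.append({
            "name": f"user_{i}",
            "priority": rng.random() < priority_ratio,
            "vm": "VM_small",           # User A in Figure 15 requests a small VM
            "renting_time_h": 2,        # initial renting time
            "resume_time_h": 1,         # optional paid extension
            "max_subscription_h": 24,   # waiting time, only used by regular users
        })
    return users

workloads = {n: generate_workload(n, priority_ratio=0.2) for n in (2000, 5000, 7500, 10000)}
print({n: sum(u["priority"] for u in w) for n, w in workloads.items()})
```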
<ns0:div><ns0:head n='7.2'>Performance Analysis</ns0:head><ns0:p>In this section, we study the income of a cloud service provider with a specific cloud configuration, that is, with a given infrastructure and processing specific workloads. For this purpose, we simulate -using Simcan2Cloud -the execution of four different workloads -containing 2000, 5000, 7500, and 10000 users -o five different cloud infrastructures, consisting of 64, 128, 256, 384, and 448 physical machines. Furthermore, we consider two possible resource allocation strategies. In the first one, called NR-first, high-priority users are served first by using non-reserved machines. Only when these resources are not available are the high-priority user applications executed on reserved machines. In the second strategy, called R-first, high-priority users are served first by using the reserved machines. In this case, when there are no more reserved resources available, non-reserved machines are used to attend to their requests.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>16</ns0:ref> shows the results obtained from simulating the execution of the workloads on the cloud configuration with 64 computing machines. We indicate the income obtained in relation to the percentage of high-priority users (x-axis) and the percentage of reserved machines (y-axis). The income is represented in each square of the chart using the colored scale placed on the right-hand side of the figure. In this particular case, magenta and blue (> 400) represent higher incomes, while red and yellow (< 0) indicate lower incomes. The graphs on the left show the profits when the NR-first strategy is applied, i.e., nonreserved machines are used first to attend to high-priority user requests. The graphs on the right show the cloud service provider's incomes when using the R-first strategy, i.e., the high-priority user requests are served first by using reserved machines.</ns0:p><ns0:p>Broadly speaking, the results obtained when the two different allocation strategies are used are similar.</ns0:p><ns0:p>We can see a slight difference when the cloud processes a workload consisting of 2k users, in which case the R-first strategy (Figure <ns0:ref type='figure' target='#fig_9'>16</ns0:ref>.b) provides slightly better results than the NR-first strategy (Figure <ns0:ref type='figure' target='#fig_9'>16</ns0:ref>.a), as high-priority users use the reserved machines from the beginning and then they pay the corresponding surcharge. These graphs also show that the total income decreases as the size of the workload processed increases. This effect is mainly caused by the saturation of the cloud, which is not able to process such a number of users requesting resources and, consequently, the number of unattended high-priority users increases significantly, which in turn increases the number of compensations. In this particular case, the size of the cloud clearly limits the overall income, this therefore being the main bottleneck of the system. The best-case scenario using this cloud configuration reaches an income of -approximately -600 monetary units when the workload of 2k users is processed.</ns0:p><ns0:p>Next, we analyze how a cloud consisting of 128 machines processes the workloads (Figure <ns0:ref type='figure' target='#fig_22'>17</ns0:ref>). We observe some similarities with the previous experiment. First, both allocation strategies obtain almost the same results. 
Second, the highest income is obtained when a workload consisting of 2k users is processed (Figures 17.a and 17.b). Third, the income decreases when the size of the workload and the percentage of high-priority users increase -especially with a high number of reserved machines -because a large number of users leave the system without being served. In contrast to the previous cloud providing only 64 machines, this cloud obtains higher incomes, mainly thanks to the increased number of physical machines. In this case, there is no negative income. However, the cloud is still saturated and, therefore, the number of reserved machines is not enough to allow these users to be attended to, so compensations reduce the final incomes. The best-case scenario generates an income of approximately 1600 units when the workload of 2k users is processed.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_23'>18</ns0:ref> shows the results obtained for a cloud with an infrastructure containing 256 physical nodes. In this case, processing the workloads containing requests from 5k, 7.5k, and 10k users obtains better results than when the workload of 2k users is processed. Hence, increasing the physical resources has a significant impact on the overall income.</ns0:p><ns0:p>It is important to note that these charts clearly show a turning point in the overall income when the workload containing 7.5k users is processed. Note that with the R-first strategy, high-priority users are served first using the reserved machines. Thus, regular users have more non-reserved machines available, as long as the system is not saturated, so they do not have to compete with high-priority users in this case. We can also see that from 7.5k users upwards (Figures 18.e, 18.f, 18.g and 18.h) the cloud becomes saturated again with a high number of high-priority users. The 7.5k workload provides the best results for this cloud (Figures 18.e and 18.f). However, in this particular case, the two allocation strategies lead to different results. Figure <ns0:ref type='figure' target='#fig_23'>18</ns0:ref>.e shows that as the number of high-priority users increases (using the NR-first strategy), the income is only maintained with a percentage of reserved nodes lower than 14%. However, when the R-first strategy is used (Figure <ns0:ref type='figure' target='#fig_23'>18</ns0:ref>.f) we can see that the income can be maintained with a percentage of reserved machines greater than 14% (see purple area), so the global income -in this particular case -is better with the R-first strategy.</ns0:p></ns0:div>
<ns0:div><ns0:p>The best-case scenario in this cloud is obtained when the R-first strategy is used to process a workload consisting of 7.5k users, and it generates an income of 4000 monetary units. Note that the saturation of the cloud reduces the income when the number of users increases (Figures <ns0:ref type='figure' target='#fig_23'>18</ns0:ref>.g and 18.h).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_24'>19</ns0:ref> shows the results obtained for a cloud configuration with 384 physical nodes. Processing the workloads of 2k and 5k users obtains results similar to those for the previous clouds, that is, the R-first strategy provides slightly better results than the NR-first strategy, since high-priority users pay a surcharge for using the reserved machines. In these cases, the cloud service provider's income is very similar regardless of the percentage of high-priority users. This situation occurs because the workload can be processed without the cloud becoming saturated, and thus most users are served using only non-reserved resources.</ns0:p></ns0:div>
<ns0:div><ns0:p>These charts show a turning point between 7.5k and 10k users. When the percentage of high-priority users is low, as the percentage of reserved machines increases, the profits decrease due to the regular users that have to leave the system (see the light blue squares in the upper-left corner). In the same way, when the percentage of high-priority users is high and the percentage of reserved machines is low, the profits also decrease due to the compensations. These effects can be observed in Figures 19.e, 19.f, 19.g and 19.h. In this cloud, in contrast to the previous ones, the highest income (> 6000 monetary units) is obtained when processing a workload consisting of 10k users and using the R-first strategy.</ns0:p><ns0:p>The last experiment (Figure <ns0:ref type='figure' target='#fig_25'>20</ns0:ref>) shows the results obtained for a cloud configuration with 448 physical nodes. In this case, we observe the same tendency as in the previous experiment (Figure <ns0:ref type='figure' target='#fig_24'>19</ns0:ref>), that is, the two allocation policies obtain the same results when the cloud is not saturated (Figures 20.a, 20.b, 20.c and 20.d), while the results diverge once the cloud approaches saturation (Figures 20.e, 20.f, 20.g and 20.h). In this case, however, increasing the number of physical machines generates an improvement in the overall income, reaching 8000 monetary units when the cloud processes a workload of 10k users (Figure <ns0:ref type='figure' target='#fig_25'>20</ns0:ref>.h).</ns0:p></ns0:div>
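As announced earlier in this section, the NR-first and R-first policies can be summarised by the following Python sketch. The pool bookkeeping and the toy cost model (base price, reserved-machine surcharge, compensation) are illustrative assumptions only and do not reproduce the actual Simcan2Cloud cost engine or the SLA parameters of the CloudCost profile.

def allocate(priority, free, strategy):
    """Return the pool ('reserved' or 'nonreserved') that serves a user, or None.
    `free` counts idle machines per pool. Simplifying assumption: only
    high-priority users may be placed on reserved machines."""
    if priority == 'high':
        order = ['nonreserved', 'reserved'] if strategy == 'NR-first' else ['reserved', 'nonreserved']
    else:
        order = ['nonreserved']
    for pool in order:
        if free[pool] > 0:
            free[pool] -= 1
            return pool
    return None

def income(priority, pool, hours=2, base=1.0, surcharge=0.5, compensation=2.0):
    """Toy income model: renting revenue, a surcharge on reserved machines,
    and a compensation paid when a high-priority user cannot be served."""
    if pool is None:
        return -compensation if priority == 'high' else 0.0
    return hours * (base + (surcharge if pool == 'reserved' else 0.0))

# Tiny example: three high-priority and two regular users on a 2+2 machine cloud.
free = {'reserved': 2, 'nonreserved': 2}
users = ['high', 'high', 'regular', 'high', 'regular']
total = sum(income(p, allocate(p, free, 'R-first')) for p in users)
print('total income:', total)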
<ns0:div><ns0:head n='7.3'>Discussion of the results</ns0:head><ns0:p>In this section, we provide a detailed discussion of the results obtained in the experimental study and draw some interesting conclusions.</ns0:p><ns0:p>After a careful analysis of the results, we discover that the size of the cloud, that is, the number of physical machines, has a significant impact on the overall income, and therefore it should be dimensioned in proportion to the workload to be processed.</ns0:p></ns0:div>
<ns0:div><ns0:p>The CloudCost profile allows us to model the behavior of the users and then to simulate the behavior of cloud systems when processing different workloads, so as to determine the turning point at which the cloud increases the overall income. It is therefore desirable that the percentage of reserved machines (y-axis) is balanced in proportion to the percentage of high-priority users (x-axis) requesting resources from the cloud.</ns0:p><ns0:p>Note from the experiments that when the cloud is not saturated (all the users' requests are served), the two allocation policies obtain similar results. However, when the cloud is saturated, the R-first strategy provides slightly better results than the NR-first strategy. The experiments clearly indicate the saturation point of the cloud when increasing the number of users requesting resources from the cloud. This is an important aspect that must be carefully analyzed by the cloud service provider in order to adapt the size of the cloud to the load that is to be processed.</ns0:p><ns0:p>In addition, we have discovered a boundary in the percentage of high-priority users that clearly limits the overall income. In these experiments, when the percentage of high-priority users increases beyond 20%, the income decreases. The results show that increasing the number of high-priority users could potentially harm the cloud service provider's profit when the resources are not appropriately assigned. However, the cloud service provider could alleviate this situation by reserving machines to provide resources exclusively to high-priority users.</ns0:p><ns0:p>We can conclude that, in most scenarios, having a good ratio of reserved machines to attend to high-priority users is key to increasing the cloud provider's overall income.</ns0:p><ns0:p>Finally, as future work, we intend to perform a mediation effect analysis <ns0:ref type='bibr' target='#b2'>(Baron and Kenny, 1986;</ns0:ref><ns0:ref type='bibr' target='#b26'>Shrout and Bolger, 2002)</ns0:ref> on the simulation results obtained. The regression sets will be the number of high-priority users requesting resources from the cloud (X), the number of reserved nodes for high-priority users (M), and the cloud service provider's profit (Y). Thus, the goal of this study will be to analyze the impact of M on the causal effect that X has on Y, i.e., to determine the effect that varying the number of nodes reserved for high-priority users has on the effect of X on Y.</ns0:p><ns0:p>Note: The data and the results obtained for the cloud configurations considered can be found in Supplemental Data S1.</ns0:p></ns0:div>
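A minimal sketch of the Baron and Kenny style mediation analysis mentioned as future work is given below, assuming the simulation results are available as three numeric arrays (X = percentage of high-priority users, M = number of reserved nodes, Y = provider profit). The synthetic arrays and the use of statsmodels are illustrative assumptions; they do not reflect the contents of Supplemental Data S1.

import numpy as np
import statsmodels.api as sm

# Hypothetical arrays standing in for the simulation output.
rng = np.random.default_rng(1)
X = rng.uniform(0, 40, 200)                      # % of high-priority users
M = 0.5 * X + rng.normal(0, 5, 200)              # reserved nodes (assumed partly driven by X)
Y = 2.0 * X + 1.5 * M + rng.normal(0, 10, 200)   # provider profit

def fit(y, *regressors):
    # Ordinary least squares with an intercept term.
    return sm.OLS(y, sm.add_constant(np.column_stack(regressors))).fit()

total  = fit(Y, X)      # step 1: total effect of X on Y (path c)
a_path = fit(M, X)      # step 2: effect of X on the mediator M (path a)
full   = fit(Y, X, M)   # step 3: effect of X on Y controlling for M (paths c' and b)

c, a = total.params[1], a_path.params[1]
c_prime, b = full.params[1], full.params[2]
print(f"total effect c={c:.2f}, direct effect c'={c_prime:.2f}, indirect effect a*b={a*b:.2f}")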
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>through UML profiles and Bergmayr et al. (2014) via the CAML language support modeling infrastructures. User interaction indicates whether the proposal supports modeling the communications between the users and the cloud 4/27 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55335:1:1:NEW 22 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1. CloudCost Methodology</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. CloudCost profile: Cloud infrastructure stereotypes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. CloudCost profile: Properties and associations of cloud infrastructure stereotypes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. CloudCost profile: Definition of cloud interaction stereotypes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>stereotype) extends the Component metaclass. Both the users and the cloud service provider have behaviors that follow a lifeline (Lifeline metaclass). All the messages exchanged extend the Message metaclass. The relationships between these components are shown in Figure 5 as stereotype associations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. CloudCost profile: Associations and properties of cloud interaction stereotypes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Main SD: cloud provider and regular users interaction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Execution SD: users and cloud service provider interactions to submit the applications.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. Subscription SD: regular users and cloud service provider interactions for deferred execution.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>once each VM for each SLA type. Rule Description: The cloud service provider must offer each VM only once for each SLA type. Rule OCL Code: context SLA inv self.vmcost->isUnique(vmcost | vmcost.vm). Notification Type: Error. MSCC Recommendation: Please, enter each VM only once in each SLA! Rule 2: User must sign a VM SLA offered by the CloudProvider. Rule Description: A User must always sign an SLA offered by the cloud service provider. Rule OCL Code: context User inv uml2cloud::CloudProvider.allInstances()->collect(cp | cp.sla)->includes(self.sla). Notification Type: Error. MSCC Recommendation: Please, sign an SLA! Rule 3: User requests a VM not offered in the signed SLA. Rule Description: A user cannot request a VM which is not offered in the signed SLA. Rule OCL Code: context Request inv uml2cloud::User.allInstances()->select(user | user.baseLifeline.coveredBy->includes(self.sendEvent))->collect(user | user.sla.vmcost->collect(vmcost | vmcost.vm))->includes(self.vmInstantation.vmType)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>context SLA inv self.vmcost->forAll(vmcost | vmcost.base.value >= 0). Notification Type: Error. MSCC Recommendation: Please, enter a positive value for the base cost value! Rule 5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>context SLA inv self.vmcost->forAll(vmcost | vmcost.discount >= 0)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>context SLA inv self.vmcost->forAll(vmcost | vmcost.compensation >= 0). Notification Type: Error. MSCC Recommendation: Please, enter a positive value for the compensation cost value! Rule 8: CloudProvider offer must be greater than or equal to zero. Rule OCL Code: context CloudProvider inv self.offer >= 0. Notification Type: Error. MSCC Recommendation: Please, enter a positive value for the offer cost value! Table</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>context PriorityUser inv uml2cloud::CloudInfrastructure.allInstances()->collect(ci | ci.datacenterElements)->collect(de | de.dataCenterType)->collect(dc | dc.rackElements)->select(re | re.rackType.oclIsKindOf(uml2cloud::ComputingRack))->select(re | re.rackType.oclAsType(uml2cloud::ComputingRack).machineType.type = uml2cloud::MachineType::Reserved)->collectNested(re | re.numberOfRacks * re.rackType.boards * re.rackType.machinesPerBoard)->sum() > 0. Notification Type: Warning. MSCC Recommendation: Please, reserve some VMs for the high-priority user! Rule 10: ResumeTime is at least twice as long as the renting time. Rule Description: The initial request time for a VM should be longer than the Renting Time. Rule OCL Code: context Resume inv uml2cloud::Request.allInstances()->select(req | uml2cloud::RegularUser.allInstances()->select(ru | ru.baseLifeline.coveredBy->includes(self.baseMessage.sendEvent))->forAll(ru | ru.baseLifeline.coveredBy->includes(req.baseMessage.sendEvent)))->forAll(req | req.vmInstantiation.rentingTime.value * 2 > self.resumeTime.value). MSCC Recommendation: Please, note that the initial request time should be longer than the renting time! language for UML, which allows us to define object query expressions in our UML models, and to carry out the validation of the CloudCost profile. Tables 2, 3 and 4 present the OCL rules that have been considered in order to check the model's correctness. Tables 2 and 3 show the OCL rules for detecting errors in the parameterization process. The first rule checks that the cloud service provider does not offer a specific VM more than once in the same SLA, since this could give rise to a situation in which a VM would have two different cost values for the same user. The second rule ensures that a user always signs an SLA offered by the cloud service provider. The third rule validates that the user requests a VM that is actually in the signed SLA. Rules 4 to 8 check</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>•</ns0:head><ns0:label /><ns0:figDesc>es.uclm.uml2cloud.examples: This plug-in contains the examples that have been used to illustrate the applicability of the MSCC modeling tool.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 13 Figure 11 .Figure 13 .</ns0:head><ns0:label>131113</ns0:label><ns0:figDesc>Figure 13 shows how to create a new interaction diagram with the MSCC tool. For the sake of clarity, we have included annotations in the figure. Annotation 1 shows how to select one of the pre-installed examples of the tool. In Annotation 2 we can see the selection of the interaction diagram. Afterwards, another window allows us to select the type of user. Finally, the end-user only has to set the parameters of 1 MSCC is available at: https://www.dsi.uclm.es/cloud/modeling/uml2cloud/releases/2.1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>shows the configuration of a cloud infrastructure, which is defined by parameterizing the component diagram of the CloudCost profile. This figure in particular shows the configuration of a cloud consisting of one data center with 128 computing machines. Each computing machine in all the clouds modeled has the following characteristics: a 2TB disk, 64GB of RAM, and a quad-core CPU with 60,000 MIPS of computing power.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 14 .Figure 15 .Figure 16 .</ns0:head><ns0:label>141516</ns0:label><ns0:figDesc>Figure 14. Profile component diagram defining a cloud infrastructure configuration with 128 nodes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. Overall income of the cloud consisting of 128 machines when processing different workloads. (a) NR-first strategy -2k users. (b) R-first strategy -2k users. (c) NR-first strategy -5k users. (d) R-first strategy -5k users. (e) NR-first strategy -7.5k users. (f) R-first strategy -7.5k users. (g) NR-first strategy -10k users. (h) R-first strategy -10k users.</ns0:figDesc><ns0:graphic coords='23,161.66,415.12,149.42,78.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 18. Overall income of the cloud consisting of 256 machines when processing different workloads. (a) NR-first strategy -2k users. (b) R-first strategy -2k users. (c) NR-first strategy -5k users. (d) R-first strategy -5k users. (e) NR-first strategy -7.5k users. (f) R-first strategy -7.5k users. (g) NR-first strategy -10k users. (h) R-first strategy -10k users.</ns0:figDesc><ns0:graphic coords='24,161.66,415.12,149.42,78.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head>Figure 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Figure 19. Overall income of the cloud consisting of 384 machines when processing different workloads. (a) NR-first strategy -2k users. (b) R-first strategy -2k users. (c) NR-first strategy -5k users. (d) R-first strategy -5k users. (e) NR-first strategy -7.5k users. (f) R-first strategy -7.5k users. (g) NR-first strategy -10k users. (h) R-first strategy -10k users.</ns0:figDesc><ns0:graphic coords='25,161.66,415.12,149.42,78.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20. Overall income of the cloud consisting of 448 machines when processing different workloads. (a) NR-first strategy -2k users. (b) R-first strategy -2k users. (c) NR-first strategy -5k users. (d) R-first strategy -5k users. (e) NR-first strategy -7.5k users. (f) R-first strategy -7.5k users. (g) NR-first strategy -10k users. (h) R-first strategy -10k users.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Main features of the most relevant approaches</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Proposal</ns0:cell><ns0:cell cols='6'>SLAs Pricing scheme CP profits Services Infrastructure User interaction CP Experiments Cloud modeling</ns0:cell></ns0:row><ns0:row><ns0:cell>Kamali et al. (2014)</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell></ns0:row><ns0:row><ns0:cell>Bergmayr et al. (2014)</ns0:cell><ns0:cell /><ns0:cell>✗</ns0:cell><ns0:cell /><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell></ns0:row><ns0:row><ns0:cell>Soni and Hasan (2017)</ns0:cell><ns0:cell /><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell></ns0:row><ns0:row><ns0:cell>Chen et al. (2018)</ns0:cell><ns0:cell /><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell></ns0:row><ns0:row><ns0:cell>Cong et al. (2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Herzfeldt et al. (2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>✗</ns0:cell><ns0:cell>✗</ns0:cell><ns0:cell /><ns0:cell>✗</ns0:cell></ns0:row><ns0:row><ns0:cell>Cambronero et al. (2021)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>The user requests a VM of the type VM xlarge. If the cloud service provider does not offer this VM type in the SLA signed by the user, the third OCL rule in Table2will be violated. As a consequence, an error message (Constraint User Must Sign A VM SLA Offered By The CloudProvider is violated) will be displayed by the MSCC tool, as can be seen in the figure. The user has several options to solve the error: the user can request another VM that is in the SLA, or sign another SLA including the VM xlarge type, or the cloud service provider could include the VM xlarge type in the existing SLA. In Figure</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Editor comments (Muhammad Aleem)
Dear Authors,
As per the received reviews, I have to make the decision as major revisions. I hope you will fully
address all the reviews before re-submission. Good luck.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any
edits or clarifications mentioned in the letter are also inserted into the revised manuscript where
appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in
the revised manuscript. If a reviewer raised a question then your readers will probably have the same
question so you should ensure that the manuscript can stand alone without the rebuttal letter.
Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academicrebuttal-letters/ #]
Authors, Response:
Dear Editor (Muhammad Aleem),
We would like to express our sincere gratitude to you and the anonymous reviewers for
giving our paper a major revision. In the revision, the valuable and constructive comments have
helped us to improve the quality of the paper. In detail, we have attempted to address all the
concerns raised by the reviewers. The point-to-point responses are provided below. We look forward
to hearing from you about a further decision on our work.
Best Regards,
M. Emilia Cambronero
Reviewer 1 (Anonymous)
Basic reporting
The writing and structure must be improved
Authors, Response:
As suggested by Reviewer #1, we have improved the writing and structure of the paper. We
have also added two new sections in order to clarify the motivation and contributions of the paper
(new Section 2, lines 76-109) and the methodology used (new Section 4, lines 215-241). The Abstract,
Introduction, Related Work, and Case Study sections have been rewritten according to the reviewer's
suggestions. Finally, a native speaker has revised the document.
Experimental design
Introduction seems too long, it must be shortened and concised with relevant studies only.
It is very hard to find the novelty or any scientific contribution in the paper, please justify it and
provide a separate section.
Authors, Response:
Thank you for the suggestion. We have reduced and rewritten the Introduction section to
make it more concise, and to justify and clarify the novelty and scientific contribution of our paper.
For this purpose, we have also included a new section, called “Motivation and Contribution”, to
highlight the main motivation and contributions of our paper.
Validity of the findings
Which testbed you have used for cloud configuration?
Authors, Response:
In the Introduction section (lines 61-66), we have added a new paragraph to clarify that we
used the Simcan2Cloud simulator as the simulation tool: “As testbed, we use the simulator
Simcan2Cloud (Bernal et al., 2019a), which makes it possible to load the cloud scenarios created
by the MSCC tool and simulate the execution of the workloads generated, which consist of a large
number of users. It is worth noting that these simulations can be executed on a personal computer,
and therefore no special features are required of the platform to support these executions. The
results provided by the simulator allow us to carry out the performance evaluation and the profit
analysis of our cloud models.”
It would be nice if you can provide validation and state of the art comparison
Authors, Response:
Section 5.2, called “Validation of CloudCost Profile Models”, is devoted to the validation of
the models defined in the profile. The Related Work section has been rewritten to include new
related works and compare them with our work.
Please provide the link of your tool
Authors, Response:
In Section MSCC DESIGN TOOL (Section 6), we have included a link as a footnote (line 394)
to the tool installation site. The tool installation site is:
https://www.dsi.uclm.es/cloud/modeling/uml2cloud/releases/2.1.
Cloud configuration performance and analysis is more theoretical and boring. It is suggested to add
more case studies to justify the study
Authors, Response:
Thank you for this comment. Section 7.- Case Study has been completely rewritten. At the
beginning of this section, we summarize how the experimental study has been carried out. Section
7.1., Experimental Settings, provides a complete description of how each part of the experiments
(cloud models and workloads) was generated. The performance analysis (see Section 7.2) has been
rewritten to offer a clearer presentation of the results. Finally, Section 7.3 includes a discussion of
the results, and we also present some relevant conclusions.
There are many cited online references, I suggest to reduce the citation of online references, if
unavoidable then please give access time/data with them. E.g. Last access on 07-December-2020.
Authors, Response:
Thank you for this useful suggestion. We have included new references, and the Related
Work section has been rewritten. We have also added the last access date for the online references.
Figures 13, 14 are not readable, please provide clear figures.
Authors, Response:
In accordance with the reviewer's comment, we have improved the readability of Figures 13
and 14. We have also taken the opportunity to improve Figures 9, 10, 11, 14 and 15.
Comments for the Author
Please revise carefully according to my comments above
Authors, Response:
We would like to express gratitude to the reviewer for the useful comments.
Reviewer 2 (Anonymous)
Basic reporting
This study attempts to investigate the role of SLAs on IaaS. In addition, this study also attempts to
understand the benefits afforded to SLA and clouds, provide conceptual insights into it, and test the
proposed model.
In general, the paper was even well written and interesting, but the logic does not flow well and needs
to supplement or enrich.
Authors, Response:
Thank you for the comment. As suggested by Reviewer #2, we have rewritten different parts
of the paper to improve its flow and readability. Specifically, we have added two new sections to
clarify the motivation and contributions of the paper (new Section 2, lines 76-109) and the
methodology used (new Section 4, lines 215-241). The Abstract, Introduction, Related Work, and Case Study
sections have been rewritten in accordance with the reviewer's suggestions. Finally, a native speaker
has revised the document.
Experimental design
The results part needs to provide more information to audiences, not just pronounce the significant
coefficients in tables. Besides, the simulation results need to show in some other figures, and the
causal relationship need to explain systematically and under your inferences and hypotheses. Authors
should explain it with a standalone paragraph and tell us how to test the mediation effect.
Authors, Response:
In accordance with the reviewer's comment, we have rewritten the Case Study section to
explain in detail the results obtained and the conclusions. Section 7.3 shows the results more clearly
and draws the corresponding conclusions.
Validity of the findings
It seems valid.
Authors, Response:
We would like to thank the reviewer for expressing approval for this part of the paper.
Comments for the Author
* With respect to writing style, the entire paper strongly requires to be reviewed by a native English
speaker. There many grammatical issues, such as missing articles, missing verbs, and mixed-up
singulars/plurals.
Authors, Response:
As suggested by Reviewer #2, we have improved the writing of the paper, and also the
structure, in order to enhance its readability. We have added two new sections to clarify the
motivation and contributions of the paper (new Section 2, lines 76-109) and the methodology used
(new Section 4, lines 215-241). The Abstract, Introduction, Related Work, and Case Study sections
have been rewritten in accordance with the reviewer's suggestions. Finally, a native speaker has
revised the document.
* The motivation of the work is not clearly presented especially in the abstract. They should better
motivate their work so that they could be able to position the work to the concurrent solutions.
Authors, Response:
Thank you for this useful suggestion. We have rewritten the Abstract and Introduction
section to clearly present the paper's motivation. As mentioned above, we have also included a new
section, called “Motivation and Contribution” (new Section 2, lines 76-109), to explain in detail the
motivation and main contributions of the paper.
*There are some style and spelling mistakes. Some sentences are repeated with almost the same
words.
Authors, Response:
We have revised the paper to fix all the spelling mistakes and use the same style. Several
parts of the paper have been rewritten. Finally, a native speaker has revised the whole paper to
improve its quality.
* Font sizes need to be consistent.
Authors, Response:
All the font sizes have been revised and fixed in order to be consistent.
* There is no clear indication of what the model is.
Authors, Response:
Thank you for this pertinent comment. In order to describe the goals of the paper in a better
way and what the CloudCost model is, we have changed the Abstract and the Introduction section.
In addition, there is a new section, called “Motivation and Contribution” (new Section 2, lines 76-109), in which we describe more precisely the contributions and the main features of the CloudCost
model.
* The implications of the study should be rearranged according to the implication of theory, methods,
and practice. Thus, it would be easy for the readers to see the volume of the contribution of
the paper. It should explain how this study departs from previous research particularly in
terms of new knowledge and theory.
Authors, Response:
We have rewritten the Abstract and Introduction sections to justify the implications of our
study and clarify the novelty and scientific contribution of our paper. In addition, we have also
included a new section, called “Motivation and Contribution” (new Section 2, lines 76-109), to
highlight the main motivation and contributions of our paper.
*The explanation of the data set is superficial and needs to put more explanation.
Authors, Response:
We have extended the explanation of the Supplemental Data S1 to clarify it. In accordance
with the suggestion of Reviewer #2, the case study has been extended with more experiments. The
results of these experiments have been added to the data set. The results of all the experiments
have been merged in one file to facilitate their processing. Some columns have been added to clarify
the parameters of the simulation and differentiate between simulations.
* The methodology is simplistic and poorly documented.
Authors, Response:
According to the suggestion of Reviewer #2, we have included a new section, namely Section
4 (lines 215-241), which is titled Methodology, in order to clarify and document the methodology
used in our paper.
* The Abstract is absolutely not informative.
Authors, Response:
The Abstract and Introduction have been modified in accordance with the reviewer’s
suggestion.
* The literature is not critically written and more towards a positive tone. Contradiction stands need
to be synthesized as well. Or else the literature review is just merely reporting, It needs to be more
analytical, decisive, and avoid a positive tone all the way. Tabulate the literature to enable the readers
to see the volume of works done in this area to date.
Authors, Response:
The Related Work section has been rewritten to better compare our work with other related
works. Now it is more analytical. A table has also been included to help the reader with this
comparison.
* Also the motivational approach of cloud and SLA would be important to integrate shortly
into the introduction and related works.
Authors, Response:
The Introduction and Related Work sections have been modified to motivate our approach
and the use of the cloud and SLAs.
Just a few studies to mention:
- Cloud service composition using an inverted ant colony optimisation algorithm. 13(4), 257-268.
- Human resources ranking in a cloud-based knowledge sharing framework using the quality control criteria.
- SLA-aware and energy-efficient VM consolidation in cloud data centers using robust linear regression prediction model. 7, 9490-9500.
- A new agent-based method for QoS-aware cloud service composition using particle swarm optimization algorithm. 10(5), 1851-1864.
- SCelVis: exploratory single cell data analysis on the desktop and in the cloud. 8, e8607.
- Collaborative SLA and reputation-based trust management in cloud federations. 100, 498-512.
- A cloud service composition method using a trust‐based clustering algorithm and honeybee mating optimization algorithm. 33(5), e4259.
- Enforcing trustworthy cloud SLA with witnesses: A game theory–based model using smart contracts. e5511.
Authors, Response:
In the new Related Work section (lines 130-139 and lines 151-166), we have considered the
most related works suggested by the reviewer.
" | Here is a paper. Please give your review comments after reading it. |
80 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Rainfall prediction is immensely crucial in daily life routine as well as for water resource management, stochastic hydrology, rain run-off modeling and flood risk mitigation.</ns0:p><ns0:p>Quantitative prediction of rainfall time series is extremely challenging as compared to other meteorological parameters due to its variability in local features that involve temporal and spatial scales. Consequently, this requires a highly complex system having an advanced model to accurately capture the highly nonlinear processes occurring in the climate. The focus of this work is direct prediction for multistep forecasting, where a separate time series model for each forecasting horizon is considered and forecasts are computed using observed data samples. Forecasting in this method is performed by proposing a deep learning approach, i.e., a Temporal Deep Belief Network (DBN). The best model is selected from several baseline models on the basis of performance analysis metrics. The results suggest that the temporal DBN model outperforms the conventional Convolutional Neural Network (CNN) specifically on rainfall time series forecasting.</ns0:p><ns0:p>According to our experimentation, a modified DBN with hidden layers (300-200-100-10) performs best, with MSE, RMSE and R values of 4.59e-05, 0.0068 and 0.94, respectively, on the testing samples. However, we found that training the DBN is more exhaustive and computationally intensive than other deep learning architectures. The findings of this research can be further utilized as a basis for the advance forecasting of other weather parameters under the same climate conditions.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Anticipating the future values of an observed time-series phenomenon plays a significant role in improving the quality of services. For instance, accurate predictions can greatly improve performance in the arenas of medicine, engineering, meteorology, telecommunication, control systems, business intelligence, crypto-currency and, most importantly, finance. Anticipating adequate predictors and indicators from historical data requires statistical and computational methods for correlating dependencies, specifically between past and future values of observed samples, and techniques to cope with longer horizons <ns0:ref type='bibr' target='#b9'>(Bontempi et al., 2012)</ns0:ref>. Over the last few decades, the research community has shown an increasing interest in time series analysis, modelling, prediction and forecasting. However, future prediction remains challenging due to the complex nature of the problem.</ns0:p><ns0:p>It is important to mention that weather forecasting is significant not only for an individual's everyday schedule, but also for the agriculture sector as well as several industries. These forecasts can also help decision-making processes carried out by organizations for disaster prevention. More specifically, rainfall is significant for agriculture, food production planning, water resource management and likewise other natural systems <ns0:ref type='bibr' target='#b12'>(Bushara and Abraham, 2015)</ns0:ref>. The variability of rainfall in space and time, however, renders quantitative forecasting of rainfall extremely difficult <ns0:ref type='bibr' target='#b42'>(Luk et al., 2001)</ns0:ref>. The behaviour and structure of rainfall, including its distribution in the temporal and spatial dimensions, depend on several variables, for instance, humidity, pressure, temperature and possibly wind direction and speed. Apart from this, a time series of rainfall usually contains local features too, for example, bursts of heavy rain between prolonged periods of low-intensity rainfall. In reality, these local features are not fixed in a time slot, which renders the prediction of their occurrence more difficult.</ns0:p><ns0:p>For decades, atmospheric forecasting was calculated through physical simulations in which the present state of the atmosphere is sampled, and the future state is computed by numerically solving the equations of fluid dynamics and thermodynamics. Usually, the temporal and spatial characteristics of a rainfall forecast rely heavily on the hydrological requirements. A hydrological model is a characterization of real-world hydrologic features, for example, water surface, soil water, wetland, groundwater and estuary. This type of modelling supports managing, predicting, evaluating and understanding water resources by developing small-scale computer simulations, constructing physical models and mathematical analogues. This indicates that hydrological process models can be divided into three major categories, specifically, physical, conceptual and metric-based or computational models <ns0:ref type='bibr' target='#b6'>(Beck et al., 1990)</ns0:ref>.
Conceptual modelling is simply a graphical visualization and representation of a real-world model using hydrological components.</ns0:p><ns0:p>It is used to determine what aspects of the real world to include in, and exclude from, the model, and at which level of detail the model will be demonstrated. Physical models, on the other hand, focus more on the underlying physics, using mathematical equations for the hydrological process. Despite their good performance, these conceptual and physical hydrological models do not perform well when applied to periods with climate conditions that differ from those during model calibration <ns0:ref type='bibr' target='#b5'>(Bai et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b20'>Duethmann et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b64'>Xu et al., 2020)</ns0:ref>. Thus, one of the possible solutions might be to select a site-specific model, which includes non-hydrostatic cloud physics. Nevertheless, black-box models are an alternative and more successful approach for modeling complex and nonlinear hydrological processes. Moreover, forecasting rainfall using a physically based process model becomes unfeasible due to the complexity of the atmospheric processes by which rainfall is generated. In addition, the lack of data on the necessary temporal and spatial scales affects the prediction process <ns0:ref type='bibr' target='#b15'>(Cristiano et al., 2017)</ns0:ref>. Thus, all these factors make rainfall time series prediction a more challenging task as compared to other meteorological parameters. Accordingly, we believe that the requirements for such a highly complex system should include an advanced model to accurately capture the highly nonlinear processes occurring in the climate.</ns0:p><ns0:p>The size of the forecasting horizon is enormously critical and is considered one of the most important factors in the prediction process. One-step forecasting of a time series is already a challenging task; performing multi-step, i.e., h-steps ahead, forecasting is more difficult <ns0:ref type='bibr' target='#b38'>(Längkvist et al., 2014a)</ns0:ref> because of additional complications, like accumulation of errors, reduced accuracy, and increased uncertainty <ns0:ref type='bibr' target='#b36'>(Kuremoto et al., 2014a)</ns0:ref>. Generally, on a broader spectrum, multistep forecasting can be computed through two major strategies: the first is the recursive approach and the second is the direct approach. In the recursive approach, multi-step forecasting is handled iteratively. This means a single time series model is developed and each subsequent forecast is estimated using previously computed forecasts. On the other hand, the direct approach establishes a separate time series model for each forecasting horizon and forecasts are estimated directly by the implemented models. However, the choice between these two strategies involves a trade-off between bias and variance <ns0:ref type='bibr' target='#b59'>(Taieb and Hyndman, 2014)</ns0:ref>. Conventionally, multistep forecasting has been managed recursively, where a model is set up as a one-step forecasting model and each forecast is estimated using previous forecasts.</ns0:p><ns0:p>Nevertheless, one cannot ignore the fact that minimization of 1-step forecast errors is not guaranteed to provide the minimum over h-steps ahead errors.
In this current research, the emphasis is on direct prediction for multistep forecasting, where a separate time series model for each forecasting horizon is considered and forecasts are computed using the observed data samples. In fact, the direct strategy minimizes the h-step ahead errors instead of considering only the one-step ahead errors. A large number of studies comparing recursive and direct forecasting strategies are present in the literature; for further details, see <ns0:ref type='bibr' target='#b61'>(Tiao and Tsay, 1994;</ns0:ref><ns0:ref type='bibr' target='#b14'>Cheng et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b25'>Hamzaçebi et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b33'>Kline, 2004;</ns0:ref><ns0:ref type='bibr' target='#b34'>Kock and Teräsvirta, 2011)</ns0:ref>. It is also apparent from the literature that simple time series models contain no hidden variables.</ns0:p><ns0:p>In general terms, the fully observed models depend upon two types of variables: the first is the vector autoregressive model and the second is the Nth order Markov model. Despite their simplicity, these models are constrained by their lack of memory <ns0:ref type='bibr' target='#b60'>(Taylor et al., 2011)</ns0:ref>. Initially, classic time series modeling was performed by using the autoregressive integrated moving average (ARIMA) and seasonal ARIMA (SARIMA) models <ns0:ref type='bibr' target='#b65'>(Zhang et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b62'>Tseng et al., 2002)</ns0:ref>. However, these models are basically linear models <ns0:ref type='bibr' target='#b66'>(Zhang, 2003)</ns0:ref> and have a limited ability to capture the highly nonlinear characteristics of rainfall series.</ns0:p></ns0:div>
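To make the recursive/direct distinction concrete, the following sketch contrasts the two strategies on a toy univariate series using a simple linear autoregression; the use of scikit-learn's LinearRegression and the lag/horizon values are illustrative assumptions, not the models developed in this paper.

import numpy as np
from sklearn.linear_model import LinearRegression

def supervised(y, p, s):
    """Pairs of (last p observations, value s steps ahead)."""
    X = np.array([y[t - p:t] for t in range(p, len(y) - s + 1)])
    target = np.array([y[t + s - 1] for t in range(p, len(y) - s + 1)])
    return X, target

y = np.sin(np.arange(300) / 8.0)   # toy series standing in for the rainfall data
p, h = 6, 4                        # lag order and forecast horizon

# Direct strategy: one model per horizon step, each trained on s-step-ahead targets.
models = {s: LinearRegression().fit(*supervised(y, p, s)) for s in range(1, h + 1)}
last = y[-p:].reshape(1, -1)
direct_forecast = [models[s].predict(last)[0] for s in range(1, h + 1)]

# Recursive strategy: a single one-step model fed with its own previous forecasts.
one_step = models[1]
window = list(y[-p:])
recursive_forecast = []
for _ in range(h):
    nxt = one_step.predict(np.array(window[-p:]).reshape(1, -1))[0]
    recursive_forecast.append(nxt)
    window.append(nxt)

print('direct:   ', np.round(direct_forecast, 3))
print('recursive:', np.round(recursive_forecast, 3))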
<ns0:div><ns0:p>Recent developments in artificial intelligence and, in particular, those techniques aimed at pattern recognition, however, provide an alternative approach for developing rainfall forecasting and run-off models <ns0:ref type='bibr' target='#b63'>(Wu et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b19'>Dounia et al., 2014;</ns0:ref><ns0:ref type='bibr'>Nourani et al., 2019c,a;</ns0:ref><ns0:ref type='bibr' target='#b1'>Ali et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Ali et al., 2020b)</ns0:ref>. Artificial neural networks (ANNs), which perform a nonlinear mapping between inputs and outputs, are one such technique.</ns0:p><ns0:p>In particular, for rain prediction, researchers in <ns0:ref type='bibr' target='#b32'>(Kashiwao et al., 2017)</ns0:ref> predicted local rainfall in regions of Japan using data from the Japan Meteorological Agency (JMA). A multi-layer perceptron (MLP) was implemented with a hybrid algorithm composed of back-propagation (BP) and random optimization (RO) methods, as well as a radial basis function network (RBFN) with a least squares method (LSM), and the prediction performance of the two models was compared. Similarly, an ANN shows superior results in comparison to traditional modeling approaches in <ns0:ref type='bibr' target='#b31'>(Hung et al., 2009)</ns0:ref>, where the ANN forecasts achieved satisfactory results and outperformed those obtained by the persistent model. Emotional artificial neural network (EANN) models have recently been developed and deployed by integrating artificial emotions and the ANN technique as a new generation of traditional ANN-based models. The authors of <ns0:ref type='bibr' target='#b53'>(Nourani et al., 2019b)</ns0:ref> proposed the first ever application of these models for multistep precipitation forecasting. Simultaneously, researchers have also suggested the long-term forecasting of precipitation using a threshold-based hybrid data mining approach <ns0:ref type='bibr' target='#b54'>(Nourani et al., 2017)</ns0:ref> and a novel data-intelligent approach <ns0:ref type='bibr' target='#b2'>(Ali et al., 2020a)</ns0:ref>.</ns0:p><ns0:p>It is important to mention that a multilayer ANN usually contains three layers: an input layer, an output layer, and one or more hidden layers. The hidden layer is useful for performing intermediary computations before mapping the input to the output layer <ns0:ref type='bibr' target='#b18'>(Darji et al., 2015)</ns0:ref>. Prior to deep learning, problems involving more than two hidden layers were uncommon. With simple data sets, two or fewer layers are often adequate; however, additional layers may be useful in complex datasets involving time series or computer vision applications. Artificial neural networks with many hidden layers form a deep architecture composed of multiple levels of nonlinear operations, and training those deep architectures comes under the umbrella of deep learning. When a neural network is composed of more than one or two hidden layers, the higher layers compose abstractions on top of the previous layers.
Deep learning architectures are able to extract high-level abstractions from the input data distribution by means of multiple processing layers composed of multiple linear and non-linear transformations.</ns0:p><ns0:p>To summarize, a number of forecasting approaches have been reported in the literature, as recent advancements in computing technologies combined with the growing availability of weather-related data have served to dramatically improve the accuracy of forecasts. Recent literature demonstrates that deep learning models are outperforming classical machine learning algorithms for time series prediction <ns0:ref type='bibr' target='#b28'>(Hinton et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b8'>Bengio et al., 2007;</ns0:ref><ns0:ref type='bibr'>Mohamed et al., 2011b,a;</ns0:ref><ns0:ref type='bibr' target='#b58'>Seide et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b10'>Bordes et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b24'>Glorot et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b27'>Hernández et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b13'>Busseti et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b41'>Liu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b17'>Dalto et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b26'>He, 2017;</ns0:ref><ns0:ref type='bibr' target='#b37'>Kuremoto et al., 2014b;</ns0:ref><ns0:ref type='bibr' target='#b39'>Längkvist et al., 2014b;</ns0:ref><ns0:ref type='bibr' target='#b48'>Narejo and Pasero, 2017)</ns0:ref>; however, accurate forecasting of rainfall is still challenging in the context of hydrological research <ns0:ref type='bibr' target='#b29'>(Hong, 2008)</ns0:ref>. Accordingly, an attempt is made in this work at multi-step rainfall prediction using a deep learning approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Main contribution of this research</ns0:head><ns0:p>DBNs are effective models for capturing complex representations, mostly from static and stationary data, as in image classification and object recognition. DBNs actually lack dynamic modelling and are not well suited to non-stationary environments based on time-variant features. In <ns0:ref type='bibr' target='#b47'>(Narejo and Pasero, 2016)</ns0:ref>, researchers proposed a hybrid approach for time series forecasting of temperature data using a DBN and Nonlinear Autoregressive Neural Networks (NARX). The authors employed the DBN for feature extraction, whereas the NARX network was developed and trained for extrapolating the temporal forecasts. In contrast, in the current research we propose a simple extension to the DBN-RBM model in order to capture temporal dependencies for multi-step ahead rainfall prediction. Additionally, the extended model is capable of forecasting multiple steps ahead, rather than just predicting the next single step. The extended model still maintains its most important computational properties, such as exact inference and efficient approximate learning using contrastive divergence. A comparative analysis is also conducted by comparing the performance metrics with other state-of-the-art deep learning models.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESEARCH BACKGROUND</ns0:head><ns0:p>The key focus of deep learning is to automatically discover hierarchical representations of data, from lowest-level features to high-level concepts. This automatic learning of features at multiple levels of abstraction makes it possible to learn complex functions mapping the input to the output directly from data, independent of human-crafted features. Deep learning is not a single approach; rather, it is a class of algorithms and topologies including Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs), Long Short-Term Memory (LSTM) networks, and Stacked Auto-Encoders (SAEs). These approaches are applied to solve a broad spectrum of problems. Depending upon their architecture, training and the nature of the problem, these models achieve breakthrough performance in specific domains, as summarized in Table <ns0:ref type='table'>1</ns0:ref>. Apart from this, a number of complex problems can be solved by employing these models in combination, for instance, human activity recognition, document analysis, and processing and labelling video frames. In this context, we present the theoretical concepts of selected deep learning models and their implementation over time-series data. Moreover, we present our implementation for training these models over time series meteorological data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Deep Belief Network</ns0:head><ns0:p>DBNs are intended to be among the foremost non-convolutional models to successfully admit the training of deep architectures, and they have played a key role in the revival of deep neural networks. Before the introduction of DBNs, deep models were hard to optimize <ns0:ref type='bibr' target='#b7'>(Bengio, 2009)</ns0:ref>. The layered structure of a DBN can be formed by stacking RBMs, which are used to initialize the network in the region of parameter space that finds good minima of the supervised objective. An RBM relies on a two-layer structure comprising visible and hidden nodes, as shown in Figure <ns0:ref type='figure'>1</ns0:ref>. The visible units constitute the first layer and correspond to the components of an observation, whereas the hidden units model dependencies between the components of observations. The binary states of the hidden units are all computed in parallel using (1). Once binary states are chosen for the hidden units, a 'reconstruction' is achieved by setting each v_j to 1 with the probability given in (2).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref>. Two-layered RBM with hidden and visible units. The visible units take the input data, while the hidden units model the dependencies between the components of the observations. The process then alternates back and forth to capture the patterns and underlying structure of the provided data.</ns0:p><ns0:formula xml:id='formula_0'>p(h_i = 1|v) = sigmoid(∑_{j=1}^{m} w_{ij} v_j + c_i) (1) p(v_j = 1|h) = sigmoid(∑_{i=1}^{n} w_{ij} h_i + b_j)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>The weight w_{ij} can be updated using the difference between the data-dependent and model-dependent expectations, as expressed in equation (3), where ε is a learning rate.</ns0:p><ns0:formula xml:id='formula_1'>∆w_{ij} = ε(⟨v_j h_i⟩_data − ⟨v_j h_i⟩_recon)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>The DBN model is trained by training the RBM layers using contrastive divergence or stochastic maximum likelihood. The parameters of the first RBM then designate the parameters of the first layer of the DBN. The second RBM is trained to model the distribution defined by sampling the hidden units of the first RBM, whose visible layer also acts as the input layer. This procedure can be repeated as desired, to add as many layers to the DBN as needed.</ns0:p></ns0:div>
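A minimal NumPy sketch of one contrastive-divergence (CD-1) update following equations (1)-(3) is given below, assuming binary units and a single mini-batch; the layer sizes, learning rate and random data are illustrative assumptions, and practical refinements such as momentum and weight decay are omitted.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 15, 10, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))   # weights w_ij
b = np.zeros(n_visible)                                  # visible biases b_j
c = np.zeros(n_hidden)                                   # hidden biases c_i

v0 = (rng.random((32, n_visible)) > 0.5).astype(float)   # toy binary mini-batch

# Positive phase: p(h=1|v) from equation (1), then sample binary hidden states.
ph0 = sigmoid(v0 @ W + c)
h0 = (rng.random(ph0.shape) < ph0).astype(float)

# Negative phase: reconstruct v with equation (2), then recompute hidden probabilities.
pv1 = sigmoid(h0 @ W.T + b)
v1 = (rng.random(pv1.shape) < pv1).astype(float)
ph1 = sigmoid(v1 @ W + c)

# Equation (3): data-dependent minus reconstruction-dependent correlations.
W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
b += lr * (v0 - v1).mean(axis=0)
c += lr * (ph0 - ph1).mean(axis=0)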
<ns0:div><ns0:head>Convolutional Neural Network</ns0:head><ns0:p>CNNs are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The convolution operation is typically denoted by an asterisk as shown in (<ns0:ref type='formula' target='#formula_2'>5</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_2'>S(t) = (x * w)(t)<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>The first argument x to the convolution is often referred to as the input and the second argument w as the kernel. The output S(t) is sometimes referred to as the feature map. In ML applications, the input is usually a multidimensional array of data and the kernel is usually a multidimensional array of parameters that are adapted by the learning algorithm. The next layers can repeat the same conv-nonlinear-pool pattern or can be fully connected layers before the output layer.</ns0:p></ns0:div>
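A minimal numerical illustration of the discrete form of (4)/(5): sliding a small kernel w over a one-dimensional input x yields the feature map. The values below are arbitrary toy numbers used only for demonstration.

# Discrete 1-D convolution producing a feature map S(t) = (x * w)(t).
import numpy as np

x = np.array([0.0, 0.2, 0.5, 1.0, 0.7, 0.3, 0.0])   # toy input series
w = np.array([0.25, 0.5, 0.25])                      # toy smoothing kernel

s = np.convolve(x, w, mode='valid')                  # feature map
print(s)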
<ns0:div><ns0:head>RESEARCH METHODOLOGY</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> presents the general overview of the methodology employed in this work. The real time meteorological data was downloaded from the Meteo weather station installed at the Neuronica Laboratory, Politecnico Di Torino <ns0:ref type='bibr' target='#b48'>(Narejo and Pasero, 2017)</ns0:ref> as shown in Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref>. The recorded data contains several meteorological parameters; however, our primary concern for the current research was rainfall forecasting. In order to compute an accurate forecast, the foremost step was data analysis. This analysis was performed by applying some pre-processing steps over the experimental data, i.e., data filtering, noise removal and outlier detection, as follows.</ns0:p></ns0:div>
<ns0:div><ns0:head>Filtering and Noise Removal</ns0:head><ns0:p>In order to smooth the time series rain data and to normalize noisy fluctuations, a number of different filters were applied. However, the first step before applying the filters was outlier detection in our rain dataset, as shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>. Subsequently, we filtered the rain data using Moving Average, Savitzky-Golay and other low-pass filters as presented in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. The original and filtered rain data are presented in Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref> to demonstrate the effectiveness of the pre-processing step. When we trained our temporal DBN models with these filtered data, it was observed that learning was much better for models based upon Moving Average and low-pass filtered data. However, it was observed later that input data smoothed with the moving average filter introduces some delay in the estimated rainfall predictions. Hence, we opted for low-pass filtering for the subsequent experiments.</ns0:p></ns0:div>
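The following sketch shows how the filters compared in Table 2 could be applied with standard Python libraries. The window lengths, filter order and cut-off frequency are illustrative assumptions, not the exact settings used in the paper, and the rain series is a random placeholder.

# Sketch of the moving average, median, Savitzky-Golay and low-pass Butterworth filters.
import numpy as np
import pandas as pd
from scipy.signal import medfilt, savgol_filter, butter, filtfilt

rain = pd.Series(np.abs(np.random.randn(500)))        # placeholder rain series

moving_avg = rain.rolling(window=5, center=True).mean()
median_filtered = medfilt(rain.values, kernel_size=5)
savgol = savgol_filter(rain.values, window_length=11, polyorder=3)

b, a = butter(N=4, Wn=0.1)                             # 4th order low-pass Butterworth
lowpass = filtfilt(b, a, rain.values)                  # zero-phase filtering (no delay)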
<ns0:div><ns0:head>Feature Extraction</ns0:head><ns0:p>In order to compute accurate rainfall predictions, we must have some meaningful attributes that contribute useful content and possibly reduce error rates. Both the internal and external characteristics of a rainfall field depend on a number of factors including pressure, temperature, humidity and the meteorological characteristics of catchments <ns0:ref type='bibr' target='#b51'>(Nasseri et al., 2008)</ns0:ref>. However, rainfall is one of the most difficult variables in the hydrologic cycle. The formation mechanism and forecast of rainfall involve rather complex physics that has not been completely understood so far <ns0:ref type='bibr' target='#b30'>(Hong and Pai, 2007)</ns0:ref>. In order to resolve this, we put extra effort into creating a significantly relevant feature set particularly for rainfall nowcasting. Accordingly, we investigated a number of different feature sets by adding and deleting the meteorological parameters in sequence, and subsequently finding the appropriate lagged terms of the selected parameters to be included as features.</ns0:p><ns0:p>Successively, we also calculated some statistical features considering mean, standard deviation, variance, maximum, minimum, skewness and kurtosis. We tested our feature sets by training some models and later on, we found that the DBN models were performing comparatively better if we excluded skewness and kurtosis from the selected features. Hence, the finalized features to predict rainfall at (t+h) were:</ns0:p><ns0:formula xml:id='formula_3'>Rain(t + h) = [rain(t</ns0:formula><ns0:p>The subtraction in the equation, -n, indicates the previous n samples in the series. If n=1, this is the sample immediately before the current one in the series, i.e. the immediate previous sample. If n=2, the 2 previous samples next to the current sample will be chosen. Similarly, if n=3, the three immediate previous samples from the series will be selected. Finally, in summary, in order to forecast rainfall for h steps ahead, the required input attributes as presented in (6) are the 3 previous samples and the current value at time 't' of the rain data, the mean and standard deviation of these 4 rain samples, and humidity, pressure and temperature at the current time t as well as the two previous samples of these variables at times t-1 and t-2.</ns0:p></ns0:div>
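A hedged pandas sketch of constructing the input attributes described above (the current and three lagged rain values, their rolling mean and standard deviation, and humidity, pressure and temperature at t, t-1 and t-2, giving fifteen inputs). The DataFrame column names are assumptions made for this illustration, not the authors' variable names.

# Build the 15 lagged/statistical input attributes and the rain(t + horizon) target.
import pandas as pd

def build_features(df, horizon=1):
    # df is assumed to be indexed by time with columns 'rain', 'humidity',
    # 'pressure' and 'temperature' (hypothetical column names).
    X = pd.DataFrame(index=df.index)
    for lag in range(4):                                   # rain(t) ... rain(t-3)
        X['rain_t-%d' % lag] = df['rain'].shift(lag)
    X['rain_mean4'] = df['rain'].rolling(4).mean()         # mean of the 4 rain samples
    X['rain_std4'] = df['rain'].rolling(4).std()           # std of the 4 rain samples
    for var in ['humidity', 'pressure', 'temperature']:
        for lag in range(3):                               # var(t), var(t-1), var(t-2)
            X['%s_t-%d' % (var, lag)] = df[var].shift(lag)
    X['target'] = df['rain'].shift(-horizon)               # rain(t + h)
    X = X.dropna()
    return X.drop(columns='target'), X['target']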
<ns0:div><ns0:head>Experimental Setup</ns0:head><ns0:p>RBM models capture a static category of data and do not integrate any temporal information by default. In order to model time series data, we added autoregressive information as input by considering the previous lag terms of the series. In the proposed method, apart from the autoregressive terms, we also incorporate some statistical dependencies from the temporal structure of the time series data in the form of input feature set parameters. This was done because multi-step forecasting for a longer horizon is a more challenging task; some extra statistical considerations and computations were needed in order to understand the proper underlying behaviour of the temporal sequence. This additional tapped delay of previous samples as input attributes is shown in equation (<ns0:ref type='formula'>6</ns0:ref>). It introduces temporal dependency into the model and transforms the model from a static to a dynamical form. In general, a dynamic ANN depends on a set of input predictor data. Consequently, the dataset needs to define and represent relevant attributes, to be of good quality and to span a period of time comparable with the data series <ns0:ref type='bibr' target='#b0'>(Abbot and Marohasy, 2012)</ns0:ref>. The rainfall time series dataset is composed of 125,691 sample recordings in total. The dataset is divided into three parts prior to training. We divided 70% of the total data for training of the models, 20% for testing, and the rest was used to validate the h-step ahead forecasting.</ns0:p></ns0:div>
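The chronological 70/20/10 partition described above can be sketched as follows; shuffling is deliberately avoided so the temporal order of the series is preserved. This is an illustrative helper, not the authors' code.

# Chronological train/test/validation split for a time series (70% / 20% / 10%).
def chronological_split(X, y, train_frac=0.7, test_frac=0.2):
    n = len(X)
    i_train = int(n * train_frac)
    i_test = int(n * (train_frac + test_frac))
    return ((X[:i_train], y[:i_train]),          # training set
            (X[i_train:i_test], y[i_train:i_test]),  # test set
            (X[i_test:], y[i_test:]))            # hold-out for h-step validation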
<ns0:div><ns0:head>Selecting Deep layered Architecture</ns0:head><ns0:p>In machine learning, while fitting a model to data, a number of model parameters need to be learned from the data, which is performed through model training. Moreover, there is another kind of parameter that cannot be directly learned from the legitimate training procedures. These are called hyper-parameters.</ns0:p><ns0:p>Deeper networks, i.e. networks with more than two hidden layers, may learn more complex representations (equivalent to automatic feature engineering). The number of neurons in the hidden layers is an important factor in deciding the overall architecture of the neural network. Despite the fact that these layers have no direct interaction with the outside world, they have a tremendous effect on the final outcome. The number of hidden layers as well as the number of neurons in each hidden layer must be considered carefully.</ns0:p><ns0:p>Using too few neurons in the hidden layers will result in underfitting, i.e. failure to adequately detect the signals in a complicated data set. On the other hand, using too many neurons in the hidden layers can result in several problems. First, a large number of neurons in the hidden layers may result in overfitting.</ns0:p><ns0:p>Overfitting occurs when the neural network has so much information processing capacity that the limited amount of information contained in the training set is not sufficient to train all of the neurons in the hidden layers. Even when the training data is adequate, a second issue may arise. An inordinately large number of neurons in the hidden layers can increase the time it takes to train the network. The amount of training time can increase to the point that it is impossible to adequately train the neural network.</ns0:p><ns0:p>Obviously, some compromise must be reached between too many and too few neurons in the hidden layers.</ns0:p><ns0:p>The researchers in <ns0:ref type='bibr' target='#b21'>(Erhan et al., 2010)</ns0:ref> have advocated that the reason for setting a large enough hidden layer size is due to the early stopping criteria and possibly other regularizers, for instance weight decay and sparsity. Apart from this, the greedy layer-wise unsupervised pretraining also acts as a data-dependent regularizer. In a comparative study <ns0:ref type='bibr' target='#b40'>(Larochelle et al., 2009)</ns0:ref>, the authors found that using the same size for all layers worked generally better than, or the same as, using a decreasing size (pyramid like) or increasing size (upside down pyramid). They further argued that this must certainly be data dependent. However, in our research task the decreasing size structure worked far better than the other two. Consequently, this architectural topology was chosen as the standard for further forecasting models. The authors in <ns0:ref type='bibr' target='#b50'>(Narejo and Pasero, 2018)</ns0:ref> have argued that in most of the conducted experiments it was found that an over-complete first hidden layer, in which the dimensions are higher than the input layer dimensions, works better than an under-complete one.</ns0:p><ns0:p>Due to the availability of high performance computing facilities and massive computational resources, more productive and automated optimization of hyper-parameters is possible through grid search or random search methods. We have applied both of the mentioned strategies in our experiments as discussed in the next section.</ns0:p></ns0:div>
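As an illustration of the random search strategy mentioned above, the sketch below samples candidate architectures and training settings from a hypothetical search space. The candidate values are assumptions, and build_and_score stands for any user-supplied routine that trains a model and returns a validation error.

# Random search over hypothetical architecture hyper-parameters.
import random

search_space = {
    'hidden_layers': [(800, 400, 100, 10), (600, 400, 100, 10), (400, 200, 50, 10)],
    'learning_rate': [0.1, 0.05, 0.01, 0.001],
    'batch_size': [32, 64, 128],
}

def random_search(build_and_score, n_trials=20, seed=0):
    rng = random.Random(seed)
    best_score, best_params = float('inf'), None
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in search_space.items()}
        score = build_and_score(**params)      # e.g. validation RMSE returned by the caller
        if score < best_score:
            best_score, best_params = score, params
    return best_params, best_score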
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>For each forecasting horizon, a separate model is trained and optimized. However, as explained earlier in the Introduction section, multi-step forecasting is much more challenging than one-step ahead forecasting, because as the forecasting horizon is increased the error propagated at each sample grows. Due to this known fact, the performance accuracy for the longer horizon is slightly less than that of the short forecasting horizon. While training and selecting the final model for each separate forecasting horizon, multiple models were developed, and one model for each forecasting horizon was finalized on the basis of performance evaluation in terms of RMSE, MSE and R parameters.</ns0:p></ns0:div>
<ns0:div><ns0:head>One step ahead forecasting</ns0:head><ns0:p>The selected deep RBM model for one step ahead rainfall forecasting has the hidden layer architecture (800-400-100-10), resulting in a depth of four levels, together with one input layer consisting of fifteen units and one output layer with a single unit for predicting the target. The model performed well, with an RMSE of 0.0021 on the training data and 9.558e-04 on the test data set. The actual and the forecasted time series are plotted in figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>.</ns0:p></ns0:div>
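For orientation, a simplified Keras approximation of the reported (800-400-100-10) topology with a fifteen-unit input and a single linear output is sketched below. It is trained end-to-end with backpropagation and omits the RBM pretraining stage, so it should be read as an architectural sketch under those assumptions, not as the authors' model.

# Keras sketch of a dense 800-400-100-10 stack with a 15-unit input and 1 output unit.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(800, activation='sigmoid', input_shape=(15,)),
    Dense(400, activation='sigmoid'),
    Dense(100, activation='sigmoid'),
    Dense(10, activation='sigmoid'),
    Dense(1, activation='linear'),
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=64)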
<ns0:div><ns0:head>Four steps ahead forecasting</ns0:head><ns0:p>In order to perform four steps ahead forecasting, a model with hidden layer dimensions (600-400-100-10) is proposed. Similar to the earlier mentioned model, the input layer is created with 15 nodes and an output layer is connected for predictions. It resulted in an RMSE of 0.0093 on the training and 0.0057 on the test data set. The actual and the forecasted rainfall time series can be seen in figure <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Eight steps ahead forecasting</ns0:head><ns0:p>Eight steps ahead forecasting was a more troublesome task than the two mentioned above. Considerably more networks were trained before selecting the optimal one; however, the accuracy for this longer horizon remained slightly lower than for the shorter horizons. Forecasting for a longer horizon is an arduous task, therefore a deep CNN model is also introduced for forecasting in this section of our research activity. The latest literature exhibits that structurally diverse CNNs stand out for their pervasive implementation and have led to impressive results <ns0:ref type='bibr' target='#b16'>(Cui et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b35'>Krizhevsky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b57'>Schroff et al., 2015)</ns0:ref>.</ns0:p><ns0:p>In a CNN model, the convolution filter or kernel is basically an integral component of the layered architecture. The kernels are convolved with the input volume to obtain so-called activation maps. Activation maps indicate activated regions, i.e. regions where features specific to the kernel have been detected in the input data. In general, the kernel used for the discrete convolution is small; this means that the network is sparsely connected, which further reduces the runtime of inference and back propagation in the network. CNNs also typically include some kind of spatial pooling in their activation functions. This helps to take summary statistics over small spatial regions of the input in order to make the final output invariant to small spatial translations of the input. CNNs have been very successful for commercial image processing applications since their early days.</ns0:p><ns0:p>In contrast to image classification, a modified version of the conventional CNN is applied to the time series prediction task for eight steps ahead forecasting of the rainfall series. The proposed CNN includes four layers as shown in Figure <ns0:ref type='figure'>.</ns0:ref> 11. The first convolutional layer was developed with 3 filters with a kernel size of (3,1). Similarly, the second conv layer contained 10 filters of the same size (3,1). The pooling layer was added following the 'average' approach for sub-sampling; however, in our case the averaging factor was unity. For the fully connected layers, tangent hyperbolic activations were used, followed by a linear layer for the output predictions. To find the most accurate forecasting model, it is important to evaluate and compare the performance of the trained models. The natural measure of performance for the forecasting problem is the prediction error. MSE, defined in equation 7, is the most popular measure used for performance prediction <ns0:ref type='bibr' target='#b66'>(Zhang, 2003;</ns0:ref><ns0:ref type='bibr' target='#b56'>Ribeiro and dos Santos Coelho, 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Ma et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b4'>Aliev et al., 2018)</ns0:ref>. However, the use of only one error metric (MSE) to evaluate model performance fails to represent the entire behaviour of the predictions in a clear way. Therefore, more performance measuring criteria should be considered to validate the results. Hence, performance for each predictive model is quantified using two additional performance metrics, i.e.
Root Mean Squared Error (RMSE) and the regression parameter R on the training and test sets.</ns0:p><ns0:formula xml:id='formula_4'>MSE = N ∑ t=1 ( E t / N ) 2 (7)</ns0:formula><ns0:p>where N is the total number of data points used for the prediction and E t is the difference (error) between the actual and predicted values of sample t.</ns0:p></ns0:div>
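The three metrics can be computed with a few lines of NumPy, as sketched below. Note that MSE is computed here in its standard form (the mean of squared errors), which differs slightly from the normalisation written in (7), and R is taken as the linear correlation between the actual and predicted series.

# MSE, RMSE and R between an actual and a predicted series.
import numpy as np

def evaluate(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    errors = actual - predicted
    mse = np.mean(errors ** 2)
    rmse = np.sqrt(mse)
    r = np.corrcoef(actual, predicted)[0, 1]
    return {'MSE': mse, 'RMSE': rmse, 'R': r}

Similarly, a hedged Keras sketch loosely following the CNN described in this section (two convolutional layers with 3 and 10 filters of kernel length 3, average pooling with a unit factor, a tanh dense layer and a linear output) is given below; the dense layer width, activations of the convolutional layers and other settings are assumptions, not the paper's exact configuration.

# Keras sketch of a small 1-D CNN for eight-steps-ahead forecasting.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, AveragePooling1D, Flatten, Dense

cnn = Sequential([
    Conv1D(3, kernel_size=3, padding='same', activation='relu', input_shape=(15, 1)),
    Conv1D(10, kernel_size=3, padding='same', activation='relu'),
    AveragePooling1D(pool_size=1),          # unit averaging factor, as described above
    Flatten(),
    Dense(32, activation='tanh'),           # 32 is a hypothetical width
    Dense(1, activation='linear'),
])
cnn.compile(optimizer='adam', loss='mse')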
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In weather forecasting, rainfall prediction in particular is one of the most imperative, demanding and critical operational problems. It is complex and difficult because, in the field of meteorology, decisions are taken with a degree of uncertainty. This happens due to the chaotic nature of the atmosphere, which limits the validity of deterministic forecasts. Generally, the parameters required to predict rainfall are extremely complicated and highly variable, which increases the uncertainty in the rainfall prediction task even for shorter horizons <ns0:ref type='bibr' target='#b11'>(Bushara and Abraham, 2013)</ns0:ref>. It is important to mention that much more effort is needed to compare and contrast different types of existing rainfall forecasting models, as reported methods usually provide only the comparison of their output with observed values. Thus, this evaluation becomes data-dependent due to the difference of data taken for different regions and time periods.</ns0:p><ns0:p>In this context, we also trained some significant nonlinear autoregressive neural networks on our data.</ns0:p><ns0:p>As our research work is based on time series forecasting of rainfall, the forecasting is done for three different forecasting horizons: the next immediate value, the value of the rain variable after 1 hour, and the value of the rain variable for the next 2 hours. To develop and train the models efficiently, we selected a separate model for each forecasting horizon. This was done due to the data dependency available in the historical samples and also to produce correspondingly accurate forecasts. Although deep learning architectures automatically extract meaningful features and patterns, we applied sufficient effort to compute some statistical features for each forecasting horizon separately prior to giving the input data to the deep learning models. A number of attempts were made to produce an accurate deep learning forecaster based on different architectures and different parameter settings. In particular, we have only mentioned some optimal models in the results section. The performance of each model was computed using MSE, RMSE and R.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>The paper presents rainfall time series forecasting for a specific site in Italy using deep learning architectures. Deep learning architectures are advancing rapidly in almost every field of interest and replacing several other machine learning algorithms. Consequently, this gave us direction to further investigate these deep architectures for time series rainfall forecasting. Therefore, the primary focus of this research was to perform multi-step forecasting for rainfall data, as it is much more challenging than single-step-ahead forecasting.</ns0:p><ns0:p>During our research, it was observed that the parameters required to predict rainfall were enormously complex and subtle even for a short term period. Thus, different combinations of inputs and statistical features were investigated. The results presented in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> indicate that the DBN outperforms the conventional CNN model when the larger forecasting horizon is considered.</ns0:p><ns0:p>It is important to mention that error measures play an important role in calibrating or refining a model in order to forecast accurately for a set of time series. Thus, three different performance metrics were considered for comparative analysis of the trained models. Considering the obtained RMSE and MSE values of the trained models, it is evident that the deep learning architectures significantly improve the test errors in comparison with the training errors.</ns0:p><ns0:p>During the training phase of the models, it was observed that the deeper architectures are more exhaustive as far as computational resources are concerned. Due to this, it took more than a couple of weeks to train the deep hierarchical models well on High Performance Computing (HPC) facilities. However, our major concern was not acceleration but accurate modelling of the data. From a future perspective, the acceleration can be improved by utilizing GPUs and FPGAs for similar implementations. In the context of future work, we believe that the findings of this research can be further utilized as a basis for advanced forecasting of weather parameters under similar climate conditions.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Layered architecture of CNN. A generalized architecture of a CNN model is presented with conventional layers. Apart from the input layer and the next layer, the most notable ones are the convolutional layer, the detector layer and a pooling or subsampling layer.</ns0:figDesc><ns0:graphic coords='7,263.48,183.76,170.09,184.27' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Research Methodology. Initially the data recorded by the sensors of the Meteo weather station is downloaded via Neuronica Lab resources. The next step is to apply pre-processing and perform feature extraction accordingly. Finally, the deep learning forecasting model is trained and optimized.</ns0:figDesc><ns0:graphic coords='8,178.44,63.78,340.16,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Meteo weather station at Politecnico Di Torino. The time series data is recorded from the Meteo weather station mounted at the top of the DET building and connected with the Neuronica Laboratory, Politecnico Di Torino (Narejo and Pasero, 2017).</ns0:figDesc><ns0:graphic coords='9,263.48,63.78,170.09,226.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Hyper-parameters are usually selected before the actual training process begins. The hyper-parameters can be fixed by hand or tuned by an algorithm. It is better to adopt their values based on out-of-sample data, for example the cross-validation error or the online error. The classical recommendation of</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Outlier detection in Rainfall time series data. Outliers are highlighted with red triangles in Rainfall time series data.</ns0:figDesc><ns0:graphic coords='10,206.79,63.78,283.47,198.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Filtering the rainfall time series. It can be observed from the figure that the original rain data fluctuates strongly and has sharp edges. The data is smoothed and the sharp edges are reduced by applying the different filters shown in the figure.</ns0:figDesc><ns0:graphic coords='11,206.79,63.78,283.47,198.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 10 presents the forecasting of the 8-steps-ahead rainfall time series. It can be observed in the figure that the forecast samples do not exactly replicate the original data, indicated by blue circles. Apart from this, there is some delay. This delay is due to the prediction of a longer multi-step horizon.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Rainfall forecasting for the next 1 hour. Actual rain samples and model predictions for four-steps-ahead forecasting. The blue line with circles represents the actual samples. The data in red highlights the estimations computed by the model.</ns0:figDesc><ns0:graphic coords='13,164.26,319.99,368.52,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. Optimal DBN model for rainfall prediction for eight steps ahead forecasting.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. CNN architecture for eight steps ahead rain forecasting.</ns0:figDesc><ns0:graphic coords='14,150.09,348.58,396.86,198.42' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>The whole network still expresses a single differentiable score function. Convolution is a mathematical concept used heavily in digital signal processing when dealing with signals that take the form of a time series. In essence, a CNN is a deep network where, instead of stacks of matrix multiplication layers, we have stacks of convolutions.</ns0:figDesc><ns0:table /><ns0:note>As can be seen in Figure 2, three main types of layers are used to build ConvNet architectures: the Convolutional Layer, the Pooling Layer, and the Fully-Connected (or next) layers. Convolution is basically the combined integration of two functions as equated in (4): S(t) = ∫ x(a)w(t − a)da (4)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>MEAN SQUARE ERROR ON FILTERED RAIN DATA.</ns0:figDesc><ns0:table>Filter | MSE
Median | 0.0569
Moving Average | 0.0447
Low-pass Butterworth | 0.0352
Savitzky-Golay | 0.0302</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Table 3 presents the details related to the performance of each model.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>PERFORMANCE MEASURES MSE, RMSE AND R OF PROPOSED DEEP ARCHITECTURES FOR EIGHT STEP AHEAD RAINFALL FORECASTING</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Editor,
We are grateful for all the valuable comments and suggestions that we have taken on board to
improve the quality of the manuscript.
Based on reviewer’s feedback, we have carefully revised the manuscript and addressed all the
points raised. Below is the point by point response to the reviewer’s comments. Blue Bold font
represents the comments of the reviewers, whereas the Black normal font represents the
response of authors to the comments.
In the updated manuscript, the changes are highlighted in blue font so that reviewers can easily
track the new inserted and updated text.
__________________________________________________________________________________
Reviewer 1 Comments
The manuscript presents 'Multi-step Rainfall Forecasting using Deep Learning
Approach', which is interesting. The paper has the potential to be published in this
journal after revision. Appropriate revisions to the following points should be undertaken
in order to justify recommendation for publication.
1- The language of the manuscript should be checked and more proofreading is
necessary. I also recommend that authors should refrain from using long sentences.
Instead, they should use clear and shorter sentences in terms of the readability of the
manuscript. In addition, please correct the grammatical errors throughout the
manuscript.
The updated manuscript has been proofread. We have also attempted to break the longer
sentences in to smaller.
2- Introduction chapter requires the research purposes in detail. Since they are vague
in their present form, please revise them. The authors need to provide a very detailed
statement as to how this paper differs from their other papers of similar titles.
Though our area of research also includes time series forecasting, in our previous research we did
this with traditional nonlinear autoregressive neural network models and with simplified versions
of Deep Belief models on other types of applications. This time our data (a rainfall time series) is
more challenging, as it has high fluctuations and a lot of anomalies. In this research work, we have
added a temporal factor to the DBN model, which is by nature static and usually not a good model
for forecasting. By adding the temporal factor to the DBN, we were able to generate the multi-step
ahead forecasts more accurately. Additionally, our proposed approach is a direct forecasting
method, not a recursive approach.
Mentioned in line number 51, pg #2
Mentioned in line number 88, pg# 2
Mentioned under the heading “Main Contribution..”
3- 3- Full names should be shown for all abbreviations in their first occurrence in texts.
For example, CNN, DBN etc. (Not only in Abstract).
Added in the text wherever it was required, also highlighted with blue font .
4- Authors should give some comparisons about the usage of data-driven models a little
bit more (briefly) in the 'Introduction' section. They should mention about the
performance of these models, shortly. It is well known fact that most of the statistical
models (ANN, ARIMA, SVR, etc.) can predict one day ahead forecast with
considerable accuracy. However, their performance reduces significantly when
forecasting is made multiple days ahead. In this study the authors considered multistep ahead forecast. It is important to see the performance of the model when no
observed rainfall data were used for the entire validation period to forecast the
rainfall data.
Included in introduction section, line number 97, pg# 2
5- There are several conceptual/physically based models available for rainfall
forecasting, which needs to be discussed in the introduction and discussion section.
Added in Introduction section of updated manuscript. line number 56, pg# 2
6- 6- For readers to quickly catch your contribution, it would be better to highlight
major difficulties and challenges, and your original achievements to overcome them,
in a clearer way in abstract and introduction.
Abstract is summarised now
7- 7- It is mentioned in “Main contribution of this research” that the authors employed
DBN for feature extraction whereas NARX network was developed and trained for
extrapolating the temporal forecasts. What are other feasible alternatives? What are
the advantages of adopting this particular method over others in this case? How will
this affect the results? The authors should provide more details on this.
These deep learning models capture the highly nonlinear information available in the data
more adequately than traditional models. It is a well-known fact that these models produce
state-of-the-art results. Our domain is oriented towards time series forecasting, and we have
implemented a DBN for this task.
8- More focus should be placed on the application of accurate multi-step rainfall
forecasting. You can use specialized articles in this field. Emotional artificial neural
networks (EANNs) for multi-step ahead prediction of monthly precipitation; case study:
Northern Cyprus
Threshold-based hybrid data mining method for long-term maximum precipitation
forecasting
Accurate rainfall forecasting is very effective in predicting runoff, streamflow and flood:
Emotional ANN (EANN) and Wavelet-ANN (WANN) Approaches for Markovian and
Seasonal Based Modeling of Rainfall-Runoff Process
Hybrid Wavelet-M5 Model Tree for Rainfall-Runoff Modeling
Data mining based on wavelet and decision tree for rainfall-runoff simulation
We have incorporated 4 out of these 6 suggested articles, updated in blue font on line number
103, 115, 117
9- Please explain that, how were the time series divided?
Explained on line number 263, pg no 9 as “The rainfall time series dataset is in total composed
of 125691 no. of sample recordings. The dataset is divided into three parts prior to training.
We divided 70% of the total data for training of the models, 20% for testing and rest was
used to validate the h-step ahead forecasting”
10- The titles to the figures and tables need to be much improved-more descriptive.
Figure captions are now more descriptive in updated manuscript.
11- Please discuss the results and compare them with previous studies.
For this work, we already conducted a number of comparisons with baseline models as well as
NARX and LSTM. A few new coding files are now uploaded in the supplementary file section.
However, keeping the focus of the paper on deep learning architectures, we did not discuss
those trained models in our results.
12- Please examine reference styles from 'Instructions for Authors' in the journal
website and arrange your references according to it.
Acknowledged in the manuscript.
Reviewer 2 Comments
These are my main concerns:
1: The abstract is poor and it should be rewritten completely. Remove the
unnecessary information from the abstract. Stay focus more on your work and
contribution.
The abstract is summarised now and updated accordingly.
2: What is meant by one-step, four step and eight step? Explain this clearly whether
this means by hourly, daily, weekly or monthly time step.
Detailed explanation added in manuscript line number, 239 as “In our case, it was fixed as
1, 4, and 8 indicating for the next sample, for the next one hour and the next two hours
respectively in future. It is highly important to reiterate that the frequency of our time
series recorded data is 15 minutes. Sensor generates the value after every 15 minutes.
Therefore, in next 1 hour, 4 samples being recorded. With in 2 hours, 8 samples.
Also, the subtraction in the equation, let it be considered as '-n” is indicating that previous
n samples in the series. let us suppose that if n=1, this suggests the one sample earlier than
the current one in the series or the immediate previous sample in series. If n=2, this
suggests 2 previous samples next to the current sample will be chosen. Similarly, if n=3,
the three immediate previous samples from the series will be selected. Finally, in summary,
in order to forecast rainfall for h steps ahead, the required input attributes as presented in
\eqref{eq:inputatt} are the 3 previous samples and 1 current value at time 't' of the rain
data. Moreover, the mean and standard deviation of earlier mentioned 4 rain samples.
Humidity, pressure and temperature at current time t also two previous samples of these
variable as time t-1 and t-2. ”
3: The introduction and background literature is not enough on rainfall using ML
models. The authors should clearly work on it to enrich the literature section. For
example the authors can cite the following relevant work.
a): Complete ensemble empirical mode decomposition hybridized with random
forest and kernel ridge regression model for monthly rainfall forecasts; Journal of
Hydrology 584, 124647, 2020.
b): Multi-stage hybridized online sequential extreme learning machine integrated
with Markov Chain Monte Carlo copula-Bat algorithm for rainfall forecasting,
Atmospheric research 213, 450-464, 2018.
c): Forecasting long-term precipitation for water resource management: a new
multi-step data-intelligent modelling approach, Hydrological Sciences Journal,
Volume 65 Issue 16 (2020).
d): An ensemble-ANFIS based uncertainty assessment model for forecasting multiscalar standardized precipitation index, Atmospheric Research 207, 155-180, 2018.
e): Improving SPI-derived drought forecasts incorporating synoptic-scale climate
indices in multi-phase multivariate empirical mode decomposition model
hybridized with simulated. Journal of Hydrology 576, Pages 164-184, 2019.
f): Multi-stage committee based extreme learning machine model incorporating the
influence of climate parameters and seasonality on drought forecasting. Computers
and Electronics in Agriculture, Volume 152, September 2018, Pages 149-16
The suggested literature is now incorporated in the updated paper. Out of the above list, 4
research papers are now referred to in our study.
4: The presentation of the paper should improved. Some figures are hardly
readable.
Figures are updated and descriptive captions are added.
5: The data should be explained clearly. What are the input attributes that are used
to predict the rainfall? How many data points are used for model training and
testing purposes?
This paragraph is mentioned in our paper “Successively, we also calculated some
statistical features considering mean, standard deviation, variance, maximum, minimum,
skewness and kurtosis. We tested our feature sets by training some models and later on,
we found that the DBN models were performing comparatively better if we exclude
skewness and kurtosis from the selected features. Hence, the finalized features to predict
rainfall at (t+h) are presented in equation 9.”
Furthermore, we have added the following as well. The rainfall time series dataset is in
total composed of 125691 no. of sample recordings. The dataset is divided into three parts
prior to training. We divided 70% of the total data for training of the models, 20% for
testing and rest was used to validate the h-step ahead forecasting.
6: The experimental setup and model architectures should be discussed in details.
They are very important.
Acknowledged. New codes are added in supplementary files upload section.
7: The results should be used to explain in some other graphical visualization such
as Taylor diagram, scatter plots etc.
Since our data is a time series, the scatter plot did not provide adequate visualization, nor
did the Taylor diagram. Therefore, according to the reviewer's suggestions, in the updated
manuscript we have added line plots of the actual and model predictions
(Figures 7, 8 and 10).
" | Here is a paper. Please give your review comments after reading it. |
81 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>PhilDB is an open-source time series database that supports storage of time series datasets that are dynamic, that is it records updates to existing values in a log as they occur. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts the update complexity required to achieve logging of data value changes. It implements fast reads to make it practical to select data for analysis. Recent open-source systems have been developed to indefinitely store long-period high-resolution time series data without change logging.</ns0:p><ns0:p>Unfortunately such systems generally require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve highperformance writing of static data in real time. In essence, they have a `big data' approach to storage and access. Other open-source projects for handling time series data that avoid the `big data' approach are also relatively new and are complex or incomplete. None of these systems gracefully handle revision of existing data while tracking values that changed. Unlike `big data' solutions, PhilDB has been designed for single machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB takes a unique approach to meta-data tracking; optional attribute attachment. This facilitates scaling the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances when a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries.</ns0:p><ns0:p>While some existing systems come close to meeting the needs PhilDB addresses, none cover all the needs at once. PhilDB was written to fill this gap in existing solutions. This paper explores existing time series database solutions, discusses the motivation for PhilDB, describes the architecture and philosophy of the PhilDB software, and performs a simple evaluation between InfluxDB, PhilDB, and SciDB.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>This paper will explore existing time series database solutions. It will examine the need for a liberally licensed, open-source, easily deployed time series database, that is capable of tracking data changes, and look at why the existing systems that were surveyed failed to meet these requirements. This paper will then describe the architecture and features of the new system, PhilDB, that was designed to meet these outlined needs.</ns0:p><ns0:p>Finally, a simple evaluation will be performed to compare PhilDB to the most promising alternatives of the existing open-source systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND: EXISTING SYSTEMS</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Proprietary systems</ns0:head><ns0:p>There are a number of proprietary solutions for storage of time series data that have been around since the mid-nineties to the early 2000s. <ns0:ref type='bibr' target='#b1'>Castillejos (2006)</ns0:ref> identified three proprietary systems of note, FAME, TimeIQ, and DBank, that have references that range from 1995 to 2000. There are other proprietary systems, such as kdb+ 1 , that are commercially available today. This shows that time series data storage is an existing problem. Compared to proprietary systems, open-source systems can generally be used with the scientific Python ecosystem as described by <ns0:ref type='bibr' target='#b10'>Perez et al. (2011)</ns0:ref>. Ready access to open-source systems also make them easier to evaluate and integrate with. Therefore existing proprietary systems were not evaluated any further. Discussion on the need for an open-source system is further covered in section 3.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Open-source systems</ns0:head><ns0:p>In recent years the development of open-source time series databases has taken off, with most development beginning within the last five years. This can be seen by the number of projects discussed here along with noting the initial commit dates.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.1'>'Big data' time series databases</ns0:head><ns0:p>Some of the most successful projects in the open-source time series database space are OpenTSDB 2 , Druid 3 , Kairosdb 4 , and InfluxDB 5 . The earliest start to development on these systems was for OpenTSDB with an initial commit in April 2010. These systems are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they have a 'big data' approach to storage and access. The architectural approach to address big data requirements means a large initial installation investment before use.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.2'>Alternate time series databases</ns0:head><ns0:p>In contrast to the 'big data' time series systems some small dedicated open-source code bases are attempting to address the need for local or single server time series data storage. These systems, however, have stalled in development, are poorly documented, 1 http://kx.com/software.php 2 OpenTSDB initial commit: 2010-04-11; https://github.com/OpenTSDB/opentsdb 3 Druid initial commit: 2012-10-24; https://github.com/druid-io/druid/ 4 Kairosdb initial commit: 2013-02-06; https://github.com/kairosdb/kairosdb 5 InfluxDB initial commit: 2013-04-12; https://github.com/influxdb/influxdb</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>or require a moderate investment of time to operate. For example, Timestore 6 was, at the time of writing, last modified August 2013 with a total development history of 36 commits. Some of the better progressed projects still only had minimal development before progress ceased, for example tsdb 7 with a development start in January 2013 and the most recent commit at time of writing in February 2013, for a total of 58 commits.</ns0:p><ns0:p>Cube 8 has a reasonable feature set and has had more development effort invested than the other systems discussed here, with a total of 169 commits, but it is no longer under active development according to the Readme file. Searching GitHub for 'tsdb' reveals a large number of projects named 'tsdb' or similar. The most popular of these projects (when ranked by stars or number of forks) relate to the 'big data' systems described earlier (in particular, OpenTSDB, InfluxDB, and KairosDB). There are numerous small attempts at solving time series storage in simpler systems that fall short of being complete solutions. Of the systems discussed here only Cube had reasonable documentation, Timestore had usable documentation, and tsdb had no clear documentation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.3'>Scientific time series databases</ns0:head><ns0:p>At present, the only open-source solution that addresses the scientific need to track changes to stored time series data as a central principle is SciDB <ns0:ref type='bibr' target='#b12'>(Stonebraker et al. 2009 and</ns0:ref><ns0:ref type='bibr' target='#b13'>Stonebraker et al. 2011</ns0:ref>). SciDB comes with comprehensive documentation 9 that is required for such a feature rich system. The documentation is however lacking in clarity around loading data with most examples being based around the assumption that the data already exists within SciDB or is being generated by SciDB. While installation on a single server is relatively straight forward (for older versions with binaries supplied for supported platforms) the process is hard to identify as the community edition installation documentation is mixed in with the documentation on installation of the enterprise edition of SciDB. Access to source code is via tarballs; there is no source control system with general access to investigate the history of the project in detail.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>MOTIVATION</ns0:head><ns0:p>The author's interest is derived from a need to handle data for exploratory purposes with the intention to later integrate with other systems, with minimal initial deployment overhead. It is assumed that the smaller time series database systems discussed previously derive from similar needs. The author has found '[m]ost scientists are adamant about not discarding any data' <ns0:ref type='bibr' target='#b2'>(Cudré-Mauroux et al. 2009)</ns0:ref>. In particular, the author's experience in hydrology has found hydrological data requires the ability to track changes to it, since streamflow discharge can be regularly updated through quality control processes or updates to the rating curves used to convert from water level to discharge. Open-source 'big data' time series database offerings don't support the ability to track any changed values out of the box (such support would have to be developed external to the system).</ns0:p><ns0:p>Their design targets maximum efficiency of write-once and read-many operations. When</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>streamflow data is used within forecasting systems, changes to the data can alter the forecast results. Being able to easily identify if a change in forecast results is due to data or code changes greatly simplifies resolving issues during development and testing.</ns0:p><ns0:p>Therefore, both requirements of minimal deployment overhead and logging of any changed values rule out the current 'big data' systems.</ns0:p><ns0:p>While SciDB does address the data tracking need, recent versions of the community edition are complex to install since they require building from source, a process more involved than the usual './configure; make; make install'. Older versions are more readily installed on supported platforms, however the system is still complex to use, requires root access to install, a working installation of PostgreSQL and a dedicated user account for running. Installation difficulty isn't enough to rule out the system being a suitable solution, but it does diminish its value as an exploratory tool. SciDB is also licensed under the GNU Affero General Public License (AGPL) that can be perceived as a problem in corporate or government development environments. In these environments integration with more liberally licensed (e.g. Apache License 2.0 or 3-clause BSD)</ns0:p><ns0:p>libraries is generally preferred with many online discussions around the choice of liberal licences for software in the scientific computing space. For example, it can be argued that a simple liberal license like the BSD license encourages the most participation and reuse of code <ns0:ref type='bibr' target='#b0'>(Brown 2015</ns0:ref><ns0:ref type='bibr' target='#b18'>, VanderPlas 2014</ns0:ref><ns0:ref type='bibr' target='#b5'>, Hunter 2004</ns0:ref>).</ns0:p><ns0:p>Finally, SciDB has a broader scope than just storage and retrieval of time series data, since 'SciDB supports both a functional and a SQL-like query language' <ns0:ref type='bibr' target='#b13'>(Stonebraker et al. 2011)</ns0:ref>. Having SQL-like query languanges does allow for SciDB to readily support many high performance operations directly when handling large already loaded data. These query languages do, however, add additional cognitive load <ns0:ref type='bibr' target='#b14'>(Sweller et al. 2011)</ns0:ref> for any developer interfacing with the system as the query languages are specific to SciDB. If using SciDB for performing complex operations on very large multidimensional array datasets entirely within SciDB, learning these query languages would be well worth the time. The Python API does enable a certain level of abstraction between getting data out of SciDB and into the scientific Python ecosystem.</ns0:p><ns0:p>Of the other existing systems discussed here, none support logging of changed values.</ns0:p><ns0:p>Limited documentation makes them difficult to evaluate, but from what can be seen and inferred from available information, the designs are targeted at the 'write once, read many' style of the 'big data' time series systems at a smaller deployment scale. These systems were extremely early in development or yet to be started at time the author began work on PhilDB in October 2013.</ns0:p><ns0:p>The need of the author is purely to store simple time series of floating point values and extract them again for processing with other systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Use case</ns0:head><ns0:p>To summarise, PhilDB has been created to provide a time series database system that is easily deployed, used, and has logging features to track any new or changed values. It has a simple API for writing both new and updated data with minimal user intervention. This is to allow for revising time series from external sources where the data can change over time, such as streamflow discharge data from water agencies. Furthermore, the simple API extends to reading, to enable easy retrieval of time series, including the ability to read time series as they appeared at a point in time from the logs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>ARCHITECTURE</ns0:head><ns0:p>PhilDB uses a central 'meta-data store' to track the meta information about time series instances. Relational databases are a robust and reliable way to hold related facts. Since the meta data is simply a collection of related facts about a time series, a relational database is used for the meta-data store. Time series instances are associated with a user chosen identifier and attributes and each time series instance is assigned a UUID <ns0:ref type='bibr' target='#b6'>(Leach et al. 2005</ns0:ref>) upon creation, all of which is stored in the meta-data store. The actual time series data (and corresponding log) is stored on disk with filenames based on the UUID (details of the format are discussed in section 5.2). Information kept in the meta-data store can then be used to look up the UUID assigned to a given time series instance based on the requested identifier and attributes. Once the UUID has been retrieved, accessing the time series data is a simple matter of reading the file from disk based on the expected UUID derived filename.</ns0:p></ns0:div>
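The identifier-and-attributes to UUID to filename flow described above can be illustrated with a minimal sketch. This is a conceptual illustration only, not PhilDB's actual schema or code; the table layout, column names and file extension are invented purely for the example.

# Conceptual sketch: a relational meta-data store maps (identifier, attribute) to a UUID,
# and the UUID determines the on-disk filename of the time series data.
import os
import sqlite3
import uuid

conn = sqlite3.connect('metadata.db')
conn.execute('CREATE TABLE IF NOT EXISTS series '
             '(identifier TEXT, attribute TEXT, uuid TEXT, '
             'PRIMARY KEY (identifier, attribute))')

def get_or_create(identifier, attribute):
    row = conn.execute('SELECT uuid FROM series WHERE identifier=? AND attribute=?',
                       (identifier, attribute)).fetchone()
    if row:
        return row[0]
    new_id = str(uuid.uuid4())
    conn.execute('INSERT INTO series VALUES (?, ?, ?)', (identifier, attribute, new_id))
    conn.commit()
    return new_id

def series_path(identifier, attribute, data_dir='data'):
    # '.tseries' is an invented extension for this sketch.
    return os.path.join(data_dir, get_or_create(identifier, attribute) + '.tseries')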
<ns0:div><ns0:head n='4.1'>Architecture Philosophy</ns0:head><ns0:p>The reasoning behind this architectural design is so that:</ns0:p><ns0:p>* A simple to use write method can handle both new and updated data (at the same time if needed).</ns0:p><ns0:p>* Read access is fast and easy for stored time series.</ns0:p><ns0:p>* Time series are easily read as they appeared at a point in time.</ns0:p><ns0:p>* Each time series instance can be stored with minimal initial effort.</ns0:p><ns0:p>Ease of writing data can come at the expense of efficiency to ensure that create, update or append operations can be performed with confidence that any changes are logged without having to make decisions on which portions of the data are current or new.</ns0:p><ns0:p>The expectation is that read performance has a greater impact on use as they are more frequent. Attaching a time series identifier as the initial minimal information allows for data from a basic dataset to be loaded and explored immediately. Additional attributes can be attached to a time series instance to further differentiate datasets that share conceptual time series identifiers. By default, these identifier and attribute combinations are then stored in a tightly linked relational database. Conceptually this meta data store could optionally be replaced by alternative technology, such as flat files. As the data is stored in individual structured files, the meta-data store acts as a minimal index with most of the work being delegated to the operating system and in turn the file system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>IMPLEMENTATION</ns0:head><ns0:p>PhilDB is written in Python because it fits well with the scientific computing ecosystem <ns0:ref type='bibr' target='#b10'>(Perez et al. 2011</ns0:ref>). The core of the PhilDB package is the PhilDB database class 10 , that exposes high level methods for data operations. These high level functions are designed to be easily used interactively in the IPython interpreter <ns0:ref type='bibr' target='#b9'>(Perez and Granger 2007)</ns0:ref> * Log read method to extract a time series as it appeared on a given date.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Database Format</ns0:head><ns0:p>The technical implementation of the database format, as implemented in version 0.6.1</ns0:p><ns0:p>of PhilDB (MacDonald 2015), is described in this section. Due to the fact that PhilDB is still in the alpha stage of development the specifics here may change significantly in the future.</ns0:p><ns0:p>The meta-data store tracks attributes using a relational database, with the current implementation using SQLite <ns0:ref type='bibr' target='#b4'>(Hipp et al. 2015)</ns0:ref>. Actual time series data are stored as flat files on disk, indexed by the meta-data store to determine the path to a given series. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>EVALUATION</ns0:head><ns0:p>Of the open-source systems evaluated (as identified in section 2.2), InfluxDB came the closest in terms of minimal initial installation requirements and feature completeness, however, it doesn't support the key feature of update logging. Contrasting with InfluxDB, SciDB met the requirement of time series storage with update logging but didn't meet the requirement for simplicity to deploy and use. Both these systems were evaluated in comparison to PhilDB.</ns0:p><ns0:p>To simplify the evaluation process and make it easily repeatable, the SciDB 14.3 virtual appliance image 12 was used to enable easy use of the SciDB database. This virtual appliance was based on a CentOS Linux 6.5 install. Write performance was evaluated by writing all time series from the evaluation dataset (described in section 6.1) into the time series databases being evaluated. This first write will be referred to as the initial write for each database. To track the performance of subsequent updates and reading the corresponding logged time series a further four writes were performed. These writes will be referred to as 'first update' through to 'fourth update'.</ns0:p><ns0:p>For both PhilDB and SciDB the evaluation process described in this section was performed four times and the mean of the results analysed. Results between the four runs were quite similar so taking the mean gave results similar to the individual runs. Analysing and visualising an individual run rather than the mean would result in the same conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Evaluation dataset</ns0:head><ns0:p>The Hydrological Reference Stations <ns0:ref type='bibr' target='#b21'>(Zhang et al. 2014</ns0:ref>) dataset from the Australian Bureau of Meteorology 13 was used for the evaluation. This dataset consists of daily streamflow data for 221 time series with a mean length of 16,310 days; the breakdown of the series lengths is given in table 1 and visualised in figure 2.</ns0:p></ns0:div>
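<ns0:div><ns0:p>Summary statistics of the kind reported in table 1 and figure 2 can be reproduced with pandas once the CSV files have been downloaded; the directory name and column layout assumed here (a date index and a single streamflow column) are assumptions that match the read calls shown later in this section.</ns0:p><ns0:p>import glob
import pandas as pd

lengths = []
for filename in glob.glob('hrs_data/*.csv'):  # path to the downloaded dataset (assumed)
    series = pd.read_csv(filename, parse_dates=True, index_col=0, header=None)
    lengths.append(len(series))

lengths = pd.Series(lengths)
print(lengths.describe())   # count, mean and quartiles of the series lengths
lengths.plot(kind='box')    # distribution plot similar to figure 2 (requires matplotlib)</ns0:p></ns0:div>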
<ns0:div><ns0:head n='6.2'>InfluxDB</ns0:head><ns0:p>Paul Dix (CEO of InfluxDB) found that performance and ease of installation were the main concerns of users of existing open-source time series database systems <ns0:ref type='bibr' target='#b3'>(Dix 2014)</ns0:ref>.</ns0:p><ns0:p>InfluxDB was built to alleviate both those concerns. While InfluxDB is designed for high performance data collection, it is not designed for bulk loading of data. Searching the InfluxDB issue tracker on github 14 , it can be seen that bulk loading has been a recurring problem with improvement over time. Bulk loading performance is, however, still poor compared to SciDB and PhilDB, as seen later in the performance results (section 6.5). A key feature of interest with InfluxDB was the ability to identify time series with tags. This feature is in line with the attributes concept used by PhilDB, thereby allowing multiple time series to be grouped by a single key identifier but separated by additional attributes or tags.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2.1'>Installation</ns0:head><ns0:p>InfluxDB is easily installed compared to the other open-source systems reviewed, as demonstrated by the short install process shown below. Installation of pre-built packages on Linux requires root access 15 . Installation of InfluxDB was performed in the CentOS Linux 6.5 based virtual machine containing the pre-installed SciDB instance.</ns0:p><ns0:p>wget http://influxdb.s3.amazonaws.com/influxdb-0.9.6.1-1.x86_64.rpm sudo yum localinstall influxdb-0.9.6.1-1.x86_64.rpm</ns0:p></ns0:div>
<ns0:div><ns0:head>Starting the InfluxDB service with:</ns0:head><ns0:p>sudo /etc/init.d/influxdb start</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2.2'>Usage</ns0:head><ns0:p>Loading of data into the InfluxDB instance was performed using the InfluxDB Python API that was straight forward to use. However, poor performance of bulk loads led to a lot of experimentation on how to most effectively load large amounts of data quickly, including trying curl and the Influx line protocol format directly. The final solution used was to chunk the data into batches of 10 points using the Pandas groupby functionality before writing into InfluxDB using the InfluxDB Python API DataFrameClient write points method, for example: streamflow = pandas.read_csv(filename, parse_dates=True, index_col=0, header = None) for k, g in streamflow.groupby(np.arange(len(streamflow))//100): influx_client.write_points(g, station_id)</ns0:p><ns0:p>In addition to experimenting with various API calls, configuration changes were attempted resulting in performance gains by lowering values related to the WAL options (the idea was based on an older GitHub issue discussing batch loading 16 and WAL tuning to improve performance). Despite all this effort, bulk data loading with InfluxDB was impractically slow with a run time generally in excess of one hour to load the 221 time series (compared to the less than 2 minutes for SciDB and PhilDB). Reading was performed using the Python API InfluxDBClient query method: streamflow = influx_client.query('SELECT * FROM Q{0}'.format('410730'))</ns0:p><ns0:note place='foot' n='14'>https://github.com/influxdata/influxdb/issues</ns0:note><ns0:note place='foot' n='15'>https://influxdb.com/docs/v0.9/introduction/installation.html</ns0:note><ns0:note place='foot' n='16'>https://github.com/influxdata/influxdb/issues/3282</ns0:note></ns0:div>
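<ns0:div><ns0:p>For completeness, a self-contained version of the chunked load is sketched below. The connection parameters and database name are assumptions for a default local InfluxDB install, and the chunk size matches the snippet above.</ns0:p><ns0:p>import numpy as np
import pandas as pd
from influxdb import DataFrameClient

# Connection details are assumptions for a default local install.
influx_client = DataFrameClient('localhost', 8086, 'root', 'root', 'hrs')

station_id = '410730'
streamflow = pd.read_csv(station_id + '.csv', parse_dates=True,
                         index_col=0, header=None)

# Write in chunks to work around poor bulk load performance.
for k, g in streamflow.groupby(np.arange(len(streamflow)) // 100):
    influx_client.write_points(g, station_id)</ns0:p></ns0:div>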
<ns0:div><ns0:head n='6.3'>PhilDB</ns0:head><ns0:p>PhilDB has been designed with a particular use case in mind as described in section 3.1. Installation of PhilDB is quite easy where a compatible Python environment exists.</ns0:p><ns0:p>Using a Python virtualenv removes the need to have root privileges to install PhilDB and no dedicated user accounts are required to run or use PhilDB. A PhilDB database can be written to any location the user has write access, allowing for experimentation without having to request a database be created or needing to share a centralised install.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3.1'>Installation</ns0:head><ns0:p>Installation of PhilDB is readily performed using pip:</ns0:p><ns0:p>pip install phildb</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3.2'>Usage</ns0:head><ns0:p>The experimental dataset was loaded into a PhilDB instance using a Python script. Using PhilDB to load data can be broken into three key steps.</ns0:p><ns0:p>First, initialise basic meta information: db.add_measurand('Q', 'STREAMFLOW', 'Streamflow') db.add_source('BOM_HRS', 'Bureau of Meteorology; Hydrological Reference Stations dataset.')</ns0:p><ns0:p>This step only needs to be performed once, when configuring attributes for the PhilDB instance for the first time, noting additional attributes can be added later.</ns0:p><ns0:p>Second, add an identifier for a time series and a time series instance record based on the identifier and meta information: db.add_timeseries(station_id) db.add_timeseries_instance(station_id, 'D', '', measurand = 'Q', source = 'BOM_HRS')</ns0:p><ns0:p>Multiple time series instances, based on different combinations of attributes, can be associated with an existing time series identifier. Once a time series instance has been created it can be written to and read from.</ns0:p><ns0:p>Third, load the data from a Pandas time series: streamflow = pandas.read_csv(filename, parse_dates=True, index_col=0, header = None) db.write(station_id, 'D', streamflow, measurand = 'Q', source = 'BOM_HRS')</ns0:p><ns0:p>In this example the Pandas time series is acquired by reading a CSV file using the Pandas read csv method, but any data acquisition method that forms a Pandas.Series object could be used. Reading a time series instance back out is easily performed with the read method: streamflow = db.read(station_id, 'D', measurand = 'Q', source = 'BOM_HRS')</ns0:p><ns0:p>The keyword arguments are optional provided the time series instance can be uniquely identified.</ns0:p></ns0:div>
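<ns0:div><ns0:p>Putting the three steps together, loading the whole evaluation dataset reduces to a short loop. In this sketch db is assumed to be an already opened PhilDB instance and station_ids is assumed to list the 221 station identifiers; only calls shown above are used.</ns0:p><ns0:p>import pandas as pd

# Attribute setup, only needed the first time the instance is configured.
db.add_measurand('Q', 'STREAMFLOW', 'Streamflow')
db.add_source('BOM_HRS', 'Bureau of Meteorology; Hydrological Reference Stations dataset.')

for station_id in station_ids:  # station_ids assumed to list the 221 stations
    streamflow = pd.read_csv(station_id + '.csv', parse_dates=True,
                             index_col=0, header=None)
    db.add_timeseries(station_id)
    db.add_timeseries_instance(station_id, 'D', '',
                               measurand='Q', source='BOM_HRS')
    db.write(station_id, 'D', streamflow, measurand='Q', source='BOM_HRS')</ns0:p></ns0:div>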
<ns0:div><ns0:head n='6.4'>SciDB</ns0:head><ns0:p>SciDB, as implied by the name, was designed with scientific data in mind. As a result SciDB has the feature of change logging, allowing past versions of series to be retrieved.</ns0:p><ns0:p>Unfortunately SciDB only identifies time series by a single string identifier, therefore storing multiple related time series would require externally managed details about what time series are stored and with what identifier. Due to the sophistication of the SciDB system it is relatively complex to use with two built in languages, AFL and AQL, that allow for two different approaches to performing database operations. This, in turn, increases the amount of documentation that needs to be read to identify which method to use for a given task (such as writing a time series into the database). While the documentation is comprehensive in detailing the available operations, it is largely based on the assumption that the data is already within SciDB and will only be operated on within SciDB, with limited examples on how to load or extract data via external systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4.1'>Installation</ns0:head><ns0:p>SciDB does not come with binary installers for newer versions and the build process is quite involved. Instructions for the build process are only available from the SciDB forums using a registered account 17 . Installation of older versions is comparable to InfluxDB with the following steps listed in the user guide: yum install -y https://downloads.paradigm4.com/scidb-14.12-repository.rpm yum install -y scidb-14.12-installer As with InfluxDB, SciDB requires root access to install and a dedicated user account for running the database. A PostgreSQL installation is also required by SciDB for storing information about the time series data that SciDB stores. Unlike InfluxDB, SciDB has authentication systems turned on by default that require using dedicated accounts even for basic testing and evaluation.</ns0:p><ns0:p>Only Ubuntu and CentOS/RHEL Linux variants are listed as supported platforms in the install guide.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4.2'>Usage</ns0:head><ns0:p>It took a considerable amount of time to identify the best way to load data into a SciDB instance, however once that was worked out, the actual load was quick and effective consisting of two main steps.</ns0:p><ns0:p>First, a time series needs to be created: iquery -q 'CREATE ARRAY Q${station} <date:datetime, streamflow:double> [i =0: * ,10000,0];'</ns0:p><ns0:p>It is worth noting that datetime and double need to be specified for time series storage, since SciDB can hold many different array types aside from a simple time series.</ns0:p><ns0:p>Additionally, SciDB identifiers can not start with a numeric character so all time series identifiers were prefixed with a 'Q' (where 'Q' was chosen in this case because it is conventionally used in the hydrological context to represent streamflow discharge).</ns0:p><ns0:p>Second, the data is written using the iquery LOAD method as follows: iquery -n -q 'LOAD Q${station} FROM '/home/scidb/${station}.scidb';' This method required creating data files in a specific SciDB text format beforehand using the csv2scidb command that ships with SciDB.</ns0:p><ns0:p>Identifying the correct code to read data back out required extensive review of the documentation, but was quick and effective once the correct code to execute was identified. The SciDB Python code to read a time series back as a Pandas.DataFrame object is as follows: streamflow = sdb.wrap_array('Q' + station_id).todataframe() A contributing factor to the difficulty of identifying the correct code is that syntax errors with the AQL based queries (using the SciDB iquery command or via the Python API) are at times uninformative about the exact portion of the query that is in error.</ns0:p><ns0:note place='foot' n='17'>http://paradigm4.com/forum/viewtopic.php?f=14&t=1672&sid=6e15284d9785558d5590d335fed0b059</ns0:note></ns0:div>
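<ns0:div><ns0:p>Once the arrays are loaded, reading them back for analysis is brief. In the sketch below sdb is assumed to be an already connected scidb-py handle, as in the read example above, and station_ids is assumed to list the 221 stations; only the calls shown above are used.</ns0:p><ns0:p>frames = {}
for station_id in station_ids:  # station_ids assumed to list the 221 stations
    # wrap_array exposes an existing SciDB array; todataframe pulls it into pandas
    frames[station_id] = sdb.wrap_array('Q' + station_id).todataframe()

# e.g. inspect one series
print(frames['410730'].head())</ns0:p></ns0:div>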
<ns0:div><ns0:head n='6.5'>Performance</ns0:head><ns0:p>It should be noted that PhilDB currently only supports local write, which is advantageous for performance, compared to InfluxDB that only supports network access. InfluxDB was hosted locally, which prevents network lag, but the protocol design still reduced performance compared to the direct write as done by PhilDB. Although SciDB has network access, only local write performance (using the SciDB iquery command) and network based read access (using the Python API) were evaluated. SciDB was also accessed locally to avoid network lag when testing the network based API. For a comparable network read access comparison the experimental PhilDB Client/Server software was also used.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.5.1'>Write performance</ns0:head><ns0:p>Write performance was measured by writing each of the 221 time series into the database under test and recording the time spent per time series.</ns0:p><ns0:p>As can be seen in figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, SciDB and PhilDB have a significant performance advantage over InfluxDB for bulk loading of time series data. SciDB write performance is comparable to PhilDB, so a closer comparison between just SciDB and PhilDB write performance is shown in figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>.</ns0:p><ns0:p>It can be seen that while PhilDB has at times slightly better write performance, SciDB has more reliable write performance with a tighter distribution of write times.</ns0:p><ns0:p>It can also be seen from figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> that write performance for SciDB does marginally decrease as more updates are written. PhilDB write performance while more variable across the dataset is also variable in performance based on how much of the series required updating. Where the fourth update writes the same data as the third update it can be seen that the performance distribution is closer to that of the initial load than the third load, since the data has actually remained unchanged. </ns0:p></ns0:div>
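<ns0:div><ns0:p>Per-series timings of this kind can be collected with a small harness along the following lines. This is an illustrative sketch rather than the exact benchmark script; write_series stands in for whichever database-specific write (or read) call is being measured, and load_csv is a hypothetical helper that reads a station CSV.</ns0:p><ns0:p>import time

def time_writes(station_ids, write_series):
    """Time an arbitrary per-station callable, returning seconds per station."""
    durations = {}
    for station_id in station_ids:
        start = time.perf_counter()
        write_series(station_id)  # e.g. a wrapper around a PhilDB db.write(...) call
        durations[station_id] = time.perf_counter() - start
    return durations

# Example (hypothetical helper load_csv):
# durations = time_writes(station_ids,
#     lambda s: db.write(s, 'D', load_csv(s), measurand='Q', source='BOM_HRS'))</ns0:p></ns0:div>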
<ns0:div><ns0:head n='6.5.2'>Read performance</ns0:head><ns0:p>InfluxDB read performance is adequate and SciDB read speed is quite good, however PhilDB significantly out-performs both InfluxDB and SciDB in read speed, as can be seen in figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. Even the PhilDB server/client model, which has yet to be optimised for performance, out-performed both InfluxDB and SciDB. Read performance with PhilDB is consistent as the time series are updated, as shown in figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, due to the architecture keeping the latest version of time series in a single file. Reading from the log with PhilDB does show a decrease in performance as the size of the log grows, but not as quickly as SciDB. While PhilDB maintains consistent read performance and decreasing log read performance, SciDB consistently decreases in performance with each update for reading both current and logged time series.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.5.3'>Disk usage</ns0:head><ns0:p>After the initial load InfluxDB was using 357.21 megabytes of space. This may be due to the indexing across multiple attributes to allow for querying and aggregating multiple time series based on specified attributes. This is quite a lot of disk space being used compared to SciDB (93.64 megabytes) and PhilDB (160.77 megabytes) after the initial load. As can be seen in figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, SciDB disk usage increases linearly with each update when writing the entire series each time. In contrast, updates with PhilDB only result in moderate increases that depend on how many values are changed.</ns0:p></ns0:div>
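<ns0:div><ns0:p>Disk usage figures of this kind can be gathered by walking each database's storage directory; the paths below are placeholders, and an equivalent of the standard du utility would give the same numbers.</ns0:p><ns0:p>import os

def directory_megabytes(path):
    """Total size of all files under path, in megabytes."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024.0 * 1024.0)

# Paths are placeholders for where each database keeps its data.
for label, path in [('phildb', '/data/hrs_phildb'),
                    ('influxdb', '/var/opt/influxdb'),
                    ('scidb', '/home/scidb/scidb_data')]:
    print(label, round(directory_megabytes(path), 2), 'MB')</ns0:p></ns0:div>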
<ns0:div><ns0:head n='6.5.4'>Performance summary</ns0:head><ns0:p>Each database has different design goals that results in different performance profiles.</ns0:p><ns0:p>InfluxDB is not well suited to this use case with a design focusing on high performance writing of few values across many time series for metric collection, leading to poor performance for bulk loading of individual time series.</ns0:p><ns0:p>SciDB fares much better with consistent read and write performance, with slight performance decreases as time series are updated, likely due to design decisions that focus on handling large multi-dimensional array data for high performance operations.</ns0:p><ns0:p>Design decisions for SciDB that lead to consistent read and write performance appear to also give the same read performance when accessing historical versions of time series.</ns0:p><ns0:p>Achieving consistent read and write performance (including reading historical time series) seems to have come at the expense of disk space with SciDB consuming more space than PhilDB and increasing linearly as time series are updated.</ns0:p><ns0:p>PhilDB performs quite well for this particular use case, with consistently fast reads of the latest time series. This consistent read performance does come at the expense of reading historical time series from the logs, which does degrade as the logs grow. Write performance for PhilDB, while variable, varies due to the volume of data changing. </ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>FUTURE WORK</ns0:head><ns0:p>PhilDB is still in its alpha stage. Before reaching the beta stage, the author shall focus on the following improvements:</ns0:p><ns0:p>* Alternative back ends, using alternative data formats, disk paths, and relational databases.</ns0:p><ns0:p>* More sophisticated handling of time zone meta-data.</ns0:p><ns0:p>* Storage of quality codes or other row level attributes.</ns0:p><ns0:p>* Formalisation of UUID usage for sharing of data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>CONCLUSION</ns0:head><ns0:p>In conclusion, there is a need for an accessible time series database that can be deployed quickly so that curious minds, such as those in our scientific community, can easily analyse time series data and elucidate world-changing information. For scientific computing, it is important that any solution is capable of tracking subsequent data changes.</ns0:p><ns0:p>Although InfluxDB comes close with features like tagging of attributes and a clear API, it lacks the needed change logging feature and presently suffers poor performance for bulk loading of historical data. InfluxDB has clearly been designed with real-time metrics based time series in mind and as such doesn't quite fit the requirements outlined in this paper. While SciDB has the important feature of change logging and performs quite well, it doesn't have a simple mechanism for tracking time series by attributes. SciDB is well suited for handling very large multi-dimensional arrays, which can justify the steep learning curve for such work, but for simple input/output of plain time series such complexity is a little unnecessary. PhilDB addresses this gap in existing solutions, as well as surpassing them for efficiency and usability. Finally, PhilDB's source code has been released on GitHub 18 under the permissive 3-clause BSD open-source license to help others easily extract wisdom from their data.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Mean write/read time for 221 daily time series</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_4'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distribution of write times for 221 time series</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_5'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Distribution of read durations for the 221 time series from the evaluation dataset</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_6'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Disk usage after initial data load and each subsequent data update</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Breakdown of length of time series in sample dataset (all values rounded to nearest day)</ns0:figDesc><ns0:table /><ns0:note>Distribution of lengths of time series (in days) is visualised in figure 2.</ns0:note></ns0:figure><ns0:note place='foot' n='13'>http://www.bom.gov.au/water/hrs/</ns0:note>
</ns0:body>
" | "CS-2015:11:7508:1:0:NEW
PhilDB - The time series database with built-in change logging
Dear Editor,
I would like to thank the reviewers for their constructive feedback and have submitted a revised paper based on their comments. I believe all the key points have been adequately addressed. I have included inline responses to the review comments below to indicate what work was done to address the comment.
Please let me know if any of the comments haven’t been suitably addressed.
Regards,
Andrew MacDonald
Reviewer 2
Basic reporting
The sample dataset used in experiments have not been submitted nor cited as required by journal policy.
Response: This should have been included in the first revision; the paper now includes more details about the dataset and where it comes from including a citation.
The language may be improved, as there are some long sentences difficult to read. For example, '...easier to integrate with (compared with proprietary systems) and they are more fitting...' Isn't it better with the included parenthesis or other phrase structure? By the way, 'are more fitting' -> 'fit better'?
Response: During this revision I have attempted to tighten up the language and have sought feedback from friends who have done technical editing work.
In my humble opinion, the novel system proposed architecture is weakly explained. Several ideas to improve: explain all relevant details, maybe some figures can clarify, what about the use cases in which the log is useful (specially in research).
Response: The architecture section has been expanded with the aim to clarify the nature of the proposed system. Further details on the particular scientific use case PhilDB was written for have been included in the motivation section.
Experimental design
The paper could benefit from a more rigorous experimentation. Some ideas:
- Comparing with the only other TS DB mentioned with logging (Stonebraker).
Response: Both reviewers commented on the lack of comparison to SciDB so the paper now includes an evaluation of SciDB.
- Would not data from table 1 be better represented in a quartile plot (with a note including mean and std)?
Response: A boxplot representing the distribution of series lengths for the dataset has been included.
- What about performing experiments on the use of the logging facility?
Response: The logging facility of SciDB and PhilDB are now both tested in this revision.
- The comment about the problem with dates in InfluxDB deserves a bit of work to actually explain why InfluxDB can not be used. Being open source the code is available and in a time series database this should be something that can be answered.
Response: Using updated versions of InfluxDB and the InfluxDB Python library (updates that were available after the initial submission) has removed this problem.
- The results about the space required by each database could be commented explaining which are the differences in the storage mechanism. is there any functionality in InfluxDB not supported in PhilDB. This may be ok. It is just that it would be good to know when each one fits better.
Response: A section on disk usage has been included, partly to discuss possible reasons for disk usage differences.
Validity of the findings
No comments
Comments for the author
Interesting idea. It seems a good piece of software. However I think the paper needs revision to fit in a research journal.
Reviewer 1
Basic reporting
The document is written in a clear and concise way, perhaps a little shorter than usual. It seems to be written as a report instead of an article. Motivation should be extended and its proposal better justified. I consider that the complexity of the installation of a database system is not reason enough to discard it. It will be one of the many other features to take into consideration. In fact, this list should be added and used to compare PhilDB with other systems in the experimentation section. The organization of the paper is clear and correct but the experimentation should be extended and improved.
Response: More information about the use case for PhilDB has been included in the motivation section. Agreed that complexity isn’t reason enough not to use; have expanded the motivation section to explain why PhilDB was developed. Have expanded to evaluate against SciDB as well.
Experimental design
Regarding the experimentation, Phildb is only compared to InfluxDB because it is the closest in terms of minimal installation requirements and feature completeness but the paper also mentions SciDB as the only open source solution that addresses the scientific need to track changes to stored time data. Why is SciDB not evaluated in the experiment?
Response: Both reviewers commented on the lack of comparison to SciDB so the paper now includes an evaluation against SciDB.
Besides, this provides a SQL-like query language which has been always welcome and necessary in order to make the use of these systems easier.
Response: While I don’t dispute the usefulness of an SQL-like query language, as a developer working with a Python development stack I personally find it more useful to be able to extract time series without having to write an SQL query. While I haven’t specifically addressed this point in the paper I hope the expanded text, particularly in the use case section, helps to clarify the lack of interest in SQL in this context.
Datasets used should be better described and be available; alternatively datasets used in other benchmarks could be also used.
Response: The dataset used in this paper has now been cited and linked to.
The different metrics to be evaluated and the method to do it should be mentioned previously and then discuss the results.
Response: The evaluation section has more details up front on the process for comparing the databases.
The paper does not show clearly how the advantage of including a built-in change logging could affect to other performance parameters of the system. In short, the benchmark should be carefully planned and run in several time series databases and be reproducible in order to community considers its results valid
Response: The paper now compares PhilDB with InfluxDB and SciDB including a more in-depth comparison between PhilDB and SciDB utilising the logging.
Validity of the findings
This issue must be improved as mentioned in the previous section.
" | Here is a paper. Please give your review comments after reading it. |
82 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>PhilDB is an open-source time series database that supports storage of time series datasets that are dynamic, that is it records updates to existing values in a log as they occur. PhilDB eases loading of data for the user by utilising an intelligent data write method. It preserves existing values during updates and abstracts the update complexity required to achieve logging of data value changes. It implements fast reads to make it practical to select data for analysis. Recent open-source systems have been developed to indefinitely store long-period high-resolution time series data without change logging.</ns0:p><ns0:p>Unfortunately such systems generally require a large initial installation investment before use because they are designed to operate over a cluster of servers to achieve highperformance writing of static data in real time. In essence, they have a 'big data' approach to storage and access. Other open-source projects for handling time series data that avoid the 'big data' approach are also relatively new and are complex or incomplete. None of these systems gracefully handle revision of existing data while tracking values that changed. Unlike 'big data' solutions, PhilDB has been designed for single machine deployment on commodity hardware, reducing the barrier to deployment. PhilDB takes a unique approach to meta-data tracking; optional attribute attachment. This facilitates scaling the complexities of storing a wide variety of data. That is, it allows time series data to be loaded as time series instances with minimal initial meta-data, yet additional attributes can be created and attached to differentiate the time series instances when a wider variety of data is needed. PhilDB was written in Python, leveraging existing libraries.</ns0:p><ns0:p>While some existing systems come close to meeting the needs PhilDB addresses, none cover all the needs at once. PhilDB was written to fill this gap in existing solutions. This paper explores existing time series database solutions, discusses the motivation for PhilDB, describes the architecture and philosophy of the PhilDB software, and performs an evaluation between InfluxDB, PhilDB, and SciDB.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>PhilDB was created to store changing time series data, which is of great importance to the scientific community. In hydrology, for example, streamflow discharge can be regularly updated through changes in quality control processes and there is a need to identify when such data has changed. Efficient access to time series information supports effective and thorough analysis. Currently, existing proprietary and open-source database solutions for storing time series fail to provide for effortless scientific analysis.</ns0:p><ns0:p>In practice, the steep learning curves, time-consuming set up procedures, and slow read/write processes are considerable barriers to using these systems. More critically, most fail to provide the ability to store any changes to a time series over time. Most current open-source database systems are designed for handling 'big data', which in turn requires extreme computing power on a cluster of servers. This paper will explore existing time series database solutions. It will examine the need for a liberally licensed, open-source, easily deployed time series database, that is capable of tracking data changes, and look at why the existing systems that were surveyed failed to meet these requirements. This paper will then describe the architecture and features of the new system, PhilDB, that was designed to meet these outlined needs.</ns0:p><ns0:p>Finally, an evaluation will be performed to compare PhilDB to the most promising alternatives of the existing open-source systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND: EXISTING SYSTEMS</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Proprietary systems</ns0:head><ns0:p>There are a number of proprietary solutions for storage of time series data that have been around since the mid-nineties to the early 2000s. <ns0:ref type='bibr' target='#b1'>Castillejos (2006)</ns0:ref> identified three proprietary systems of note, FAME, TimeIQ, and DBank, that have references that range from 1995 to 2000. There are other proprietary systems, such as kdb+ 1 , that are commercially available today. This shows that time series data storage is an existing problem. Compared to proprietary systems, open-source systems can generally be used with the scientific Python ecosystem as described by <ns0:ref type='bibr' target='#b10'>Perez et al. (2011)</ns0:ref>. Ready access to open-source systems also makes them easier to evaluate and integrate with. Therefore existing proprietary systems were not evaluated any further. Discussion on the need for an open-source system is further covered in section 3.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Open-source systems</ns0:head><ns0:p>In recent years the development of open-source time series databases has taken off, with most development beginning within the last five years. This can be seen from the number of projects discussed here and their initial commit dates.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.1'>'Big data' time series databases</ns0:head><ns0:p>Some of the most successful projects in the open-source time series database space are OpenTSDB 2 , Druid 3 , Kairosdb 4 , and InfluxDB 5 . The earliest start to development on these systems was for OpenTSDB with an initial commit in April 2010. These systems are designed to operate over a cluster of servers to achieve high-performance writing of static data in real time. In essence, they have a 'big data' approach to storage and access. The architectural approach to address big data requirements means a large initial installation investment before use.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.2'>Alternate time series databases</ns0:head><ns0:p>In contrast to the 'big data' time series systems, some small dedicated open-source code bases are attempting to address the need for local or single server time series data storage. These systems, however, have stalled in development, are poorly documented, or require a moderate investment of time to operate. For example Timestore 6 was, at the time of writing, last modified August 2013 with a total development history of 36 commits. Some of the better progressed projects still only had minimal development before progress had ceased, for example tsdb 7 with a development start in January 2013 and the most recent commit at time of writing in February 2013 for a total of 58 commits.</ns0:p><ns0:p>Cube 8 has a reasonable feature set and has had more development effort invested than the other systems discussed here, with a total of 169 commits, but it is no longer under active development according to the Readme file. Searching GitHub for 'tsdb' reveals a large number of projects named 'tsdb' or similar. The most popular of these projects (when ranked by stars or number of forks) relate to the 'big data' systems described earlier (in particular, OpenTSDB, InfluxDB, and KairosDB). There are numerous small attempts at solving time series storage in simpler systems that fall short of being complete solutions. Of the systems discussed here only Cube had reasonable documentation, Timestore had usable documentation, and tsdb had no clear documentation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.3'>Scientific time series databases</ns0:head><ns0:p>At present, the only open-source solution that addresses the scientific need to track changes to stored time series data as a central principle is SciDB <ns0:ref type='bibr' target='#b12'>(Stonebraker et al. 2009 and</ns0:ref><ns0:ref type='bibr' target='#b13'>Stonebraker et al. 2011)</ns0:ref>. SciDB comes with comprehensive documentation 9 that is required for such a feature rich system. The documentation is however lacking in clarity around loading data, with most examples being based around the assumption that the data already exists within SciDB or is being generated by SciDB. While installation on a single server is relatively straight forward (for older versions with binaries supplied for supported platforms), the process is hard to identify as the community edition installation documentation is mixed in with the documentation on installation of the enterprise edition of SciDB. Access to source code is via tarballs; there is no source control system with general access to investigate the history of the project in detail. While SciDB does address the data tracking need, recent versions of the community edition are complex to install since they require building from source, a process more involved than the usual './configure; make; make install'. Older versions are more readily installed on supported platforms, however the system is still complex to use, requires root access to install, a working installation of PostgreSQL and a dedicated user account for running. Installation difficulty isn't enough to rule out the system being a suitable solution, but it does diminish its value as an exploratory tool. SciDB is also licensed under the GNU Affero General Public License (AGPL), which can be perceived as a problem in corporate or government development environments. In these environments integration with more liberally licensed (e.g. Apache License 2.0 or 3-clause BSD) libraries is generally preferred, with many online discussions around the choice of liberal licences for software in the scientific computing space. For example, it can be argued that a simple liberal license like the BSD license encourages the most participation and reuse of code <ns0:ref type='bibr' target='#b0'>(Brown 2015</ns0:ref><ns0:ref type='bibr' target='#b18'>, VanderPlas 2014</ns0:ref><ns0:ref type='bibr' target='#b5'>, Hunter 2004</ns0:ref>).</ns0:p><ns0:p>Finally, SciDB has a broader scope than just storage and retrieval of time series data, since 'SciDB supports both a functional and a SQL-like query language' <ns0:ref type='bibr' target='#b13'>(Stonebraker et al. 2011)</ns0:ref>. Having SQL-like query languages does allow for SciDB to readily support many high performance operations directly when handling large already loaded data. These query languages do, however, add additional cognitive load <ns0:ref type='bibr' target='#b14'>(Sweller et al. 2011)</ns0:ref> for any developer interfacing with the system as the query languages are specific to SciDB. If using SciDB for performing complex operations on very large multidimensional array datasets entirely within SciDB, learning these query languages would be well worth the time.</ns0:p></ns0:div><ns0:div><ns0:head n='3'>MOTIVATION</ns0:head><ns0:p>Of the other existing systems discussed here, none support logging of changed values. Limited documentation makes them difficult to evaluate, but from what can be seen and inferred from available information, the designs are targeted at the 'write once, read many' style of the 'big data' time series systems at a smaller deployment scale. These systems were extremely early in development or yet to be started at the time work began on PhilDB in October 2013.</ns0:p><ns0:p>The need to be fulfilled is purely to store time series of floating point values and extract them again for processing with other systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Use case</ns0:head><ns0:p>To summarise, PhilDB has been created to provide a time series database system that is easily deployed, used, and has logging features to track any new or changed values. It has a simple API for writing both new and updated data with minimal user intervention.</ns0:p><ns0:p>This is to allow for revising time series from external sources where the data can change over time, such as streamflow discharge data from water agencies. Furthermore, the simple API extends to reading, to enable easy retrieval of time series, including the ability to read time series as they appeared at a point in time from the logs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>ARCHITECTURE</ns0:head><ns0:p>PhilDB uses a central 'meta-data store' to track the meta information about time series instances. Relational databases are a robust and reliable way to hold related facts. Since the meta data is simply a collection of related facts about a time series, a relational database is used for the meta-data store. Time series instances are associated with a user chosen identifier and attributes and each time series instance is assigned a UUID <ns0:ref type='bibr' target='#b6'>(Leach et al. 2005</ns0:ref>) upon creation, all of which is stored in the meta-data store. The actual time series data (and corresponding log) is stored on disk with filenames based on the UUID (details of the format are discussed in section 5.2). Information kept in the meta-data store can then be used to look up the UUID assigned to a given time series instance based on the requested identifier and attributes. Once the UUID has been retrieved, accessing the time series data is a simple matter of reading the file from disk based on the expected UUID derived filename.</ns0:p></ns0:div>
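<ns0:div><ns0:p>As a minimal illustration of this lookup chain (not PhilDB's actual code), creating an instance amounts to minting a UUID and recording it against the identifier and attributes, after which a read only requires a lookup and a file path. The in-memory registry and the '.tsdb' extension below stand in for the relational meta-data store and PhilDB's real file naming.</ns0:p><ns0:p>import os
import uuid

registry = {}  # (identifier, frequency, attributes) -> UUID; stands in for the meta-data store

def create_instance(identifier, freq, **attributes):
    key = (identifier, freq, tuple(sorted(attributes.items())))
    registry[key] = str(uuid.uuid4())
    return registry[key]

def data_path(db_dir, identifier, freq, **attributes):
    key = (identifier, freq, tuple(sorted(attributes.items())))
    return os.path.join(db_dir, registry[key] + '.tsdb')  # extension is illustrative

create_instance('410730', 'D', measurand='Q', source='BOM_HRS')
print(data_path('/path/to/db', '410730', 'D', measurand='Q', source='BOM_HRS'))</ns0:p></ns0:div>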
<ns0:div><ns0:head n='4.1'>Architecture Philosophy</ns0:head><ns0:p>The reasoning behind this architectural design is so that:</ns0:p><ns0:p>* An easy to use write method can handle both new and updated data (at the same time if needed).</ns0:p><ns0:p>* Read access is fast and easy for stored time series.</ns0:p><ns0:p>* Time series are easily read as they appeared at a point in time.</ns0:p><ns0:p>* Each time series instance can be stored with minimal initial effort.</ns0:p><ns0:p>Ease of writing data can come at the expense of efficiency to ensure that create, update or append operations can be performed with confidence that any changes are logged without having to make decisions on which portions of the data are current or new.</ns0:p></ns0:div>
<ns0:div><ns0:p>The expectation is that read performance has a greater impact on use as reads are more frequent. Attaching a time series identifier as the initial minimal information allows for data from a basic dataset to be loaded and explored immediately. Additional attributes can be attached to a time series instance to further differentiate datasets that share conceptual time series identifiers. By default, these identifier and attribute combinations are then stored in a tightly linked relational database. Conceptually this meta-data store could optionally be replaced by alternative technology, such as flat files. As the data is stored in individual structured files, the meta-data store acts as a minimal index with most of the work being delegated to the operating system and in turn the file system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>IMPLEMENTATION</ns0:head><ns0:p>PhilDB is written in Python because it fits well with the scientific computing ecosystem <ns0:ref type='bibr' target='#b10'>(Perez et al. 2011</ns0:ref>). Installation of PhilDB is performed easily within the Python ecosystem using the standard Python setup.py process, including installation from PyPI using 'pip'.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Features</ns0:head></ns0:div>
<ns0:div><ns0:head>Key features of PhilDB are:</ns0:head><ns0:p>* A single write method accepting a pandas.Series object, data frequency and attributes for writing or updating a time series.</ns0:p><ns0:p>* A read method for reading a single time series based on requested time series identifier, frequency and attributes.</ns0:p></ns0:div>
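<ns0:div><ns0:p>A brief sketch of these two methods in use follows; db is assumed to be an opened PhilDB instance whose measurand and source attributes have already been configured as shown in the evaluation section later in the paper.</ns0:p><ns0:p>import pandas as pd

series = pd.Series([1.0, 2.0, 3.0],
                   index=pd.date_range('2014-01-01', periods=3, freq='D'))

# Write (or update) a daily series for station 410730, then read it back.
db.write('410730', 'D', series, measurand='Q', source='BOM_HRS')
stored = db.read('410730', 'D', measurand='Q', source='BOM_HRS')</ns0:p></ns0:div>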
<ns0:div><ns0:head n='5.2'>Database Format</ns0:head><ns0:p>The technical implementation of the database format, as implemented in version 0.6.1</ns0:p><ns0:p>of PhilDB (MacDonald 2015), is described in this section. Due to the fact that PhilDB is still in the alpha stage of development the specifics here may change significantly in the future.</ns0:p><ns0:p>The meta-data store tracks attributes using a relational database, with the current implementation using SQLite <ns0:ref type='bibr' target='#b4'>(Hipp et al. 2015)</ns0:ref>. Actual time series data are stored as flat files on disk, indexed by the meta-data store to determine the path to a given series. </ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>EVALUATION</ns0:head><ns0:p>Of the open-source systems evaluated (as identified in section 2.2), InfluxDB came the closest in terms of minimal initial installation requirements and feature completeness, however, it doesn't support the key feature of update logging. Contrasting with InfluxDB, SciDB met the requirement of time series storage with update logging but didn't meet the requirement for simplicity to deploy and use. Both these systems were evaluated in comparison to PhilDB.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Evaluation dataset</ns0:head><ns0:p>The Hydrological Reference Stations <ns0:ref type='bibr' target='#b21'>(Zhang et al. 2014)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Evaluation method</ns0:head><ns0:p>Three key aspects were measured during the evaluation:</ns0:p><ns0:p>* Write performance</ns0:p><ns0:p>* Read performance</ns0:p><ns0:p>* Disk usage</ns0:p><ns0:p>Ease of installation and use, while subjective, is also discussed in the installation and usage sections related to each database.</ns0:p><ns0:p>To simplify the evaluation process and make it easily repeatable, the SciDB 14.3 virtual appliance image 13 was used to enable easy use of the SciDB database. This virtual appliance was based on a CentOS Linux 6.5 install. The PhilDB and InfluxDB databases were installed into the same virtual machine to enable comparison between systems. The virtual machine host was a Mid-2013 Apple Macbook Air, with a 1.7 GHz Intel Core i7 CPU, 8GB of DDR3 RAM and a 500GB SSD hard drive. VirtualBox 4.3.6 r91406 was used on the host machine for running the virtual appliance image with the guest virtual machine being allocated 2 processors and 4GB of RAM.</ns0:p><ns0:p>Write performance was evaluated by writing all time series from the evaluation dataset (described in section 6.1) into the time series databases being evaluated. This first write will be referred to as the initial write for each database. To track the performance of subsequent updates and reading the corresponding logged time series a further four writes were performed. These writes will be referred to as 'first update' through to 'fourth update'. The update data was created by multiplying some or all of the original time series by 1.1 as follows:</ns0:p><ns0:p>* First update: multiplied the last 10 values in the time series by 1.1 leaving the rest of the record the same.</ns0:p><ns0:p>* Second update: multiplied the first 10 values by 1.1, resulting in reverting the previously modified 10 values.</ns0:p><ns0:p>* Third update: multiplied the entire original series by 1.1 resulting in an update to all values aside from the first 10.</ns0:p><ns0:p>* Fourth update: wrote the same data as the third update, so no values actually changed.</ns0:p><ns0:p>For both PhilDB and SciDB the evaluation process described in this section was performed four times and the mean of the results analysed. Results between the four runs were quite similar so taking the mean gave results similar to the individual runs.</ns0:p><ns0:p>Analysing and visualising an individual run rather than the mean would result in the same conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3'>Evaluated databases</ns0:head><ns0:p>This section discusses each of the evaluated databases. Firstly, they are introduced and then their installation and usage is considered.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3.1'>InfluxDB</ns0:head><ns0:p>Paul Dix (CEO of InfluxDB) found that performance and ease of installation were the main concerns of users of existing open-source time series database systems <ns0:ref type='bibr' target='#b3'>(Dix 2014)</ns0:ref>.</ns0:p><ns0:p>InfluxDB was built to alleviate both those concerns.</ns0:p><ns0:p>While InfluxDB is designed for high performance data collection, it is not designed for bulk loading of data. Searching the InfluxDB issue tracker on github 14 , it can be seen that bulk loading has been a recurring problem with improvement over time. Bulk loading performance is, however, still poor compared to SciDB and PhilDB, as seen later in the performance results (section 6.4). A key feature of interest with InfluxDB was the ability to identify time series with tags. This feature is in line with the attributes concept used by PhilDB, thereby allowing multiple time series to be grouped by a single key identifier but separated by additional attributes or tags. influx_client.write_points(g, station_id)</ns0:p><ns0:p>In addition to experimenting with various API calls, configuration changes were attempted resulting in performance gains by lowering values related to the WAL options (the idea was based on an older GitHub issue discussing batch loading 16 Manuscript to be reviewed</ns0:p><ns0:p>Computer Science streamflow = influx_client.query('SELECT * FROM Q{0}'.format('410730'))</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3.2'>PhilDB</ns0:head><ns0:p>PhilDB has been designed with a particular use case in mind as described in section 3.1. Installation of PhilDB is quite easy where a compatible Python environment exists.</ns0:p><ns0:p>Using a Python virtualenv removes the need to have root privileges to install PhilDB and no dedicated user accounts are required to run or use PhilDB. A PhilDB database can be written to any location the user has write access, allowing for experimentation without having to request a database be created or needing to share a centralised install.</ns0:p><ns0:p>Installation: Installation of PhilDB is readily performed using pip:</ns0:p><ns0:p>pip install phildb</ns0:p><ns0:p>Usage : The experimental dataset was loaded into a PhilDB instance using a Python script. Using PhilDB to load data can be broken into three key steps.</ns0:p><ns0:p>First, initialise basic meta information: This step only need to be performed once, when configuring attributes for the PhilDB instance for the first time, noting additional attributes can be added later.</ns0:p><ns0:p>Second, add an identifier for a time series and a time series instance record based on the identifier and meta information: In this example the Pandas time series is acquired by reading a CSV file using the Pandas read csv method, but any data acquisition method that forms a Pandas.Series object could be used. Reading a time series instance back out is easily performed with the read method:</ns0:p><ns0:formula xml:id='formula_0'>streamflow = db.read(station_id, 'D', measurand = 'Q', source = 'BOM_HRS')</ns0:formula><ns0:p>The keyword arguments are optional provided the time series instance can be uniquely identified.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3.3'>SciDB</ns0:head><ns0:p>SciDB, as implied by the name, was designed with scientific data in mind. As a result SciDB has the feature of change logging, allowing past versions of series to be retrieved.</ns0:p><ns0:p>Unfortunately SciDB only identifies time series by a single string identifier, therefore storing multiple related time series would require externally managed details about what time series are stored and with what identifier. Due to the sophistication of the SciDB system it is relatively complex to use with two built in languages, AFL and AQL, that allow for two different approaches to performing database operations. This, in turn, increases the amount of documentation that needs to be read to identify which method to use for a given task (such as writing a time series into the database). While the documentation is comprehensive in detailing the available operations, it is largely based on the assumption that the data is already within SciDB and will only be operated on within SciDB, with limited examples on how to load or extract data via external systems.</ns0:p><ns0:p>Installation: SciDB does not come with binary installers for newer versions and the build process is quite involved. Instructions for the build proccess are only available from the SciDB forums using a registered account 17 . Installation of older versions is comparable to InfluxDB with the following steps listed in the user guide:</ns0:p><ns0:p>yum install -y https://downloads.paradigm4.com/scidb-14.12-repository.rpm yum install -y scidb-14.12-installer Same as InfluxDB, SciDB requires root access to install and a dedicated user account for running the database. A PostgreSQL installation is also required by SciDB for storing information about the time series data that SciDB stores. Unlike InfluxDB, SciDB has authentication systems turned on by default that requires using dedicated accounts even for basic testing and evaluation.</ns0:p><ns0:p>Only Ubuntu and CentOS/RHEL Linux variants are listed as supported platforms in the install guide.</ns0:p><ns0:p>Usage: It took a considerable amount of time to identify the best way to load data into a SciDB instance, however once that was worked out, the actual load was quick and effective consisting of two main steps.</ns0:p><ns0:p>First, a time series needs to be created:</ns0:p><ns0:p>iquery -q 'CREATE ARRAY Q${station} <date:datetime, streamflow:double> [i =0: * ,10000,0];'</ns0:p><ns0:p>It is worth noting that datetime and double need to be specified for time series storage, since SciDB can hold many different array types aside from plain time series. Additionally, SciDB identifiers can not start with a numeric character so all time series identifiers were prefixed with a 'Q' (where 'Q' was chosen in this case because it is conventionally used in the hydrological context to represent streamflow discharge).</ns0:p><ns0:p>Second, the data is written using the iquery LOAD method as follows:</ns0:p><ns0:p>iquery -n -q 'LOAD Q${station} FROM '/home/scidb/${station}.scidb';'</ns0:p><ns0:p>This method required creating data files in a specific SciDB text format before hand using the csv2scidb command that ships with SciDB.</ns0:p><ns0:p>Identifying the correct code to read data back out required extensive review of the documentation, but was quick and effective once the correct code to execute was identified. 
The SciDB Python code to read a time series back as a Pandas.DataFrame object is as follows: streamflow = sdb.wrap_array('Q' + station_id).todataframe()</ns0:p><ns0:p>A contributing factor to the difficulty of identifying the correct code is that syntax errors with the AQL based queries (using the SciDB iquery command or via the Python API) are at times uninformative about the exact portion of the query that is in error.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>Performance</ns0:head><ns0:p>It should be noted that PhilDB currently only supports local write, which is advantageous for performance, compared to InfluxDB that only supports network access. InfluxDB was hosted locally, which prevents network lag, but the protocol design still reduced performance compared to the direct write as done by PhilDB. Although SciDB has network access, only local write performance (using the SciDB iquery command) and network based read access (using the Python API) were evaluated. SciDB was also accessed locally to avoid network lag when testing the network based API. For a comparable network read access comparison the experimental PhilDB Client/Server software was also used. It can be seen that while PhilDB has at times slightly better write performance, SciDB has more reliable write performance with a tighter distribution of write times.</ns0:p><ns0:p>It can also be seen from figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> that write performance for SciDB does marginally decrease as more updates are written. PhilDB write performance while more variable Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>across the dataset is also variable in performance based on how much of the series required updating. Where the fourth update writes the same data as the third update it can be seen that the performance distribution is closer to that of the initial load than the third load, since the data has actually remained unchanged.</ns0:p><ns0:p>Both SciDB and PhilDB perform well at loading datasets of this size with good write performance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4.2'>Read performance</ns0:head><ns0:p>InfluxDB read performance is adequate and SciDB read speed is quite good; however, PhilDB significantly out-performs both in read speed. Each database has different design goals that result in different performance profiles.</ns0:p><ns0:p>InfluxDB is not well suited to this use case: its design focuses on high-performance writing of few values across many time series for metric collection, leading to poor performance for bulk loading of individual time series.</ns0:p><ns0:p>SciDB fares much better, with consistent read and write performance and only slight performance decreases as time series are updated, likely due to design decisions that focus on handling large multi-dimensional array data for high-performance operations.</ns0:p><ns0:p>Design decisions for SciDB that lead to consistent read and write performance appear to also give the same read performance when accessing historical versions of time series.</ns0:p><ns0:p>Achieving consistent read and write performance (including reading historical time series) seems to have come at the expense of disk space, with SciDB consuming more space than PhilDB and increasing linearly as time series are updated.</ns0:p><ns0:p>PhilDB performs quite well for this particular use case, with consistently fast reads of the latest time series. This consistent read performance does come at the expense of reading historical time series from the logs, which degrades as the logs grow. Write performance for PhilDB varies with the volume of data that changed.</ns0:p><ns0:p>The performance of PhilDB (particularly the excellent read performance) compared to SciDB for this use case was unexpected, since the design aimed for an easy-to-use API at the expense of efficiency.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>FUTURE WORK</ns0:head><ns0:p>PhilDB is still in its alpha stage. Before reaching the beta stage, development efforts will be focused on a number of enhancements, including richer attribute management, possible alternative back ends, more sophisticated handling of time zone meta-data, storage of quality codes or other row-level attributes, and formalisation of UUID usage for sharing of data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>CONCLUSION</ns0:head><ns0:p>In conclusion, there is a need for an accessible time series database that can be deployed quickly so that curious minds, such as those in our scientific community, can easily analyse time series data and elucidate world-changing information. For scientific computing, it is important that any solution is capable of tracking subsequent data changes.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>The dataset was loaded in full and then updated four times, referred to as the 'first update' through 'fourth update'. The update data was created by multiplying some or all of the original time series by 1.1 as follows: * First update: multiplied the last 10 values in the time series by 1.1, leaving the rest of the record the same. * Second update: multiplied the first 10 values of the original series by 1.1, which also reverts the 10 values modified by the first update back to their original values. * Third update: multiplied the entire original series by 1.1, resulting in an update to all values aside from the first 10 (which were already scaled by 1.1). * Fourth update: wrote the same data as the third update, so no values actually changed.</ns0:figDesc></ns0:figure>
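<ns0:div><ns0:p>As an illustration of how such update datasets could be constructed, the following Pandas sketch builds the first and third updates from one original series; it is not the code used in the evaluation, and streamflow stands for one of the original daily series loaded earlier.</ns0:p><ns0:p># First update: scale the last 10 values of the original series by 1.1.
update_1 = streamflow.copy()
update_1.iloc[-10:] = update_1.iloc[-10:] * 1.1

# Third update: scale the entire original series by 1.1.
update_3 = streamflow * 1.1</ns0:p></ns0:div>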
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Total write/read time for 221 daily time series</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Installation:</ns0:head><ns0:label /><ns0:figDesc>InfluxDB is easily installed compared to the other open-source systems reviewed, as demonstrated by the short install process shown below. Installation of prebuilt packages on Linux requires root access 15 . Installation of InfluxDB was performed in the CentOS Linux 6.5-based virtual machine containing the pre-installed SciDB instance. wget http://influxdb.s3.amazonaws.com/influxdb-0.9.6.1-1.x86_64.rpm sudo yum localinstall influxdb-0.9.6.1-1.x86_64.rpm Starting the InfluxDB service with: sudo /etc/init.d/influxdb start Usage: Loading of data into the InfluxDB instance was performed using the InfluxDB Python API, which was straightforward to use. However, poor performance of bulk loads led to a lot of experimentation on how to most effectively load large amounts of data quickly, including trying curl and the Influx line protocol format directly. The final solution used was to chunk the data into batches using the Pandas groupby functionality (100 rows per batch in the example below) before writing into InfluxDB using the InfluxDB Python API DataFrameClient write_points method, for example: streamflow = pandas.read_csv(filename, parse_dates=True, index_col=0, header = None) for k, g in streamflow.groupby(np.arange(len(streamflow))//100): client.write_points(g, station_id)  # assumed call: client is a DataFrameClient and station_id names the target measurement</ns0:figDesc></ns0:figure>
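<ns0:div><ns0:p>For context, a minimal sketch of how the DataFrameClient instance (client) used in the loop above could be constructed is shown below; the host, port, and database name are illustrative assumptions rather than values taken from the evaluation setup.</ns0:p><ns0:p>import numpy as np
import pandas
from influxdb import DataFrameClient

# Assumed connection details: a local InfluxDB instance with a database
# named 'hydro' created beforehand; adjust to the actual deployment.
client = DataFrameClient(host='localhost', port=8086, database='hydro')</ns0:p></ns0:div>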
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>First, add a time series identifier: db.add_timeseries(station_id) Second, add a time series instance for that identifier: db.add_timeseries_instance(station_id, 'D', '', measurand = 'Q', source = 'BOM_HRS') Multiple time series instances, based on different combinations of attributes, can be associated with an existing time series identifier. Once a time series instance has been created, it can be written to and read from. Third, load the data from a Pandas time series: streamflow = pandas.read_csv(filename, parse_dates=True, index_col=0, header = None) db.write(station_id, 'D', streamflow, measurand = 'Q', source = 'BOM_HRS')</ns0:figDesc></ns0:figure>
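<ns0:div><ns0:p>Reading the data back from PhilDB is similarly concise. The call below is a sketch only: it assumes a read method whose identifier, frequency, and attribute arguments mirror the write call shown above, an assumption made for illustration rather than a signature quoted from the PhilDB documentation.</ns0:p><ns0:p># Assumed read call mirroring the write example; expected to return a Pandas object.
streamflow = db.read(station_id, 'D', measurand = 'Q', source = 'BOM_HRS')</ns0:p></ns0:div>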
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distribution of write times for 221 time series</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Distribution of read durations for the 221 time series from the evaluation dataset</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Disk usage after initial data load and each subsequent data update</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>* Attribute management to support true arbitrary attribute creation and attachment. * Possible alternative back ends, using alternative data formats, disk paths, and relational databases. * More sophisticated handling of time zone meta-data. * Storage of quality codes or other row-level attributes. * Formalisation of UUID usage for sharing of data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Although InfluxDB comes close with features like tagging of attributes and a clear API, it lacks the needed change logging feature and presently suffers poor performance for bulk loading of historical data. InfluxDB has clearly been designed with real-time, metrics-based time series in mind and as such doesn't quite fit the requirements outlined in this paper. While SciDB has the important feature of change logging and performs quite well, it doesn't have a simple mechanism for tracking time series by attributes. SciDB is well suited for handling very large multi-dimensional arrays, which can justify the steep learning curve for such work, but for input/output of plain time series such complexity is a little unnecessary. PhilDB addresses this gap in existing solutions, as well as surpassing them for efficiency and usability. Finally, PhilDB's source code has been released on GitHub 18 under the permissive 3-clause BSD open-source license to help others easily extract wisdom from their data. 18 https://github.com/amacd31/phildb</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The Python API does enable a certain level of abstraction for getting data out of SciDB and into the scientific Python ecosystem.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>The core of the PhilDB package is the PhilDB database class 10 , that exposes high level methods for data operations. These high level functions are designed to be easily used interactively in the IPython interpreter<ns0:ref type='bibr' target='#b9'>(Perez and Granger 2007)</ns0:ref> yet still work well in scripts and applications. The goal of interactivity and scriptability are to enable exploratory work and the ability to automate repeated tasks<ns0:ref type='bibr' target='#b11'>(Shin et al. 2011</ns0:ref>).</ns0:figDesc><ns0:table /><ns0:note>Utilising Pandas (McKinney 2012) to handle complex time series operations simplifies the internal code that determines if values require creation or updating. Returning Pandas objects from the read methods allows for data analysis to be performed readily without further data munging. Lower level functions are broken up into separate modules for major components such as reading, writing, and logging, that can be easily tested as individual components. The PhilDB class pulls together the low level methods, allowing for the presentation of a stable interface that abstracts away the hard work of ensuring that new or changed values, and only those values, are logged.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>The flat files are implemented as plain binary files that store a 'long', 'double', and 'int' for each record. The 'long' is the datetime stored as a 'proleptic Gregorian ordinal' as determined by the Python datetime.datetime.toordinal method 11 (van Rossum 2015). The 'double' stores the actual value corresponding to the datetime stored in the preceding 'long'. Finally, the 'int' is a meta value for marking additional information about the record. In this version of PhilDB the meta value is only used to flag missing</ns0:figDesc><ns0:table /><ns0:note>data values. Individual changes to time series values are logged to HDF5 files<ns0:ref type='bibr' target='#b15'>(The HDF Group 1997)</ns0:ref> that are kept alongside the main time series data file with every new value written as a row in a table, each row having a column to store the date, value, and meta value as per the file format. In addition, a final column is included to record the date and time the record was written.</ns0:note></ns0:figure>
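<ns0:div><ns0:p>As an illustration of the record layout described above, the following sketch packs and unpacks one (datetime ordinal, value, meta) record with Python's struct module; the byte order and exact field widths in the format string are assumptions for illustration only, since the precise on-disk layout used by PhilDB is not specified here.</ns0:p><ns0:p>import struct
from datetime import date

# Pack one record: ordinal date ('long'), value ('double'), meta flag ('int').
# '<qdi' (little-endian, 8-byte integer, double, 4-byte integer) is an assumed layout.
record = struct.pack('<qdi', date(2015, 1, 1).toordinal(), 1.5, 0)

# Unpack it again and recover the calendar date from the stored ordinal.
ordinal, value, meta = struct.unpack('<qdi', record)
print(date.fromordinal(ordinal), value, meta)</ns0:p></ns0:div>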
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A streamflow dataset from the Australian Bureau of Meteorology 12 was used for the evaluation. This dataset consists of daily streamflow data for 221 time series with a mean length of 16,310 days; the breakdown of the series lengths is in table 1 and visualised in figure 1. Breakdown of length of time series in the evaluation dataset (all values rounded to nearest day)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>[Figure 1. Distribution of time series length for the 221 time series in the evaluation dataset; x-axis in days, roughly 10,000 to 22,000]</ns0:cell></ns0:row></ns0:table><ns0:note>11 https://docs.python.org/2/library/datetime.html#datetime.date.toordinal 12 http://www.bom.gov.au/water/hrs/</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>PhilDB significantly out-performs both InfluxDB and SciDB in read speed, as can be seen in figure 2. Even the PhilDB server/client model, which has yet to be optimised for performance, out-performed both InfluxDB and SciDB. Read performance with PhilDB is consistent as the time series are updated, as shown in figure 4, due to the architecture keeping the latest version of time series in a single file. Reading from the log with PhilDB does show a decrease in performance as the size of the log grows, but not as quickly as SciDB. While PhilDB maintains consistent read performance and decreasing log read performance, SciDB consistently decreases in performance with each update for reading both current and logged time series.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>[Chart: distribution of time series read durations in milliseconds (0-500) for SciDB, PhilDB, and PhilDB Server/Client across the five reads, plus SciDB and PhilDB log reads]</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='17'>http://paradigm4.com/forum/viewtopic.php?f=14&t=1672&sid=6e15284d9785558d5590d335fed0b059</ns0:note>
</ns0:body>
" | "CS-2015:11:7508:2:0:NEW
PhilDB: The time series database with built-in change logging
Dear Editor,
I would like to thank both the reviewers for their time and reviewer one for their helpful feedback in addressing the style of the paper. I have submitted a revised paper based on their comments. I believe all the key points have been adequately addressed. I have included inline responses to the review comments below to indicate what work was done to address each comment.
Regards,
Andrew MacDonald
Reviewer 1: The author has taken into account the suggestions proposed in the first review, except the style of writing. It is written as a report. Introduction should have the context of the problem, motivation, the goal of the paper and how it is organised, then, background, architecture, implementation and evaluation.
Response: Introduction was lacking context as pointed out. It has now been expanded. The style of writing has been revisited throughout as per the other comments.
Reviewer 1: In research paper sentences such as “The author’s interest is derived …” or “The need of the author is …” must be avoided, it’s better to write “Phildb aims at handling data for exploratory purposes….”
Response: Direct references to ‘the author’ have been removed and reworded in the style suggested.
Reviewer 1: The adjective “simple” in the last sentence of the summary devalues the paper.
Response: Agreed. In fact the word simple was over used within the paper and a few uses have been replaced with more appropriate words.
Reviewer 1: Evaluation section should be better organised, it would be good to add a paragraph or table where you indicate the metrics to be evaluated before starting to show results.
Response: Added a paragraph on the metrics being measured (at the start of a new ‘Evaluation method’ section encapsulating existing content).
Reviewer 1: Dataset description should be described before the experiment as well as the methodology followed; later, results and discussion.
Response: Brought the section describing the dataset forward in the Evaluation section (so it comes before the new ‘Evaluation method’ section).
Reviewer 1: Sections 6.2,6.3 and 6.4 should be grouped in one subsection since it exposes DB setting preliminaries or moved to an addendum since their description is this section breaks the threat of the evaluation.
Response: Grouped sections 6.2, 6.3, and 6.4 under a single sub section.
Experimental design
Reviewer 1: As previously said, evaluation section should be rewritten. Now the experimentation is acceptable although the evaluation of another dataset with different features would be welcome.
Response: I would love to extend the evaluation to use other datasets, including testing with a large synthetic dataset; however, as this project and paper have been developed as personal work in my spare time around a full-time job, I have been unable to make such progress.
Validity of the findings
Correct
Comments for the author
I consider this database tries to fill a gap in the temporal series database market and thus this contribution is valuable. The style of writing must be enhanced.
" | Here is a paper. Please give your review comments after reading it. |
83 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Enriched electronic health records (EHRs) contain crucial information related to disease progression, and this information can help with decision-making in the health care field. Data analytics in health care is deemed as one of the essential processes that help accelerate the progress of clinical research. However, processing and analyzing EHR data are common bottlenecks in health care data analytics.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>The dxpr R package provides mechanisms for integration, wrangling, and visualization of clinical data, including diagnosis and procedure records. First, the dxpr package helps users transform International Classification of Diseases (ICD) codes to a uniform format. After code format transformation, the dxpr package supports four strategies for grouping clinical diagnostic data. For clinical procedure data, two grouping methods can be chosen. After EHRs are integrated, users can employ a set of flexible built-in querying functions for dividing data into case and control groups by using specified criteria and splitting the data into before and after an event based on the record date. Subsequently, the structure of integrated long data can be converted into wide, analysis-ready data that are suitable for statistical analysis and visualization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>We conducted comorbidity data processes based on a cohort of newborns from Medical Information Mart for Intensive Care-III (n = 7,833) by using the dxpr package. We first defined patent ductus arteriosus (PDA) cases as patients who had at least one PDA diagnosis (ICD, Ninth Revision, Clinical Modification [ICD-9-CM] 7470*). Controls were defined as patients who never had PDA diagnosis. In total, 381 and 7,452 patients with and without PDA, respectively, were included in our study population. Then, we grouped the diagnoses into defined comorbidities. Finally, we observed a statistically significant difference in 8 of the 16 comorbidities among patients with and without PDA, including fluid and electrolyte disorders, valvular disease, and others.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This dxpr package helps clinical data analysts address the common bottleneck caused by clinical data characteristics such as heterogeneity and sparseness.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>On the basis of the development of electronic health records (EHRs), data analytics in health care is deemed as an essential process for accelerating the progress of clinical research <ns0:ref type='bibr' target='#b11'>(Hersh, 2007;</ns0:ref><ns0:ref type='bibr' target='#b16'>Jensen, Jensen & Brunak, 2012;</ns0:ref><ns0:ref type='bibr' target='#b20'>Miotto & Weng, 2015)</ns0:ref>. Enriched EHRs contain crucial information related to disease progression, and this information can help with decision making in the health care field including for treatment selection and disease diagnosis <ns0:ref type='bibr' target='#b16'>(Jensen, Jensen & Brunak, 2012;</ns0:ref><ns0:ref type='bibr' target='#b26'>Raghupathi & Raghupathi, 2014)</ns0:ref>. However, processing and analyzing EHR data are usually challenging because of their heterogeneity and sparsity. These inherent characteristics create a common bottleneck in health care big data analytics <ns0:ref type='bibr' target='#b38'>(Wu, Roy & Stewart, 2010;</ns0:ref><ns0:ref type='bibr' target='#b13'>Hripcsak & Albers, 2013;</ns0:ref><ns0:ref type='bibr' target='#b36'>Weiskopf & Weng, 2013)</ns0:ref>. Moreover, executing clinical data analysis project across different departments or institutes is difficult because clinical data formats and terminologies used to describe clinical conditions may vary across departments. A method that can standardize and facilitate the sharing of data or analysis pipelines from multiple sources is needed in research on clinical data analysis. Several common data models (CDMs) have been developed for eliminating clinical data format barriers, including the National Patient-Centered Clinical Research Network (PCORnet) <ns0:ref type='bibr' target='#b5'>(Fleurence et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>PCORnet, 2020)</ns0:ref> and Observational Medical Outcomes Partnership (OMOP) CDM (Observational Health Data Sciences and Informatics, 2020). The concept of CDM is to transform data into a CDM and terminology and then allow users to perform systematic analyses by using various sources. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science an analysis-ready dataset from EHRs or claims data is a complex task that requires both medical knowledge and data science skills. The proposed dxpr package simplifies and accelerates the workflow for EHR data extraction and helps clinical data analysts generate simple and clean scripts that can easily be shared and reproduced. The dxpr package enables researchers to explore EHRs or claims data to acquire crucial information, understand disease progression, and analyze outcomes without writing complicated data preprocessing scripts. Moreover, the proposed package can support collaborative research across multiple data sources as long as the data include general diagnosis-or procedure-related information.</ns0:p><ns0:p>The dxpr package has three phases to process and analyze diagnosis codes in EHRs (Fig. <ns0:ref type='figure'>1</ns0:ref>).</ns0:p><ns0:p>In the first phase, namely data integration, we transform diagnosis codes into a uniform format and provide four strategies to group diagnoses into clinically meaningful categories before the wrangling process. In the second phase, namely, data wrangling, users can use provided functions to query eligible cases, split data based on the index date, and calculate condition era according to the grouped diagnostic categories of each patients. 
Furthermore, exploratory data analysis preparation can be performed in this phase. Moreover, the dxpr package provides a function to convert a long format of grouped data into a wide format, which fits other analytical and plotting functions from other packages better. In the last phase, namely visualization, we provide overviews for diagnosis standardization and data integration, such as comorbidity distribution in the study population, comorbidity differences between case and control groups, and the most common diagnoses that failed to be grouped or transformed. The usage details are presented in the Supplementary Data S1 and S2. For processing and analyzing procedure codes, the concept is similar to diagnosis.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science 2 Materials and methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Preparation</ns0:head><ns0:p>The current version of the package is available at Github (https://github.com/DHLab-TSENG/dxpr, Supplementary Data S3) and is accessible through the devtools package that enables installing packages from GitHub (Wickham H). To install the dxpr R package, users can type the following commands in an R session:</ns0:p><ns0:p>devtools::install_github('DHLab-TSENG/dxpr') library(dxpr)</ns0:p><ns0:p>The imported EHR dataset must contain at least three columns as indicated below:</ns0:p><ns0:p> Member ID: a patient identifier, which can be numeric, alphanumeric, or a list of characters.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>Diagnosis/procedure code: ICD-9 or ICD-10 code assigned to a visit or an admission.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>Visit or admission date: the date of the visit, admission, or clinical service provided. The date has to be recorded in year-month-day format (YYYY/MM/DD or YYYY-MM-DD).</ns0:p><ns0:p>Column names can be passed in each function by using function arguments.</ns0:p><ns0:p>The data can be imported from files or databases, with packages provide access to databases within R, such as DBI (R Special Interest Group on Databases (R-SIG-DB), <ns0:ref type='bibr' target='#b25'>Wickham & Müller, 2021)</ns0:ref> and odbc <ns0:ref type='bibr'>(Hester, Wickham & Gjoneski, 2020)</ns0:ref>. We illustrate the use of the dxpr package with a diagnostic sample dataset of 10-year admissions of 38 patients, sampleDxFile, and the first five records are shown in Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Data integration</ns0:head></ns0:div>
<ns0:div><ns0:head>Code format transformation</ns0:head><ns0:p>The dxpr package first transforms ICD diagnostic codes into a uniform format before code grouping. ICD-9 and ICD-10 diagnostic codes (U.S. Centers for Medicare & Medicaid Services, b) have two formats, namely decimal (with a decimal place separating the code) and short formats. Different hospitals, grouping methods, or standards coded ICD into different formats.</ns0:p><ns0:p>For example, studies using Clinical Classifications Software (CCS) (Healthcare Cost and Utilization Project (HCUP), 2017, 2019a) and comorbidity measures, such as Elixhauser and</ns0:p><ns0:p>Charlson <ns0:ref type='bibr' target='#b4'>(Elixhauser et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b19'>Menendez et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b21'>Moore et al., 2017)</ns0:ref>, have coded the ICD in a short format, and a phenome-wide association study (PheWAS) <ns0:ref type='bibr' target='#b1'>(Denny et al., 2010)</ns0:ref> coded the ICD in a decimal format. Therefore, format transformation is required before code grouping, and the transformation type is decided by the chosen grouping method. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>(icdDxToCustom) and searching for lines containing a match (icdDxToCustomGrep). The grouping functions return two tables of the dataset, one is data with the corresponding grouping categories of each ICD (Table <ns0:ref type='table'>2</ns0:ref>), and the other is summarized data exhibiting the earliest/latest record date and diagnosis counts in the same grouping category for each patient (Table <ns0:ref type='table'>3</ns0:ref>). For example, after executing function icdDxToCCS for the records of patients A and B, two output types are shown in Tables <ns0:ref type='table'>2 and 3</ns0:ref>, respectively. Patient A has three diagnosis records (ICD codes: 78550, 78552, and 785.59), which are all in the 'shock' category of the CCS classification, with the earliest record on September 1, 2013 and the latest one on October 1, 2014. The icdDxToCCS function mapped corresponding CCS categories for these ICD codes and returned the grouping results (Table <ns0:ref type='table'>2</ns0:ref>). Similarly, patient B has two diagnosis records (ICD codes: 78552 and 250.00) in the 'shock' category and 'Diabetes mellitus without complication' category of CCS classification, and the grouping results are also shown in Table <ns0:ref type='table'>2</ns0:ref>. According to these diagnosis records shown in Table <ns0:ref type='table'>2</ns0:ref>, Table <ns0:ref type='table'>3</ns0:ref> shows that icdDxToCCS function can summarize the first and last dates of diagnosis, the total number of diagnoses, and the period between the first and last diagnoses for each category, which can be used for designing the analysis strategy. While icdDxToCCS groups codes into single-level CCS, icdDxToCCSLvl groups codes into multi-level CCS. Multi-level CCS expands single-level CCS into a four-level hierarchical system for diagnoses, which provide the opportunity to examine general aggregations or to assess specific conditions ('HCUP-US Tools & Software Page'). For instance, if a user wishes to group codes into the second level of multi-level CCS, then this task can be performed through simply entering 'ccslvl2' as the assigned grouping type. These </ns0:p></ns0:div>
<ns0:div><ns0:head>Data wrangling</ns0:head></ns0:div>
<ns0:div><ns0:head>Case selection</ns0:head><ns0:p>In clinical data analysis projects, the most crucial step is case definition and selection, such as defining Lyme disease cases from claims data <ns0:ref type='bibr' target='#b30'>(Tseng et al., 2015)</ns0:ref> or defining acute ischemic stroke from EHR <ns0:ref type='bibr' target='#b32'>(Tseng et al., 2020)</ns0:ref> The output dataset of this function provides the start and end dates of the cases, the number of days between them, and the most common ICD codes used in the case definition. Furthermore, a list of people who did not satisfy the required case conditions or practically match the case definition is appended in the returned output table, and these individuals can be defined as a control group or be removed. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science using selectCases or getEligiblePeriod function, respectively. The dxpr package provides a convenient function splitDataByDate that can split data through classifying the data recorded before or after the defined index date and calculating the period between the record date and index date based on a self-defined window. For example, if a user needs to aggregate the data by using a 30-day window, the data recorded on 15 and 45 days after the index date will be defined as window 1 and window 2, respectively. The output of splitDataByDate function helps users to split the data based on the study design, and this can be applied to further time-series multiplemeasurement analysis with period information.</ns0:p><ns0:p>indexDateTable <-data.frame (ID = c('A0','B0','C0','D0'), </ns0:p></ns0:div>
<ns0:div><ns0:head>Condition era generation</ns0:head><ns0:p>Condition era is a means to apply consistent rules for medical conditions to infer distinct episodes in care, generated through integrating distributed clinical records into a single progression record <ns0:ref type='bibr' target='#b27'>(Ryan, 2010)</ns0:ref>. The concept of condition era is committed to the length of the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science persistence gap: when the time interval of any two consecutive admissions for certain conditions is smaller than the length of the persistence gap, then these two admission events will be aggregated into the same condition era. Each condition era consists of one or many events, and differences between any two consecutive admission events are all within the persistence gap. For example, an episode of influenza may include single or multiple outpatient visits, and the length of the influenza course should be the period between the first and last visits of the episode.</ns0:p><ns0:p>getConditionEra function calculates condition era by using the grouped categories or selfdefining groups of each patient and then generates a table with individual IDs, the first and last record of an era, and the sequence number of each episode. Users can easily convert scattered diagnoses into an episode of condition based on the chararistics of target disease progression with the proposed function. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Bar chart of diagnostic categories</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the minimum prevalence of the diagnostic groups in the case or control group.</ns0:p><ns0:p>The 'percentage' column shows the proportion of individuals diagnosed with the diagnostic category in the group. For example, there are 38 patients in the sample file, and 'Renal Failure' defined in Elixhauser comorbidity accounts for 63.16% of the population (24/38). </ns0:p></ns0:div>
<ns0:div><ns0:head>Use case</ns0:head><ns0:p>To illustrate the main features in the dxpr package and the typical workflow, we demonstrated an analysis using the package among newborns who were diagnosed with patent ductus arteriosus (PDA) from Medical Information Mart for Intensive Care-III (MIMIC-III) <ns0:ref type='bibr' target='#b18'>(L. et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b17'>Johnson et al., 2016</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>dataset is used for verifying the comorbidity difference between patients with and without PDA based on the dxpr package. In this study, we defined PDA cases as patients who had at least one PDA diagnosis (ICD-9-CM 7470*). The controls are defined as patients who never had PDA diagnosis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance analysis</ns0:head><ns0:p>The dxpr package is designated to accelerate the process of large EHR data integration and provide the ready-for-analysis dataset from the integrated EHR data. We verified the running time 100 times with a simulated dataset of 953,294 unique patients and 7,948,418 distinct diagnosis records in a standard personal computer with 64 GB DDR4 2133GHz RAM and an Intel® Core™ i7-6700 (CPU @3.40GHz), using Windows 10 (1809), R 4.0.1 (64 bits), and RStudio 1.2.5033.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Result</ns0:head></ns0:div>
<ns0:div><ns0:head>A use case -patients with PDA</ns0:head><ns0:p>We conducted comorbidity analyses based on a cohort of newborns from MIMIC-III (n = 7,833) by using dxpr and tableone (Yoshida K) packages. In the dxpr package, we first used the selectCases function to define case (PDA) and control (non-PDA) groups. In total, 381 and 7,452 patients with and without PDA were included in our study, respectively. Then, icdDxToComorbid function was applied to group diagnoses into AHRQ-defined comorbidities. Finally, we analyzed and graphed the AHRQ-defined comorbidities based on plot_groupedData function (Fig. <ns0:ref type='figure'>2</ns0:ref>) by using the chi-square test and Fisher's exact test.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Several software and packages were developed to facilitate clinical data analysis. rEHR <ns0:ref type='bibr' target='#b28'>(Springate et al., 2017)</ns0:ref> established a clinical data analysis workflow to simplify the processing of EHR. The rEHR package simplifies the process of extracting data from EHR databases. It used the database backend that can accelerate data access and process times. However, this design needs database backend, which might not be suitable in many circumstances. Furthermore, the international diagnosis coding standard, such as ICD, were not used in the package. The ICD <ns0:ref type='bibr' target='#b35'>(Wasey & Lang, 2020)</ns0:ref> package is designed for calculating comorbidities and medical risk scores with ICD-9 and ICD-10 codes. It is helpful to group ICD codes according to comorbidities.</ns0:p><ns0:p>However, in clinical data analysis, eligible case selection, data split based on the defined index date, and visualization are also essential. Therefore, we designed and developed the dxpr package to facilitate diagnosis data analysis.</ns0:p><ns0:p>The proposed package has limitations, which come from either the data or package itself.</ns0:p><ns0:p>For analyzing clinical data, the dxpr package highly depends on diagnosis and procedure codes, but these codes may vary in accuracy across different institutions <ns0:ref type='bibr'>[10]</ns0:ref>. Furthermore, the effect of switching diagnosis codes from ICD-9 to ICD-10 should be considered if the analysis period is across the switching date. In addition to diagnosis and procedure data, the other data not included in proposed packages, such as medication data, are important in clinical data analysis projects. In the R ecosystem, the AdhereR <ns0:ref type='bibr' target='#b2'>(Dima & Dediu, 2017)</ns0:ref> package implements a set of functions that are consistent with current adherence guidelines and definitions. Fourth, we provide an easy-touse package that will help analysts process raw data and notify them when potential coding errors exist. However, even with this package, analysts should understand their data precisely.</ns0:p><ns0:p>This easy-to-use package will help analysts process clinical data with its coding error-checking functions, but may also lead naïve analysts to miss opportunities to find other errors in the data.</ns0:p><ns0:p>Finally, the dxpr package is focused on analysis-ready data generation so that the statistic Manuscript to be reviewed</ns0:p><ns0:p>Computer Science method incorporation may be insufficient. However, the R ecosystem's most significant advantage is that many well-developed packages were developed to facilitate statistical analysis.</ns0:p><ns0:p>In the use case demonstration, our package can be used with other packages, such as tableone.</ns0:p><ns0:p>The tableone (Yoshida K) package is developed to ease the construction of the common ' </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Although a CDM can help perform systematic analyses across different sources, the integration of clinical data and the preparation of analysis-ready data are unsolved issues. The proposed open-source dxpr R package is a software tool aimed at expediting general EHR or claims data analyses through incorporating several functions that enable users to standardize, integrate, wrangle, and visualize clinical diagnosis and procedure records. Preparing PeerJ Comput. Sci. 
</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>grouping functions not only facilitate users to convert original diagnosis records from detailed levels into clinically meaningful diagnostic groups for further analysis but also provide aggregated information of each diagnostic group that can help research design and hypothesis PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021) Manuscript to be reviewed Computer Science generation, such as filtering out data based on specified criteria (e.g., first diagnosis dates of specific chronic disease). The usage of code classification function for CCS is as follows: ## ICD to CCS description CCS_description <-icdDxToCCS(dxDataFile = sampleDxFile, idColName = ID, icdColName = ICD, dateColName = Date, icd10usingDate = '2015-10-01', isDescription = TRUE) CCS_description$groupedDT[CCS_description$groupedDT$ID=='A0',] Short ID ICD Date CCS_CATEGORY_DESCRIPTION 1: 5855 A0 5855 2013-12-20 Chronic kidney disease 2: V4511 A0 V4511 2012-04-05 Chronic kidney disease 3: V560 A0 V560 2010-03-28 Chronic kidney disease 4: 5853 A0 5853 2010-10-29 Chronic kidney disease 5: 5856 A0 5856 2009-07-25 Chronic kidney disease 6: 001 A0 001 2014-11-05 <NA> 7: A0.11 A0 A0.11 2017-01-31 <NA> 8: A0.11 A0 A0.11 2023-08-12 <NA> head(CCS_description$summarised_groupedDT, 5) ID CCS_CATEGORY_DESCRIPTION firstCaseDate endCaseDate count period 1: A0 Chronic kidney disease 2009-07-25 2013-12-20 5 1609 days 2: A1 Chronic kidney disease 2006-11-29 2014-09-24 5 2856 days 3: A10 Chronic kidney disease 2007-11-04 2012-07-30 5 1730 days 4: A11 Chronic kidney disease 2008-03-09 2011-09-03 5 1273 days 5: A12 Chronic kidney disease 2006-05-14 2015-06-29 5 3333 days</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>. The analysis results could change based on case definition PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021) Manuscript to be reviewed Computer Science and lead to a different conclusion. The query function selectCases can select cases matching case definitions. Users can select cases based on diagnosis (ICD) or diagnostic categories (CCS,PheWAS, comorbidities, or self-defined diagnostic categories). Moreover, the function provides an option to set the minimum number of diagnoses within a specific duration. For example, users can extract diabetes cases by assigning at least two diagnoses in ICD codes '250.xx' or 'E10.x-E14.x' within 730 days when a user applies the validated diabetes case definition: 'two physician claims within 2 years with diagnosis codes 250.xx or E10.x-E14.x'<ns0:ref type='bibr' target='#b0'>(Chen et al., 2010)</ns0:ref>.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>selectedCase count firstCaseDate endCaseDate period MostCommonICD MostCommonICDCount PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)In some clinicaldata, such as claims data, individuals can join or leave the program on different dates, and the length of available records might affect the analysis completeness. The dxpr package provides a function getEligiblePeriod for researchers to identify the first/last record date for each patient. These outputs can be used as an index date for case exclusion, such as cases without at least 6 months washout or follow-up period, or further data splitting. on index date and moving window In clinical data analysis projects, users usually need to extract data based on a specific clinical event (e.g., extracting data before the first Lyme disease diagnosis in the records (Tseng et al., 2017)). The date of the specific event (index date) can be the first/last record date of the events or patient record, and the table of the index date for each individual can be generated PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>After data integration and wrangling, researchers often need to further analyze these processed data, and function groupedDataLongToWide converts the long format of grouped data PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)Manuscript to be reviewed Computer Science into a wide format, which is fit for other analytical and plotting packages, such as tableone</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Visualization</ns0:head><ns0:label /><ns0:figDesc>Pareto chart of error ICD When code transformation is implemented in the dxpr package, it generates unified data of diagnosis codes with potential errors. Function plotICDError visualizes codes with potential error by using the Pareto chart containing a bar plot where error ICD codes are arranged in descending order, and the cumulative total is represented by the line. Users can sort based on the counts of error ICD codes and set the top selected number of the ordered dataset. For instance, if a user chooses the top 10 ordinal rankings, then the Pareto chart shows a plot of the top 10 common error ICD codes and a list with details of these 10 and other error ICD codes. PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Function plotDiagCat provides an overview of the grouping categories of the diagnoses and summarizes the proportion of individuals diagnosed with grouped diagnostic categories in the whole study population or case and control groups in a bar chart. Users can observe the number and percentage of diagnostic categories in their dataset through this function. Furthermore, this function compares the usage of significantly different diagnostic categories between case and control groups by using the chi-square test or Fisher's exact test when the data does not match the assumptions of the chi-square test. The default level of statistical significance is considered at 5% (p = 0.05). Researchers can set a threshold of the top N significant grouped categories and PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>processing As diagnosis codes, ICD-9-Procedure Coding System (PCS) code also has two formats, PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021) Manuscript to be reviewed Computer Science namely decimal and short, whereas ICD-10-PCS code only has a short format. The functions (icdPrToCCS and icdPrToProcedureClass) provide two strategies (CCS and procedure class) to collapse ICD procedure codes into clinically meaningful categories for further analysis. This procedure has two CCS classifications: single and multiple levels. The usage is similar to the diagnostic CCS classification. A sample file (samplePrFile) is provided with procedure records, including three patients and 170 records. The procedure classes (Healthcare Cost and Utilization Project (HCUP), 2016) are created to facilitate health services research on hospital procedures by using administrative data. The procedure classes provide a standard to categorize individual procedure codes into one of the four broad categories: minor diagnostic, minor therapeutic, major diagnostic, and major therapeutic. The aforementioned classification functions mentioned allow the researcher to readily determine whether a procedure is diagnostic or therapeutic and whether a procedure is minor or major in terms of invasiveness, resource use, or both.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,377.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,301.12,525.00,349.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>by groupedDataLongToWide can be passed to the tableone (Yoshida K) package to create objects summarizing all comorbidities stratified by patients with and without PDA and by performing the statistical chi-square tests. The AHRQ comorbidity table revealed 8 of the 16 statistically significant comorbidities (p < 0.05, Table4) among patients with and without PDA, and the comorbidities are visualized in Fig.2.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Performance</ns0:cell></ns0:row><ns0:row><ns0:cell>For a simulated dataset of 953,294 unique patients and 7,948,418 admission records, code</ns0:cell></ns0:row><ns0:row><ns0:cell>grouping with CCS-defined comorbidities required 149±2.48 seconds (including code</ns0:cell></ns0:row><ns0:row><ns0:cell>transformation). Case selection required 238±3.05 seconds to query patients with diseases of the</ns0:cell></ns0:row><ns0:row><ns0:cell>urinary system, eligible period identification required 1.12±0.22 seconds to find the first and last</ns0:cell></ns0:row><ns0:row><ns0:cell>admission date for each patient, data splitting with the first admission date for each patient</ns0:cell></ns0:row><ns0:row><ns0:cell>required 6.50±0.42 seconds, condition era generation required 372±6.39 seconds, and analysis-</ns0:cell></ns0:row><ns0:row><ns0:cell>ready data generation required 3.75±0.27 seconds.</ns0:cell></ns0:row><ns0:row><ns0:cell>4 Discussion and conclusions</ns0:cell></ns0:row><ns0:row><ns0:cell>The dxpr package considerably simplifies the extraction, accelerates the processing of</ns0:cell></ns0:row><ns0:row><ns0:cell>clinical data research, and enables researchers to prepare analysis-ready data with a standard</ns0:cell></ns0:row><ns0:row><ns0:cell>workflow. The package had been developed and tested using structured clinical data, such as</ns0:cell></ns0:row><ns0:row><ns0:cell>critical care data (MIMIC-III (Johnson et al., 2016)), a multi-institutional medical care database</ns0:cell></ns0:row><ns0:row><ns0:cell>(Chang Gung Research Database (Tsai et al., 2017; Tseng et al., 2020)), and claims data</ns0:cell></ns0:row><ns0:row><ns0:cell>(National Health Insurance Research Database (Hsieh et al., 2019)), indicating that the package</ns0:cell></ns0:row></ns0:table><ns0:note>To focus on comorbidities that were essential and recorded in adequate individuals in our study population, we excluded comorbidities recorded in <1% of the patients in the PDA or non-PDA group. The analysis-ready data PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021) Manuscript to be reviewed Computer Science generated can be applied to data from different countries, institutions, and data structures. The available functions are summarized in Table 5. PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:1:0:NEW 4 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 '</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>in research papers, providing patient baseline characteristics table with summary statistics and hypothesis tests.We demonstrated that the dxpr package can play an essential role in complex clinical data preprocessing and analysis-ready data generation through integrating the international standard of clinical data. This package helps clinical data analysts combat the common bottleneck caused by certain clinical data characteristics, such as heterogeneity and sparseness.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The first five diagnosis records of the sample dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>ID</ns0:cell><ns0:cell>ICD</ns0:cell><ns0:cell>Date</ns0:cell></ns0:row><ns0:row><ns0:cell>A2</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2020-05-22</ns0:cell></ns0:row><ns0:row><ns0:cell>A5</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2020-01-24</ns0:cell></ns0:row><ns0:row><ns0:cell>A8</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2015-10-27</ns0:cell></ns0:row><ns0:row><ns0:cell>A13</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2020-04-26</ns0:cell></ns0:row><ns0:row><ns0:cell>A13</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2025-02-02</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Editor
Comment:
Technically speaking, this paper is well-written to design and develop a package for the raw data. At the same time, the motivation is not clearly following the comments from reviewer 1. I strongly recommend the authors to re-organize the contents and highlight the practical application in clinic. It implies that a major revision is needed at least.
Response:
We have edited the manuscript according to the comments and suggestions of the Editor and Reviewer, and addressed the concerns of the Reviewer. The motivation for developing this dxpr package was to simplify and accelerate the workflow for EHR data extraction in R. We agree that understanding the raw data is necessary and analysis bottlenecks are not always bad, so we have emphasized this and discussed potential problems in the Discussion section (Page 23).
Reviewer 1
Comment:
I do not work with R, but exclusively with SAS, so I make no pretense of critiquing the coding per se. The application, as described, appears to be flexible, i.e. it provides the option for CDM categorizations (CCS, Elixhauser, Charlson comorbidities, etc.) as well as user-defined. The standardization, in terms of formatting of ICD dx and procedure codes is a welcome essential feature.
Response:
Thank you for recognizing the functions we provided in the dxpr package.
We chose R because it is available under an open-source license, which means that R is available free of charge and anybody can access the source code and improve it. Furthermore, developers such as our team can easily write a package for R and distribute it. The motivation for developing this dxpr package was to simplify and accelerate the workflow for EHR data extraction in R.
Comment:
I would not use this product--partly because I don’t work with R although I see no reason why my raw data couldn’t be pre-processed in dxpr and then analyzed using SAS. I also would not make use of the point-and-click menu analysis features in SAS Enterprise Guide (I generally despise SAS EG). The fact is having an intimate understanding of the raw data is essential for executing any competent statistical analysis. I would not consider delegating that responsibility, or rather, that opportunity. Occasionally, a well-meaning potential customer, who cannot access raw data, will offer to perform their own analysis. “All I need you to do is clean, manipulate, and synthesize the data.” Just that. They’re happy to do the high level work of pressing the regression button. If there is a perceived analysis bottleneck I suspect it is among practitioners who perform analysis as a secondary function.
Response:
We agree that understanding the raw data is necessary, and we have emphasized this in the Discussion section (Page 23). The revised text is as follows: “Fourth, we provide an easy-to-use package that will help analysts process raw data and notify them when potential coding errors exist. However, even with this package, analysts should understand their data precisely. This easy-to-use package will help analysts process clinical data with its coding error–checking functions, but may also lead naïve analysts to miss opportunities to find other errors in the data.”
Comment:
I am instinctively uneasy with the notion of producing “analysis-ready” data. That said, we employ metrics, imperfect though they all are, to find common ground for comparisons. Knowing that other investigative groups operating under different health care systems (perhaps in different countries) employed the same algorithm to clean and prepare their data would seem to provide a strong starting point for comparison. Nevertheless, analysis bottlenecks are not entirely a bad thing. Making complex clinical data more accessible sounds like a good idea but lowering the requirements for entry may permit access to naïve analysts. Ultimately though, the onus is on the investigator(s) to competently execute their analysis. Dxpr seems like a useful product—if used responsibly. But that is always the caveat.
Response:
We agree that analysis bottlenecks are not always bad. Analysts can learn the characteristics of the data through solving data processing problems. We also understand that there could be errors in the clinical data, and so we provided warning messages for potential coding errors. This easy-to-use package will help analysts process clinical data with its coding error–checking functions, but may also lead naïve analysts to miss opportunities to find other errors in the data. We have discussed this potential problem in the Discussion section (page 23), as previously mentioned.
Comment:
It’s not clear why this dxpr package description was presented as an analysis—which it isn’t. I don’t care for the arbitrary case-control “analysis” described in the results. (Selecting controls from a sample of sick people is not best practice and at any rate, if dxpr is about creating analysis ready data a contrived example of an analysis is not necessary.) There was no mention of it in the Methods. If the aim is to describe the dxpr package then please don’t pretend to do something else. One would expect the editorial staff to address these issues before sending the article out for review.
Response:
Thank you for highlighting the ambiguous use of the word “analysis.” We have edited the abstract and subheadings to clarify that the dxpr package provides data processing functions (Page 18). In the Results section, our intent is to provide a use case to help readers understand the package. We have edited the subheading for clarification (page 20).
Reviewer 2
Comment:
The paper introduces an R packages that seems to do what is supposed to do, but this is relatively hard to asses in detail. That being said, I would have expected more unit tests (now there seems to be just one very general such test). I found the paper well written and I think the package is a useful addition to the computational ecosystem.
Response:
Thank you for the suggestions. We have provided detailed information on the dxpr package in the supplementary documents, including the code we used in the manuscript, another detailed example the dxpr package’s use in markdown format, and the source code of the entire package (https://github.com/DHLab-TSENG/dxpr/). Furthermore, we have provided “Getting Started” documents in English and Mandarin (https://dhlab-tseng.github.io/dxpr/articles/Eng_Diagnosis.html). In the help documents, we have added the expected parameters as you suggested (such as https://dhlab-tseng.github.io/dxpr/reference/getConditionEra.html , https://dhlab-tseng.github.io/dxpr/reference/selectCases.html ).
Comment:
1. abstract: 'combat' -> 'address'?
2. p9, 100-107: this text seems duplicated from the intro and does not really seem to be about methods...
3. p10 112-113: please give a better ref to the devtools package
4. p10 123: so, the date format cannot be changed? why?
5. p16 305-307: the index date must be pre-computed, as it seems to be a fixed date -- is this correct? I do see why this choice but I was wondering if a different approach might be useful as well (say, x days since the first event) or so?
6. condition eras: the gapDate is fixed for all patients and conditions, right? why?
Response:
Thank you for the comments. We have edited the manuscript accordingly.
1. We have used “address” accordingly.
2. The first paragraph on page 6 has been removed due to duplication.
3. The devtools package provides a function that enables packages from GitHub to be installed. We have explained this on page 6.
4. The package wrangles data using patient ID, ICD code, and visit/admission date. Thus, the data format should contain at least these three columns. Because ID format, ICD coding style, and date format can vary between institutions, a function that detects column information is not provided at the current stage. However, we have attempted to maximize flexibility in that only column names must be provided and the order of columns remains flexible.
5. Users can obtain a table of clinical index dates for each patient in advance by using the selectCases function to select specific events or the getEligiblePeriod function to obtain the first/last admission date. Subsequently, further self-defined computation is allowed, for example, by simply adding x days to the index date.
6. The gapDate in the condition era is fixed per execution. Basically, the gap used in the condition era definition should be the same for all patients. In some circumstances, analysts may need to define different gaps for different conditions. For example, for influenza, one may use 7 days as the gap, but for hepatitis C, one may use 90 days or longer. This can easily be done by calling the getConditionEra function for influenza and hepatitis C separately, as sketched below.
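To make points 5 and 6 concrete, here is a minimal sketch in R. It assumes a diagnosis table with the ID/ICD/Date columns used throughout the package documentation; the example records, the index-date column name, and the exact getConditionEra arguments (in particular gapDate) are illustrative assumptions rather than the package's definitive interface.

library(dxpr)

# Point 5: starting from a per-patient index date table (e.g., built from the
# output of selectCases or getEligiblePeriod), shift the index by x days.
indexDates <- data.frame(ID = c("A0", "B0"),
                         indexDate = as.Date(c("2009-07-25", "2015-12-26")))
indexDates$indexDate <- indexDates$indexDate + 30  # e.g., 30 days after the event

# Point 6: apply different persistence gaps to different conditions by calling
# getConditionEra separately on each condition's records.
dx <- data.frame(ID   = c("A0", "A0", "B0"),
                 ICD  = c("4871", "4871", "07054"),  # influenza and chronic hepatitis C (ICD-9)
                 Date = as.Date(c("2014-01-03", "2014-01-08", "2014-02-10")))
fluEra <- getConditionEra(dxDataFile = dx[dx$ICD == "4871", ], idColName = ID,
                          icdColName = ICD, dateColName = Date,
                          icd10usingDate = "2015-10-01", gapDate = 7)
hcvEra <- getConditionEra(dxDataFile = dx[dx$ICD == "07054", ], idColName = ID,
                          icdColName = ICD, dateColName = Date,
                          icd10usingDate = "2015-10-01", gapDate = 90)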
Comment:
The code: I looked at the GitHub repo and I do have some general questions:
1. (touched above): I feel that a good set of unit tests are essential, especially as the package will continue to be developed
2. help: I found that the help could be improved and that, in some cases, no info is given about the expected parameters (e.g., 'dxDataFile' is actually *not* a file, but a data.frame-like object, but this is not specified)
3. the I/O seems limited to in-memory only (and I fully understand that) but it would be useful to see a discussion (maybe with some code) about how to use the package with, say, an SQL database?
Response:
Thank you for your suggestions for improving the software. As previously mentioned, we have provided “Getting Started” documents in English and Mandarin. In the help documents, we have added the expected parameters as you mentioned. We have also added a description of how to use this package if one’s data are in a database (Page 7 and on the “Getting Started” pages with sample code, https://dhlab-tseng.github.io/dxpr/articles/Eng_Diagnosis.html#data-format ). Within the R ecosystem, DBI (https://cran.r-project.org/package=DBI ), odbc (https://cran.r-project.org/package=odbc ), and other packages provide access to databases. As long as the data are retrieved from databases to a data frame in R, the following processes are the same as the in-memory example.
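As an illustration of this workflow, the sketch below pulls the three required columns from a database into a data frame and then hands it to a dxpr function. The DSN, table, and column names are hypothetical; the icdDxShortToDecimal call follows the signature shown in the manuscript.

library(DBI)
library(odbc)
library(dxpr)

# Connect through an ODBC data source (the DSN name is hypothetical).
con <- dbConnect(odbc::odbc(), dsn = "EHR_DSN")

# Retrieve only what dxpr needs: patient ID, ICD code, and visit/admission date.
dx <- dbGetQuery(con, "SELECT ID, ICD, Date FROM diagnosis_records")
dbDisconnect(con)

# From here on, processing is identical to the in-memory example.
decimalICD <- icdDxShortToDecimal(dxDataFile = dx, icdColName = ICD,
                                  dateColName = Date, icd10usingDate = "2015/10/01")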
" | Here is a paper. Please give your review comments after reading it. |
84 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Enriched electronic health records (EHRs) contain crucial information related to disease progression, and this information can help with decision-making in the health care field. Data analytics in health care is deemed as one of the essential processes that help accelerate the progress of clinical research. However, processing and analyzing EHR data are common bottlenecks in health care data analytics.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>The dxpr R package provides mechanisms for integration, wrangling, and visualization of clinical data, including diagnosis and procedure records. First, the dxpr package helps users transform International Classification of Diseases (ICD) codes to a uniform format. After code format transformation, the dxpr package supports four strategies for grouping clinical diagnostic data. For clinical procedure data, two grouping methods can be chosen. After EHRs are integrated, users can employ a set of flexible built-in querying functions for dividing data into case and control groups by using specified criteria and splitting the data into before and after an event based on the record date. Subsequently, the structure of integrated long data can be converted into wide, analysis-ready data that are suitable for statistical analysis and visualization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>We conducted comorbidity data processes based on a cohort of newborns from Medical Information Mart for Intensive Care-III (n = 7,833) by using the dxpr package. We first defined patent ductus arteriosus (PDA) cases as patients who had at least one PDA diagnosis (ICD, Ninth Revision, Clinical Modification [ICD-9-CM] 7470*). Controls were defined as patients who never had PDA diagnosis. In total, 381 and 7,452 patients with and without PDA, respectively, were included in our study population. Then, we grouped the diagnoses into defined comorbidities. Finally, we observed a statistically significant difference in 8 of the 16 comorbidities among patients with and without PDA, including fluid and electrolyte disorders, valvular disease, and others.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This dxpr package helps clinical data analysts address the common bottleneck caused by clinical data characteristics such as heterogeneity and sparseness.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>On the basis of the development of electronic health records (EHRs), data analytics in health care is deemed as an essential process for accelerating the progress of clinical research <ns0:ref type='bibr' target='#b11'>(Hersh, 2007;</ns0:ref><ns0:ref type='bibr' target='#b16'>Jensen, Jensen & Brunak, 2012;</ns0:ref><ns0:ref type='bibr' target='#b19'>Miotto & Weng, 2015)</ns0:ref>. Enriched EHRs contain crucial information related to disease progression, and this information can help with decision making in the health care field including for treatment selection and disease diagnosis <ns0:ref type='bibr' target='#b16'>(Jensen, Jensen & Brunak, 2012;</ns0:ref><ns0:ref type='bibr' target='#b24'>Raghupathi & Raghupathi, 2014)</ns0:ref>. However, processing and analyzing EHR data are usually challenging because of their heterogeneity and sparsity. These inherent characteristics create a common bottleneck in health care big data analytics <ns0:ref type='bibr'>(Wu, Roy & Stewart, 2010;</ns0:ref><ns0:ref type='bibr' target='#b13'>Hripcsak & Albers, 2013;</ns0:ref><ns0:ref type='bibr'>Weiskopf & Weng, 2013)</ns0:ref>. Moreover, executing clinical data analysis project across different departments or institutes is difficult because clinical data formats and terminologies used to describe clinical conditions may vary across departments. A method that can standardize and facilitate the sharing of data or analysis pipelines from multiple sources is needed in research on clinical data analysis. Several common data models (CDMs) have been developed for eliminating clinical data format barriers, including the National Patient-Centered Clinical Research Network (PCORnet) <ns0:ref type='bibr' target='#b5'>(Fleurence et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b22'>PCORnet, 2020)</ns0:ref> and Observational Medical Outcomes Partnership (OMOP) CDM (Observational Health Data Sciences and Informatics, 2020). The concept of CDM is to transform data into a CDM and terminology and then allow users to perform systematic analyses by using various sources.</ns0:p><ns0:p>Although a CDM can help perform systematic analyses across different sources, the integration of clinical data and the preparation of analysis-ready data are unsolved issues.</ns0:p><ns0:p>The proposed open-source dxpr R package is a software tool aimed at expediting general EHR or claims data analyses through incorporating several functions that enable users to standardize, integrate, wrangle, and visualize clinical diagnosis and procedure records. Preparing</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science an analysis-ready dataset from EHRs or claims data is a complex task that requires both medical knowledge and data science skills. The proposed dxpr package simplifies and accelerates the workflow for EHR data extraction and helps clinical data analysts generate simple and clean scripts that can easily be shared and reproduced. The dxpr package enables researchers to explore EHRs or claims data to acquire crucial information, understand disease progression, and analyze outcomes without writing complicated data preprocessing scripts. 
Moreover, the proposed package can support collaborative research across multiple data sources as long as the data include general diagnosis-or procedure-related information.</ns0:p><ns0:p>The dxpr package has three phases to process and analyze diagnosis codes in EHRs (Fig. <ns0:ref type='figure'>1</ns0:ref>).</ns0:p><ns0:p>In the first phase, namely data integration, we transform diagnosis codes into a uniform format and provide four strategies to group diagnoses into clinically meaningful categories before the wrangling process. In the second phase, namely, data wrangling, users can use provided functions to query eligible cases, split data based on the index date, and calculate condition era according to the grouped diagnostic categories of each patients. Furthermore, exploratory data analysis preparation can be performed in this phase. Moreover, the dxpr package provides a function to convert a long format of grouped data into a wide format, which fits other analytical and plotting functions from other packages better. In the last phase, namely visualization, we provide overviews for diagnosis standardization and data integration, such as comorbidity distribution in the study population, comorbidity differences between case and control groups, and the most common diagnoses that failed to be grouped or transformed. The usage details are presented in the Supplementary Data S1 and S2. For processing and analyzing procedure codes, the concept is similar to diagnosis.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Materials and methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Preparation</ns0:head><ns0:p>The current version of the package is available at Github (https://github.com/DHLab-TSENG/dxpr, Supplementary Data S3) and is accessible through the devtools package that enables installing packages from GitHub <ns0:ref type='bibr'>(Wickham, Hester & Chang, 2020)</ns0:ref>. To install the dxpr R package, users can type the following commands in an R session:</ns0:p><ns0:p>devtools::install_github('DHLab-TSENG/dxpr') library(dxpr)</ns0:p><ns0:p>The imported EHR dataset must contain at least three columns as indicated below:</ns0:p><ns0:p> Member ID: a patient identifier, which can be numeric, alphanumeric, or a list of characters.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>Diagnosis/procedure code: ICD-9 or ICD-10 code assigned to a visit or an admission.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>Visit or admission date: the date of the visit, admission, or clinical service provided. The date should be in date format. If the date is recorded in a string format, it has to be recorded in year-month-day format (YYYY/MM/DD or YYYY-MM-DD).</ns0:p><ns0:p>Column names can be passed in each function by using function arguments.</ns0:p><ns0:p>The data can be imported from files or databases, with packages provide access to databases within R, such as DBI (R Special Interest Group on Databases (R-SIG-DB), <ns0:ref type='bibr' target='#b23'>Wickham & Müller, 2021)</ns0:ref> and odbc <ns0:ref type='bibr'>(Hester & Wickham, 2021)</ns0:ref>. We illustrate the use of the dxpr package with a diagnostic sample dataset of 10-year admissions of 38 patients, sampleDxFile, and the first five records are shown in Table <ns0:ref type='table' target='#tab_6'>1</ns0:ref>. Charlson <ns0:ref type='bibr' target='#b4'>(Elixhauser et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b18'>Menendez et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b20'>Moore et al., 2017)</ns0:ref>, have coded the ICD in a short format, and a phenome-wide association study (PheWAS) <ns0:ref type='bibr' target='#b1'>(Denny et al., 2010)</ns0:ref> coded the ICD in a decimal format. Therefore, format transformation is required before code grouping, and the transformation type is decided by the chosen grouping method. <ns0:ref type='bibr' target='#b4'>(Elixhauser et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b18'>Menendez et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b20'>Moore et al., 2017)</ns0:ref>, and self-defining grouping methods includes precise matching PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>(icdDxToCustom) and searching for lines containing a match (icdDxToCustomGrep). The grouping functions return two tables of the dataset, one is data with the corresponding grouping categories of each ICD (Table <ns0:ref type='table'>2</ns0:ref>), and the other is summarized data exhibiting the earliest/latest record date and diagnosis counts in the same grouping category for each patient (Table <ns0:ref type='table'>3</ns0:ref>). For example, after executing function icdDxToCCS for the records of patients A and B, two output types are shown in Tables <ns0:ref type='table'>2 and 3</ns0:ref>, respectively. Patient A has three diagnosis records (ICD codes: 78550, 78552, and 785.59), which are all in the 'shock' category of the CCS classification, with the earliest record on September 1, 2013 and the latest one on October 1, 2014. The icdDxToCCS function mapped corresponding CCS categories for these ICD codes and returned the grouping results (Table <ns0:ref type='table'>2</ns0:ref>). Similarly, patient B has two diagnosis records (ICD codes: 78552 and 250.00) in the 'shock' category and 'Diabetes mellitus without complication' category of CCS classification, and the grouping results are also shown in Table <ns0:ref type='table'>2</ns0:ref>. According to these diagnosis records shown in Table <ns0:ref type='table'>2</ns0:ref>, Table <ns0:ref type='table'>3</ns0:ref> shows that icdDxToCCS function can summarize the first and last dates of diagnosis, the total number of diagnoses, and the period between the first and last diagnoses for each category, which can be used for designing the analysis strategy. 
While icdDxToCCS groups codes into single-level CCS, icdDxToCCSLvl groups codes into multi-level CCS. Multi-level CCS expands single-level CCS into a four-level hierarchical system for diagnoses, which provide the opportunity to examine general aggregations or to assess specific conditions ('HCUP-US Tools & Software Page'). For instance, if a user wishes to group codes into the second level of multi-level CCS, then this task can be performed through simply entering 'ccslvl2' as the assigned grouping type. These </ns0:p></ns0:div>
<ns0:div><ns0:head>Data wrangling</ns0:head></ns0:div>
<ns0:div><ns0:head>Case selection</ns0:head><ns0:p>In clinical data analysis projects, the most crucial step is case definition and selection, such as defining Lyme disease cases from claims data <ns0:ref type='bibr' target='#b28'>(Tseng et al., 2015)</ns0:ref> or defining acute ischemic stroke from EHR <ns0:ref type='bibr' target='#b30'>(Tseng et al., 2020)</ns0:ref> The output dataset of this function provides the start and end dates of the cases, the number of days between them, and the most common ICD codes used in the case definition. Furthermore, a list of people who did not satisfy the required case conditions or practically match the case definition is appended in the returned output table, and these individuals can be defined as a control group or be removed. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science using selectCases or getEligiblePeriod function, respectively. The dxpr package provides a convenient function splitDataByDate that can split data through classifying the data recorded before or after the defined index date and calculating the period between the record date and index date based on a self-defined window. For example, if a user needs to aggregate the data by using a 30-day window, the data recorded on 15 and 45 days after the index date will be defined as window 1 and window 2, respectively. The output of splitDataByDate function helps users to split the data based on the study design, and this can be applied to further time-series multiplemeasurement analysis with period information.</ns0:p><ns0:p>indexDateTable <-data.frame (ID = c('A0','B0','C0','D0'), </ns0:p></ns0:div>
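To illustrate splitDataByDate with a per-patient index date table such as indexDateTable, a hedged sketch follows; the index date values, the indexDateFile and gap argument names, and the 30-day window are illustrative assumptions rather than the package's definitive interface.

library(dxpr)

indexDateTable <- data.frame(ID = c("A0", "B0", "C0", "D0"),
                             indexDate = as.Date(c("2009-07-25", "2015-12-26",
                                                   "2015-12-05", "2017-01-29")))

# Split each patient's records into before/after the index date and assign
# 30-day windows relative to it (argument names are assumed).
splitted <- splitDataByDate(dxDataFile = sampleDxFile, indexDateFile = indexDateTable,
                            idColName = ID, icdColName = ICD, dateColName = Date,
                            gap = 30)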
<ns0:div><ns0:head>Condition era generation</ns0:head><ns0:p>Condition era is a means to apply consistent rules for medical conditions to infer distinct episodes of care, generated by integrating distributed clinical records into a single progression record <ns0:ref type='bibr' target='#b25'>(Ryan, 2010)</ns0:ref>. The concept of condition era relies on the length of the persistence gap: when the time interval between any two consecutive admissions for a certain condition is smaller than the length of the persistence gap, these two admission events are aggregated into the same condition era. Each condition era consists of one or many events, and the differences between any two consecutive admission events are all within the persistence gap. For example, an episode of influenza may include single or multiple outpatient visits, and the length of the influenza course should be the period between the first and last visits of the episode.</ns0:p><ns0:p>The getConditionEra function calculates condition eras by using the grouped categories or self-defined groups of each patient and then generates a table with individual IDs, the first and last record of an era, and the sequence number of each episode. With the proposed function, users can easily convert scattered diagnoses into an episode of a condition based on the characteristics of the target disease progression.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bar chart of diagnostic categories</ns0:head><ns0:p>Function plotDiagCat provides an overview of the grouping categories of the diagnoses and summarizes, in a bar chart, the proportion of individuals diagnosed with the grouped diagnostic categories in the whole study population or in the case and control groups. Users can observe the number and percentage of diagnostic categories in their dataset through this function. Furthermore, this function compares diagnostic categories that differ significantly between case and control groups by using the chi-square test, or Fisher's exact test when the data do not meet the assumptions of the chi-square test. The default level of statistical significance is 5% (p = 0.05). Researchers can set a threshold for the top N significant grouped categories and for the minimum prevalence of the diagnostic groups in the case or control group.</ns0:p><ns0:p>The 'percentage' column shows the proportion of individuals diagnosed with the diagnostic category in the group. For example, there are 38 patients in the sample file, and 'Renal Failure' as defined in the Elixhauser comorbidity measure accounts for 63.16% of the population (24/38).</ns0:p></ns0:div>
<ns0:div><ns0:head>Use case</ns0:head><ns0:p>To illustrate the main features in the dxpr package and the typical workflow, we demonstrated an analysis using the package among newborns who were diagnosed with patent ductus arteriosus (PDA) from Medical Information Mart for Intensive Care-III (MIMIC-III) <ns0:ref type='bibr' target='#b17'>(Johnson et al., 2016</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science dxpr package. In this example, we defined PDA cases as patients who had at least one PDA diagnosis (ICD-9-CM 7470*). The controls are defined as patients who never had PDA diagnosis.</ns0:p></ns0:div>
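A hedged sketch of this case/control definition is shown below, using the sampleFile_MIMIC dataset mentioned above. Only the function name and the general column-name conventions are taken from the manuscript; the column names of sampleFile_MIMIC and the groupDataType, caseCondition, and caseCount arguments are illustrative assumptions rather than the package's definitive interface.

library(dxpr)

# Cases: at least one PDA diagnosis (ICD-9-CM 7470*); everyone else is a control.
pdaSelection <- selectCases(dxDataFile = sampleFile_MIMIC, idColName = ID,
                            icdColName = ICD, dateColName = Date,
                            icd10usingDate = "2015-10-01",
                            groupDataType = icd, caseCondition = "^7470",
                            caseCount = 1)
table(pdaSelection$selectedCase)  # case vs. non-case counts (column name as in Fig. 2)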
<ns0:div><ns0:head>Performance analysis</ns0:head><ns0:p>The dxpr package is designed to accelerate the integration of large EHR data and to provide a ready-for-analysis dataset from the integrated EHR data. We measured the running time 100 times with a simulated dataset of 953,294 unique patients and 7,948,418 distinct diagnosis records on a standard personal computer with 64 GB of DDR4-2133 RAM and an Intel® Core™ i7-6700 CPU (3.40 GHz), using Windows 10 (1809), R 4.0.1 (64 bits), and RStudio 1.2.5033.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Result</ns0:head></ns0:div>
<ns0:div><ns0:head>A use case -patients with PDA</ns0:head><ns0:p>We conducted comorbidity analyses based on a cohort of newborns from MIMIC-III (n = 7,833) by using dxpr and tableone (Yoshida & Bartel, 2020) packages. In the dxpr package, we first use selectCases function to define case (PDA) and control (non-PDA) groups. In total, 381 and 7,452 patients with and without PAD were included in our study, respectively. Then, icdDxToComorbid function was applied to group diagnoses into AHRQ-defined comorbidities.</ns0:p><ns0:p>Finally, we analyzed and graphed the AHRQ-defined comorbidities based on plot_groupedData function (Fig. <ns0:ref type='figure'>2</ns0:ref>) by using the chi-square test and Fisher's exact test. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b26'>(Springate et al., 2017)</ns0:ref> established a clinical data analysis workflow to simplify the processing of EHR. The rEHR package simplifies the process of extracting data from EHR databases. It used the database backend that can accelerate data access and process times. However, this design needs database backend, which might not be suitable in many circumstances. Furthermore, the international diagnosis coding standard, such as ICD, were not used in the package. The ICD <ns0:ref type='bibr' target='#b33'>(Wasey & Lang, 2020)</ns0:ref> package is designed for calculating comorbidities and medical risk scores with ICD-9 and ICD-10 codes. It is helpful to group ICD codes according to comorbidities.</ns0:p><ns0:p>However, in clinical data analysis, eligible case selection, data split based on the defined index date, and visualization are also essential. Therefore, we designed and developed the dxpr package to facilitate diagnosis data analysis.</ns0:p><ns0:p>The proposed package has limitations, which come from either the data or package itself.</ns0:p><ns0:p>For analyzing clinical data, the dxpr package highly depends on diagnosis and procedure codes, but these codes may vary in accuracy across different institutions. Furthermore, the effect of switching diagnosis codes from ICD-9 to ICD-10 should be considered if the analysis period is across the switching date. In addition to diagnosis and procedure data, the other data not included in proposed packages, such as medication data, are important in clinical data analysis projects. In the R ecosystem, the AdhereR <ns0:ref type='bibr' target='#b2'>(Dima & Dediu, 2017)</ns0:ref> package implements a set of functions that are consistent with current adherence guidelines and definitions. Fourth, we provide an easy-touse package that will help analysts process raw data and notify them when potential coding errors exist. However, even with this package, analysts should understand their data precisely.</ns0:p><ns0:p>This easy-to-use package will help analysts process clinical data with its coding error-checking functions, but may also lead naïve analysts to miss opportunities to find other errors in the data.</ns0:p><ns0:p>Finally, the dxpr package is focused on analysis-ready data generation so that the statistic method incorporation may be insufficient. However, the R ecosystem's most significant Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>advantage is that many well-developed packages were developed to facilitate statistical analysis.</ns0:p><ns0:p>In the use case demonstration, our package can be used with other packages, such as tableone package. The tableone <ns0:ref type='bibr'>(Yoshida & Bartel, 2020)</ns0:ref> package is developed to ease the construction of the common ' </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>grouping functions not only facilitate users to convert original diagnosis records from detailed levels into clinically meaningful diagnostic groups for further analysis but also provide aggregated information of each diagnostic group that can help research design and hypothesis PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021) Manuscript to be reviewed Computer Science generation, such as filtering out data based on specified criteria (e.g., first diagnosis dates of specific chronic disease). The usage of code classification function for CCS is as follows: ## ICD to CCS description CCS_description <-icdDxToCCS(dxDataFile = sampleDxFile, idColName = ID, icdColName = ICD, dateColName = Date, icd10usingDate = '2015-10-01', isDescription = TRUE) CCS_description$groupedDT[CCS_description$groupedDT$ID=='A0',] Short ID ICD Date CCS_CATEGORY_DESCRIPTION 1: 5855 A0 5855 2013-12-20 Chronic kidney disease 2: V4511 A0 V4511 2012-04-05 Chronic kidney disease 3: V560 A0 V560 2010-03-28 Chronic kidney disease 4: 5853 A0 5853 2010-10-29 Chronic kidney disease 5: 5856 A0 5856 2009-07-25 Chronic kidney disease 6: 001 A0 001 2014-11-05 <NA> 7: A0.11 A0 A0.11 2017-01-31 <NA> 8: A0.11 A0 A0.11 2023-08-12 <NA> head(CCS_description$summarised_groupedDT, 5) ID CCS_CATEGORY_DESCRIPTION firstCaseDate endCaseDate count period 1: A0 Chronic kidney disease 2009-07-25 2013-12-20 5 1609 days 2: A1 Chronic kidney disease 2006-11-29 2014-09-24 5 2856 days 3: A10 Chronic kidney disease 2007-11-04 2012-07-30 5 1730 days 4: A11 Chronic kidney disease 2008-03-09 2011-09-03 5 1273 days 5: A12 Chronic kidney disease 2006-05-14 2015-06-29 5 3333 days</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>. The analysis results could change based on case definition PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021) Manuscript to be reviewed Computer Science and lead to a different conclusion. The query function selectCases can select cases matching case definitions. Users can select cases based on diagnosis (ICD) or diagnostic categories (CCS,PheWAS, comorbidities, or self-defined diagnostic categories). Moreover, the function provides an option to set the minimum number of diagnoses within a specific duration. For example, users can extract diabetes cases by assigning at least two diagnoses in ICD codes '250.xx' or 'E10.x-E14.x' within 730 days when a user applies the validated diabetes case definition: 'two physician claims within 2 years with diagnosis codes 250.xx or E10.x-E14.x'<ns0:ref type='bibr' target='#b0'>(Chen et al., 2010)</ns0:ref>.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>selectedCase count firstCaseDate endCaseDate period MostCommonICD MostCommonICDCount PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021) In some clinical data, such as claims data, individuals can join or leave the program on different dates, and the length of available records might affect the analysis completeness. The dxpr package provides a function getEligiblePeriod for researchers to identify the first/last record date for each patient. These outputs can be used as an index date for case exclusion, such as cases without at least 6 months washout or follow-up period, or further data splitting. on index date and moving window In clinical data analysis projects, users usually need to extract data based on a specific clinical event (e.g., extracting data before the first Lyme disease diagnosis in the records (Tseng et al., 2017)). The date of the specific event (index date) can be the first/last record date of the events or patient record, and the table of the index date for each individual can be generated PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>After data integration and wrangling, researchers often need to further analyze these processed data, and function groupedDataLongToWide converts the long format of grouped data PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021) Manuscript to be reviewed Computer Science into a wide format, which is fit for other analytical and plotting packages, such as tableone (Yoshida & Bartel, 2020) package.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Visualization</ns0:head><ns0:label /><ns0:figDesc>Pareto chart of error ICD When code transformation is implemented in the dxpr package, it generates unified data of diagnosis codes with potential errors. Function plotICDError visualizes codes with potential error by using the Pareto chart containing a bar plot where error ICD codes are arranged in descending order, and the cumulative total is represented by the line. Users can sort based on the counts of error ICD codes and set the top selected number of the ordered dataset. For instance, if a user chooses the top 10 ordinal rankings, then the Pareto chart shows a plot of the top 10 common error ICD codes and a list with details of these 10 and other error ICD codes. PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>processing As diagnosis codes, ICD-9-Procedure Coding System (PCS) code also has two formats, PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021) Manuscript to be reviewed Computer Science namely decimal and short, whereas ICD-10-PCS code only has a short format. The functions (icdPrToCCS and icdPrToProcedureClass) provide two strategies (CCS and procedure class) to collapse ICD procedure codes into clinically meaningful categories for further analysis. This procedure has two CCS classifications: single and multiple levels. The usage is similar to the diagnostic CCS classification. A sample file (samplePrFile) is provided with procedure records, including three patients and 170 records. The procedure classes (Healthcare Cost and Utilization Project (HCUP), 2016) are created to facilitate health services research on hospital procedures by using administrative data. The procedure classes provide a standard to categorize individual procedure codes into one of the four broad categories: minor diagnostic, minor therapeutic, major diagnostic, and major therapeutic. The aforementioned classification functions mentioned allow the researcher to readily determine whether a procedure is diagnostic or therapeutic and whether a procedure is minor or major in terms of invasiveness, resource use, or both.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,377.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,301.12,525.00,349.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>reviewed Computer Science Data integration Code format transformation</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>The dxpr package first transforms ICD diagnostic codes into a uniform format before code</ns0:cell></ns0:row><ns0:row><ns0:cell>grouping. ICD-9 and ICD-10 diagnostic codes (U.S. Centers for Medicare & Medicaid Services,</ns0:cell></ns0:row><ns0:row><ns0:cell>b) have two formats, namely decimal (with a decimal place separating the code) and short</ns0:cell></ns0:row><ns0:row><ns0:cell>formats. Different hospitals, grouping methods, or standards coded ICD into different formats.</ns0:cell></ns0:row><ns0:row><ns0:cell>For example, studies using Clinical Classifications Software (CCS) (Healthcare Cost and</ns0:cell></ns0:row><ns0:row><ns0:cell>Utilization Project (HCUP), 2017, 2019a) and comorbidity measures, such as Elixhauser and</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)Manuscript to be</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>The dxpr package supports four strategies to group EHR diagnosis</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='4'>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>sampleDxFile$Decimal <-ICD_Decimal$ICD</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>head(sampleDxFile)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ID ICD</ns0:cell><ns0:cell cols='2'>Date Decimal</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>1: A2 Z992 2020-05-22 Z99.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>2: A5 Z992 2020-01-24 Z99.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>3: A8 Z992 2015-10-27 Z99.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>4: A13 Z992 2020-04-26 Z99.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>5: A13 Z992 2025-02-02 Z99.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>6: A15 Z992 2023-05-12 Z99.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>tail(ICD_Decimal$Error)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>ICD count IcdVersionInFile</ns0:cell><ns0:cell cols='2'>WrongType Suggestion</ns0:cell></ns0:row><ns0:row><ns0:cell>1: 75.52</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>ICD 9 Wrong format</ns0:cell></ns0:row><ns0:row><ns0:cell>2: E03.0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>ICD 9 Wrong version</ns0:cell></ns0:row><ns0:row><ns0:cell>3: 650</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>ICD 10 Wrong version</ns0:cell></ns0:row><ns0:row><ns0:cell>4: 123.45</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>ICD 10 Wrong format</ns0:cell></ns0:row><ns0:row><ns0:cell>5: 755.2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>ICD 9 Wrong format</ns0:cell><ns0:cell>755.29</ns0:cell></ns0:row><ns0:row><ns0:cell>6: 7552</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell cols='2'>ICD 9 Wrong format</ns0:cell><ns0:cell>75529</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Code grouping</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='5'>The code grouping functions collapse clinical diagnostic data (ICD-9/ICD-10 codes) (U.S.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>The transformation function (icdDxShortToDecimal) converts ICD-9 and ICD-10 codes into</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>Centers for Medicare & Medicaid Services, b) into a smaller number of clinically meaningful</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>a uniform decimal format because a decimal format is needed for grouping diagnostic codes in</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>categories that are more useful for presenting descriptive statistics than using individual</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>PheWAS classification. Similar to icdDxShortToDecimal, icdDxDecimalToShort function converts</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>diagnostic codes (Healthcare Cost and Utilization Project (HCUP), 2019b).</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>diagnostic codes into a uniform short format, which can be used for grouping to CCS,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>Elixhauser, or other classifications. These transformative functions not only convert ICD codes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>into uniform format codes but also check for potential coding errors. 
We provide two types of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>warning messages: wrong ICD format and wrong ICD version. Additional suggestions are</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>generated to help users adjust potential incorrect ICD codes if available.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>ICD_Decimal <-icdDxShortToDecimal(dxDataFile = sampleDxFile,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>icdColName = ICD,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>dateColName = Date,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>icd10usingDate = '2015/10/01')</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:cell></ns0:row></ns0:table><ns0:note>codes, namely CCS (Healthcare Cost and Utilization Project (HCUP), 2017, 2019a), PheWAS<ns0:ref type='bibr' target='#b1'>(Denny et al., 2010)</ns0:ref> (icdDxToPheWAS), comorbidity measures<ns0:ref type='bibr' target='#b4'>(Elixhauser et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b18'>Menendez et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b20'>Moore et al., 2017)</ns0:ref>, and self-defining grouping methods. The CCS grouping strategies includes single-level CCS (icdDxToCCS) and multiple-level CCS (icdDxToCCSLvl) (Healthcare Cost and Utilization Project (HCUP), 2017, 2019a), comorbidity measures (icdDxToComorbid) includes Elixhauser, Agency for Healthcare Research and Quality (AHRQ) and Charlson</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>). MMIC-III is a publicly available database comprising deidentified healthrelated data associated with the admissions of approximately 60,000 patients who stayed in the critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012.We provided a sample file sampleFile_MIMIC obtained from MIMIC-III<ns0:ref type='bibr' target='#b17'>(Johnson et al., 2016)</ns0:ref>, a medical dataset of 7,833 newborn patients with 45,674 admissions. This dataset is used for verifying the comorbidity difference between patients with and without PDA based on the</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>To focus on comorbidities that were essential and recorded in adequate individuals in our study population, we excluded ) package to create objects summarizing all comorbidities stratified by patients with and without PDA and by performing the statistical chi-square tests. The AHRQ comorbidity table revealed 8 of the 16 statistically significant comorbidities (p < 0.05, Table4) among patients with and without PDA, and the comorbidities are visualized in Fig.2.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Performance</ns0:cell></ns0:row><ns0:row><ns0:cell>For a simulated dataset of 953,294 unique patients and 7,948,418 admission records, code</ns0:cell></ns0:row><ns0:row><ns0:cell>grouping with CCS-defined comorbidities required 149±2.48 seconds (including code</ns0:cell></ns0:row><ns0:row><ns0:cell>transformation). Case selection required 238±3.05 seconds to query patients with diseases of the</ns0:cell></ns0:row><ns0:row><ns0:cell>urinary system, eligible period identification required 1.12±0.22 seconds to find the first and last</ns0:cell></ns0:row><ns0:row><ns0:cell>admission date for each patient, data splitting with the first admission date for each patient</ns0:cell></ns0:row><ns0:row><ns0:cell>required 6.50±0.42 seconds, condition era generation required 372±6.39 seconds, and analysis-</ns0:cell></ns0:row><ns0:row><ns0:cell>ready data generation required 3.75±0.27 seconds.</ns0:cell></ns0:row><ns0:row><ns0:cell>4 Discussion and conclusions</ns0:cell></ns0:row><ns0:row><ns0:cell>The dxpr package considerably simplifies the extraction, accelerates the processing of</ns0:cell></ns0:row><ns0:row><ns0:cell>clinical data research, and enables researchers to prepare analysis-ready data with a standard</ns0:cell></ns0:row><ns0:row><ns0:cell>workflow. The package had been developed and tested using structured clinical data, such as</ns0:cell></ns0:row><ns0:row><ns0:cell>critical care data (MIMIC-III (Johnson et al., 2016)), a multi-institutional medical care database</ns0:cell></ns0:row><ns0:row><ns0:cell>(Chang Gung Research Database (Tsai et al., 2017; Tseng et al., 2020)), and claims data</ns0:cell></ns0:row><ns0:row><ns0:cell>(National Health Insurance Research Database (Hsieh et al., 2019)), indicating that the package</ns0:cell></ns0:row><ns0:row><ns0:cell>can be applied to data from different countries, institutions, and data structures. The available</ns0:cell></ns0:row><ns0:row><ns0:cell>functions are summarized in Table 5.</ns0:cell></ns0:row></ns0:table><ns0:note>comorbidities recorded in <1% of the patients in the PDA or non-PDA group. The analysis-ready data generated by groupedDataLongToWide can be passed to the tableone (Yoshida & Bartel, PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021) Manuscript to be reviewed Computer Science 2020 Several software and packages were developed to facilitate clinical data analysis. rEHR PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55320:2:0:NEW 2 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Table 1' in research papers, providing patient baseline characteristics table with summary statistics and hypothesis tests.We demonstrated that the dxpr package can play an essential role in complex clinical data preprocessing and analysis-ready data generation through integrating the international standard of clinical data. This package helps clinical data analysts combat the common bottleneck caused by certain clinical data characteristics, such as heterogeneity and sparseness. The first five diagnosis records of the sample dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ID</ns0:cell><ns0:cell>ICD</ns0:cell><ns0:cell>Date</ns0:cell></ns0:row><ns0:row><ns0:cell>A2</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2020-05-22</ns0:cell></ns0:row><ns0:row><ns0:cell>A5</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2020-01-24</ns0:cell></ns0:row><ns0:row><ns0:cell>A8</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2015-10-27</ns0:cell></ns0:row><ns0:row><ns0:cell>A13</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2020-04-26</ns0:cell></ns0:row><ns0:row><ns0:cell>A13</ns0:cell><ns0:cell>Z992</ns0:cell><ns0:cell>2025-02-02</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Editor
Based on the reviewer's comments, a minor revision is still needed for the current version, while the structure and format of the paper should be updated and the contribution is believed to be acceptable.
Response:
We have edited the manuscript according to the comments and suggestions of Reviewer 2.
Reviewer 2
Comment
- please give a better ref to the devtools package: what I meant is to give a proper reference to the paper in the academic sense. Running `citation('devtools')`
gives:
@Manual{,
title = {devtools: Tools to Make Developing R Packages Easier},
author = {Hadley Wickham and Jim Hester and Winston Chang},
year = {2020},
note = {R package version 2.3.2},
url = {https://CRAN.R-project.org/package=devtools},
}
which should be used instead of the current '(Wickham H)'
Response:
We have corrected the reference of the devtools package. Also, we updated the references for the other R packages cited in the manuscript. Thank you for the suggestion.
Comment
- so, the date format cannot be changed? why?: now the documentation seems to specify that 'As for date column, the data format should be YYYY/MM/DD or YYYY-MM-DD' -- I still wonder why not let the user specify the date format (see, for example, the `lubridate` package)
Response:
We have extended the method to let users either provide the date in string format (YYYY/MM/DD or YYYY-MM-DD) or use the date type directly. With this update, users can easily process any date format outside of the dxpr package (for example, with the lubridate package, as sketched below) and then pass the parsed dates to dxpr functions. We believe that this update provides the needed flexibility in the usage of dates in the package. Thank you for the kind suggestion.
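A minimal sketch of such pre-processing with the lubridate package (the raw date format and example records are hypothetical):

library(lubridate)

# A raw extract whose dates are 'month/day/year' strings.
rawDx <- data.frame(ID = c("A2", "A5"), ICD = c("Z992", "Z992"),
                    Date = c("05/22/2020", "01/24/2020"))

# Convert to Date objects once, outside of dxpr; the table can then be passed
# to dxpr functions directly.
rawDx$Date <- mdy(rawDx$Date)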
Comment
- gapDate: ok, thanks for the clarification; while I agree that it makes sense in most cases to fix it per condition, I am wondering if there might be particular cases where you still want to let it vary *also* between patients?
Response:
In the clinical data analysis pipeline, we usually want to keep the rules of data preprocessing simple and straightforward. For computing the condition era, the gapDate may vary due to clinical characteristics. As in the example we mentioned in our previous reply, for influenza, one may use 7 days as the gap, but for hepatitis C, one may use 90 days or longer. However, for different patients, the reason and theoretical foundation for using a different gapDate for the same condition are not clear at this stage. We may provide a function that calculates the condition era with various gapDate values for different patients in the future.
Comment
- unit tests: what I meant is the stuff that resides in the `dxpr/tests/testthat` folder, which now is just a skeleton `test-icdToCCS.R`; it is always a good idea to have a battery of unit tests to make sure that evolutions/bug fixes do not break things (e.g., https://towardsdatascience.com/unit-testing-in-r-68ab9cc8d211)
Response:
We agree that having unit tests is a good idea for ensuring that evolutions/bug fixes do not break things. We have added unit tests for all the functions provided in the package, except the plot functions, and updated the GitHub repository. Thank you for the suggestion, which makes the package more complete.
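A sketch of the kind of unit test added is shown below; the actual tests under dxpr/tests/testthat may differ in structure and expected values.

library(testthat)
library(dxpr)

test_that("icdDxShortToDecimal converts short ICD codes to decimal format", {
  # One ICD-10 record after the switch date: Z992 should become Z99.2.
  dx <- data.frame(ID = "A2", ICD = "Z992", Date = as.Date("2020-05-22"))
  res <- icdDxShortToDecimal(dxDataFile = dx, icdColName = ICD,
                             dateColName = Date, icd10usingDate = "2015-10-01")
  expect_equal(as.character(res$ICD[1]), "Z99.2")
})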
" | Here is a paper. Please give your review comments after reading it. |
85 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The problem of determining the likelihood of the existence of a link between two nodes in a network is called link prediction. This is made possible thanks to the existence of a topological structure in most real-life networks. In other words, the topologies of networked systems such as the World Wide Web, the Internet, metabolic networks, and human society are far from random, which implies that partial observations of these networks can be used to infer information about undiscovered interactions. Significant research efforts have been invested into the development of link prediction algorithms, and some researchers have made the implementation of their methods available to the research community. These implementations, however, are often written in different languages and use different modalities of interaction with the user, which hinders their effective use. This paper introduces LinkPred, a high-performance parallel and distributed link prediction library that includes the implementation of the major link prediction algorithms available in the literature. The library can handle networks with up to millions of nodes and edges and offers a unified interface that facilitates the use and comparison of link prediction algorithms by researchers as well as practitioners.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The field of complex networks, and more generally that of network science, aims at studying networked systems, that is, systems composed of a large number of interacting components <ns0:ref type='bibr' target='#b4'>(Albert and Barabási, 2002)</ns0:ref>. Under this umbrella fall many seemingly disparate networks, but which share common underlying topological properties that constitute a fertile ground for analyzing and ultimately understanding these systems. Networks of interest can be social, biological, informational, or technological. Link prediction is the task of identifying links missing from a network <ns0:ref type='bibr' target='#b52'>(Lü and Zhou, 2011;</ns0:ref><ns0:ref type='bibr' target='#b55'>Martínez et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Guimerà and Sales-Pardo, 2009;</ns0:ref><ns0:ref type='bibr' target='#b3'>Al Hasan et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b34'>Guimerà and Sales-Pardo, 2009;</ns0:ref><ns0:ref type='bibr' target='#b20'>Clauset et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b52'>Lü and Zhou, 2011;</ns0:ref><ns0:ref type='bibr' target='#b15'>Cannistraci et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b22'>Daminelli et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b3'>Al Hasan et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b71'>Wang et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b81'>Zhang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b7'>Beigi et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b67'>Sajadmanesh et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b54'>Makarov et al., 2019)</ns0:ref>, a problem with important applications, such as the reconstruction of networks from partial observations <ns0:ref type='bibr' target='#b34'>(Guimerà and Sales-Pardo, 2009)</ns0:ref>, recommendation of items in online shops and friends in social networks <ns0:ref type='bibr' target='#b3'>(Al Hasan et al., 2006)</ns0:ref>, and the prediction of interactions in biological networks <ns0:ref type='bibr' target='#b20'>(Clauset et al., 2008)</ns0:ref>. This paper introduces LinkPred, a C++ high-performance link prediction library that includes the implementation of the major link prediction algorithms available in the literature by development from scratch and wrapping or translating existing implementations. The library is designed with three guiding principles, ease of use, extensibility, and efficiency. To facilitate its use, LinkPred borrows heavily from the STL design to offer an elegant and powerful interface. C++ users with minimum experience using STL will find the library's programming and usage style to be very familiar. Moreover, the use of templates allows for greater flexibility when using LinkPred and allows for integration within various contexts.</ns0:p><ns0:p>The library contains bindings to Java and Python, providing access to its main functionalities through easy-to-use classes. LinkPred is aimed not only at practitioners but also at researchers in the field. It is designed to allow developers of new link prediction algorithms to easily integrate their code into the library and evaluate its performance. Efficiency-wise, the data structures used and implemented in LinkPred are all chosen and designed to achieve high performance. Additionally, most code in LinkPred is PeerJ Comput. Sci. 
reviewing PDF | (CS-2020:10:54602:1:1:NEW 25 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science parallelized using OpenMp, which allows taking advantage of shared memory architectures. Furthermore, a significant portion of the implemented predictors supports distributed processing using MPI, allowing the library to handle very large networks with up to hundreds of thousands to millions of nodes.</ns0:p><ns0:p>In the rest of this paper, an overview of related software packages is presented first, followed by a description of the library's architecture and main functionalities. Example use cases with fully working code samples are presented next. The paper is then concluded by showing performance results and a comparison against existing link prediction packages.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Several researchers in the area of link prediction have released implementations of their methods <ns0:ref type='bibr' target='#b20'>(Clauset et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b34'>Guimerà and Sales-Pardo, 2009;</ns0:ref><ns0:ref type='bibr' target='#b51'>Liu et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b62'>Papadopoulos et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b56'>Muscoloni and Cannistraci, 2017)</ns0:ref>. These implementations are, naturally but inconveniently, written in different languages and offer diverse modalities of interaction with the user, which complicates their effective use. There is also a limited number of packages that provide unified interfaces to implementations of topological ranking methods. The R package linkprediction <ns0:ref type='bibr' target='#b11'>(Bojanowski and Chrol, 2019)</ns0:ref>, for instance, includes the implementation of the most important topological similarity algorithms. It offers a single method to compute the score of negative links using a specified similarity index. This package has several limitations, however. First, it only accepts connected undirected networks, which may be highly constraining as most real networks are disconnected. Since the package computes all negative links' scores, the size of networks that the package can handle is also limited. Furthermore, linkprediction does not offer any performance evaluation or test data generation functionalities. Other available packages include the commercial graph platform Neo4J (Neo4J, 2019), and NetworkX <ns0:ref type='bibr' target='#b35'>(Hagberg et al., 2019)</ns0:ref>, which both contain the implementation of a limited number of topological ranking methods. The Python package linkpred <ns0:ref type='bibr'>(linkpred, 2020)</ns0:ref> contains the implementation of a number of topological similarity methods and also global methods, including rooted PageRank, Katz index <ns0:ref type='bibr' target='#b38'>(Katz, 1953)</ns0:ref>, and SimRank <ns0:ref type='bibr' target='#b37'>(Jeh and Widom, 2002)</ns0:ref>. The library does not, however, support parallel and distributed implementations, nor does it support performance evaluation functionalities.</ns0:p><ns0:p>GEM <ns0:ref type='bibr'>(Goyal and Ferrara, 2018b,a)</ns0:ref> is a Python package that implements many state-of-the-art graph embedding techniques, including Locally Linear Embedding <ns0:ref type='bibr' target='#b66'>(Roweis and Saul, 2000)</ns0:ref>, Laplacian</ns0:p><ns0:p>Eigenmaps <ns0:ref type='bibr' target='#b8'>(Belkin and Niyogi, 2001)</ns0:ref>, Graph Factorization <ns0:ref type='bibr' target='#b41'>(Koren et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b2'>Ahmed et al., 2013)</ns0:ref>, Higher-Order Proximity preserved Embedding (HOPE) <ns0:ref type='bibr' target='#b59'>(Ou et al., 2016)</ns0:ref>, Structural Deep Network Embedding (SDNE) <ns0:ref type='bibr' target='#b73'>(Wang et al., 2016)</ns0:ref>, and node2vec <ns0:ref type='bibr' target='#b31'>(Grover and Leskovec, 2016)</ns0:ref>. It also includes several similarity measures that can be used in combination with these embedding algorithms to predict links. GEM is, however, more focused on graph embedding techniques than link prediction and, as such, does not include other types of link prediction methods such as topological similarity and probabilistic methods. 
SNAP (Stanford Network Analysis Platform) <ns0:ref type='bibr' target='#b46'>(Leskovec and Sosič, 2016)</ns0:ref>, which is a generalpurpose network analysis library, also includes an implementation of node2vec and GraphWave <ns0:ref type='bibr' target='#b24'>(Donnat et al., 2018)</ns0:ref>. Like GEM, SNAP is not dedicated to link prediction, and apart from its graph embedding algorithms, it includes only a limited number of topological similarity measures as part of its experimental components (snap-exp). Another general-purpose network analysis library is the Python package scikitnetwork <ns0:ref type='bibr' target='#b14'>(Bonald et al., 2020)</ns0:ref>, which contains the implementation of a number of local methods and graph embedding algorithms.</ns0:p><ns0:p>Given the importance of link prediction and the wide range of existing and potential applications, the currently available software packages clearly lack functionality and performance. Arguably, this stateof-affairs limits the successful application of existing algorithms to real-life problems and the rigorous testing of newly proposed methods. LinkPred aims at filling this gap that separates existing research from efficient software implementations. </ns0:p></ns0:div>
<ns0:div><ns0:head>ARCHITECTURE AND FUNCTIONALITIES</ns0:head><ns0:p>LinkPred aims at filling the existing gap between research and efficient software implementations of link prediction algorithms. As shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, it offers functionalities at various levels to help use, evaluate, and extend link prediction algorithms.</ns0:p></ns0:div>
<ns0:div><ns0:head>The network data structures</ns0:head><ns0:p>The life cycle of a network has two distinct phases. In the pre-assembly phase, it is possible to add nodes and edges to the network. It is also possible to access nodes and translate external labels to internal IDs and vice versa. However, most functionalities related to accessing edges are not yet available. As a result, the network at this stage is practically unusable. To be able to use the network, it is necessary to assemble it first. Once assembled, no new nodes or edges can be added to (or removed from) the network. The network is then fully functional and can be passed as an argument to any method that requires it.</ns0:p><ns0:p>To build a network, an empty network is first created by calling the default constructor:</ns0:p><ns0:formula xml:id='formula_0'>UNetwork<> net;</ns0:formula><ns0:p>Most classes in LinkPred manipulate networks through smart pointers for efficient memory management.</ns0:p><ns0:p>To create a shared pointer to a UNetwork object:</ns0:p><ns0:formula xml:id='formula_1'>auto net = std::make_shared<UNetwork<>>();</ns0:formula><ns0:p>Notice that the class UNetwork is a class template, which is instantiated here with the default template arguments. In this default setting, the labels are of type std::string, whereas internal IDs are of type unsigned int, but UNetwork can be instantiated with several other data types if wanted. For instance, the labels can be of type unsigned int, which may reduce storage size in some situations.</ns0:p><ns0:p>Adding nodes is achieved by calling the method addNode, which takes as parameter the node label and returns an std::pair containing, respectively, the node ID and a Boolean which is set to true if the node is newly inserted, false if the node already exists. The node IDs are guaranteed to be contiguous in 0, . . . , n − 1, where n is the number of nodes:</ns0:p><ns0:formula>auto res = net.addNode(label);
auto id = res.first; // This is the node ID
bool inserted = res.second; // Was the node inserted or did it already exist?</ns0:formula><ns0:p>The method addEdge is used to create an edge between two nodes specified by their IDs (not their labels):</ns0:p><ns0:p>net.addEdge(i, j);</ns0:p><ns0:p>The last step in building the network is to assemble it:</ns0:p><ns0:p>net.assemble();</ns0:p><ns0:p>The method assemble initializes the internal data structures and makes the network ready to be used.</ns0:p><ns0:p>Nodes can be accessed through iterators provided by nodesBegin() and nodesEnd(). For convenience, the iterator points to a pair, the first element of which is the internal ID, whereas the second is the external label. Oftentimes, one would want to iterate over a random sample of nodes instead of the whole set. This can be easily done using the two methods rndNodesBegin and rndNodesEnd.</ns0:p><ns0:p>Information on edges can only be accessed after assembling the network. One way to access edges is to iterate over all edges in the network. This can be done using the methods edgesBegin() and edgesEnd(). As is the case with nodes, it is possible to access a random sample of edges using rndEdgesBegin and rndEdgesEnd. LinkPred offers the possibility to iterate over negative links in the same way one iterates over positive edges. This can be done using the methods nonEdgesBegin() and nonEdgesEnd():</ns0:p><ns0:formula xml:id='formula_2'>std::cout << 'Start\tEnd' << std::endl;
for (auto it = net.nonEdgesBegin(); it != net.nonEdgesEnd(); ++it) {
    std::cout << net.start(*it) << '\t' << net.end(*it) << std::endl;
}</ns0:formula><ns0:p>It is also possible to iterate over a randomly selected sample of negative links using rndNonEdgesBegin and rndNonEdgesEnd.</ns0:p><ns0:p>To represent directed networks, LinkPred offers the class DNetwork, which offers a very similar interface to UNetwork.</ns0:p></ns0:div>
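<ns0:div><ns0:p>Putting these steps together, the following minimal sketch builds and inspects a small undirected network. It is based only on the calls introduced above; the three-node example itself is illustrative and not taken from the library documentation.</ns0:p><ns0:formula>#include <linkpred.hpp>
#include <iostream>
using namespace LinkPred;

int main() {
    UNetwork<> net;
    // Add three nodes (labels are std::string by default) and keep their internal IDs
    auto a = net.addNode("a").first;
    auto b = net.addNode("b").first;
    auto c = net.addNode("c").first;
    // Add edges by node IDs, then assemble the network before using it
    net.addEdge(a, b);
    net.addEdge(b, c);
    net.assemble();
    // Iterate over nodes (internal ID, external label)
    for (auto it = net.nodesBegin(); it != net.nodesEnd(); ++it)
        std::cout << it->first << "\t" << it->second << std::endl;
    // Iterate over edges and print the labels of their end nodes
    for (auto it = net.edgesBegin(); it != net.edgesEnd(); ++it)
        std::cout << net.getLabel(net.start(*it)) << " -- " << net.getLabel(net.end(*it)) << std::endl;
    return 0;
}</ns0:formula></ns0:div>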
<ns0:div><ns0:head>Maps</ns0:head><ns0:p>Maps are a useful way to associate data with nodes and edges. Two types of maps are available in LinkPred: node maps (class NodeMap) and edge maps (class EdgeMap), both members of UNetwork. The former assigns data to the network nodes, whereas the latter maps data to edges (see Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> for an example).</ns0:p><ns0:p>Creating a node map is achieved by calling the method createNodeMap on the network object. This is a template method with the mapped data type as the only template argument. For example, to create a node map with data type double over the network net:</ns0:p><ns0:formula>auto nodeMap = net.template createNodeMap<double>();</ns0:formula><ns0:p>Creating an edge map can be done in a similar way. If a node map is sparse, that is, has non-default values only on a small subset of the elements, it is better to use a sparse node map. To create a sparse node map:</ns0:p><ns0:formula>auto nodeSMap = net.template createNodeSMap<double>(0.0);</ns0:formula><ns0:p>Notice that the method takes as input one parameter that specifies the map's default value (in this case, it is 0.0). Hence, any node which is not explicitly assigned a value is assumed to have the default value 0.0.</ns0:p></ns0:div>
<ns0:div><ns0:head>Graph algorithms</ns0:head><ns0:p>To facilitate the implementation of link prediction algorithms, LinkPred comes with a set of graph-algorithmic tools such as efficient implementations of graph traversal, shortest path algorithms, and graph embedding methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>Graph traversal and shortest paths algorithms</ns0:head><ns0:p>LinkPred provides two classes for graph traversal: BFS, for Breadth-First traversal, and DFS for Depth-First traversal. They both inherit from the abstract class GraphTraversal, which declares one virtual method traverse. It takes as parameter the source node from where the traversal starts and a reference to a NodeProcessor object, which is in charge of processing nodes sequentially as they are visited.</ns0:p><ns0:p>In addition to graph traversal routines, LinkPred contains an implementation of Dijkstra's algorithm for solving the shortest path problem. To use it, it is first necessary to define a length (or weight) map that specifies the length associated with every edge in the graph. A length map is simply a map over the set of edges, that is, an object of type EdgeMap which can take integer or double values. The class Dijkstra offers two methods for computing distances:</ns0:p><ns0:p>• The method getShortestPath, which computes and returns the shortest path between two nodes and its length.</ns0:p><ns0:p>• The method getDist, which returns the distance between a source node and all other nodes. The returned value is a node map, where each node is mapped to a pair containing the distance from the source node and the number of edges in the corresponding shortest path.</ns0:p><ns0:p>Both methods run Dijkstra's algorithm, except that getShortestPath stops once the destination node is reached, whereas getDist continues until all reachable nodes are visited.</ns0:p><ns0:p>Computing shortest-path distances in large networks requires not only considerable time but also significant space resources. Consequently, efficient management of memory is necessary to render the task feasible in such situations. The abstract class NetDistCalculator provides an interface for an additional layer over the class Dijkstra which facilitates its use and can serve to manage memory usage. A NetDistCalculator object is associated with a single length map and provides two methods for computing distances:</ns0:p></ns0:div>
<ns0:div><ns0:p>• getDist(i, j): Computes and returns the distance between the two nodes i and j. The returned value is an std::pair, with the first element being the distance, whereas the second is the number of hops in the shortest path joining the two nodes.</ns0:p><ns0:p>• getDist(i): Computes and returns a node map containing the distances from node i to all other nodes in the network.</ns0:p><ns0:p>LinkPred has two implementations of NetDistCalculator: ESPDistCalculator, an exact shortest path distance calculator which caches distances according to different strategies to balance memory usage and computation, and ASPDistCalculator, an approximate shortest path distance calculator. The approximation used in ASPDistCalculator works as follows. A set L of nodes called landmarks is selected, and the distance from each landmark to all other nodes is pre-computed and stored in memory. The distance between any two nodes i, j is then approximated by:</ns0:p><ns0:formula xml:id='formula_4'>d_{ij} \simeq \min_{k \in L} \left[ d_{ik} + d_{kj} \right]. \quad (1)</ns0:formula><ns0:p>The landmarks are passed to the ASPDistCalculator object using the method setLandmarks. Naturally, by increasing the number of landmarks, more precision can be obtained, albeit at a higher computational and memory cost.</ns0:p></ns0:div>
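<ns0:div><ns0:p>To make Equation (1) concrete, the following stand-alone sketch (independent of the LinkPred classes; all names are illustrative) computes the landmark-based approximation from pre-computed landmark-to-node distances:</ns0:p><ns0:formula>#include <vector>
#include <algorithm>
#include <limits>
#include <cstddef>

// distFromLandmark[k][v] holds the pre-computed distance from landmark k to node v.
// The function returns the landmark-based approximation of the distance between i and j.
double approxDist(const std::vector<std::vector<double>> &distFromLandmark,
                  std::size_t i, std::size_t j) {
    double best = std::numeric_limits<double>::infinity();
    for (const auto &dk : distFromLandmark)
        best = std::min(best, dk[i] + dk[j]); // d_ik + d_kj
    return best; // upper bound on, and approximation of, d_ij
}</ns0:formula></ns0:div>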
<ns0:div><ns0:head>Graph embedding algorithms</ns0:head><ns0:p>Graph embedding consists in transforming the graph's nodes and edges into elements of a low-dimensional vector space while preserving, as much as possible, its structural properties <ns0:ref type='bibr' target='#b30'>(Goyal and Ferrara, 2018c)</ns0:ref>. It is a problem with important applications in various fields, including link prediction <ns0:ref type='bibr' target='#b30'>(Goyal and Ferrara, 2018c;</ns0:ref><ns0:ref type='bibr' target='#b39'>Kazemi and Poole, 2018;</ns0:ref><ns0:ref type='bibr' target='#b5'>Alharbi et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b73'>Wang et al., 2016)</ns0:ref>, product recommendation <ns0:ref type='bibr' target='#b41'>(Koren et al., 2009)</ns0:ref>, data visualization (van der <ns0:ref type='bibr'>Maaten and Hinton, 2008;</ns0:ref><ns0:ref type='bibr' target='#b69'>Tang et al., 2016b;</ns0:ref><ns0:ref type='bibr' target='#b16'>Cao et al., 2016)</ns0:ref>, and node classification <ns0:ref type='bibr' target='#b9'>(Bhagat et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b68'>Tang et al., 2016a)</ns0:ref>.</ns0:p><ns0:p>LinkPred contains several state-of-the-art graph embedding algorithms, some of which are implemented from scratch, whereas others are based on publicly available implementations. These include methods based on matrix decomposition, namely Locally Linear Embedding <ns0:ref type='bibr' target='#b66'>(Roweis and Saul, 2000)</ns0:ref> implemented in the class LLE, Laplacian Eigenmaps <ns0:ref type='bibr' target='#b8'>(Belkin and Niyogi, 2001)</ns0:ref> implemented in the class LEM, and Matrix Factorization <ns0:ref type='bibr' target='#b41'>(Koren et al., 2009)</ns0:ref> (also referred to as Graph Factorization in <ns0:ref type='bibr' target='#b30'>(Goyal and Ferrara, 2018c;</ns0:ref><ns0:ref type='bibr' target='#b2'>Ahmed et al., 2013)</ns0:ref>) implemented in the class MatFact. Also available are methods based on random walks, including DeepWalk <ns0:ref type='bibr' target='#b63'>(Perozzi et al., 2014)</ns0:ref> implemented in the class DeepWalk, Large-scale Information Network Embedding (LINE) <ns0:ref type='bibr' target='#b70'>(Tang et al., 2015)</ns0:ref>, implemented in the class LINE, LargeVis <ns0:ref type='bibr' target='#b69'>(Tang et al., 2016b)</ns0:ref> implemented in the class LargeVis, and node2vec <ns0:ref type='bibr' target='#b31'>(Grover and Leskovec, 2016)</ns0:ref>, which is implemented in the class Node2Vec. Additionally, the library includes the implementation of the Hidden Metric Space Model (HMSM) embedding method <ns0:ref type='bibr' target='#b5'>(Alharbi et al., 2016)</ns0:ref> available through the class HMSM.</ns0:p><ns0:p>To provide a uniform interface, all embedding algorithms implemented in LinkPred inherit from the abstract class Encoder, which declares the following methods:</ns0:p><ns0:p>• The method init, which is first called to initialize the internal data structures of the encoder. This is a pure virtual method of the class Encoder and must be implemented by derived classes.</ns0:p><ns0:p>• Once the encoder is initialized, the method encode, also a pure virtual method, is called to perform the embedding. This step typically involves solving an optimization problem, which can be computationally intensive both in terms of memory and CPU usage, especially for very large networks. 
• The dimension of the embedding space can be queried and set using the methods getDim and setDim, respectively.</ns0:p><ns0:p>• The node embedding or the node code, which is the vector of coordinates assigned to the node, can be obtained by calling the method getNodeCode. The edge code is by default the concatenation of its two nodes' codes and can be obtained using getEdgeCode. Hence, in the default case, the edge code dimension is double that of a node. Classes that implement the Encoder interface may change this default behavior if desired. The user can query the dimension of the edge code using the method getEdgeCodeDim.</ns0:p></ns0:div>
<ns0:div><ns0:p>In the class UESMPredictor, the embedding is followed by a similarity measure to predict link scores, whereas the class UECLPredictor uses a classifier to make the prediction.</ns0:p><ns0:p>Having a unified interface for encoders allows embedding algorithms to be easily combined with different classifiers and similarity measures to obtain various link prediction methods, as explained in the next sections. It also allows users to use their own embedding algorithms to build and test new link prediction methods.</ns0:p></ns0:div>
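<ns0:div><ns0:p>As a brief illustration of this interface, the following sketch embeds a network with node2vec and retrieves the code of one node. It is not taken from the library documentation: the construction of the encoder follows the usage shown later in this paper, and the argument of getNodeCode (here a node ID) is an assumption to be checked against the user guide.</ns0:p><ns0:formula>#include <linkpred.hpp>
using namespace LinkPred;

int main() {
    // Read the network (file name is illustrative)
    auto net = UNetwork<>::read("net.edges");
    // Create a node2vec encoder (777 is a seed)
    auto encoder = std::make_shared<Node2Vec<>>(net, 777);
    encoder->setDim(16);   // dimension of the embedding space
    encoder->init();       // initialize internal data structures
    encoder->encode();     // solve the embedding problem
    // Retrieve the code (coordinate vector) of one node; the argument type is an assumption
    auto code = encoder->getNodeCode(0);
    return 0;
}</ns0:formula></ns0:div>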
<ns0:div><ns0:head>Machine learning algorithms</ns0:head><ns0:p>The library contains the implementations of several classifiers and similarity measures that can be combined with graph embedding algorithms (see the previous section) to build a variety of link prediction methods. Available classifiers, most of which are derived from mlpack <ns0:ref type='bibr' target='#b21'>(Curtin et al., 2013)</ns0:ref>, include logistic regression, feed-forward neural networks, linear support vector machine, and Naive Bayes classifier. All binary classifiers in LinkPred implement the interface Classifier, which provides two important methods: the method learn which trains the classifier on a training set, and the method predict which predicts the output for a given input.</ns0:p><ns0:p>Similar to classifiers, all similarity measures in LinkPred inherit from the abstract class SimMeasure, which defines one method, sim, which computes the similarity between two input vectors. Implemented similarity measures include cosine similarity, dot product similarity, L 1 , L 2 and L p similarity, and Pearson similarity.</ns0:p></ns0:div>
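<ns0:div><ns0:p>As an illustration of what such a similarity measure computes, the following stand-alone function (a sketch not tied to the SimMeasure interface) evaluates the cosine similarity between two embedding vectors:</ns0:p><ns0:formula>#include <vector>
#include <cmath>
#include <numeric>

// Cosine similarity between two embedding vectors of equal, non-zero length,
// as an example of the kind of computation a similarity measure performs on node codes.
double cosineSim(const std::vector<double> &u, const std::vector<double> &v) {
    double dot = std::inner_product(u.begin(), u.end(), v.begin(), 0.0);
    double nu = std::sqrt(std::inner_product(u.begin(), u.end(), u.begin(), 0.0));
    double nv = std::sqrt(std::inner_product(v.begin(), v.end(), v.begin(), 0.0));
    return dot / (nu * nv);
}</ns0:formula></ns0:div>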
<ns0:div><ns0:head>Link predictors</ns0:head><ns0:p>LinkPred includes a large selection of link prediction algorithms which can be broadly classified into three categories: topological similarity methods, global methods, and graph-embedding techniques. In terms of topological similarity predictors, the library contains implementations of the best-known algorithms in the literature, including Common Neighbors, Adamic-Adar, Resource Allocation, Cannistraci Resource Allocation, and the Jaccard Index, among other predictors <ns0:ref type='bibr' target='#b47'>(Liben-Nowell and Kleinberg, 2007;</ns0:ref><ns0:ref type='bibr' target='#b58'>Newman, 2001;</ns0:ref><ns0:ref type='bibr' target='#b36'>Jaccard, 1901;</ns0:ref><ns0:ref type='bibr' target='#b0'>Adamic and Adar, 2003;</ns0:ref><ns0:ref type='bibr' target='#b65'>Ravasz et al., 2002;</ns0:ref><ns0:ref type='bibr' target='#b60'>Papadimitriou et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b50'>Liu and Lü, 2010;</ns0:ref><ns0:ref type='bibr' target='#b48'>Lichtenwalter et al., 2010;</ns0:ref><ns0:ref type='bibr'>Yang et al., 2015b,c;</ns0:ref><ns0:ref type='bibr' target='#b82'>Zhu and Xia, 2015;</ns0:ref><ns0:ref type='bibr' target='#b56'>Muscoloni and Cannistraci, 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Cannistraci et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b22'>Daminelli et al., 2015)</ns0:ref>. Due to their local nature, these algorithms can scale to very large networks, especially when executed on distributed architectures.</ns0:p><ns0:p>Additionally, the library includes several state-of-the-art global link predictors, such as SBM <ns0:ref type='bibr' target='#b34'>(Guimerà and Sales-Pardo, 2009)</ns0:ref>, HRG <ns0:ref type='bibr' target='#b20'>(Clauset et al., 2008)</ns0:ref>, FBM <ns0:ref type='bibr' target='#b51'>(Liu et al., 2013)</ns0:ref>, HyperMap <ns0:ref type='bibr' target='#b61'>(Papadopoulos et al., 2012</ns0:ref><ns0:ref type='bibr' target='#b62'>, 2015)</ns0:ref>, and the popularity-similarity method proposed in <ns0:ref type='bibr' target='#b40'>(Kerrache et al., 2020)</ns0:ref>.</ns0:p><ns0:p>LinkPred also supports link prediction algorithms based on graph embedding, where the network is first embedded into a low-dimensional vector space, whereby nodes are assigned coordinates in that space while preserving the network's structural properties. These coordinates can be used either to compute the similarity between nodes or as features to train a classifier to discriminate between existing edges (the positive class) and non-existing edges (the negative class) <ns0:ref type='bibr' target='#b30'>(Goyal and Ferrara, 2018c)</ns0:ref>. LinkPred provides two classes that can be used to build link prediction algorithms based on graph embedding: the class UECLPredictor, which combines an encoder (a graph embedding algorithm) and a classifier, and the class UESMPredictor, which pairs the encoder with a similarity measure, as illustrated in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>In addition to algorithms for undirected networks, several adaptations of topological similarity methods to directed networks are available as well. The library offers a unified interface for all link prediction algorithms, simplifying the use and comparison of different prediction methods. The interface is called ULPredictor for predictors in undirected networks and DLPredictor for those in directed networks.</ns0:p><ns0:p>Most implemented predictors support shared-memory parallelism, and a large number of them support distributed-memory parallelism, allowing LinkPred to take advantage of the power of HPC clusters to handle very large networks.</ns0:p></ns0:div>
<ns0:div><ns0:head>The predictor interface</ns0:head><ns0:p>As stated above, all link predictors for undirected networks must inherit from the abstract class ULPredictor. It declares three important pure virtual methods that derived classes must implement:</ns0:p><ns0:p>• The method void init(): This method is used to initialize the predictor's state, including any internal data structures.</ns0:p><ns0:p>• The method void learn(): In algorithms that require learning, it is in this method that the model is built. The learning is separated from prediction because, typically, the model is independent of the set of edges to be predicted.</ns0:p><ns0:p>• The method double score(Edge const & e): returns the score of the edge e (usually a non-existing edge).</ns0:p><ns0:p>In addition to these three basic methods, ULPredictor declares the following three virtual methods, which by default use the method score to assign scores to edges, but which can be redefined by derived classes to achieve better performance:</ns0:p><ns0:p>• The method void predict(EdgeRndIt begin, EdgeRndIt end, ScoreRndIt scores): In this method, the edges to be predicted are passed to the predictor in the form of a range (begin, end) in addition to a third parameter (scores) to which the scores are written. This is a virtual method that uses the method score to assign scores to edges and can be redefined by derived classes to provide better performance.</ns0:p><ns0:p>• The method std::pair<NonEdgeIt, NonEdgeIt> predictNeg(ScoreRndIt scores) predicts the score for all negative (non-existing) links in the network. The scores are written into the random output iterator scores. The method returns a pair of iterators begin and end to the range of non-existing links predicted by the method.</ns0:p><ns0:p>• The method std::size_t top(std::size_t k, EdgeRndOutIt eit, ScoreRndIt sit) finds the k negative edges with the top scores. The edges are written to the output iterator eit, whereas the scores are written to sit.</ns0:p><ns0:p>The class ULPredictor offers default implementations for the methods top, predict and predictNeg.</ns0:p><ns0:p>Sub-classes may use these implementations or redefine them to achieve better performance.</ns0:p><ns0:p>The abstract class DLPredictor plays the same role as ULPredictor but for link predictors in directed networks. It offers the same interface as the latter but with different default template arguments and method implementations.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance evaluation</ns0:head><ns0:p>LinkPred offers a set of tools that help to streamline the performance evaluation procedure. This includes data setup functionalities, which can be used to create test data by removing and adding edges to ground truth networks. The library also includes efficient implementations of the most important performance measures used in link prediction literature, including the area under the receiver operating characteristic (ROC) curve, the area under the precision-recall (PR) curve, and top precision. The area under the PR curve can be computed using two integration methods: the trapezoidal rule, which uses a linear interpolation between the PR points, and the more accurate nonlinear interpolation method proposed in <ns0:ref type='bibr' target='#b23'>(Davis and Goadrich, 2006)</ns0:ref>. In addition to performance measures implementations, LinkPred contains helper classes, namely PerfEvaluator and PerfEvalExp, that facilitate the comparative evaluation of multiple link prediction algorithms using multiple performance measures.</ns0:p><ns0:p>All performance measures inherit from the abstract class PerfMeasure. The most important method in this class is eval which evaluates the value of the performance measure. PerfResults is defined as std::map<std::string, double>, which allows the possibility of associating several result values with a single performance measure.</ns0:p><ns0:p>An important class of performance measures is performance curves such as ROC and PR curves. They are represented by the abstract class PerfCurve, which inherits from the class PerfMeasure. The class</ns0:p><ns0:p>PerfCurve defines a new virtual method getCurve, which returns the performance curve in the form of an std::vector of points. In the remainder of this section, more details of the performance measures implemented in LinkPred are presented.</ns0:p></ns0:div>
<ns0:div><ns0:head>Receiver operating characteristic curve (ROC)</ns0:head><ns0:p>One of the most important performance measures used in the field of link prediction is the receiver operating characteristic (ROC) curve, in which the true positive rate (recall) is plotted against the false positive rate. The ROC curve can be computed using the class ROC. Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> (A) shows an example ROC curve obtained using this class.</ns0:p><ns0:p>The default behavior of the ROC performance measure is to compute the positive and negative edge scores and then compute the area under the curve, which may lead to memory issues with large networks.</ns0:p><ns0:p>To compute the area under the curve without storing both types of scores, the class ROC offers a method that streams scores without storing them. To enable this method, call setStrmEnabled(bool) on the ROC object. To specify which scores to stream, use the method setStrmNeg(bool). By default, the negative scores are streamed, while the positive scores are stored. Passing false to setStrmNeg switches this. In addition to consuming little memory, the streaming method supports distributed processing (in addition to shared memory parallelism), making it suitable for large networks.</ns0:p></ns0:div>
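<ns0:div><ns0:p>For illustration, and independently of the ROC class, the area under a curve given as a list of points can be computed with the trapezoidal rule as in the following generic sketch:</ns0:p><ns0:formula>#include <vector>
#include <utility>
#include <cstddef>

// Trapezoidal-rule area under a curve given as (x, y) points sorted by increasing x,
// for instance (false positive rate, recall) pairs of a ROC curve.
double areaUnderCurve(const std::vector<std::pair<double, double>> &pts) {
    double area = 0.0;
    for (std::size_t i = 1; i < pts.size(); ++i) {
        double dx = pts[i].first - pts[i - 1].first;
        area += 0.5 * dx * (pts[i].second + pts[i - 1].second);
    }
    return area;
}</ns0:formula></ns0:div>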
<ns0:div><ns0:head>Precision-recall curve</ns0:head><ns0:p>The precision-recall (PR) curve is also a widely used measure of link prediction algorithms' performance.</ns0:p><ns0:p>In this curve, the precision is plotted as a function of the recall. The PR curve can be computed using the class PR. The area under the PR curve can be computed using two integration methods:</ns0:p><ns0:p>• The trapezoidal rule which assumes a linear interpolation between the PR points.</ns0:p><ns0:p>• Nonlinear interpolation as proposed by Jesse Davis and Mark Goadrich <ns0:ref type='bibr' target='#b23'>(Davis and Goadrich, 2006)</ns0:ref>.</ns0:p><ns0:p>The second method is more accurate, as linear integration tends to overestimate the area under the curve <ns0:ref type='bibr' target='#b23'>(Davis and Goadrich, 2006)</ns0:ref>. Furthermore, the implementation of Davis-Goadrich nonlinear interpolation in LinkPred ensures little to no additional cost compared to the trapezoidal method. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>General performance curves</ns0:head><ns0:p>LinkPred offers the possibility of calculating general performance curves using the class GCurve. A performance curve is, in general, defined by giving the x and y coordinate functions. These are passed as parameters, in the form of lambdas, to the constructor of the class GCurve. The associated performance value is the area under the curve computed using the trapezoidal rule (linear interpolation). For example, the ROC curve can be defined as:</ns0:p><ns0:p>GCurve<> cur(fpr, rec, 'ROC');</ns0:p><ns0:p>The first two parameters of the constructor are lambdas having the signature: </ns0:p></ns0:div>
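<ns0:div><ns0:p>As an example, and under the assumption that P and N denote the numbers of positive and negative instances while tp, fn, tn, fp are confusion-matrix counts, the fpr and rec lambdas of the ROC example above could be written as follows (a sketch; names are illustrative):</ns0:p><ns0:formula>// False positive rate and recall with the lambda signature expected by GCurve
auto fpr = [](std::size_t tp, std::size_t fn, std::size_t tn, std::size_t fp,
              std::size_t P, std::size_t N) { return static_cast<double>(fp) / N; };
auto rec = [](std::size_t tp, std::size_t fn, std::size_t tn, std::size_t fp,
              std::size_t P, std::size_t N) { return static_cast<double>(tp) / P; };
GCurve<> cur(fpr, rec, "ROC");</ns0:formula></ns0:div>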
<ns0:div><ns0:head>Top precision</ns0:head><ns0:p>The top precision measure is defined as the ratio of true positives within the top l scored edges, l > 0 being a parameter of the measure (usually l is set to the number of links removed from the network). Top precision is implemented by the class TPR, and since it is not a curve measure, this class inherits directly from PerfMeasure. The class TPR offers two approaches for computing top-precision. The first approach requires computing the score of all negative links, whereas the second approach calls the method top of the predictor. The first approach is, in general, more precise but may require more memory and time.</ns0:p><ns0:p>Consequently, the second approach is the performance measure of choice for very large networks.</ns0:p></ns0:div>
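<ns0:div><ns0:p>For instance, if l = 100 edges are removed from the network and 35 of the 100 highest-scored non-observed links are among the removed edges, the top precision is 35/100 = 0.35.</ns0:p></ns0:div>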
<ns0:div><ns0:head>Simplified interface and bindings</ns0:head><ns0:p>The simplified interface provides the essential functionalities available in LinkPred via a small number of easy-to-use classes. These classes are very intuitive and can be used with a minimum learning effort. They are ideal for initial use of the library and exploring its main functionalities. Java and Python bindings for the simplified interface are also available, facilitating the library's use by users who are more comfortable using these languages. The simplified interface contains two main classes: Predictor, which allows computing the scores for an input network using all available link prediction algorithms, and the class</ns0:p><ns0:p>Evaluator, which can be used for performance evaluation. Also included are simple structures to store prediction and performance results. These classes are designed in a simple way that allows uniform usage across different programming languages.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXAMPLE USE CASES</ns0:head><ns0:p>This section describes four main use scenarios of the library. The first use case demonstrates the working of the simplified interface in different languages, which is typical for first-time use of the library or for users who prefer to use the library in Python or Java. The second scenario consists in computing the scores of all non-existing links in a network, which is the typical use case for a practitioner working on networked data. Researchers in link prediction are typically interested in implementing new link prediction algorithms, which is presented as the third use case, and evaluating their performance, which is use case number four.</ns0:p></ns0:div>
<ns0:div><ns0:head>Using the simplified interface</ns0:head><ns0:p>The first example program shows how to use the simplified interface to obtain the top k ranked edges using the Adamic-Adar index in C++, Python, and Java; the three programs produce the same output.</ns0:p><ns0:p>In the second example, the performance of several link prediction algorithms is evaluated by removing 10% of the links from the network and using them as a test set. The performance is assessed using two performance measures: the area under the ROC curve and top-precision. The code for this example is likewise available in C++, Python, and Java, and the three programs produce the same output.</ns0:p></ns0:div>
<ns0:div><ns0:head>Predicting missing links</ns0:head><ns0:p>When dealing with networked data, a data scientist may be interested in reconstructing a network from partial observations or predicting future interactions. LinkPred offers two ways to solve such problems,</ns0:p></ns0:div>
<ns0:div><ns0:p>computing the scores of all non-existing links, and computing the top k edges, which may be more efficient for large networks. This section demonstrates how to perform both tasks.</ns0:p><ns0:p>The following code excerpt shows how to compute and print the scores of all non-existing links in a network using SBM. The observed network is passed as an argument to the constructor of the algorithm, which is then initialized by calling the method init. The learning process, if any, is triggered by a call to the method learn. The simplest way to obtain the score of a non-existing link is to call the method score, though other methods of the predictor interface may result in better performance. We can also use an embedding-classifier predictor. In the following code, the graph is embedded using node2vec and logistic regression is used to predict scores. Instead of computing the scores of all non-existing links, it is possible to extract the top k ranked edges only. Besides convenience, this approach may be the only viable option for very large networks due to memory considerations. Furthermore, for many prediction algorithms, particularly topological similarity methods, finding the top k edges is much faster than computing the scores of all non-existing links. The following code shows how to find the top k edges using the Resource Allocation index.</ns0:p></ns0:div>
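<ns0:div><ns0:p>As an illustration of the workflow just described, the following is a minimal sketch. The class name USBMPredictor is an assumption made here by analogy with the predictor names URALPredictor and UKABPredictor used later in this paper; the remaining calls follow the patterns shown in the other listings.</ns0:p><ns0:formula>#include <linkpred.hpp>
#include <iostream>
using namespace LinkPred;

int main() {
    // Read the observed network from file (file name is illustrative)
    auto net = UNetwork<>::read("Zakarays_Karate_Club.edges");
    // USBMPredictor is an assumed class name for the SBM predictor
    USBMPredictor<> predictor(net);
    predictor.init();   // initialize the predictor's internal state
    predictor.learn();  // run the (possibly expensive) learning phase
    // Print the score of every non-existing link
    std::cout << "#Start\tEnd\tScore\n";
    for (auto it = net->nonEdgesBegin(); it != net->nonEdgesEnd(); ++it) {
        auto i = net->getLabel(net->start(*it));
        auto j = net->getLabel(net->end(*it));
        std::cout << i << "\t" << j << "\t" << predictor.score(*it) << std::endl;
    }
    return 0;
}</ns0:formula></ns0:div>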
<ns0:div><ns0:head>Implementing a new link prediction algorithm</ns0:head><ns0:p>The first step in implementing a new link prediction algorithm is to inherit from ULPredictor and implement the necessary methods. For a minimal implementation, the three methods init, learn and score must at least be defined. To achieve better performance, one may want to redefine the three other methods (top, predict and predictNeg).</ns0:p><ns0:p>Suppose one wants to create a very simple link prediction algorithm that assigns to the link (i, j) the score κ_i + κ_j, the sum of the degrees of the two nodes. In a file named sdpredictor.hpp, write the following code. Upon compiling and executing this code, the program prints the edges with the top scores (for compilation instructions, the reader is invited to consult the library user guide). New link prediction algorithms can also be easily integrated into the library source code, as explained in detail in the library user guide.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance evaluation</ns0:head><ns0:p>Another use case scenario is evaluating and comparing the performance of link prediction algorithms. LinkPred offers several ways to achieve this, offering various degrees of control on the evaluation process. One such method is shown in the code sample below. Here, the user defines a factory class used to instantiate the prediction algorithms and performance measures. The parameters of the experiment, including the ratio of removed edges and the number of test runs, are passed through an object of type PerfeEvalExpDescp. The evaluation is finally conducted by passing the factory and parameter objects to an object of type PerfEvalExp, then calling the method run. The output results for the first few iterations are as follows: </ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>In addition to providing an easy interface for using, creating and evaluating link prediction algorithms, LinkPred is designed to handle very large networks, which is a quality that is essential for most practical applications.</ns0:p><ns0:p>To demonstrate the performance of LinkPred, its time performance is compared to that of the R package linkprediction and the Python packages linkpred, NetworkX and scikit-network. To conduct a fair and meaningful comparison, two issues are to be resolved. First, these packages do not implement the same set of algorithms, and only a limited number of topological similarity methods are implemented by all five libraries. Accordingly, the Resource Allocation index is chosen as the comparison task, since it is implemented by all five packages and exhibits the same network data access patterns as most local methods.</ns0:p><ns0:p>The second issue that needs to be addressed is that the libraries under consideration offer programming interfaces with different semantics. For instance, scikit-network computes the score for edges given as input, whereas the R package linkprediction and Python packages linkpred and NetworkX do not require input and instead return the scores of non-existing links. Furthermore, the Python package linkpred returns the scores of only candidate edges that have a non-zero score. To level the field, the comparison shall consist in computing the scores of all non-existing links, even those with zero scores. All networks used in this experiment are connected due to the restriction imposed by the package linkprediction. A description of these networks is given in Table <ns0:ref type='table' target='#tab_13'>4</ns0:ref> of the appendix. For the sake of fairness, parallelism is disabled in LinkPred, and all experiments are conducted on a single core of an Intel Core i7-4940MX CPU with 32GB of memory. The time reported in Table <ns0:ref type='table' target='#tab_10'>2</ns0:ref> is the average execution time over ten runs, excluding the time required to read the network from file. The time for LinkPred is reported for C++ code and the Java and Python bindings. The results show that LinkPred is typically one to two orders of magnitude faster than the other packages. This, of course, can in part be explained by the interpreted nature of Python and R, but it also highlights the fact that link prediction is a computationally intensive task that is best handled by high-performance software that uses efficient data structures and algorithms. As shown in the table, the Python binding of LinkPred introduces a larger overhead than the Java binding due to more complex data marshaling. Nevertheless, the Python binding is significantly faster than the Python packages and, except for a couple of networks, is also faster than linkprediction.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_11'>3</ns0:ref> shows the time taken by LinkPred to complete different link prediction tasks on various hardware architectures. It shows that the library can handle very large networks in relatively small amounts of time, even when the available computational resources are limited.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>LinkPred is a distributed and parallel library for link prediction in complex networks. It contains the implementation of the most important link prediction algorithms found in the literature. The library is designed not only to achieve high performance but also to be easy to use and extensible. The experiments show that the library can handle very large networks with up to millions of nodes and edges and is one to two orders of magnitude faster than existing Python and R packages. LinkPred components interact through clearly defined and easy interfaces, allowing users to plug their own components into the library by implementing these interfaces. In particular, users can integrate their own link prediction algorithms and performance measures seamlessly into the library. This makes LinkPred an ideal tool for practitioners and researchers alike.</ns0:p></ns0:div><ns0:div><ns0:head>A APPENDIX: DATA</ns0:head><ns0:p>Diseasome <ns0:ref type='bibr' target='#b27'>(Goh et al., 2007)</ns0:ref> A network of disorders and disease genes linked by known disorder-gene associations. The data is available at http://gephi.org/datasets/diseasome.gexf.zip.</ns0:p></ns0:div>
<ns0:div><ns0:head>1,419 2,738</ns0:head><ns0:p>Email <ns0:ref type='bibr' target='#b32'>(Guimerà et al., 2003)</ns0:ref> The symmetrized network of email communication at the University Rovira i Virgili (Tarragona, Spain). </ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Architecture of LinkPred.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. A node map associates values to nodes (A), whereas an edge map associates values to edges (B).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The first stage in a graph embedding method is accomplished by an encoder class, which uses a graph embedding algorithm to embed the network's nodes. In the class UESMPredictor, this is followed by a similarity measure to predict link scores, whereas UECLPredictor uses a classifier to make the prediction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Example of performance curves generated by LinkPred (the plots are created using an external tool). The area under the curve (shown in gray) is the value associated with the performance curve.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4 (B) shows an example PR curve obtained using the class PR.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>double (std::size_t tp, std::size_t fn, std::size_t tn, std::size_t fp, std::size_t P, std::size_t N)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>network from file p.loadnet('Zakarays_Karate_Club.edges'); // Predict the top k edges using Adamic Adar index std::vector<EdgeScore> esv = p.predTopADA(k); // Print the scores f o r ( a u t o it = esv.begin(); it != esv.end(); ++it) { 11/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54602:1:1:NEW 25 Mar 2021)Manuscript to be reviewed Computer Science std::cout << it->i << '\t' << it->j << '\t' << it->score << std:es i n esv:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>p u b l i c c l a s s PredictorExp { s t a t i c { // Load the library System.loadLibrary('LinkPredJava'); } p u b l i c s t a t i c v o i d main(String[] args) { i n t k = 10; // Create a prtedictor object Predictor p = new Predictor(); // Load network from file p.loadnet('Zakarays_Karate_Club.edges'); // Predict the top k edges using Adamic Adar index EdgeScoreVec esv = p.predTopADA(k); // Print the scores f o r ( i n t i = 0; i < esv.size(); i++) { EdgeScore es = esv.get(i);System.out.println(es.getI() + '\t' + es.getJ() + '\t' + es.getScore(</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>##</ns0:head><ns0:label /><ns0:figDesc>i n c l u d e <linkpred.hpp> # i n c l u d e <iostream> u s i n g namespace LinkPred::Simp; i n t main() { i n t nbRuns = 10; d o u b l e edgeRemRatio = 0.1; // Create an evaluator object Evaluator eval;// Add predictors to be evaluated eval.addCNE(); Run experiment on the specified network ev.run('Zakarays_Karate_Club.edges', nbRuns, edgeRemRatio); experiment on the specified network eval.run('Zakarays_Karate_Club.edges', nbRuns, edgeRemRatio); } }</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>the network from file a u t o net = UNetwork<>::read('Zakarays_Karate_Club.edges'); // Create a node2vec encoder (777 is a seed) a u t o encoder = std::make_shared<Node2Vec<>>(net, 777); // Create a logistric regresser (0.001 is the regularization coefficient, and 888 is a seed) a u t o classifier = std::make_shared<LogisticRegresser<>>(0.001, 888); // Create an encoder-classifier predictor (999 is a seed) UECLPredictor<> predictor(net, encoder, classifier, 999scores std::cout << '#Start\tEnd\tScore\n'; f o r ( a u t o it=net->nonEdgesBegin();it!=net->nonEdgesEnd();++it){ a u t o i = net->getLabel(net->start( * it)); a u t o j = net->getLabel(net->end( * it)); 14/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54602:1:1:NEW 25 Mar 2021) Manuscript to be reviewed Computer Science d o u b l e sc = predictor.score( * it); std::cout << i << '\t' << j << '\t' << sc << std::endl; } r e t u r n 0; } The following is partial output of this program:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>vector<typename UNetwork<>::Edge> edges(k); std::vector<double > scores(k); // Find top k edges k = predictor.top(k, edges.begin(), scores.begin()); // Print edges and scores std::cout << '#Start\tEnd\tScore\n'; f o r ( i n t l = 0; l < k; l++) { a u t o i = net->getLabel(net->start(edges[l])); a u t o j = net->getLabel(net->end(edges[l])); std::cout << i << '\t' << j << '\t' << scores[l] <<std:Sci. reviewing PDF | (CS-2020:10:54602:1:1:NEW 25 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>#</ns0:head><ns0:label /><ns0:figDesc>i f n d e f SDPREDICTOR_HPP_ # d e f i n e SDPREDICTOR_HPP_ # i n c l u d e <linkpred.hpp> c l a s s SDPredictor: p u b l i c LinkPred::ULPredictor<> { u s i n g LinkPred::ULPredictor<>::net; u s i n g LinkPred::ULPredictor<>::name; p u b l i c : u s i n g Edge = typename LinkPred::ULPredictor<>::Edge; SDPredictor(std::shared_ptr<LinkPred::UNetwork<> c o n s t > net) : LinkPred::ULPredictor <>(net) { name = 'SD'; } v i r t u a l v o i d init(); v i r t u a l v o i d learn(); v i r t u a l d o u b l e score(Edge c o n s t & e); v i r t u a l ˜SDPredictor() = d e f a u l t ; }; # e n d i f In a file named sdpredictor.cpp write the implementation of the inherited methods (note that this predictor does not require initialization or learning): # i n c l u d e 'sdpredictor.hpp' // No init required v o i d SDPredictor::init() {} // No training required v o i d SDPredictor::learn() {} // Here, we compute the score d o u b l e SDPredictor::score(Edge c o n s t & e) { a u t o i = net->start(e); a u t o j = net->end(e); // Return the sum of degrees r e t u r n net->getDeg(i) + net->getDeg(j); } This predictor is now ready to be used with LinkPred classes and methods including performance evaluating routines. For instance, it is possible to write a code that extracts the edges with the top scores as follows: # i n c l u d e 'sdpredictor.hpp' std::vector<typename UNetwork<>::Edge> edges(k); std::vector<double > scores(k); // Find top k edges k = predictor.top(k, edges.begin(), scores.begin()); // Print edges and scores std::cout << '#Start\tEnd\tScore\n'; f o r ( i n t l = 0; l < k; l++) { 16/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54602:1:1:NEW 25 Mar 2021) Manuscript to be reviewed Computer Science a u t o i = net->getLabel(net->start(edges[l])); a u t o j = net->getLabel(net->end(edges[l])); std::cout << i << '\t' << j << '\t' << scores[l] <<std::endl; } r e t u r n 0;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>#</ns0:head><ns0:label /><ns0:figDesc>i n c l u d e <linkpred.hpp> u s i n g namespace LinkPred; // This class is used to create predictors and performance measures c l a s s Factory: p u b l i c PEFactory<> { p u b l i c : // Create predictors v i r t u a l std::vector<std::shared_ptr<ULPredictor<>>> getPredictors(std::shared_ptr< UNetwork<> c o n s t > obsNet) { std::vector<std::shared_ptr<ULPredictor<>>> prs; // Add predictors prs.push_back(std::make_shared<URALPredictor<>>(obsNet)); prs.push_back(std::make_shared<UKABPredictor<>>(obsNet)); reference network from file a u t o refNet = UNetwork<>::read('Zakarays_Karate_Club.edges'); // Description of the experiment PerfeEvalExpDescp<> ped; ped.refNet = refNet; ped.nbTestRuns = 10; ped.seed = 777;</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Table 1 contains a comparison in terms of functionality between LinkPred and the main open-source packages used for link prediction. The architecture of LinkPred and the functionalities shown in Table 1 are discussed in detail in the next section.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of LinkPred against the most important free/open-source link prediction software packages.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Functionality LinkPred</ns0:cell><ns0:cell /><ns0:cell>NetworkX</ns0:cell><ns0:cell cols='2'>linkprediction GEM</ns0:cell><ns0:cell>SNAP</ns0:cell><ns0:cell /><ns0:cell>linkpred</ns0:cell><ns0:cell>scikit-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>network</ns0:cell></ns0:row><ns0:row><ns0:cell>Supported</ns0:cell><ns0:cell>C++,</ns0:cell><ns0:cell /><ns0:cell>Python</ns0:cell><ns0:cell>R</ns0:cell><ns0:cell>Python</ns0:cell><ns0:cell>C++,</ns0:cell><ns0:cell /><ns0:cell>Python</ns0:cell><ns0:cell>Python</ns0:cell></ns0:row><ns0:row><ns0:cell>languages</ns0:cell><ns0:cell>Python</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Python</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>(a subset of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>(a subset of</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>the function-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>the function-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>alities), Java</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>alities)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>(a subset of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>the function-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>alities)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Topological</ns0:cell><ns0:cell cols='2'>Yes (with</ns0:cell><ns0:cell>Yes (no par-</ns0:cell><ns0:cell>Yes (no par-</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>(A</ns0:cell><ns0:cell>Yes (no par-</ns0:cell><ns0:cell>Yes (no par-</ns0:cell></ns0:row><ns0:row><ns0:cell>similarity</ns0:cell><ns0:cell>shared</ns0:cell><ns0:cell /><ns0:cell>allelism)</ns0:cell><ns0:cell>allelism)</ns0:cell><ns0:cell /><ns0:cell>limited</ns0:cell><ns0:cell /><ns0:cell>allelism)</ns0:cell><ns0:cell>allelism)</ns0:cell></ns0:row><ns0:row><ns0:cell>methods</ns0:cell><ns0:cell cols='2'>memory and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>number of</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>distributed</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>algorithms</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>parallelism)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>is included</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>as an ex-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>perimental</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>component)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Global link</ns0:cell><ns0:cell cols='2'>Yes 
(with</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell /><ns0:cell>Yes (Rooted</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>prediction</ns0:cell><ns0:cell>shared</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>PageRank,</ns0:cell></ns0:row><ns0:row><ns0:cell>methods</ns0:cell><ns0:cell>memory</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>SimRank,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>parallelism</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Katz, short-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>and</ns0:cell><ns0:cell>for</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>est path)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>some pre-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>dictors also</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>distributed</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>parallelism)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Graph em-</ns0:cell><ns0:cell>LLE,</ns0:cell><ns0:cell /><ns0:cell>None</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>LLE,</ns0:cell><ns0:cell>node2vec</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>bedding</ns0:cell><ns0:cell cols='2'>Laplacian</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Laplacian</ns0:cell><ns0:cell cols='2'>and Graph-</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>algorithms</ns0:cell><ns0:cell cols='2'>Eigenmaps,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Eigenmaps,</ns0:cell><ns0:cell>Wave</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Graph Fac-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Graph Fac-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>torization,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>torization,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>DeepWalk,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>HOPE,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>LINE,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>SDNE, and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>LargeVis,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>node2vec</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>node2vec,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>and HMSM</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>It is also possible to translate labels to IDs and vice versa using getID(label) and getLabel(id)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>std::cout << 'ID\tLabel' << std::endl;</ns0:cell></ns0:row><ns0:row><ns0:cell>for (auto it = net.nodesBegin(); it != net.nodesEnd(); ++it) {</ns0:cell></ns0:row><ns0:row><ns0:cell>std::cout << it->first << '\t' << it->second << std::endl;</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row><ns0:row><ns0:cell>Alternatively, one can iterate over labels in a similar way using the iterators labelsBegin() and labelsEnd():</ns0:cell></ns0:row><ns0:row><ns0:cell>std::cout << 'Label\tID' << std::endl;</ns0:cell></ns0:row><ns0:row><ns0:cell>for (auto it = net.labelsBegin(); it != net.labelsEnd(); ++it) {</ns0:cell></ns0:row><ns0:row><ns0:cell>std::cout << it->first << '\t' << it->second << std::endl;</ns0:cell></ns0:row><ns0:row><ns0:cell>}</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>More use case examples can be found in the library documentation. These include using other link prediction algorithms, computing the scores of a specific set of edges, and other methods for computing the performance of one or several link prediction algorithms.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>ratio ROCKAB ROCRAL TPRKAB TPRRAL</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.8615 0.8028 0.1250 0.1250</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.7943 0.7823 0.1250 0.1667</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.6945 0.6712 0.0000 0.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.6417 0.6219 0.2500 0.1250</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.5817 0.5487 0.0000 0.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.8527 0.8386 0.3750 0.3438</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.5705 0.5167 0.0000 0.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.8834 0.8359 0.1250 0.1250</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.8962 0.8617 0.2500 0.1250</ns0:cell></ns0:row><ns0:row><ns0:cell>0.10</ns0:cell><ns0:cell>0.7650 0.7433 0.2500 0.2500</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Time (in seconds) required to compute the score of all non-existing links using Resource Allocation index on a single core.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell>Network</ns0:cell><ns0:cell>LinkPred (C++)</ns0:cell><ns0:cell>LinkPred (Java)</ns0:cell><ns0:cell>LinkPred (Python)</ns0:cell><ns0:cell>Python package NetworkX</ns0:cell><ns0:cell>R package linkprediction</ns0:cell><ns0:cell>Python package linkpred</ns0:cell><ns0:cell>Python package scikit-network</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Political Blogs</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>0.14</ns0:cell><ns0:cell>1.83</ns0:cell><ns0:cell>3.70</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>3.13</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Diseasome</ns0:cell><ns0:cell>0.04</ns0:cell><ns0:cell>0.16</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>6.33</ns0:cell><ns0:cell>2.53</ns0:cell><ns0:cell>1.26</ns0:cell><ns0:cell>14.98</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Email</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.12</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>7.78</ns0:cell><ns0:cell>6.88</ns0:cell><ns0:cell>1.60</ns0:cell><ns0:cell>9.63</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Web Edu</ns0:cell><ns0:cell>0.14</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>4.16</ns0:cell><ns0:cell>36.92</ns0:cell><ns0:cell>8.67</ns0:cell><ns0:cell>5.31</ns0:cell><ns0:cell>68.71</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Java</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.23</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>17.08</ns0:cell><ns0:cell>55.54</ns0:cell><ns0:cell>8.95</ns0:cell><ns0:cell>17.82</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Power</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>1.83</ns0:cell><ns0:cell>11.05</ns0:cell><ns0:cell>80.55</ns0:cell><ns0:cell>3.80</ns0:cell><ns0:cell>11.16</ns0:cell><ns0:cell>183.71</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Erdos 02</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>3.62</ns0:cell><ns0:cell>21.71</ns0:cell><ns0:cell>179.75</ns0:cell><ns0:cell>44.15</ns0:cell><ns0:cell>30.42</ns0:cell><ns0:cell>358.37</ns0:cell></ns0:row>
<ns0:row><ns0:cell>World Air</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>1.10</ns0:cell><ns0:cell>5.79</ns0:cell><ns0:cell>81.06</ns0:cell><ns0:cell>55.06</ns0:cell><ns0:cell>11.71</ns0:cell><ns0:cell>97.91</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Oregon</ns0:cell><ns0:cell>2.32</ns0:cell><ns0:cell>9.62</ns0:cell><ns0:cell>56.96</ns0:cell><ns0:cell>525.76</ns0:cell><ns0:cell>573.47</ns0:cell><ns0:cell>157.60</ns0:cell><ns0:cell>936.84</ns0:cell></ns0:row>
<ns0:row><ns0:cell>PGP</ns0:cell><ns0:cell>2.42</ns0:cell><ns0:cell>9.12</ns0:cell><ns0:cell>51.31</ns0:cell><ns0:cell>603.75</ns0:cell><ns0:cell>35.74</ns0:cell><ns0:cell>57.32</ns0:cell><ns0:cell>862.56</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Spam</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>2.33</ns0:cell><ns0:cell>10.33</ns0:cell><ns0:cell>318.16</ns0:cell><ns0:cell>199.83</ns0:cell><ns0:cell>42.80</ns0:cell><ns0:cell>171.68</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Indochina 2004</ns0:cell><ns0:cell>2.48</ns0:cell><ns0:cell>10.04</ns0:cell><ns0:cell>59.16</ns0:cell><ns0:cell>1086.26</ns0:cell><ns0:cell>91.95</ns0:cell><ns0:cell>74.61</ns0:cell><ns0:cell>1003.82</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Time achieved by LinkPred on different prediction tasks. Column n contains the number of nodes in the network, whereas m shows the number of edges.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell>Network</ns0:cell><ns0:cell>n</ns0:cell><ns0:cell>m</ns0:cell><ns0:cell>Task</ns0:cell><ns0:cell>Hardware</ns0:cell><ns0:cell>Time (sec.)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Brightkite</ns0:cell><ns0:cell>58,228</ns0:cell><ns0:cell>214,078</ns0:cell><ns0:cell>Compute ROC using 10% removed edges for ADA.</ns0:cell><ns0:cell>1 node, 6 cores (Core i7-8750H)</ns0:cell><ns0:cell>32.92</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Yahoo IM</ns0:cell><ns0:cell>100,001</ns0:cell><ns0:cell>587,964</ns0:cell><ns0:cell>Find the top 10^4 edges using RAL.</ns0:cell><ns0:cell>1 node, 1 core (Core i7-8750H)</ns0:cell><ns0:cell>6.70</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Twitter</ns0:cell><ns0:cell>404,719</ns0:cell><ns0:cell>713,319</ns0:cell><ns0:cell>Find the top 10^5 edges using RAL.</ns0:cell><ns0:cell>1 node, 1 core (Core i7-8750H)</ns0:cell><ns0:cell>16.93</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Youtube</ns0:cell><ns0:cell>1,134,890</ns0:cell><ns0:cell>2,987,624</ns0:cell><ns0:cell>Find the top 10^5 edges using CNE.</ns0:cell><ns0:cell>1 node, 6 cores (Core i7-8750H)</ns0:cell><ns0:cell>79.41</ns0:cell></ns0:row>
<ns0:row><ns0:cell>CA Roads</ns0:cell><ns0:cell>1,965,206</ns0:cell><ns0:cell>2,766,607</ns0:cell><ns0:cell>Find the top 10^5 edges using CNE.</ns0:cell><ns0:cell>1 node, 6 cores (Core i7-8750H)</ns0:cell><ns0:cell>7.08</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Wiki Talks</ns0:cell><ns0:cell>2,394,385</ns0:cell><ns0:cell>4,659,565</ns0:cell><ns0:cell>Find the top 10^5 edges using CNE.</ns0:cell><ns0:cell>1 node, 6 cores (Core i7-8750H)</ns0:cell><ns0:cell>470.04</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Internet</ns0:cell><ns0:cell>124,651</ns0:cell><ns0:cell>193,620</ns0:cell><ns0:cell>Compute top-precision using 10% removed edges for eight algorithms.</ns0:cell><ns0:cell>8 nodes, 16 cores in each node (Xeon E5-2650)</ns0:cell><ns0:cell>3.73</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Amazon</ns0:cell><ns0:cell>334,863</ns0:cell><ns0:cell>925,872</ns0:cell><ns0:cell>Compute top-precision using 10% removed edges for eight algorithms.</ns0:cell><ns0:cell>8 nodes, 16 cores in each node (Xeon E5-2650)</ns0:cell><ns0:cell>24.17</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head /><ns0:label /><ns0:figDesc>The library can be improved and extended in several ways, such as adding R and Octave/Matlab bindings. Another possibility for improvement is implementing further graph embedding algorithms, particularly those based on deep neural networks. Also important is handling dynamic (time-evolving) networks. Finally, sampling-based methods such as SBM and FBM, although producing good results, are only usable with small networks because they are computationally intensive. Distributed implementations of these algorithms will allow using them in practical situations on large networks. ACKNOWLEDGMENTS This research work is supported by the Research Center, CCIS, King Saud University, Riyadh, Saudi Arabia. Part of the computational experiments reported in this work were conducted on the SANAM supercomputer at King Abdulaziz City for Science and Technology (hpc.kacst.edu.sa).</ns0:figDesc><ns0:table /><ns0:note>as well as researchers in link prediction.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Table4below contains the description of all networks used in the experimental evaluation section. Description of the networks used in the experimental analysis. Columns n and m represent the number of nodes and edges in the network, respectively. Amazon product co-purchasing network. An edge indicates that two products have been co-purchased. Data available at https://snap. stanford.edu/data/com-Amazon.html.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Network</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head /><ns0:label /><ns0:figDesc>The nodes represent users, and edges indicate an email communication took place between the two uses. The dataset is available at http://deim.urv.cat/ ˜alexandre. arenas/data/welcome.htm. at http://vlado.fmf.uni-lj.si/pub/ networks/data/Erdos/Erdos02.net. Network of Internet routers. The network is available at https:// sparse.tamu.edu/Pajek/internet. Java The symmetrized version of a network where nodes represent Java classes and edges represent compile-time dependencies between two classes. The dataset can be found at http://vlado.fmf.uni-lj. si/pub/networks/data/GD/GD.htm. AS) peering network inferred from Oregon routeviews on May 26, 2001.The data is available at https://snap. stanford.edu/data/oregon1_010526.txt.gz. Boguñá et al., 2004) A social network of users using Pretty Good Privacy (PGP) algorithm. The network is available at http://deim.urv.cat/ ˜alexandre.arenas/data/welcome.htm. The Western States Power Grid of the United States. Data available at http://www-personal.umich.edu/ ˜mejn/netdata/. Castillo et al., 2008) A WWW network available at http://networkrepository. com/web-spam.php. Twitter network of follow relationship. Data available at http:// networkrepository.com/soc-twitter-follows.php. A symmetrized version of the Wikipedia talk network. A node represents a user, and an edge indicates that one user edited the talk age of another user. Data available at https://snap.stanford.edu/data/ wiki-Talk.html. Network of sample Yahoo! Messenger communication events. The data is available at https://webscope.sandbox.yahoo.com/ catalog.php?datatype=g. Youtube friendship network. Data available at https://snap. stanford.edu/data/com-Youtube.html. A friendship network among members of a karate club at an American university. The data was collected in the 1970s by Wayne Zachary and is available at http://konect.cc/networks/ ucidata-zachary 34 78</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PowerWatts and Strogatz</ns0:cell><ns0:cell /><ns0:cell>4,941</ns0:cell><ns0:cell>6,594</ns0:cell></ns0:row><ns0:row><ns0:cell>(1998)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Spam(4,767</ns0:cell><ns0:cell>37,375</ns0:cell></ns0:row><ns0:row><ns0:cell>Twitter (Gleich and Rossi,</ns0:cell><ns0:cell cols='2'>A 404,719</ns0:cell><ns0:cell>713,319</ns0:cell></ns0:row><ns0:row><ns0:cell>2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Web Edu(Gleich et al.,</ns0:cell><ns0:cell>A WWW network available at http://networkrepository.</ns0:cell><ns0:cell>3,031</ns0:cell><ns0:cell>6,474</ns0:cell></ns0:row><ns0:row><ns0:cell>2004)</ns0:cell><ns0:cell>com/web-edu.php.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Wiki Talks (Leskovec et al.,</ns0:cell><ns0:cell /><ns0:cell cols='2'>2,394,385 4,659,565</ns0:cell></ns0:row><ns0:row><ns0:cell>2010b,a)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>World Transport (Guimerà</ns0:cell><ns0:cell>A worldwide airport network. Nodes represent cities, and edges</ns0:cell><ns0:cell>3,618</ns0:cell><ns0:cell>14,142</ns0:cell></ns0:row><ns0:row><ns0:cell>et al., 2005)</ns0:cell><ns0:cell>indicate a flight connecting two cities. 
The data is available at</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>http://seeslab.info/media/filer_public/63/97/</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>63979ddc-a625-42f9-9d3d-8fdb4d6ce0b0/airports.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>zip.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yahoo IM (Yahoo! Web-</ns0:cell><ns0:cell /><ns0:cell>100,001</ns0:cell><ns0:cell>587,964</ns0:cell></ns0:row><ns0:row><ns0:cell>scope, 2008)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Youtube (Yang et al.,</ns0:cell><ns0:cell cols='3'>A 1,134,890 2,987,624</ns0:cell></ns0:row><ns0:row><ns0:cell>2015a)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Zakary's Karate Club</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Zachary, 1977)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1,133</ns0:cell><ns0:cell>5,451</ns0:cell></ns0:row><ns0:row><ns0:cell>Erdos 02</ns0:cell><ns0:cell cols='2'>The 2002 version of Erdös' co-authorship network. work is available 6,927 The net-</ns0:cell><ns0:cell>11,850</ns0:cell></ns0:row><ns0:row><ns0:cell>Indochina 2004(Boldi and</ns0:cell><ns0:cell>A WWW network available at http://networkrepository.</ns0:cell><ns0:cell>11,358</ns0:cell><ns0:cell>47,606</ns0:cell></ns0:row><ns0:row><ns0:cell>Vigna, 2004; Boldi et al.,</ns0:cell><ns0:cell>com/web_indochina_2004.php.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2011)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Internet(Batagelj and Mr-</ns0:cell><ns0:cell /><ns0:cell>124,651</ns0:cell><ns0:cell>193,620</ns0:cell></ns0:row><ns0:row><ns0:cell>var, 2006)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1,538</ns0:cell><ns0:cell>7,817</ns0:cell></ns0:row><ns0:row><ns0:cell>Oregon(Leskovec et al.,</ns0:cell><ns0:cell cols='2'>Autonomous Systems (11,174</ns0:cell><ns0:cell>23,409</ns0:cell></ns0:row><ns0:row><ns0:cell>2005)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>PGP (10,680</ns0:cell><ns0:cell>24,316</ns0:cell></ns0:row><ns0:row><ns0:cell>Political Blogs (Adamic</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>and Glance, 2005)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>A network of hyperlinks among political web blogs. The data is available at http://networkrepository.com/web-polblogs.php.643 2,280 20/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54602:1:1:NEW 25 Mar 2021)Manuscript to be reviewedComputer Science</ns0:note></ns0:figure>
</ns0:body>
" | "Said Kerrache
Department of Computer Science
King Saud University
P.O. Box 51178
Riyadh 11543
March 8, 2021
E-mail: skerrache@ksu.edu.sa
URL: staff.ksu.edu.sa/skerrache
Dear Editors,
I would like to express my appreciation for the interest you and the reviewers have taken in
my paper and the insightful comments and suggestions included in the review. I have addressed
the concerns raised by the reviewers through a major revision of the manuscript.
Specifically, I extended the functionalities and the documentation of the library as per the
reviewers’ suggestions and included further experimental results to demonstrate its efficiency.
I believe that the manuscript is now suitable for publication in PeerJ. Thank you for your
consideration!
Sincerely,
Said Kerrache, PhD
Answer to Reviewers’ Comments on:
”LinkPred: A High Performance Library for
Link Prediction in Complex Networks”
Said Kerrache 1*
1 King Saud University, College of Computer and Information Sciences, Riyadh, 11543, KSA
I would like first to thank the reviewers for their valuable comments and suggestions. In what follows, I give the details of the changes made based on the reviewers’
suggestions and a point-by-point answer to the comments.
Editor
1. Although your library seems useful and with potential, there are some weaknesses
that should be fixed before publishing, especially documentation.
Answer: This revised submission contains several important modifications to
the library (and the manuscript) based on the reviewers’ comments. I believe
that the changes made address all the reviewers’ concerns and tackle all the
weaknesses that existed in the initial submission.
Regarding the documentation, I worked on expanding the documentation of the
library, which now consists of three components:
• User guide: This document contains seven chapters, each dealing with
one component/aspect of the library. It includes a detailed explanation of
the library with code snippets and fully working examples.
• Tutorials: This document contains 60 independent tutorials on how to
use the different functionalities of the library. Each tutorial contains commented and fully working code examples along with detailed instructions
on how to compile and run the code.
• Reference manual: containing detailed API documentation generated
by Doxygen available in HTML and PDF format.
Reviewer 1
Basic reporting
2. This paper proposed a library with ease of use interfaces for link prediction. It
reported and reviewed briefly the current existing related works’ limitations that
lead to LinkPred development. It gave a detail explanation on how the LinkPred
was produced and deployed. The report clearly explained how the development of
the LinkPred and showed the result of its time execution performance on eight
different networks as depicted in Table 1. However some questions arise - From
Figure 1, how ’user-defined’ can be deployed together with the LinkPred library?
Answer: User-defined prediction algorithms and performance measures refer
to the user’s classes created by inheriting from the appropriate LinkPred classes.
These are added to the architecture diagram to emphasize the strategy used in
the library design, which consists of defining clear and simple interfaces that
library users can implement. This allows researchers to easily add their own
link prediction methods and/or performance measures with minimal coding
effort. I agree that the wording used in the diagram was unclear, and to remedy
this, I updated the diagram to make the wording clearer (Figure 1 on page 4).
3. How different is between ’user defined’ in link prediction algorithm with performance measures?
Answer: The first refers to link prediction algorithms defined by the user,
whereas the second refers to user-defined performance measures. As explained
in the previous comment’s answer, I updated the diagram to make this distinction clear.
4. How the work prove ease of use, extensibility, and efficiency as claimed in line
37?
Answer: LinkPred components interact through clearly defined and easy interfaces, allowing users to plug their own components into the library by
implementing these interfaces. I believe that the quality of extensibility is best
demonstrated through examples rather than quantified through experiments. To
this end, an example of how to write a user-defined link prediction algorithm
that can be used with LinkPred classes is included in the paper in the section
titled ”Implementing a new link prediction algorithm” (Page 5, line 667). Integrating such algorithms with the library source code is a bit more technical and
is explained in detail in the user guide (page 130). Also included are examples
of defining new performance measures (page 115 of the user guide).
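To make the extensibility claim more concrete, a user-defined performance measure could look like the sketch below (generic C++ for illustration only; the PerfMeasure and PrecisionAtK names and the eval signature are assumptions, not the actual LinkPred interface). It computes precision-at-k from the scores assigned to removed (positive) and non-existent (negative) test links.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

// Illustrative interface a tool kit could expose for performance measures.
// The name and signature are assumptions, not LinkPred's actual API.
struct PerfMeasure {
    virtual ~PerfMeasure() = default;
    // posScores: scores of removed (true) links; negScores: scores of non-links.
    virtual double eval(const std::vector<double>& posScores,
                        const std::vector<double>& negScores) const = 0;
};

// Precision@k: fraction of the k highest-scored test links that are true links.
struct PrecisionAtK : PerfMeasure {
    std::size_t k;
    explicit PrecisionAtK(std::size_t k) : k(k) {}
    double eval(const std::vector<double>& posScores,
                const std::vector<double>& negScores) const override {
        std::vector<std::pair<double, bool>> all; // (score, isTrueLink)
        for (double s : posScores) all.emplace_back(s, true);
        for (double s : negScores) all.emplace_back(s, false);
        std::size_t kk = std::min(k, all.size());
        std::partial_sort(all.begin(), all.begin() + kk, all.end(),
                          [](const auto& a, const auto& b) { return a.first > b.first; });
        std::size_t hits = 0;
        for (std::size_t i = 0; i < kk; ++i) hits += all[i].second ? 1 : 0;
        return kk > 0 ? static_cast<double>(hits) / kk : 0.0;
    }
};

int main() {
    PrecisionAtK prec(3);
    // Two of the three highest-scored links are true ones: prints 0.666667.
    std::cout << prec.eval({0.9, 0.7, 0.2}, {0.8, 0.1, 0.05}) << std::endl;
}

A measure written against such an interface can then be evaluated on any predictor's output without touching the library code, which is the extensibility property discussed above.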
5. Why Dijkstra's algo is chosen?
Answer: Most networks used in link prediction are unweighted, which means
that a simple BFS can be used to compute shortest paths. However, many
link prediction algorithms pre-weight the network with weight maps that serve
different purposes (see for example [1] and [2] below). In such situations,
Dijkstra’s algorithm remains the standard solution. Link prediction typically
involves all disconnected couples in the network. This is the all-couples shortest
path problem, which can be solved in O(n³) using Floyd-Warshall's algorithm.
Since most real networks are sparse, however, it is in general more efficient to
run Dijkstra’s algorithm n times, which results in an overall running time of
O(nm + n² log m), where m is the number of edges.
[1] Muscoloni, A., Thomas, J. M., Ciucci, S., Bianconi, G. & Cannistraci, C. V.
Machine learning meets complex networks via coalescent embedding in the
hyperbolic space. Nat. Commun. 8, 1615 (2017).
[2] Kerrache, S., Alharbi, R., and Benhidour, H. (2020). A scalable similarity-popularity link prediction method. Scientific Reports, 10(1):6394.
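To make the sparse-network argument concrete, the following self-contained C++ sketch (independent of LinkPred) computes all-couples shortest-path distances by running Dijkstra's algorithm once from every node of a small pre-weighted, undirected graph. It uses a binary heap, so each run costs O((n + m) log n); the tighter bound quoted above assumes a more efficient priority queue, but the practical point is the same: for sparse graphs, n single-source runs beat the cubic Floyd-Warshall approach.

#include <functional>
#include <iostream>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Adjacency list of a weighted graph: for each node, a list of (neighbour, weight).
using Graph = std::vector<std::vector<std::pair<int, double>>>;

// Single-source Dijkstra with a binary heap: O((n + m) log n).
std::vector<double> dijkstra(const Graph& g, int src) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    using Item = std::pair<double, int>; // (tentative distance, node)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    dist[src] = 0.0;
    pq.emplace(0.0, src);
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue; // stale queue entry
        for (auto [v, w] : g[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.emplace(dist[v], v);
            }
        }
    }
    return dist;
}

int main() {
    // Small pre-weighted, undirected example graph on 4 nodes.
    Graph g(4);
    auto addEdge = [&g](int u, int v, double w) {
        g[u].emplace_back(v, w);
        g[v].emplace_back(u, w);
    };
    addEdge(0, 1, 1.0); addEdge(1, 2, 2.0); addEdge(0, 3, 4.0); addEdge(2, 3, 0.5);
    // All-couples distances via n single-source runs (efficient when m << n^2).
    for (int s = 0; s < 4; ++s) {
        for (double d : dijkstra(g, s)) std::cout << d << ' ';
        std::cout << '\n';
    }
}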
6. ULPredictor must be explained further as what method has been used to predict?
why it has been chosen? How accurate the prediction result is?
Answer: I agree with the reviewer that this part was not clear. ULPredictor is
actually an abstract class. It only defines an interface and does not implement
any prediction algorithm. In the revised version of the manuscript, I clearly
mentioned (Page 8, line 292) that the methods init, learn and score are pure
virtual and must be defined by inheriting classes (the other methods have default
implementations that use the method score).
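For illustration only, the sketch below shows the kind of interface this paragraph describes: an abstract predictor with pure virtual init, learn and score methods, a non-pure bulk method whose default implementation simply calls score, and a trivial concrete predictor based on common neighbours. The class names and signatures are simplified assumptions, not the actual ULPredictor declaration.

#include <iostream>
#include <set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;
using AdjList = std::vector<std::set<int>>;

// Simplified stand-in for an abstract link-predictor interface.
class LinkPredictorSketch {
public:
    virtual ~LinkPredictorSketch() = default;
    virtual void init(const AdjList& graph) = 0;    // receive the network
    virtual void learn() = 0;                       // precompute whatever is needed
    virtual double score(const Edge& e) const = 0;  // score one candidate link
    // Non-pure method with a default implementation built on score(), mirroring
    // the idea that the remaining methods can fall back on score().
    virtual std::vector<double> scoreAll(const std::vector<Edge>& edges) const {
        std::vector<double> out;
        for (const auto& e : edges) out.push_back(score(e));
        return out;
    }
};

// Concrete predictor: number of common neighbours of the two end nodes.
class CommonNeighboursPredictor : public LinkPredictorSketch {
    AdjList g;
public:
    void init(const AdjList& graph) override { g = graph; }
    void learn() override {} // nothing to precompute for this index
    double score(const Edge& e) const override {
        double cn = 0.0;
        for (int w : g[e.first]) cn += g[e.second].count(w);
        return cn;
    }
};

int main() {
    AdjList g = {{1, 2}, {0, 2}, {0, 1, 3}, {2}}; // undirected toy graph
    CommonNeighboursPredictor p;
    p.init(g);
    p.learn();
    std::cout << p.score({0, 3}) << std::endl; // nodes 0 and 3 share neighbour 2 -> 1
}

Swapping in a different index or a learned model only requires overriding the three pure virtual methods.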
Experimental design
7. The experimental design started by showing the architectural design in general
explanation followed by the development of the library. Research questions are
not provided with that may lead to many questions on the development of the
library. However, the elaboration of the development is very useful for researchers
to understand how the library works. A section on how users apply the library in
their scenario will be very beneficial.
Answer: I agree. To highlight the research gap targeted by this work, I added a
comparative table to the literature review (Table 1 on page 3), which compares
the functionalities available in LinkPred to those available in existing packages.
This clearly shows the need for LinkPred as a more comprehensive package
dedicated to link prediction research and practice.
To answer the second point, I split the section previously titled ”Examples
and results” into two sections. The first one is dedicated to example use cases
and contains several examples demonstrating the use of the library in different
scenarios. The second, titled ”Experimental results”, is dedicated to performance
results.
Validity of the findings
8. Thorough experiments to prove the library’s ease of use, extensibility and efficiency is needed to improve the paper further, despite its clear explanation on the
library development. How the LinkPred library can be used in different programming languages that affect effectiveness? It is believed the conclusion will be well
stated when most of these questions answered.
Answer: I agree. For the remark on extensibility, kindly see the answer to
comment 4 above.
To demonstrate the efficiency of LinkPred, I added a new experiment where I
compare LinkPred to the two main free/open-source packages dedicated to link
prediction: NetworkX and the R package linkprediction. The results are shown
in Table 2 (page 18) in the section “Experimental results”. This is in addition
to the previous time performance results included in Table 3. The results show
clearly that LinkPred offers much better performance than existing packages.
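For context on what these timings measure, the Resource Allocation index used in Table 2 scores a candidate pair (u, v) as the sum of 1/degree(z) over the common neighbours z of u and v. The sketch below is a plain C++ reference implementation of that index over all non-adjacent pairs of a toy graph; it is not the LinkPred code that was benchmarked, only a statement of the computation being timed.

#include <iostream>
#include <set>
#include <vector>

using AdjList = std::vector<std::set<int>>;

// Resource Allocation index: sum of 1/deg(z) over common neighbours z of u and v.
double resourceAllocation(const AdjList& g, int u, int v) {
    double s = 0.0;
    for (int z : g[u])
        if (g[v].count(z) && !g[z].empty()) s += 1.0 / g[z].size();
    return s;
}

int main() {
    // Tiny undirected graph with edges 0-1, 0-2, 1-2, 2-3.
    AdjList g = {{1, 2}, {0, 2}, {0, 1, 3}, {2}};
    // Score every distinct non-adjacent pair, which is exactly the task timed
    // in the comparison above, only on much larger networks.
    for (int u = 0; u < static_cast<int>(g.size()); ++u)
        for (int v = u + 1; v < static_cast<int>(g.size()); ++v)
            if (!g[u].count(v))
                std::cout << '(' << u << ',' << v << ") -> "
                          << resourceAllocation(g, u, v) << '\n';
}

On the networks of Table 2 this double loop runs over millions of node pairs, which is why the implementation language and data structures dominate the measured running times.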
To allow the use of LinkPred in different languages, bindings to other languages
have been introduced. Since the library makes extensive use of templates,
creating bindings to the whole library would not be practical. To remedy
this, a simplified interface has been added, which offers access to the main
functionalities of LinkPred via a small number of simple-to-use classes (Line
382-391). This interface is translated to Java and Python with bindings to R
and Octave/Matlab planned in the future. The results show that Java bindings
introduce less overhead than those for Python. However, in both cases, the
performance of LinkPred remains superior to that of the other packages (Line
830-833).
Comments for the author
9. The work proposed a library for link prediction which is very beneficial and
relevant in the current network situation. The library will benefit practitioners
and researchers to fasten up their work process as the network is scaled every day.
However, the proposed library needs more work for improvements as questions
arise especially in term of its reliability.
Answer: I would first like to express my appreciation for the interest in the proposed library.
In terms of reliability, all implemented methods are thoroughly tested against
theoretical results as well as against other implementations. Of course, like any
other software, the library may contain bugs. These will be corrected when
discovered, and the best way to achieve this is for the library to be used by
the research community which, in addition to bug detection, can suggest new
functionalities.
Reviewer 2
Basic reporting
10. References to key libraries doing link prediction are missing, including SNAP
(http://snap.stanford.edu/snap/description.html) and GEM
(https://github.com/palash1992/GEM).
Answer: Done. I included GEM and SNAP in the related work section, and
I listed the link prediction-related functionalities available in these two libraries
(Line 69-81).
11. The authors have not considered graph embedding/representation learning based
approaches for link prediction which have been very popular recently.
Answer: I agree with the reviewer that this was an important functionality missing from the library, and to remedy it, I added implementations of the following
graph embedding algorithms (Lines 212-249):
• Locally linear embedding (LLE)
• Laplacian eigenmaps
• Graph factorization (AKA matrix factorization)
• DeepWalk
• Large Information Networks Embedding (LINE)
• LargeVis
• node2vec
• Hidden Metric Space Model (HMSM)
To use these algorithms for link prediction, I added two link prediction classes,
UESMPredictor, which combines a graph embedding algorithm with a similarity measure, and UECLPredictor, which combines graph embedding with
a classifier. The following similarity measures have been implemented in the
library (Line 258-261):
• Cosine similarity
• Dot product similarity
• L1, L2, and Lp similarity
• Pearson similarity.
In terms of classifiers, the following have been included with the library (Line
250-257):
• Logistic regression
• Feed-forward neural networks (from mlpack)
• Linear support vector machine (from mlpack)
• Bayes classifier (from mlpack)
The two classes UESMPredictor and UECLPredictor can also use any user-defined graph embedding algorithm, similarity measure, or classifier, as long as
they implement the required interface.
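To illustrate the first of these two combinations (embedding plus similarity measure), the sketch below scores candidate links by the cosine similarity of their end-node embedding vectors. It is a generic example rather than the UESMPredictor API, and the hard-coded vectors only stand in for the output of an embedding algorithm such as node2vec or Laplacian eigenmaps.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Cosine similarity between two embedding vectors; the dot product, Lp and
// Pearson similarities listed above follow the same pattern.
double cosineSimilarity(const std::vector<double>& a, const std::vector<double>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return (na > 0.0 && nb > 0.0) ? dot / (std::sqrt(na) * std::sqrt(nb)) : 0.0;
}

int main() {
    // Hard-coded 4-dimensional node embeddings standing in for the output of a
    // graph embedding algorithm.
    std::vector<std::vector<double>> emb = {
        {0.9, 0.1, 0.0, 0.2},  // node 0
        {0.8, 0.2, 0.1, 0.1},  // node 1
        {0.0, 0.9, 0.8, 0.0}}; // node 2
    // Score candidate links by end-point similarity: higher means more likely.
    std::cout << "score(0,1) = " << cosineSimilarity(emb[0], emb[1]) << '\n';
    std::cout << "score(0,2) = " << cosineSimilarity(emb[0], emb[2]) << '\n';
}

The classifier-based variant follows the same shape, except that the two end-node vectors are combined into an edge feature vector and passed to a trained classifier instead of a fixed similarity function.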
12. The documentation in the library is very limited.
Answer: Done (kindly see answer to comment 1 above)
Experimental design
13. The approaches included in the library are not comprehensive, missing several state-of-the-art link prediction methods including those based on graph representation
learning which are included in libraries such as SNAP and GEM.
Answer: Done (kindly see answer to comment 10 above)
" | Here is a paper. Please give your review comments after reading it. |
86 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Android is a free open-source operating system (OS), which allows an in-depth understanding of its architecture. Therefore, many manufacturers are utilizing this OS to produce mobile devices (smartphones, smartwatch, and smart glasses) in different brands, including Google Pixel, Motorola, Samsung, and Sony. Notably, the employment of OS leads to a rapid increase in the number of Android users. However, unethical authors tend to develop malware in the devices for wealth, fame, or private purposes. Although practitioners conduct intrusion detection analyses, such as static analysis, there is an inadequate number of review articles discussing the research efforts on this type of analysis. Therefore, this study discusses the articles published from 2009 until 2019 and analyses the steps in the static analysis (reverse engineer, features, and classification) with taxonomy. Following that, the research issue in static analysis is also highlighted.</ns0:p><ns0:p>Overall, this study serves as the guidance for novice security practitioners and expert researchers in the proposal of novel research to detect malware through static analysis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Mobile devices, such as smartphones, iPads, and computer tablets, have become everyday necessities to perform important tasks, including education, paying bills online, bank transactions, job information, and leisure. Based on the information from an online mobile device production website, Android is one of the popular operating systems (OS) used by manufacturers <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. The open-source platform in Android has facilitated the smartphone manufacturers in producing Android devices of various sizes and types, such as smartphones, smartwatches, smart televisions, and smart glasses. In the most recent decades, the quantity of remarkable Android gadgets accessible worldwide has increased from 38 in 2009 to over 20,000 in 2016 <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. As a result of the demand for this Android OS, the recent statistics from Statista revealed that the number of Android malware increase to 26.6 million in March 2018 <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Moreover, McAfee discovered a malware known as Grabos, which compromises the Android and breaches Google Play Store security <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. It was also predicted that 17.5 million Android smartphones had downloaded this Grabos mobile malware before they were taken down.</ns0:p><ns0:p>Mobile malware is designed to disable a mobile device, allow malicious acts to remotely control the device, or steal personal information <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. Moreover, these malicious acts able to run stealthily and bypass permission if the Android kernel is compromised by mobile malware <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>. In September 2019, a total of 172 malicious applications were detected on Google Play Store, with approximately 330 million installations. According to researchers, the malicious components were hidden inside the functional applications. When the applications are downloaded, it leads to the appearance of popup advertisements, which remain appear even when the application was closed <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. To detect this malware, security practitioners conducting malware analysis, which aims to study the malware characteristics and behaviour. There are dynamic, static, and hybrid analysis.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> shows comparison for static, dynamic and hybrid analysis done from previous researches. Specifically, dynamic analysis is an analysis, which studies the execution and behaviour of the malware <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. However, dynamic analysis is incapable of identifying several parts of the code operating outside the monitoring range. Besides, provided that the dynamic analysis is a high resource-consuming analysis with a high specification for hardware <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>, static analysis is another alternative to detect malware. It is an analysis, which examines malware without executing or running the application. Additionally, this analysis able to identify malware more accurately, which would act under unusual conditions <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. 
This is due to static analysis examine overall parts of a program including parts that excluded in dynamic analysis. Furthermore, static analysis is able to detect unknown malware just as dynamic analysis could <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> and requiring low resources.</ns0:p><ns0:p>To integrate the characteristics of the static and dynamic method, three-layer detection model called SAMAdroid has been proposed by <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref> which combines static and dynamic characteristics. Mobile Sandbox by <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> which proposed to use the results of static analysis to guide the dynamic analysis and finally realize classification. The hybrid analysis technique is great to help in improving the accuracy, but it also has a major drawback such as the waste of time and space for the huge number of malware samples to be detected and analyzed <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>.</ns0:p><ns0:p>There were 430 records identified through database searching. These journals and conferences are mainly from Computer and Security and IEEE Access, which are listed in Table <ns0:ref type='table'>3</ns0:ref>. Collections of the studies that are related to Android malware detection using static analysis in the reference section, where studies take up a small proportion in the primary studies. All the studies related to search terms are taken into account, and the searching range is from January 2009 to December 2019.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.'>Screening</ns0:head><ns0:p>Experiment articles were identified in the static analysis, omitting other unrelated articles. Initially, the searching of articles was specified into a journal article and excluded review articles, books, and conference proceedings. To focus specifically on static analysis, the articles, which combined both static and dynamic analyses, were removed. Another criterion focused on the selection of the articles was the use of English, which therefore removed all non-English articles to avoid any difficulty in translating in the future. The selection of articles took place from 2009 to 2019, totaling the duration to 10 years. This duration was suitable for exploring the evolution of research in security areas. Apart from that, the Android platform was the focus of this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3.'>Eligibility</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>1</ns0:ref> depicts the review that process involved four steps; identification, screening, eligibility, and analysis. The review was performed in mid of 2019. Based on previous studies, the process used similar keywords related to malware detection, static analysis, and security. After the identification process, we remove any duplicated articles. During the screening process, we discover 375 documents and remove a few articles and left 172 articles. This is because the articles were unrelated to the interested area. Lastly, we used 123 articles for review <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.4.'>Data analysis included</ns0:head><ns0:p>Then we analyzed the remaining articles, extracted the abstract, and downloaded the full articles. This is to find the appropriate topic in-depth and to have a strong justification for the research. Then, this process organized the topic and subtopic accordingly based on the static analysis. Qualitative analysis was performed based on content analysis to identify issues related to this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Static analysis</ns0:head><ns0:p>Mobile malware compromises Android devices (smartphone, smartwatch, and smart television) for wealth, stealing data, and personal purposes. The examples of mobile malware include root exploit, botnets, worms, and Trojan. To detect malware, most of security practitioners perform two types of analysis; dynamic and static. Specifically, dynamic analysis is an experiment, which detects malware by executing malware and benign applications to monitor and differentiate their behaviours. However, the monitoring of all behaviours is costly and requires high specifications in terms of device memory, CPU, and storage. Furthermore, the malware is inflicted on a device at a certain time or whenever the attacker decides on it. Accordingly, as the dynamic analysis only monitors behaviours at a certain range of time based on the research period, numerous malware activities outside the research period might be omitted <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>[25] <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. Furthermore, dynamic analysis requires a separate and closed virtual environment to run a malware and observe its behaviour on the system. However, an isolated setup dynamic leads to an impractical analysis in the Android platform due to the increase in power and memory consumption. While power and memory are the most concerning constraints of Android devices, static analysis is the alternative for the dynamic analysis.</ns0:p><ns0:p>Static analysis is a category of analysis, which investigates the malware application code and examine full activities in an application within an unlimited range of time, by without executing the application <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>. The main step of static analysis procedure is the reverse engineer process, which retrieves the whole code and further scrutinises the structure and substance within the application <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>- <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>. Therefore, this analysis can examine the overall code with low requirement for memory resources and minimum CPU processes. Additionally, the analysis process is prompt due to the absence of the application. With this analysis, unknown malware is also identified using enhanced detection accuracy through machine learning approaches <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref> <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>. Table <ns0:ref type='table'>4</ns0:ref> presents the advantages and disadvantages of dynamic and static analyses. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A lot of researchers publish their works using static approaches for malware detection on the Android platform. Even in this static approach, in its turn, contains a number of approaches. For example, there are signature-based approach and other approach are depending on detection and classification of the source code. Signature-based detection utilizes its specification by having an information of malware signatures determined and arranged in advance inspection <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>. However, signature-based approach are not able to detect unknown malware even though this approach is a set of features that uniquely differentiate the executable code <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref>.</ns0:p><ns0:p>Obfuscation is one of the obstacles in the static analysis, which is used by malware authors in their malicious software to evade the intrusion detection or antivirus system <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. The examples of the obfuscation methods are renaming the code, adding unnecessary codes, and encrypting the string. Therefore, security practitioners need to overcome obfuscation to increase their detection results. Accordingly, the alternatives performed by the security practitioners are presented in Section 3.2. Table <ns0:ref type='table'>4</ns0:ref> shows that both static and dynamic analyses have similar limitations despite the selection of the ideal features in minimal amount. In detecting malware, features refer to the attributes or elements to differentiate an application, which may either be malware or benign. Security practitioners are faced with obstacles in investigating various features in all types of categories (e.g., permission, API, directory path, and code-based) and the need to simultaneously reduce these features. Notably, determining the ideal features in minimal amount is crucial to enhance the accuracy of the analyses (e.g., the accuracy of the predictive model) and reduce data and model complexity <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> illustrates the static analysis operation, which consisted of several steps. The first step was the acquirement of the benign and malware datasets in the Android application, each with the (.apk) filename extension. This was followed by the reverse engineering performed on these applications to retrieve the code by extracting a few folders from one .apk file, which consisted of nested files with codes (Java or smali). Furthermore, one .apk would comprise approximately a thousand lines of codes. Therefore, with a total of 1000 applications in one dataset, the security practitioners were required to scrutinise millions of lines of code. With the completion of the reverse engineering, an analysis would be conducted, which involved features. Features consist of a series of application characteristics for the detection of malware, while classification is an approach used to differentiate between malware and benign (normal) application. The following section thoroughly discusses the static analysis, which specifically begins with a focus on the dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.'>Dataset</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the Android malware dataset from different places. Notably, the majority of the datasets were obtained from universities. The datasets were in the form of an Android application package, which was followed by an .apk filename extension. Malgenome <ns0:ref type='bibr' target='#b34'>[34]</ns0:ref> is the name of Android malware dataset, which was made to be publicly available with permission from their administrator. These malware samples, which were collected by North Carolina State University (NCSU) from August 2010 to October 2011, covered multiple families of malware consisting of botnet and root exploit. The characterization of the malware families was based on the method of the installation, the way the malware carried the malicious payloads, and its method of activation.</ns0:p><ns0:p>Androzoo <ns0:ref type='bibr' target='#b35'>[35]</ns0:ref> <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref> is another dataset consisting of approximately more than three million of Android applications (.apk). This dataset originates from the University of Luxembourg to contribute to the community for research purposes and further explore the notable development in the detection of malware, which damages the Android. Drebin <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref> dataset also presents Android malware publicly with strict requirements. A university from Germany (University in Braunschweig, Germany) collected 5560 samples with 179 families. The time range provided for the malware was from August 2010 to October 2012. The university project, which was known as MobileSandbox, was an initiative for the acquirement of samples for academia and industry.</ns0:p><ns0:p>Android malware dataset (AMD) [39] is a public Android malware dataset from the University of South Florida, which consists of 24,650 samples with 71 categorised families. To obtain this dataset, the user is required to acquire permission from the university and provide authentic information with evidence. The academia and the industry are allowed to use these samples for research purposes.</ns0:p><ns0:p>Contagio <ns0:ref type='bibr' target='#b40'>[40]</ns0:ref> dataset presents the malware, which focuses on mobile malware, with a condition that the user should omit one sample to obtain another sample. It provides a dropbox for the user to share their mobile malware samples. According to their blogspot <ns0:ref type='bibr' target='#b40'>[40]</ns0:ref>, the name of the administrator of this dataset is Mila Parkour, who is reachable only through emails. Based on Table <ns0:ref type='table'>5</ns0:ref>, which presents the research articles and the respective datasets, it could be seen that the dataset providers receive significant attention from other universities and the industry. It is hoped that this action would enhance the security of the Android device and its users from time to time.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.'>Reverse engineer</ns0:head><ns0:p>Static analysis is an activity to investigate the code of an application without executing it. In order to investigate, security practitioners implement the reverse engineering method. This method reversed from the executable file to its source code <ns0:ref type='bibr' target='#b41'>[41]</ns0:ref>. This reverse engineering process loads the executable into a disassembler to discover what the program does. Figure <ns0:ref type='figure'>4</ns0:ref> illustrates the tools used to perform a reverse engineering method, which was also adopted by security practitioners to identify Android malware. Table <ns0:ref type='table'>6</ns0:ref> illustrates the tools adopted in the respective articles. malware. Meanwhile, several paths related to Android kernel directory were identified by another study <ns0:ref type='bibr' target='#b66'>[65]</ns0:ref>, such as 'data/local/tmp/rootshell', '/proc', and '/system/bin/su'.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.5.'>Commands</ns0:head><ns0:p>Two types of commands are available, namely 1) root command and 2) botnet command. Specifically, several root commands were identified by <ns0:ref type='bibr' target='#b66'>[65]</ns0:ref> in the Unix machine, such as 'cp', 'cat', 'kill', and 'mount'. Normally, these commands were used by the administrators to execute higher privileged actions in the Unix machine. Provided that Android architecture was based on the Unix kernel, the attackers included root commands in their malware to control the victim's Android devices. Therefore, the identification of root commands is crucial in investigating malwares.</ns0:p><ns0:p>The second type of command is a botnet command. Meanwhile, one type of malware, which is known as a mobile botnet, includes botnet commands in their malware codes, such as 'note', 'push', 'soft', 'window', 'xbox', and 'mark'. The attacker used these commands to communicate with the command and control (C&C) server, while droidanalyzer <ns0:ref type='bibr' target='#b58'>[58]</ns0:ref> combines API, root command, and botnet command into a set of features to detect root exploit and mobile botnet.</ns0:p><ns0:p>Other than ad libraries, certain researchers inspect the Android Debug Bridge (adb) code. ADB <ns0:ref type='bibr' target='#b67'>[66]</ns0:ref> is a tool, which provides a command-line access facility for users or developers to communicate with Android mobile devices. This facility allows the installation of unwanted applications and execution of various Unix by the attacker in the victim's device. Therefore, RODS <ns0:ref type='bibr' target='#b68'>[67]</ns0:ref> is a root exploit detection system for the detection of a root exploit malware with ADB features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.6.'>Function call</ns0:head><ns0:p>In programming, a function call is a declaration, which consists of a name and is followed by an argument in parenthesis. The list of the argument may include any numbers of the name, which are either separated by commas or left empty. Another study by <ns0:ref type='bibr' target='#b69'>[68]</ns0:ref> involved the extraction of a function call through readelf, which was then used for the features in machine learning prediction. Meanwhile, <ns0:ref type='bibr' target='#b70'>[69]</ns0:ref> extracted the function calls in a graph to identify the nodes from the start to the end of the process.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.7.'>Geographic location</ns0:head><ns0:p>Geographic location is a feature, which identifies the origin of the application. The geographic detector was identified as one of the features in research by <ns0:ref type='bibr' target='#b54'>[54]</ns0:ref>. Provided that 35% of the mobile malware families appeared to originate from China with 40% of the facilities originating from Russia, Ukraine, Belorus, Latvia, and Lithuania countries, it was crucial to consider geographic location as one of the features for the detection of Android malware. For this reason, researchers increased the risk signal for the applications originating from the aforementioned countries.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.8.'>Manifest file</ns0:head><ns0:p>Android application is built on the top of the application framework which provides an interface for the user. The program is based on the Android application package file in the (.apk) format, which is also used to install an application in android-based mobile devices. It consists of metainf, resource, assets and library directory, classes.dex, resources.arsc, and androidmanifest.xml file. One of the files, androidmanifest.xml (manifest file), is an essential file with contents of various features, such as permission, intent, hardware component, and components of the application (activities, services, broadcast receivers, and content providers) <ns0:ref type='bibr' target='#b71'>[70]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>a) Permission</ns0:head><ns0:p>Permission is a unique security mechanism for Android devices. To enable the permission, the user needs to allow the application during the installation period. However, many users accidentally enable certain permissions, which leads to access to sensitive security-relevant resources. Therefore, permission features were examined in many studies. Based on the application of permission in several studies to measure the risk of the application, permission was further identified as malicious <ns0:ref type='bibr' target='#b72'>[71]</ns0:ref>, <ns0:ref type='bibr' target='#b73'>[72]</ns0:ref>. Some other studies, such as <ns0:ref type='bibr' target='#b74'>[73]</ns0:ref>[63] <ns0:ref type='bibr' target='#b75'>[74]</ns0:ref>[75] <ns0:ref type='bibr' target='#b77'>[76]</ns0:ref>[77] <ns0:ref type='bibr' target='#b79'>[78]</ns0:ref>[79], used the permission features as the inputs for machine learning prediction.</ns0:p></ns0:div>
<ns0:div><ns0:head>b) Intent</ns0:head><ns0:p>The intent is coded in the manifest file and allows a component of the application to request certain functionality from another component from other application. For example, application A can use the component of application B for the management of photos in the device despite the exclusion of the component from application A. Provided that this feature enables malicious activities among the attackers, several experiments used intent (declared in the manifest file) as one of the features for the detection of malware, such as <ns0:ref type='bibr' target='#b82'>[80]</ns0:ref> and <ns0:ref type='bibr' target='#b83'>[81]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>c) Application component</ns0:head><ns0:p>The manifest file declared application component, which consists of four types, namely 1) activities, 2) services, 3) broadcast receivers, and 4) content providers. Specifically, activity is represented as the user interface or interactive screen to the users, while service refers to an operation occurring in the backgrounds, which perform the long-service process. This is followed by broadcast receivers, which respond to system-wide broadcast announcements. On the other hand, content providers manage a structured set of application data. Overall, these four components follow a life cycle model during execution. Dexteroid <ns0:ref type='bibr' target='#b84'>[82]</ns0:ref> proposed a framework, which systematically guides the event sequences through the reverse engineering/reconstruction of the life cycle models and the extraction of callback sequences from event sequences to detect malicious behaviours.</ns0:p></ns0:div>
<ns0:div><ns0:head>d) Hardware component</ns0:head><ns0:p>The manifest file also incorporated hardware components in the Android application. To illustrate, the developer requested access to the camera of an Android device by declaring it in the manifest file to enable the use of the camera for the application. However, the attacker declared unrelated hardware components in their game application, such as camera and data. As a result, the security researchers were prompted to use hardware component as the features in their experiment <ns0:ref type='bibr' target='#b55'>[55]</ns0:ref> to detect malware <ns0:ref type='bibr' target='#b85'>[83]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.9.'>Network address</ns0:head><ns0:p>Access to the Internet is essential for attackers to retrieve private information of the victim, change the settings, or execute malicious commands. This process requires the incorporation of the Uniform Resource Locator (URL) or network address in the malware code. The examples of sensitive URLs include the Android Market on Google Play, Gmail, Google calendar, Google documents, and XML schemas. These features were used in <ns0:ref type='bibr' target='#b56'>[56]</ns0:ref> and <ns0:ref type='bibr' target='#b86'>[84]</ns0:ref> [85] for malware detection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.10.'>Code-based</ns0:head><ns0:p>Code-based or code structure comprises a line or set of programming language codes in an application. Two studies applied code structures (code chunk grammar) as the features for malware detection, which is focused on the internal structure of the code units <ns0:ref type='bibr'>[86][87]</ns0:ref>. This feature enables the analysis and differentiation between malware and benign applications. Another study by <ns0:ref type='bibr' target='#b66'>[65]</ns0:ref> identified several code-based strings, namely '.exec', 'forked', 'setptywindowsize', and 'createsubprocess'. In comparison with the normal application, it was found that the attacker frequently used these code-based features in the development of malware. Therefore, these features were also used in this study to detect malware.</ns0:p><ns0:p>Opcode (operation code) is another code-based feature. It is a part of the instruction to inform the CPU regarding the tasks to be fulfilled. Assembly language used this opcode to execute the instruction. Also referred to as bytecode, the examples of an opcode for Android included OP_ADD_DOUBLE, OP_ADD_FLOAT, OP_ADD_INT_2ADDR, and OP_SUB_LONG <ns0:ref type='bibr' target='#b90'>[88]</ns0:ref>. Specifically, this feature was adopted in the studies by <ns0:ref type='bibr' target='#b91'>[89]</ns0:ref>[90] <ns0:ref type='bibr' target='#b93'>[91]</ns0:ref> and <ns0:ref type='bibr' target='#b94'>[92]</ns0:ref> to detect Android malware in the static analysis. Further examples of the features in this section are method <ns0:ref type='bibr' target='#b95'>[93]</ns0:ref>, opcode <ns0:ref type='bibr' target='#b94'>[92]</ns0:ref>, byte stream @ byte block <ns0:ref type='bibr' target='#b93'>[91]</ns0:ref>, Dalvik code <ns0:ref type='bibr' target='#b70'>[69]</ns0:ref>, and code involving encryption <ns0:ref type='bibr' target='#b96'>[94]</ns0:ref>. The selection of the features by security practitioners is followed by classification. This process was performed to receive the features as input and differentiate between either the application malware or benign (normal).</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> depicts that researchers prefer to investigate permission and API features compare to others. However, the trend in permission features is decline from 2013 until 2018. However, API features takes place in previous experiments as it increased from six (2014) to <ns0:ref type='bibr'>9 (2019)</ns0:ref>. This indicates that the API trend would increase in following year in static detection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4.'>Classification</ns0:head><ns0:p>In the classification process for static analysis, many security analysts used two types of methods; 1) Machine learning (ML) and 2) Graph. The following section presents the ML studies with static features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4.1.'>Machine learning (ML)</ns0:head><ns0:p>Machine learning is a scientific discipline, which is capable to predict future decisions based on the experience it has gained through past inputs (learning set), followed by a prediction of the outputs. Basing on a given dataset, the learning set makes intelligent decisions according to certain algorithms. One of the machine learning types is supervised based on the data for the training stage to create a function. Furthermore, each part of the training data contains input (features or characteristics) and output (class label-malware and benign). This is followed by the training stage, which calculates the approximate distance between the input and output examples to create a model. This training stage could classify unknown applications, such as malware or benign application. Four types of ML are present, such as 1) classical learning; 2) reinforcement learning, 3) neural network and deep learning, and 4) ensemble method. Figure <ns0:ref type='figure'>7</ns0:ref> illustrates the ML taxonomy, which starts with classical learning. a) Supervised Learning Supervised learning (SL) is a process of learning from previous instances to predict future classes. Therefore, the prediction of the class label involves the construction of a concise model from previous experience. The machine learning classifier is then used to test the unknown class <ns0:ref type='bibr' target='#b97'>[95]</ns0:ref>. To detect Android malware with static features, the SL method is widely used by security practitioners. Accordingly, the previous articles adopting this method are illustrated in Table <ns0:ref type='table' target='#tab_1'>7</ns0:ref>. Unsupervised learning is another type of learning involved in machine learning. It is a clustering technique where the data is unlabeled and has also been used in computer security areas, including malware detection and forensic <ns0:ref type='bibr' target='#b99'>[96]</ns0:ref>. Clustering refers to the division of a large dataset into smaller data sets with several similarities. It classifies a given object set through a certain number of clusters (assume k clusters) to determine the k centroids assigned for each cluster. In this case, this algorithm selects the centroid by random from the applications set, extracts each application from a given dataset, and assigns it to the nearest centroid. Table <ns0:ref type='table' target='#tab_1'>7</ns0:ref> tabulates the previous articles, which adopted this method. c) Reinforcement learning A reinforcement learning model consists of an agent (a set of actions A) and an environment (the state space S) <ns0:ref type='bibr' target='#b100'>[97]</ns0:ref>. Deep reinforcement learning was introduced by reinforcement agents as a framework to play Atari games, which often exceed human performance <ns0:ref type='bibr' target='#b101'>[98]</ns0:ref> <ns0:ref type='bibr' target='#b102'>[99]</ns0:ref>. The advances in deep learning may extract high-level features from raw sensory data, leading to breakthroughs in computer vision and speech recognition. In the case of deep learning, the agent would be required to learn a value function in an end-to-end way, which takes raw pixels as input and predicts the output rewards for each action.</ns0:p><ns0:p>The learned value function is called deep Q learning, in which Q function is learned and refined from over hundreds of games <ns0:ref type='bibr' target='#b103'>[100]</ns0:ref>. 
In <ns0:ref type='bibr' target='#b101'>[98]</ns0:ref>, the Q-learning network was trained with stochastic gradient descent to update the weights, and an experience-replay mechanism that samples previous transitions at random was used to smooth the training distribution over past behaviours, overcoming the problems of correlated data and non-stationary distributions. <ns0:ref type='bibr' target='#b100'>[97]</ns0:ref> propose a framework based on reinforcement learning (RL) for attacking static portable executable (PE) anti-malware engines. Meanwhile, a DQN-based mobile malware detection scheme was proposed by <ns0:ref type='bibr' target='#b104'>[101]</ns0:ref> to enhance detection performance; the simulation results show increased malware detection accuracy and reduced detection delay compared to a Q-learning based detection scheme.</ns0:p></ns0:div>
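<ns0:p>To make the supervised workflow above concrete, the minimal sketch below trains a linear support vector machine (one of the classifiers listed in Table 7) on binary static-feature vectors. The feature matrix and labels are random placeholders standing in for the permission and API-call indicators a real extraction step would produce, so it illustrates the general procedure rather than any surveyed study's exact pipeline.</ns0:p>
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Toy stand-in for a static-feature matrix: each row is an app, each column a
# binary indicator (1 = the app requests a given permission / calls a given API).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50))
y = rng.integers(0, 2, size=200)  # 1 = malware, 0 = benign (labels from the dataset)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC(dual=False).fit(X_train, y_train)  # SVM, one of the classifiers in Table 7
print(classification_report(y_test, clf.predict(X_test)))
```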
<ns0:div><ns0:head>d) Neural Network and Deep Learning</ns0:head><ns0:p>The evolution of neural networks (NN) has been associated with various challenges since the mid-20th century. McCulloch and Pitts drew the first inspiration for NNs from biological neurons in 1943 and proposed a computational model for the development of hypothetical nets. Although Nathaniel Rochester simulated this proposal at the IBM research laboratory, the attempt was ultimately unsuccessful. Developed by Frank Rosenblatt at the Cornell Aeronautical Laboratory, the perceptron became the first learning machine <ns0:ref type='bibr' target='#b105'>[102]</ns0:ref>. Building on these upgrades to NNs, deep learning (DL) was developed in 2006 and has since been used in almost every application. As a new variation of the classical multilayer perceptron (MLP), DL aims to produce high-level and flexible features from raw pixel data to help generalise the classification. Furthermore, DL also handles complex applications containing millions of data points, which require a large number of neurons and hidden layers. A few DL frameworks have been developed in recent years, such as TensorFlow <ns0:ref type='bibr' target='#b106'>[103]</ns0:ref>, Caffe <ns0:ref type='bibr' target='#b107'>[104]</ns0:ref>, and Theano <ns0:ref type='bibr' target='#b108'>[105]</ns0:ref>, to ensure an efficient implementation of deep network (DN) architectures and avoid unnecessary coding from scratch <ns0:ref type='bibr' target='#b105'>[102]</ns0:ref>. Additionally, the DL method extracts features at different layer levels, which may be high or low.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> depicts the differences between ML and DL. It shows that ML requires security practitioners to extract the features manually and to select an ML classifier suitable for the selected features. DL, in contrast, integrates feature extraction and malware classification. It trains the model end-to-end from the Android application package (.apk) files and their categories, each labelled as malware or benign, and builds a prediction model through automatic feature selection <ns0:ref type='bibr' target='#b109'>[106]</ns0:ref>.</ns0:p><ns0:p>As one of the major models in deep learning, the convolutional neural network (CNN) has been widely used for image recognition <ns0:ref type='bibr' target='#b110'>[107]</ns0:ref>. In the past few years, many studies have implemented deep neural networks (DNN) to classify malware <ns0:ref type='bibr' target='#b111'>[108]</ns0:ref>[109] <ns0:ref type='bibr' target='#b94'>[92]</ns0:ref>. Additionally, although recurrent neural networks have been explored since the 1980s, they remained uncommon due to several issues <ns0:ref type='bibr' target='#b111'>[108]</ns0:ref>. Several machine learning methods have addressed network or malware attacks on personal computers or mobile devices. At the same time, several techniques were proposed by researchers who applied DL algorithms to detect or categorise malware using static, dynamic, or hybrid approaches, to detect network intrusions and phishing/spam attacks, and to inspect website defacements <ns0:ref type='bibr' target='#b114'>[110]</ns0:ref>.</ns0:p></ns0:div>
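<ns0:p>The following hedged sketch shows the end-to-end DL route in TensorFlow (one of the frameworks cited above): a small multilayer perceptron trained on binary static-feature vectors. The data are random placeholders; a production detector would learn from features derived from the .apk itself and would usually employ a deeper CNN or RNN architecture.</ns0:p>
```python
import numpy as np
import tensorflow as tf

# The same kind of binary static-feature vectors as before; a small multilayer
# perceptron stands in for the deeper CNN/DNN architectures surveyed above.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 100)).astype("float32")
y = rng.integers(0, 2, size=500).astype("float32")  # 1 = malware, 0 = benign

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # malware probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```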
<ns0:div><ns0:head>e) Ensemble method</ns0:head><ns0:p>Another technique in machine learning and pattern recognition is ensemble learning. An increase in the use of ensemble learning methods can be seen in the computational biology field due to their unique advantages in managing small sample sizes, high dimensionality, and complex data structures <ns0:ref type='bibr' target='#b115'>[111]</ns0:ref>. The function of ensemble learning is to build a prediction model by combining the strengths of a collection of simpler base models <ns0:ref type='bibr' target='#b116'>[112]</ns0:ref>. A few approaches are applied in ensemble methods, such as bagging, boosting, and random forests. Ensembling is also a simple device for improving the predictive performance of a base procedure.</ns0:p><ns0:p>The bagging procedure acts as a variance reduction scheme for the base procedure, while the boosting methods mainly reduce the bias of the base procedure; this is the significant difference between the bagging and boosting ensemble methods. Compared to bagging and boosting, the random forest approach is a highly distinguished ensemble method, first proposed by Amit and Geman <ns0:ref type='bibr' target='#b117'>[113]</ns0:ref>. While random forests perform on a level similar to boosting, they can exhibit better predictive performance.</ns0:p><ns0:p>Table <ns0:ref type='table'>8</ns0:ref> shows previous works that used the different types of machine learning mentioned above. From the table, we can see that classical learning is still used in experiments, but many works now use deep learning and graph methods. The current trends show a demand for deep learning techniques to defend against an increasing number of sophisticated malware attacks, as deep-learning-based defences have become a vital component of economic and national security. Many recent studies on Android malware detection have leveraged graph analysis, as discussed in the next section.</ns0:p></ns0:div>
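<ns0:p>As a minimal illustration of the three ensemble flavours just described, the sketch below compares bagging, boosting, and a random forest with scikit-learn on placeholder static-feature data; the scores are meaningless on random data and are shown only to demonstrate the workflow.</ns0:p>
```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Random binary static-feature vectors as placeholder data.
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 40))
y = rng.integers(0, 2, size=300)

models = {
    "bagging (variance reduction)": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "boosting (bias reduction)": AdaBoostClassifier(n_estimators=50),
    "random forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
    print(f"{name}: {score:.3f}")
```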
<ns0:div><ns0:head n='2.4.2.'>Graph</ns0:head><ns0:p>The use of graphs is another method in machine learning and pattern recognition, performed by investigating data- and control-flow analysis. It is also capable of identifying unknown malware by examining the flow of the code. This method is preferred by security analysts because the flow remains uniform despite the changes malware authors make to API calls to evade intrusion detection systems. The types of analysis in the graph method include the call graph, inter-component call graph (ICCG), control-flow graph (CFG), and dependence graph, while Table <ns0:ref type='table'>9</ns0:ref> lists the previous works of research on static malware detection using the graph method.</ns0:p><ns0:p>A call graph (also known as a flow graph) is a graph representing the control and data flow of the application, which captures the exchange of information between procedures. A node in the graph represents a procedure or function, and an edge from x to y indicates that procedure x calls procedure y. Apposcopy <ns0:ref type='bibr' target='#b52'>[52]</ns0:ref> presents its own form of call graph, known as the inter-component call graph (ICCG), to match malware signatures. A directed graph whose nodes are the components of an application, the ICCG is built from a call graph and the results of pointer analysis. The objective of Apposcopy is to capture inter-component communication (ICC), calls, and flow relations.</ns0:p><ns0:p>Another graph, the control flow graph (CFG), is also applied by many security analysts to investigate malware programmes. Woodpecker <ns0:ref type='bibr' target='#b49'>[49]</ns0:ref> builds the CFG starting from each entry point (activity, service, receiver, content provider) defined by the permissions stated in the androidmanifest.xml file. Furthermore, the public interfaces or services reachable along an execution path are discovered through the flow graph. Woodpecker considers this a capability leak if the path is neither guarded by a permission requirement nor prevented from being invoked by another, unrelated application. The same graph was applied in subsequent works of research, namely Flowdroid <ns0:ref type='bibr' target='#b54'>[54]</ns0:ref>, Dendroid <ns0:ref type='bibr' target='#b88'>[86]</ns0:ref>, <ns0:ref type='bibr' target='#b77'>[76]</ns0:ref>, Asdroid <ns0:ref type='bibr' target='#b53'>[53]</ns0:ref>, Anadroid <ns0:ref type='bibr' target='#b50'>[50]</ns0:ref>, Adrisk <ns0:ref type='bibr' target='#b44'>[44]</ns0:ref>, and Dexteroid <ns0:ref type='bibr' target='#b84'>[82]</ns0:ref>.</ns0:p><ns0:p>Another graph is the dependency graph, which illustrates the dependencies of several objects on each other. An example can be seen in dead-code elimination, in which the graph identifies the dependencies between operations and variables: variables on which no operation depends are considered dead and should be deleted. The studies that adopted this type of graph are CHEX <ns0:ref type='bibr' target='#b118'>[114]</ns0:ref>, Dnadroid <ns0:ref type='bibr' target='#b119'>[115]</ns0:ref>, Droidlegacy <ns0:ref type='bibr' target='#b45'>[45]</ns0:ref>, and <ns0:ref type='bibr' target='#b51'>[51]</ns0:ref>.</ns0:p></ns0:div>
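<ns0:p>A toy example of the call-graph idea is sketched below with NetworkX: methods are nodes, calls are edges, and a path from an entry point to a sensitive API is the kind of evidence graph-based detectors reason about. The method names are invented for illustration only.</ns0:p>
```python
import networkx as nx

# A miniature call graph: nodes are methods, an edge x -> y means "x calls y".
cg = nx.DiGraph()
cg.add_edges_from([
    ("MainActivity.onCreate", "Helper.init"),
    ("Helper.init", "Helper.collect"),
    ("Helper.collect", "SmsManager.sendTextMessage"),  # sensitive API sink
    ("MainActivity.onCreate", "Logger.log"),
])

entry, sink = "MainActivity.onCreate", "SmsManager.sendTextMessage"
if nx.has_path(cg, entry, sink):
    # A reachable path from an entry point to a sensitive API call.
    print(" -> ".join(nx.shortest_path(cg, entry, sink)))
```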
<ns0:div><ns0:head n='2.4.3'>Others</ns0:head><ns0:p>Besides machine learning and graphs, several security practitioners adopted different methods, such as the Normalized Compression Distance (NCD). Adopted in the studies by <ns0:ref type='bibr' target='#b120'>[116]</ns0:ref> and <ns0:ref type='bibr' target='#b121'>[117]</ns0:ref>, this method measures the similarities between malware samples and represents them in the form of a distance matrix. Although many malware families evolve over time, some of their behaviour patterns remain similar to each other; calculating the similarities using NCD identifies the malware samples that lie a short distance apart.</ns0:p><ns0:p>A study known as DelDroid <ns0:ref type='bibr' target='#b122'>[118]</ns0:ref> implemented a method called the Multiple-Domain Matrix (MDM). The MDM models a complex system across multiple domains and is based on the Design-Structure Matrix (DSM) model; it is formed by connecting DSM models with each other. The study initialised multiple domains in the MDM to represent the architecture of an Android system for privilege analysis. To illustrate, incorporating certain definitions into the MDM representation of the architecture enables DelDroid to identify communication between applications that may result in an unauthorised malware attack.</ns0:p><ns0:p>Another previous static experiment was conducted on the MD5 signature of the application to detect malware <ns0:ref type='bibr' target='#b58'>[58]</ns0:ref>. In the first step, the study assigned the application level C (the lowest level of suspicion), followed by calculation and cross-referencing in the database of signatures. The application was recorded if the result was positive, but it was identified as malware if the suspicion level reached R. The system examined the files inside the application to find any matching MD5 signature.</ns0:p><ns0:p>Androsimilar <ns0:ref type='bibr' target='#b93'>[91]</ns0:ref> practised a method known as a statistical similarity digest hashing scheme, which inspects the similarity of the byte stream based on robust statistical malicious static features. It is also a foot-printing method, which identifies the regions or areas of statistical similarity with known malware. Following that, it generates variable-length signatures to detect unknown (zero-day) malware.</ns0:p><ns0:p>The following study is DroidMOSS <ns0:ref type='bibr' target='#b62'>[62]</ns0:ref>, which distinguishes between repackaged (modified) and original applications. This function is important because many Android repackaged applications contain malicious activities. The study used a fuzzy hashing technique, generating a fingerprint to localise and detect any modifications previously applied to the original application. It then calculated the edit distance to measure the similarity between applications; when the similarity exceeds a certain value, the application is considered a modified sample.</ns0:p><ns0:p>Under another static experiment, a study by <ns0:ref type='bibr' target='#b86'>[84]</ns0:ref> adopted a method known as a risk score weight, which calculates a risk score based on selected features in the code. When the features were identified, the score increased according to certain risky patterns of properties. 
Particularly, the patterns were based on the different likelihoods of given situations between normal and malware samples. Lastly, the percentage likelihood was calculated. Figure <ns0:ref type='figure'>9</ns0:ref> shows that both ML and graph were the popular methods among security practitioners in static analysis. The graph method was found to exceed the ML method in 2011, 2012, and 2014, although ML was preferred over graphs in the other years. Overall, this situation reveals that graphs and ML are both favourable options in static experiments.</ns0:p><ns0:p>A study started to utilise DL (part of ML) in a static experiment in 2019, which also combined DL (a convolutional neural network, CNN) with the control flow graph (CFG). Notably, given that the API was the only feature utilised in this study, many future opportunities remain to combine different DL classifiers (recurrent neural networks - RNN, generative adversarial networks - GAN, or deep belief networks - DBN) with other features besides the API and with different types of graph. It is noteworthy that DL could also be combined with NCD and MDM.</ns0:p></ns0:div>
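<ns0:p>The NCD mentioned above has a simple closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C denotes compressed length. The sketch below computes it with zlib on invented byte streams standing in for application code; it illustrates the measure itself, not any specific study's implementation.</ns0:p>
```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar inputs, near 1 for unrelated ones."""
    cx, cy, cxy = len(zlib.compress(x)), len(zlib.compress(y)), len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Byte streams standing in for the code of three samples (invented content).
sample_a = b"const-string v0; invoke-static sendTextMessage;" * 40
sample_b = sample_a + b"nop; " * 8          # lightly mutated variant of sample_a
sample_c = b"totally different benign byte stream;" * 40

print(ncd(sample_a, sample_b))  # small distance: likely the same family
print(ncd(sample_a, sample_c))  # larger distance: unrelated samples
```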
<ns0:div><ns0:head n='3.'>Open research issues</ns0:head><ns0:p>This section explains the issues involved in static analysis, derived from previous research articles. Specifically, we discuss the advantages and disadvantages of an open-source operating system, which relate to the availability of the kernel application code. Another issue in static analysis is code obfuscation, used by malware developers to make malware harder to detect, together with the research done to counter obfuscation. We then review the overall set of static analysis articles, how to detect unknown malware, the combination of static and dynamic analyses, resource consumption, future work, and features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.'>Open source advantages and disadvantages</ns0:head><ns0:p>Because Android is an open-source operating system, the amount of Android malware continues to increase. To illustrate, one aspect of this open-source availability is the kernel application code. Samsung officially provides its kernel operating system to the public <ns0:ref type='bibr' target='#b123'>[119]</ns0:ref> for kernel enhancement or private purposes, and anyone may download it according to the mobile device version. The code is also available from Lenovo <ns0:ref type='bibr' target='#b124'>[120]</ns0:ref>, LG <ns0:ref type='bibr' target='#b125'>[121]</ns0:ref>, Sony <ns0:ref type='bibr' target='#b126'>[122]</ns0:ref>, Htc <ns0:ref type='bibr' target='#b127'>[123]</ns0:ref>, Asus <ns0:ref type='bibr' target='#b128'>[124]</ns0:ref>, Motorola <ns0:ref type='bibr' target='#b129'>[125]</ns0:ref>, and other mobile providers. Consequently, this code availability may allow malware writers to identify and exploit vulnerabilities in the kernel's operating system.</ns0:p><ns0:p>Even though this open-source availability makes the code easy to access, it is also available for security practitioners to research. These practitioners may be university researchers, staff of the mobile providers, Android Google researchers, freelance programmers, and the wider Android community. They invent various frameworks, algorithms, and suggestions to improve the security of the operating system. The kernel version is updated every year, and the mobile providers are informed regarding the kernel updates. These studies, including static analysis, increase the confidence of Android users worldwide.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.'>Obfuscation</ns0:head><ns0:p>Static analysis involves reverse engineering, such as decompiling and disassembling, while malware developers use obfuscation to make the decompiling process more difficult and confusing. Obfuscation is a technique that makes programmes harder to understand, leading security analysts to fail to distinguish between malware and benign applications. Notably, it is a well-known obstacle for static analysis. Figure <ns0:ref type='figure' target='#fig_7'>10</ns0:ref> illustrates the types of obfuscation, which include encryption, oligomorphic, polymorphism, and metamorphism <ns0:ref type='bibr' target='#b130'>[126]</ns0:ref> <ns0:ref type='bibr' target='#b131'>[127]</ns0:ref>.</ns0:p><ns0:p>The encryption method is extensively practised by malware writers. In this case, the important code or strings that would reveal the malware to a detector or security practitioner are identified, encrypted, and converted to ciphertext. Various algorithms are available to encrypt the code, such as Caesar, Playfair, the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), and Rivest-Shamir-Adleman (RSA). Therefore, for the security practitioner to understand the behaviour of the malware, the encrypted code must be decrypted using the correct decryptor <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>.</ns0:p><ns0:p>Besides being able to mutate or change its decryptor, oligomorphic malware is also able to generate hundreds of different decryptors <ns0:ref type='bibr' target='#b131'>[127]</ns0:ref>. Consequently, the security practitioner needs to try different decryptors multiple times until the code is returned to its normal form. Nevertheless, this type of obfuscation does not affect the size or shape of the code. Another type of obfuscation is polymorphism, which also relies on a decryptor but does affect the size or shape of the code. Compared to oligomorphic malware, it is more advanced due to the incorporation of code transposition, register reassignment, dead-code (nop) insertion, and armoring. Meanwhile, metamorphism is an approach beyond the oligomorphic and polymorphic types due to the absence of a decryptor in its mechanism. Therefore, its constant body can be hidden from memory, increasing the difficulty of detecting the malware through static investigation.</ns0:p><ns0:p>The following are the obfuscation methods regularly used in polymorphic and metamorphic obfuscation <ns0:ref type='bibr' target='#b131'>[127]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>a) Code transposition</ns0:head><ns0:p>Code transposition is a method that restructures the order of the original code without affecting its behaviour. This process is performed in two ways. The first is a random restructuring of the original code by including jumps or unconditional branches; however, security practitioners can detect this obfuscation by removing those jumps or unconditional branches. The second is the production of new generations by selecting and reordering independent instructions that have no impact on one another. Adopting this second method is challenging for the malware writer, while security practitioners find this form of obfuscation difficult to detect.</ns0:p></ns0:div>
<ns0:div><ns0:head>b) Register reassignment</ns0:head><ns0:p>Register reassignment is another method of obfuscation, which shifts the registers used by the code from one generation to another. This is performed without changing the behaviour of the code, keeping the programme similar to its original state. c) Dead-code/nop insertion Also known as nop insertion, dead-code insertion is a method that adds several unnecessary instructions to the code while keeping its behaviour identical to the original. Nevertheless, in certain situations, security practitioners are able to detect this obfuscation by removing the inserted code (a toy illustration is given after subsection d below).</ns0:p></ns0:div>
<ns0:div><ns0:head>d) Substitution of equivalent instruction sequences</ns0:head><ns0:p>The original code is changed by substituting several instructions with equivalent ones. To illustrate, a SUB instruction may be replaced with an equivalent XOR, while PUSH may be replaced with MOV.</ns0:p></ns0:div>
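<ns0:p>The toy sketch below illustrates why dead-code (nop) insertion defeats naive hash-based signatures and why normalising the code before hashing helps; the smali-like instructions and class names are invented for illustration only.</ns0:p>
```python
import hashlib

# Two functionally equivalent instruction sequences; the second has dead-code
# (nop) insertions, one of the transformations described above.
original = [
    'const-string v0, "payload"',
    "invoke-static {v0}, Lcom/example/Net;->send(Ljava/lang/String;)V",
]
mutated = ["nop", original[0], "nop", "nop", original[1]]

def digest(instructions):
    return hashlib.md5("\n".join(instructions).encode()).hexdigest()

print(digest(original) == digest(mutated))  # False: a raw hash signature no longer matches

def strip_dead_code(instructions):
    # A crude counter-measure: remove nop instructions before hashing.
    return [ins for ins in instructions if ins != "nop"]

print(digest(strip_dead_code(original)) == digest(strip_dead_code(mutated)))  # True
```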
<ns0:div><ns0:head n='3.2.1'>Research to counter obfuscation</ns0:head><ns0:p>To overcome obfuscation, many studies have pursued different approaches. The study by <ns0:ref type='bibr' target='#b119'>[115]</ns0:ref> used program dependence graphs (PDGs) to withstand the program transformations used in obfuscation. Droidlegacy <ns0:ref type='bibr' target='#b45'>[45]</ns0:ref> uses graph nodes to represent Java classes when detecting light obfuscation. Droidanalytics <ns0:ref type='bibr' target='#b91'>[89]</ns0:ref> and Drebin <ns0:ref type='bibr' target='#b55'>[55]</ns0:ref> extract the API calls that will be executed at run time. To resist control-flow obfuscation, Apposcopy uses inter-component communication (ICC) predicates to write its signatures. Research by <ns0:ref type='bibr' target='#b132'>[128]</ns0:ref> uses jadx, a reverse-engineering tool, to de-obfuscate the obfuscated code. A summary of the studies conducted to overcome obfuscation is shown in Table <ns0:ref type='table' target='#tab_2'>10</ns0:ref>.</ns0:p></ns0:div>
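<ns0:p>A hedged sketch of the indicator-based idea behind approaches such as Drebin is given below: it scans decompiled sources (for example, the output of jadx) for API names associated with dynamic code loading and encryption. The indicator list, directory layout, and file extension are illustrative assumptions, not a faithful reimplementation of any tool.</ns0:p>
```python
import re
from pathlib import Path

# Indicators of dynamic code loading and cryptographic use that obfuscated
# malware often exhibits (the same API names highlighted above for Drebin).
INDICATORS = [r"DexClassLoader", r"Cipher\.getInstance", r"crypto\.spec\.DESKeySpec"]

def scan_decompiled_sources(src_dir):
    """Scan .java files produced by a decompiler such as jadx for obfuscation indicators."""
    hits = {}
    for path in Path(src_dir).rglob("*.java"):
        text = path.read_text(errors="ignore")
        found = [p for p in INDICATORS if re.search(p, text)]
        if found:
            hits[str(path)] = found
    return hits

# Hypothetical output directory of `jadx -d app_sources app.apk`:
# print(scan_decompiled_sources("app_sources"))
```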
<ns0:div><ns0:head n='3.2.2'>Advantage of obfuscation</ns0:head><ns0:p>Despite the adoption of obfuscation by malware writers or attackers to evade detection, obfuscation also offers the following advantages from other points of view: a) Reduction of the size of the application Google <ns0:ref type='bibr' target='#b133'>[129]</ns0:ref> encourages developers to enable shrinking in their release builds to remove any unused code and resources. Furthermore, because obfuscation shortens the names of the classes and members in the code, the developer is able to reduce the size of the application. Notably, application size is a significant concern on Android handheld devices (smartphones, smart glasses, and smartwatches) with limited storage and resources.</ns0:p><ns0:p>b) The difficulty for the malware writer to understand an obfuscated normal application To develop malware in certain situations, malware writers need to reverse engineer a normal application for repackaging. Obfuscation can confuse them, making it harder to steal private information and to discover application vulnerabilities from the obfuscated normal or benign application code <ns0:ref type='bibr' target='#b134'>[130]</ns0:ref>. c) Security practitioners can detect malware easily Obfuscation also facilitates the detection of malware by researchers <ns0:ref type='bibr' target='#b135'>[131]</ns0:ref>. To illustrate, in certain situations malware regularly adopts similar obfuscation marks, which are unlikely to exist in normal applications. Therefore, security practitioners are able to detect malware from the presence of these marks. Given all these advantages and drawbacks, continuous research on obfuscation is crucial to obtain better malware detection results from static analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.'>The list of all articles in the detection of malware in static analysis</ns0:head><ns0:p>To identify the trends in the detection of malware through static analysis, this section presents a list of previous works of research covering all areas (year, features, and classification). Table <ns0:ref type='table' target='#tab_0'>11</ns0:ref> lists DroidARA <ns0:ref type='bibr' target='#b136'>[132]</ns0:ref>, a 2019 study that combined DL and graph analysis to differentiate between malware and normal applications. It applied a call graph to extract API features and a convolutional neural network (CNN) for classification. At the time of writing, this is a new trend in detecting Android malware; in future research, it is likely that more combinations of this kind, with different features, will appear.</ns0:p><ns0:p>From the list, most researchers used API and manifest-file features in their experiments to detect malware, which shows that API calls were the codes most commonly exploited by malware developers when creating malware. An application is distributed as an Android application package (.apk) file, which is used to install it on Android-based mobile devices. Every app project must have an androidmanifest.xml file at the root of the project source set. This manifest file is normally in binary form inside the APK; when it is selected in the APK Analyzer, the XML form is reconstructed and displayed, and entries from any library the application depends on are merged into the final androidmanifest.xml record. The remaining package contents fall under the apk, xml, and dex property features.</ns0:p><ns0:p>Besides the combination of DL and graph, ML and graph were also combined in the studies by <ns0:ref type='bibr' target='#b89'>[87]</ns0:ref> in 2016 and <ns0:ref type='bibr' target='#b77'>[76]</ns0:ref> in 2012. These studies utilised the same kind of graph, the control flow graph (CFG), and indicate that the combination of ML and graph improves the detection results. Therefore, future work is suggested to test this combination on different static features. Other classification approaches (Multiple-Domain Matrix (MDM), MD5 signature, similarity digest hashing, normalized compression distance (NCD), and the fuzzy hashing technique) were also useful for detecting malware with static features. These classifications also open up future combinations with ML, DL, and graphs.</ns0:p></ns0:div>
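<ns0:p>Before any of the classifiers in Table 11 can be applied, the extracted manifest and API features must be turned into a numeric matrix. The sketch below shows one common way to do this with scikit-learn's DictVectorizer; the feature names and the two example apps are placeholders, not data from any surveyed study.</ns0:p>
```python
from sklearn.feature_extraction import DictVectorizer

# Each app is summarised by the static features emphasised above: requested
# permissions (manifest) and API calls (dex code).
apps = [
    {"perm:SEND_SMS": 1, "perm:INTERNET": 1, "api:sendTextMessage": 1},  # malware-like
    {"perm:INTERNET": 1, "api:openConnection": 1},                       # benign-like
]
labels = [1, 0]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(apps)        # binary design matrix, one column per distinct feature
print(sorted(vec.vocabulary_))     # column names
print(X)                           # ready to feed into the ML/DL classifiers above
```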
<ns0:div><ns0:head n='3.4.'>Detect unknown malware</ns0:head><ns0:p>In principle, static analysis is unable to discover new or unknown malware because it only examines the code of the application without executing it; certain malware only executes certain parts whenever the application runs. Given this drawback of static analysis in identifying unknown malware, many security practitioners started to adopt machine learning, such as <ns0:ref type='bibr' target='#b137'>[133]</ns0:ref>[13], Drebin <ns0:ref type='bibr' target='#b55'>[55]</ns0:ref>, <ns0:ref type='bibr' target='#b57'>[57]</ns0:ref>, Droidapiminer <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>, <ns0:ref type='bibr' target='#b139'>[134]</ns0:ref>, Androsimilar <ns0:ref type='bibr' target='#b93'>[91]</ns0:ref>, <ns0:ref type='bibr' target='#b46'>[46]</ns0:ref>, <ns0:ref type='bibr' target='#b141'>[135]</ns0:ref>, <ns0:ref type='bibr' target='#b121'>[117]</ns0:ref>, <ns0:ref type='bibr' target='#b65'>[64]</ns0:ref>, and <ns0:ref type='bibr' target='#b66'>[65]</ns0:ref>. Similarly, the graph approach is also suitable for this identification, as shown in <ns0:ref type='bibr' target='#b142'>[136]</ns0:ref>, Riskranker <ns0:ref type='bibr' target='#b143'>[137]</ns0:ref>, and Dendroid <ns0:ref type='bibr' target='#b88'>[86]</ns0:ref>. The <ns0:ref type='bibr' target='#b142'>[136]</ns0:ref> study utilised a data dependence graph (DDG), while Riskranker and Dendroid employed the control-flow graph (CFG).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5.'>Combination of static and dynamic analyses</ns0:head><ns0:p>It was shown in <ns0:ref type='bibr' target='#b130'>[126]</ns0:ref> that static analysis alone is inadequate for the detection of malware and should be combined with dynamic analysis to detect malware effectively. Compared to static analysis, dynamic analysis is not defeated by obfuscation techniques. Essentially, since each type of analysis (static and dynamic) has its own advantages and drawbacks, combining static and dynamic analyses increases the effectiveness of countermeasures against malware.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6.'>Resource consumption in Android OS and deep learning</ns0:head><ns0:p>Deep learning (DL) is a subset of machine learning in artificial intelligence (AI), also known as deep neural networks or deep neural learning. Notably, even with unlabeled and unstructured data, DL is capable of learning and predicting the output. It imitates the human brain in processing data, developing patterns from those data, and implementing decision making. Current trends suggest that the deep learning technique has potential for further discovery: implementing it enables DL to automatically determine the ideal features for prediction and classification. Currently, DL is widely used in almost every area, such as large-scale image recognition tasks, automated driving, new cancer cell detection, hearing and speech translation, and aerospace identification <ns0:ref type='bibr' target='#b144'>[138]</ns0:ref>.</ns0:p><ns0:p>However, DL requires substantial computing power, including a large amount of graphics processing unit (GPU) capacity depending on the data to be processed <ns0:ref type='bibr' target='#b144'>[138]</ns0:ref>. This situation leads to an issue in the detection of malware that attacks Android devices. Because an Android mobile device has limited computing power, the adoption of DL becomes a main concern. However, it is possible to transfer information from the Android device to a cloud provider so that the DL process is executed there and its results are returned to the device. Therefore, the large-scale adoption of DL is a possible direction for future work in static analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.7.'>Future work in static analysis</ns0:head><ns0:p>The review in the previous sections (Section 2) shows that many future opportunities exist for static analysis to detect the malware that attacks Android. One of the opportunities is the combination of different DL classifiers (recurrent neural networks - RNN, generative adversarial networks - GAN, or deep belief networks - DBN) with other features besides the API and with different types of graph. Section 3.3 shows that only one experiment began combining DL and graph analysis, with a single API feature, in 2019. Such combinations could increase the accuracy of the detection results and lead to the identification of new malware families. It is also noteworthy that other future alternatives are available for combining DL with NCD and MDM.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.8.'>Popular features (API and manifest file)</ns0:head><ns0:p>Section 3.3 shows that many static analysis researchers frequently applied the manifest file and API calls as the features in their experiments. These popular features were examined by researchers from 2010 until 2019, as Android is officially updated and new features are added from time to time. However, most malware still utilised similar features within this timespan, while some malware utilised the latest and updated features. Therefore, researchers are required to keep a continuous focus on these popular features.</ns0:p><ns0:p>The manifest file is a single file with numerous features in it, such as permissions, intents, hardware components, and application components, while an API is ready-made code that programmers use to develop their applications. Therefore, it is crucial for researchers to thoroughly scrutinise these two categories of features. Moreover, some researchers combined the manifest and API with other features for more effective malware detection.</ns0:p></ns0:div>
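<ns0:p>As a small, hedged example of working with the manifest feature, the snippet below lists the requested permissions from an AndroidManifest.xml that has already been decoded to plain XML (for instance by apktool); the file path is a hypothetical example.</ns0:p>
```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_path):
    """List <uses-permission> entries from a decoded (plain-text) AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    return [elem.get(ANDROID_NS + "name") for elem in root.iter("uses-permission")]

# Hypothetical path produced by `apktool d app.apk -o app_decoded`:
# print(requested_permissions("app_decoded/AndroidManifest.xml"))
```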
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Following the interest in exploring recent studies in static analysis, a review was performed of the existing studies by security investigators on Android malware detection, organised into phases (reverse engineering, features, and classification). This review covered the information within a ten-year range (2009 to 2019). In this article, the features used in static analysis were also reviewed. Within this timespan, many security practitioners still preferred the API and manifest files, indicating the continued relevance of these two features. Moreover, the latest trends in classification were highlighted, consisting of machine learning, deep learning, graph, and other methods. These trends demonstrate the relevance of the graph method alongside machine learning, and static analysis researchers have begun to adopt deep learning in their detection. This article also discussed the open research issues in static analysis, including obfuscation as one of the weaknesses of static analysis and the methods of overcoming it. Many static analysis researchers implemented multiple methods to counter obfuscation and achieve successful malware detection, such as concentrating on the code associated with obfuscation (DexClassLoader.loadClass, Crypto.spec.DESKeySpec, and Cipher.getInstance), using a tool with a de-obfuscation option, and adopting graph nodes and program dependence graphs (PDGs).</ns0:p><ns0:note type='other'>Figure 2</ns0:note><ns0:p>Malware detection using static analysis.</ns0:p><ns0:p>The static analysis operation consists of several steps: dataset collection, reverse engineering, feature identification, and classification.</ns0:p><ns0:p>(Table 7, continued) <ns0:ref type='bibr' target='#b78'>[77]</ns0:ref> Instance-based learning with parameter k (ibk) <ns0:ref type='bibr' target='#b78'>[77]</ns0:ref> Simple logistic <ns0:ref type='bibr' target='#b78'>[77]</ns0:ref> Multilayer perceptron <ns0:ref type='bibr' target='#b66'>[65]</ns0:ref> Unsupervised K-means <ns0:ref type='bibr' target='#b135'>[131]</ns0:ref> [47] <ns0:ref type='bibr' target='#b63'>[63]</ns0:ref> [79] Normalised Compression distance (NCD) <ns0:ref type='bibr' target='#b117'>[113]</ns0:ref> <ns0:ref type='bibr' target='#b118'>[114]</ns0:ref> (Table 10) Year Solution for the obfuscation DNADroid <ns0:ref type='bibr' target='#b116'>[112]</ns0:ref> 2012 Using programme dependence graphs (PDGs), DNADroid can prevent typical program transformations in obfuscation. <ns0:ref type='bibr' target='#b85'>[83]</ns0:ref> 2012 Detects encryption attempts as one of the obfuscation methods DroidAPIMiner</ns0:p></ns0:div>
<ns0:div><ns0:head>(Table 10, continued)</ns0:head><ns0:p>Dendroid <ns0:ref type='bibr' target='#b86'>[84]</ns0:ref> 2014 Concentrates on the internal structure of code units (methods) to resist obfuscation.</ns0:p><ns0:p>Droidlegacy <ns0:ref type='bibr' target='#b45'>[45]</ns0:ref> 2014 Graph node represents the Java class, which detects light obfuscation. <ns0:ref type='bibr' target='#b129'>[125]</ns0:ref> 2017 Uses Jadx (a reverse engineering tool), which provides the de-obfuscation option and is capable of de-obfuscating the obfuscated code with minimal error.</ns0:p><ns0:p>Table 11: The detection of malware, which attacks Android OS, based on previous static analysis. To identify the trends in the detection of malware through static analysis, this table lists previous works of research covering all areas (year, features, and classification); the per-study feature and classification matrix is given in the rendered table.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 1 The</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>the API calls in methods and classes, which will be executed at run time. Additionally, the generated signature, which is based on the analyst-defined API, has the ability to be updated flexibly. Apposcopy [52] 2014 Includes the predicate inter-component communication (ICC), which allows the writing of signatures, which are resilient to high-level control flow obfuscation. Drebin [55] 2014 DREBIN extracts API calls related to obfuscation and loading of code, such as DexClassLoader.loadClass() and Cipher.getInstance().</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,229.87,525.00,116.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,229.87,525.00,216.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,229.87,525.00,339.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,220.38,525.00,416.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,229.87,525.00,414.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,220.38,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,259.62,525.00,291.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,229.87,525.00,279.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>1 Table 1 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Comparison malware analysis techniques</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Computer Science</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>features from native binaries of apps</ns0:cell></ns0:row><ns0:row><ns0:cell>Year</ns0:cell><ns0:cell>References [153]</ns0:cell><ns0:cell>Analysis Static</ns0:cell><ns0:cell cols='2'>Features API calls, intent actions and information</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>tools</ns0:cell><ns0:cell>flow</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>[16] [154] [136] [155] apktool aapt</ns0:cell><ns0:cell cols='2'>Static Static Static Dynamic [45] [47] [89] [56] App's permissions Texture, color, text permissions, API Fingerprint [77]</ns0:cell><ns0:cell>calls,</ns0:cell><ns0:cell>network</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>[156] androguard</ns0:cell><ns0:cell cols='2'>addresses API call graphs [84] [76] [30] [82] Static</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>[137] [157] baksmali [138] dex2jar [139] [158] jadx dedexer</ns0:cell><ns0:cell cols='3'>Static Hybrid [177] [83] [75] [49] [51] [62] [44] API calls, code, Android Manifest, Assembly, Dex, Xml, Apk Static [178] [53] [59] [46] [56] encryption or reflection CFG, DFG Static API calls Static Minimum-Redundancy [179] [65] Maximum-[180]</ns0:cell></ns0:row><ns0:row><ns0:cell>smali</ns0:cell><ns0:cell>[140]</ns0:cell><ns0:cell>Static [170]</ns0:cell><ns0:cell cols='2'>Repackaging and standalone Relevance (MRMR)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[141] [159]</ns0:cell><ns0:cell>Static Static</ns0:cell><ns0:cell cols='2'>Root node, decision nodes, and leaf nodes Java Bytecode</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[142] [160]</ns0:cell><ns0:cell>Static Dynamic</ns0:cell><ns0:cell>API call</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[143] [161]</ns0:cell><ns0:cell>Static Static</ns0:cell><ns0:cell cols='2'>Code metric API call and the names of functions and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[144]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>Platform-based permissions, hard-ware methods</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[162]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>components, and suspicious API calls Subgraph</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[145] [163]</ns0:cell><ns0:cell>Static Static</ns0:cell><ns0:cell cols='2'>Permissions and API calls n-opcode feature</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[146] [164]</ns0:cell><ns0:cell>Hybrid Dynamic</ns0:cell><ns0:cell cols='2'>Manifest file (i.e., number of activities, System call</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[165]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>permissions, receivers, filters), and the Manifest file, apk file</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[166]</ns0:cell><ns0:cell>Hybrid</ns0:cell><ns0:cell>source code analysis Malware signatures</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[147] [130]</ns0:cell><ns0:cell>Hybrid Static</ns0:cell><ns0:cell cols='2'>API call Signature extraction signature matching</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[148] [167]</ns0:cell><ns0:cell>Static Dynamic</ns0:cell><ns0:cell cols='2'>Weighted-sensitive-API-call-based graph Fine-grained HTTP structural</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[149] 
[168]</ns0:cell><ns0:cell>Dynamic Static</ns0:cell><ns0:cell cols='2'>Enabling the recording of parameters and API data dependency</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[169]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>return value of an API call API call, apk</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[150] [170]</ns0:cell><ns0:cell>Dynamic Static</ns0:cell><ns0:cell cols='2'>transitions Bytecode frequency probabilities,</ns0:cell><ns0:cell>states</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[84]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>frequencies, and aggregated state Code structures</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>frequencies grouped</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[151]</ns0:cell><ns0:cell>Hybrid</ns0:cell><ns0:cell cols='2'>number of activities, services and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>receivers given in the Manifest file and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>the size of the APK file</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[152]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>API usage, reflection-based features, and</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:1:1:NEW 11 Feb 2021) Manuscript to be reviewed 2 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:1:1:NEW 11 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Machine learning and its classifiers used in studies. Machine learning types (supervised and unsupervised) with the classifier and the respective articles. To detect Android malware with static features, the supervised learning method is widely used by security practitioners.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Machine learning and its classifier used in studies</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Machine learning type Classifier</ns0:cell><ns0:cell /><ns0:cell>Reference</ns0:cell></ns0:row><ns0:row><ns0:cell>Supervised</ns0:cell><ns0:cell cols='2'>K-nearest neighbor</ns0:cell><ns0:cell>[181] [30]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Support</ns0:cell><ns0:cell>vector</ns0:cell><ns0:cell>[30] [76] [75] [55]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>machine (svm)</ns0:cell><ns0:cell>[181] [182] [55]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CART</ns0:cell><ns0:cell /><ns0:cell>[79]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Adaboost</ns0:cell><ns0:cell /><ns0:cell>[75] [59]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bayes</ns0:cell><ns0:cell /><ns0:cell>[13] [132] [73] [77]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>[64]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Logistic Regression</ns0:cell><ns0:cell>[78]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Prism (PART)</ns0:cell><ns0:cell>[68] [57] [64]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Voting feature interval</ns0:cell><ns0:cell>[64]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(vfi)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random forest</ns0:cell><ns0:cell>[64] [77] [30] [75]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Sequential</ns0:cell><ns0:cell>minimal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>optimisation (smo)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 11 (on next page)</ns0:head><ns0:label>11</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "Journal: PeerJ Computer Science
Manuscript Reference Number:
Title: Malware detection using static analysis in Android: A review of FeCO (Features, Classification, and Obfuscation)
Authors: Rosmalissa Jusoh, Ahmad Firdaus, Shahid Anwar, Mohd Zamri Osman, Mohd Faaizie Darmawan, Mohd Faizal Ab Razak
Document Correction Form:
Dear Editor,
We highly appreciate your thorough investigation of our paper. We would like to thank your team for reviewing the paper and making several suggestions for improvement. We would also like to thank the anonymous reviewers for their excellent suggestions and comments. Based on the reviews, we have made the required changes to the paper, which are incorporated as follows. All the reviews provided by the reviewers have been honoured, and worthy suggestions have been accommodated in the revised manuscript. The revised manuscript, along with the marked revisions, can be found in the word file.
Reviewer #1 Comment
Comment 1
Authors’ Response
Thanks to the author for using a clear and understandable English in article. It's nice to mention dynamic analysis and static analysis in the article. however, there are a few more approaches in the literature. For example, these are signature-based approach and hybrid approach. The author should mention them as well. In my opinion, the author can add them to line 61. The author should briefly explain these approaches in the article.
Thank you very much for the insightful feedback. It is our sincere hope that the quality meets the reviewer's requirements.
There are dynamic, static, and hybrid analyses.
Hybrid analysis is briefly explained in the paragraph below.
To integrate the characteristics of the static and dynamic methods, a three-layer detection model called SAMAdroid, which combines static and dynamic characteristics, has been proposed by [14]. Mobile Sandbox by [15] proposed using the results of static analysis to guide the dynamic analysis and finally realize the classification. The hybrid analysis technique greatly helps to improve accuracy, but it also has a major drawback: the time and space wasted when a huge number of malware samples must be detected and analyzed [16][17].
Comment 2
Authors’ Response
Between lines 63 and 71, a comparison of Dynamic and Static analysis has been tried. It would be more appropriate to include this in a comparison table with citations. The author can add the signature-based approach and hybrid approaches in this table.
We kindly appreciate your comments. We have added a comparison between static, dynamic, and hybrid analysis in Table 1 with citations. However, the signature-based approach is one of the detection techniques within static analysis. We have added an explanation of the signature-based approach in Section 2.
Many researchers publish their works using static approaches for malware detection on the Android platform. The static approach, in its turn, contains a number of sub-approaches. For example, there is the signature-based approach, while other approaches depend on the detection and classification of the source code. Signature-based detection utilizes a specification consisting of information about malware signatures determined and arranged in advance of inspection [32]. However, the signature-based approach is not able to detect unknown malware, even though a signature is a set of features that uniquely differentiates the executable code [33].
Comment 3
Authors’ Response
As the author mentioned in line 119, He used “static analysis”, “malware”, and “Android” in the CA database for his literature review. however, the keywords of the article are; Android; Review; Static analysis; Machine learning; Features. Author should add “malware” here like as others.
Thank you for your comment. Here we added the malware keyword as suggested.
Keywords: Android; Malware; Static analysis; Machine learning; Features
Comment 4
Authors’ Response
Table 2 and 7, Figure 9 can be removed. The author should explain “Figure 5” in section 2.3. Feature more. A little more explanation for Figure 5 in there is better.
We kindly appreciate your comments. Table 2, Table 7, and Figure 9 have been removed.
Comment 5
Authors’ Response
A comparison table with 4 machine learning (CL, RL, NN&DL and EM) should be created with citations. This table should show the machine learning metrics (such as KNN,SVM,K Means, Deep Q,MLP or RF) used as well. This table created can be added to section 2.4.1.This section should explain the advantages and disadvantages of compared machine learning methods. The advantages and disadvantages of static analysis against these ML methods should also be mentioned.
We greatly appreciate the reviewer’s comment. We have created a new table comparing previous works that use different types of machine learning.
Table 8 shows previous works that used the different types of machine learning mentioned before. From the table, we can see that classical learning is still used in experiments, but many works now use deep learning and graph methods. The current trends show a demand for deep learning techniques to defend against an increasing number of sophisticated malware attacks, as deep-learning-based defences have become a vital component of our economic and national security. Many recent studies on Android malware detection have leveraged graph analysis, as mentioned in the next section.
Comment 6
Authors’ Response
Table 9 mentions “studies conducted to overcome obfuscation”. Open source advantages and disadvantages in section 3.1 can also be added to this table with citations
We kindly appreciate your valuable recommendation. However, the open-source advantages and disadvantages in Section 3.1 are not related to Table 10 (table updated). Table 10 describes studies conducted to overcome obfuscation, while Section 3.1 discusses the availability of the open-source operating system.
Comment 7
Authors’ Response
Table 10 is very nice. It is listed by years. The 2nd listed criterion may be reference numbers. The reference numbers in the table can be listed in ascending order. So it can look better.
We greatly appreciate the reviewer’s comment. The reference numbers in Table 11 (updated table) have been reviewed and listed in ascending order as suggested. We hope it looks better.
Comment 8
Authors’ Response
The interpretation of table 10 should be given more in section 3.3. It is useful to add explanations such as which ML was used the most and why it was preferred. The frequency of using static and dynamic analysis in studies can also be mentioned. Is the data set used or not used in the studies? Similar interpretations should be made for table.
We kindly appreciate your valuable recommendation. Explanations of Table 11 (updated table) have been added to Section 3.3, as below.
From the list, most researchers used API and manifest-file features in their experiments to detect malware, which shows that API calls were the codes most commonly exploited by malware developers when creating malware. An application is distributed as an Android application package (.apk) file, which is used to install it on Android-based mobile devices. Every app project must have an androidmanifest.xml file at the root of the project source set. This manifest file is normally in binary form inside the APK; when it is selected in the APK Analyzer, the XML form is reconstructed and displayed, and entries from any library the application depends on are merged into the final androidmanifest.xml record. The remaining package contents fall under the apk, xml, and dex property features.
Comment 9
Authors’ Response
Check whether the web links in references are accessible. The author can use ”webcitation” for such links.
We greatly appreciate the reviewer’s comment. The web citation links in the references are accessible.
Comment 10
Authors’ Response
All section numbers in the article should be rechecked. It seems to me that it will be “3.2.1. Research to counter obfuscation” instead of 4.2.1. and it should be 3.2.2. instead of 4.2.2 as well.
Thank you for your comments. Section 4.2.1 has been changed to 3.2.1 and Section 4.2.2 has been changed to 3.2.2.
3.2.1 Research to counter obfuscation
3.2.2 Advantage of obfuscation
Reviewer #2 Comments
Comment 1
Authors’ Response
Which journals have articles listed most at the end of your research? In other words, which journals have been used more? Survey publications are especially important for new researchers who will work in this field. How many articles from the most frequently used 5 or 10? These journals should be given in a table. In this way, it will be revealed which journals will mostly benefit people who will conduct research in this field. These journals can be added by opening a new subsection in Section 1 or where appropriate.
We greatly appreciate the reviewer’s comment. We have added the following paragraph in Section 1.1 regarding the suggestion given. Table 3 summarises the list of journals and conferences.
There were 389 records identified through database searching. These journals and conferences are mainly Computers and Security and IEEE Access, as listed in Table 3. The studies related to Android malware detection using static analysis are collected in the reference section, where they take up a small proportion of the primary studies. All studies related to the search terms are taken into account, and the search range is from January 2009 to December 2019.
Comment 2
Authors’ Response
When referencing tables and figures, there is no need to use the above or below patterns. It is already clear which table or figure it is. As a matter of fact, the tables or figures in the manuscript I read are in a different file. Instead of the above or below, a statement like in Table x, Figure y will be more fluent.
We appreciate the reviewer’s insightful comment. The words “above” and “below” have been removed when referring to the figures and the tables.
Comment 3
Authors’ Response
The data sets used in this field are given in Section 2.1. When I examine some of the references provided in this section, I only see the names of the relevant data sets in the reference list. For example, in [31, 32, 34, 35] references only the names of the relevant data sets are written. The official link of the data set should be added to these references. If there are blank references other than these, they should also contain the necessary information. This is important for the reader.
Thank you for the thorough observations. The citations for the datasets have been updated in the references section with the official links.
[34] “Malgenome Project.” http://www.malgenomeproject.org/.
[36] U. du Luxembourg, “Androzoo,” 2016. https://androzoo.uni.lu/.
[37] “Drebin dataset.” http://www.sec.cs.tu-bs.de/~danarp/drebin/.
[39] “AMD dataset.” http://amd.arguslab.org/.
Comment 4
Authors’ Response
While 'apktool' and 'aapt' tools are expressed separately in Figure 4, these two tools are given as 'Apktool @ aapt' in Table 5. Why is it included in Table 5? Is it given as “Apktool @ aapt” because the related works use both tools? I suggest it to be given separately in the works using the 'apktool' and 'aapt' tools. If it will be used together, the necessary explanation should be made. Otherwise, the reader will be confused.
We greatly appreciate the reviewer’s comment. “apktool” and “aapt” are different tools for reverse engineering.
Table 6 (updated table) shows each tool separately, with references to the articles.
Comment 5
Authors’ Response
Are there any studies using the Reinforcement learning technique under section 2.4.1? While the studies using other techniques are given in Table form, there is no table related to this technique. If there are no studies using this technique, an explanation should be made about it.
We kindly appreciate your comments. An additional explanation of the reinforcement learning technique has been added under Section 2.4.1.
The learned value function is called deep Q-learning, in which the Q function is learned and refined over hundreds of games [98]. The Q-learning algorithm was trained in a network [96] with stochastic gradient descent to update the weights. A replay mechanism that draws random samples of previous transitions was used to smooth the training distribution over past behaviours and to overcome the problems of correlated data and non-stationary distributions. [95] propose a framework based on reinforcement learning (RL) for attacking static portable executable (PE) anti-malware engines. Meanwhile, a DQN-based mobile malware detection scheme was proposed by [99] to enhance detection performance. The simulation results show that it can increase malware detection accuracy and reduce detection delay compared with a Q-learning-based malware detection scheme.
Comment 6
Authors’ Response
Some paragraphs consist of one or two sentences. Paragraphs should be expanded using meaningful sentences or combined with other paragraphs in a way that does not disturb the integrity of the part.
Thank you for your comment. We have rearranged the paragraph structure. Short paragraphs have been expanded with a few more sentences and long paragraphs have been adjusted for balance.
Comment 7
Authors’ Response
On line 562, a section titled 'Open research issues' is written. What is explained with the chapter title is not compatible. When I read the title, I thought it would give obvious problems in this area, but I did not encounter such a thing. I was disappointed as a result. This section needs to be rewritten.
Thank you for the thorough observations. We have rewritten the information in the section titled “Open research issues”.
This section explains the issues involved in static analysis, which were derived from previous research articles. Specifically, a discussion is made on the advantages and disadvantages of the open-source operating system, which stem from the availability of the kernel application code. Another issue in static analysis is code obfuscation, which is used by malware developers to make detection more difficult, together with the research done to counter obfuscation. We then review the static analysis articles overall, covering the detection of unknown malware, the combination of static and dynamic analysis, resource consumption, future work, and features.
Comment 8
Authors’ Response
There is a sentence in the subsection on line 637. This subsection should be expanded in line with the relevant tables.
We kindly appreciate your valuable recommendation. This subsection has been expanded as follows.
To overcome obfuscation, many studies have explored different approaches. The study by [112] used a program dependence graph (PDG) to resist the program transformations introduced by obfuscation. Droidlegacy [45] uses graph nodes to represent Java classes when detecting lightly obfuscated malware. Droidanalytics [87] and Drebin [55] extract the API calls while the code is running during execution time. To handle control-flow obfuscation, Apposcopy writes its signatures in terms of inter-component communication (ICC). The research in [192] uses jadx, a reverse engineering tool, to de-obfuscate the obfuscated code. A summary of the studies conducted to overcome obfuscation is shown in Table 10.
Comment 9
Authors’ Response
When Table 10 is examined, A and M are similar to each other. This needs to be parsed. Because 'Manifest file' represented by M can be evaluated as a subset of 'apk, dex and XML properties' represented by A. This will make it difficult for the reader to understand.
Thank you for your comments. Manifest file features are different from the apk, dex and XML properties. A description has been added in Section 3.3.
As explained in the description added to Section 3.3 (quoted in our response to Comment 8 of Reviewer 1 above), the manifest file (androidmanifest.xml) is a single required file at the root of every app project and is stored in binary form inside the APK, whereas the 'apk, dex and XML properties' category covers the remaining package files. The two feature categories (A and M) are therefore kept separate in Table 10.
" | Here is a paper. Please give your review comments after reading it. |
87 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Android is a free open-source operating system (OS), which allows an in-depth understanding of its architecture. Therefore, many manufacturers are utilizing this OS to produce mobile devices (smartphones, smartwatches, and smart glasses) under different brands, including Google Pixel, Motorola, Samsung, and Sony. Notably, the adoption of this OS has led to a rapid increase in the number of Android users. However, unethical authors develop malware for these devices for wealth, fame, or private purposes. Although practitioners conduct intrusion detection analyses, such as static analysis, there is an inadequate number of review articles discussing the research efforts on this type of analysis. Therefore, this study discusses the articles published from 2009 until 2019 and analyses the steps in static analysis (reverse engineering, features, and classification) with a taxonomy. Following that, the research issues in static analysis are also highlighted.</ns0:p><ns0:p>Overall, this study serves as guidance for novice security practitioners and expert researchers in proposing novel research to detect malware through static analysis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Mobile devices, such as smartphones, iPads, and computer tablets, have become everyday necessities to perform important tasks, including education, paying bills online, bank transactions, job information, and leisure. Based on the information from an online mobile device production website, Android is one of the popular operating systems (OS) used by manufacturers <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. The open-source platform in Android has facilitated smartphone manufacturers in producing Android devices of various sizes and types, such as smartphones, smartwatches, smart televisions, and smart glasses. In the most recent decade, the number of distinct Android devices available worldwide has increased from 38 in 2009 to over 20,000 in 2016 <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. As a result of the demand for the Android OS, recent statistics from Statista revealed that the number of Android malware samples increased to 26.6 million in March 2018 <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Moreover, McAfee discovered a malware known as Grabos, which compromises Android and breaches Google Play Store security <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. It was also estimated that 17.5 million Android smartphones had downloaded this Grabos mobile malware before the affected applications were taken down.</ns0:p><ns0:p>Mobile malware is designed to disable a mobile device, allow malicious acts to remotely control the device, or steal personal information <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. Moreover, these malicious acts are able to run stealthily and bypass permissions if the Android kernel is compromised by mobile malware <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>. In September 2019, a total of 172 malicious applications were detected on Google Play Store, with approximately 330 million installations. According to researchers, the malicious components were hidden inside functional applications. When the applications are downloaded, pop-up advertisements appear and continue to appear even after the application is closed <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. To detect this malware, security practitioners conduct malware analysis, which aims to study malware characteristics and behaviour. There are three types of analysis: dynamic, static, and hybrid.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> shows a comparison of static, dynamic, and hybrid analyses drawn from previous research. Specifically, dynamic analysis is an analysis which studies the execution and behaviour of the malware <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. However, dynamic analysis is incapable of identifying parts of the code that operate outside the monitoring range. Besides, as dynamic analysis consumes substantial resources and requires high hardware specifications <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>, static analysis is an alternative for detecting malware. It is an analysis which examines malware without executing or running the application. Additionally, this analysis is able to identify more accurately malware that would only act under unusual conditions <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>.
This is because static analysis examines all parts of a program, including parts that are excluded in dynamic analysis. Furthermore, static analysis is able to detect unknown malware just as dynamic analysis can <ns0:ref type='bibr' target='#b14'>[13]</ns0:ref>, while requiring few resources.</ns0:p><ns0:p>To integrate the characteristics of the static and dynamic methods, a three-layer detection model called SAMAdroid, which combines static and dynamic characteristics, has been proposed by <ns0:ref type='bibr' target='#b15'>[14]</ns0:ref>. Mobile Sandbox <ns0:ref type='bibr' target='#b16'>[15]</ns0:ref> proposed using the results of static analysis to guide the dynamic analysis and finally perform classification. The hybrid analysis technique helps to improve accuracy, but it also has a major drawback: the time and space spent on the huge number of malware samples to be detected and analysed <ns0:ref type='bibr' target='#b17'>[16]</ns0:ref> <ns0:ref type='bibr' target='#b18'>[17]</ns0:ref>.</ns0:p><ns0:p>There were 430 records identified through database searching. These journals and conferences are mainly Computers and Security and IEEE Access, as listed in Table <ns0:ref type='table'>3</ns0:ref>. The studies related to Android malware detection using static analysis are collected in the reference section, where they take up a small proportion of the primary studies. All studies related to the search terms are taken into account, and the search range is from January 2009 to December 2019.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2.'>Screening</ns0:head><ns0:p>Experimental articles on static analysis were identified, and unrelated articles were omitted. Initially, the search was restricted to journal articles, excluding review articles, books, and conference proceedings. To focus specifically on static analysis, articles that combined both static and dynamic analyses were removed. Another selection criterion was the use of English; all non-English articles were removed to avoid any difficulty in translation later. The selection of articles covered 2009 to 2019, a duration of 10 years, which was suitable for exploring the evolution of research in security areas. Apart from that, the Android platform was the focus of this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3.'>Eligibility</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>1</ns0:ref> depicts the review process, which involved four steps: identification, screening, eligibility, and analysis. The review was performed in the middle of 2019. Based on previous studies, the process used similar keywords related to malware detection, static analysis, and security. After the identification process, we removed any duplicated articles. During the screening process, we found 375 documents and removed the articles that were unrelated to the area of interest, leaving 172 articles. Lastly, we used 150 articles for the review <ns0:ref type='bibr' target='#b24'>[23]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.4.'>Data analysis included</ns0:head><ns0:p>We then analysed the remaining articles, extracted the abstracts, and downloaded the full articles. This was done to explore the appropriate topics in depth and to build a strong justification for the research. The topics and subtopics were then organised according to the static analysis. Qualitative analysis was performed based on content analysis to identify issues related to this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Static analysis</ns0:head><ns0:p>Mobile malware compromises Android devices (smartphones, smartwatches, and smart televisions) for wealth, data theft, and personal purposes. Examples of mobile malware include root exploits, botnets, worms, and Trojans. To detect malware, most security practitioners perform two types of analysis: dynamic and static. Specifically, dynamic analysis is an experiment which detects malware by executing malware and benign applications to monitor and differentiate their behaviours. However, the monitoring of all behaviours is costly and requires high specifications in terms of device memory, CPU, and storage. Furthermore, the malware is inflicted on a device at a certain time or whenever the attacker decides on it. Accordingly, as dynamic analysis only monitors behaviours within a certain range of time based on the research period, numerous malware activities outside the research period might be omitted <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref>[25] <ns0:ref type='bibr' target='#b28'>[26]</ns0:ref>. Furthermore, dynamic analysis requires a separate and closed virtual environment to run the malware and observe its behaviour on the system. However, such an isolated dynamic setup is impractical on the Android platform due to the increase in power and memory consumption. As power and memory are the most concerning constraints of Android devices, static analysis is the alternative to dynamic analysis.</ns0:p><ns0:p>Static analysis is a category of analysis which investigates the malware application code and examines the full activities of an application over an unlimited range of time, without executing the application <ns0:ref type='bibr' target='#b30'>[27]</ns0:ref>. The main step of the static analysis procedure is the reverse engineering process, which retrieves the whole code and further scrutinises the structure and substance of the application <ns0:ref type='bibr' target='#b32'>[28]</ns0:ref>- <ns0:ref type='bibr' target='#b34'>[30]</ns0:ref>. Therefore, this analysis can examine the overall code with low memory requirements and minimal CPU processing. Additionally, the analysis process is prompt because the application is not executed. With this analysis, unknown malware can also be identified with enhanced detection accuracy through machine learning approaches <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref> <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref>. Table <ns0:ref type='table'>4</ns0:ref> presents the advantages and disadvantages of dynamic and static analyses.</ns0:p></ns0:div>
<ns0:div><ns0:p>Many researchers have published works using static approaches for malware detection on the Android platform. The static approach, in its turn, contains a number of sub-approaches. For example, there is the signature-based approach, while other approaches depend on the detection and classification of the source code. Signature-based detection relies on information about malware signatures that is determined and arranged in advance of inspection <ns0:ref type='bibr' target='#b36'>[32]</ns0:ref>. However, the signature-based approach is not able to detect unknown malware, even though a signature is a set of features that uniquely differentiates the executable code <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref>.</ns0:p><ns0:p>Obfuscation is one of the obstacles in static analysis, which is used by malware authors in their malicious software to evade intrusion detection or antivirus systems <ns0:ref type='bibr' target='#b28'>[26]</ns0:ref>. Examples of obfuscation methods are renaming the code, adding unnecessary codes, and encrypting the strings. Therefore, security practitioners need to overcome obfuscation to increase their detection results. Accordingly, the alternatives adopted by security practitioners are presented in Section 3.2. Table <ns0:ref type='table'>4</ns0:ref> shows that both static and dynamic analyses share a similar limitation concerning the selection of a minimal set of ideal features. In detecting malware, features refer to the attributes or elements that differentiate an application, which may either be malware or benign. Security practitioners face the obstacle of investigating various features in all types of categories (e.g., permission, API, directory path, and code-based) while simultaneously needing to reduce these features. Notably, determining a minimal set of ideal features is crucial to enhance the accuracy of the analyses (e.g., the accuracy of the predictive model) and reduce data and model complexity <ns0:ref type='bibr' target='#b19'>[18]</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> illustrates the static analysis operation, which consists of several steps. The first step is the acquisition of the benign and malware datasets of Android applications, each with the (.apk) filename extension. This is followed by reverse engineering performed on these applications to retrieve the code by extracting a few folders from one .apk file, which consist of nested files with code (Java or smali). Furthermore, one .apk may comprise approximately a thousand lines of code. Therefore, with a total of 1000 applications in one dataset, security practitioners are required to scrutinise millions of lines of code. With the completion of the reverse engineering, an analysis is conducted, which involves features. Features consist of a series of application characteristics for the detection of malware, while classification is an approach used to differentiate between malware and benign (normal) applications. The following section thoroughly discusses the static analysis, beginning with a focus on the dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.'>Dataset</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the Android malware datasets from different sources. Notably, the majority of the datasets were obtained from universities. The datasets are in the form of Android application packages, with an .apk filename extension. Malgenome <ns0:ref type='bibr' target='#b39'>[34]</ns0:ref> is an Android malware dataset that was made publicly available with permission from its administrators. These malware samples, which were collected by North Carolina State University (NCSU) from August 2010 to October 2011, cover multiple families of malware, including botnets and root exploits. The characterisation of the malware families was based on the method of installation, the way the malware carried its malicious payloads, and its method of activation.</ns0:p><ns0:p>Androzoo <ns0:ref type='bibr' target='#b40'>[35]</ns0:ref> <ns0:ref type='bibr' target='#b41'>[36]</ns0:ref> is another dataset consisting of more than three million Android applications (.apk). This dataset originates from the University of Luxembourg as a contribution to the community for research purposes and to further explore notable developments in the detection of malware that damages Android. The Drebin <ns0:ref type='bibr' target='#b42'>[37]</ns0:ref> dataset also presents Android malware publicly, under strict requirements. A university in Braunschweig, Germany collected 5560 samples from 179 families. The time range of the malware is from August 2010 to October 2012. The university project, known as MobileSandbox, was an initiative to provide samples for academia and industry.</ns0:p><ns0:p>The Android malware dataset (AMD) [39] is a public Android malware dataset from the University of South Florida, which consists of 24,650 samples in 71 categorised families. To obtain this dataset, the user is required to acquire permission from the university and provide authentic information with evidence. Academia and industry are allowed to use these samples for research purposes.</ns0:p><ns0:p>The Contagio <ns0:ref type='bibr' target='#b44'>[40]</ns0:ref> dataset focuses on mobile malware, with the condition that the user should submit one sample to obtain another sample. It provides a dropbox for users to share their mobile malware samples. According to their blogspot <ns0:ref type='bibr' target='#b44'>[40]</ns0:ref>, the administrator of this dataset is Mila Parkour, who is reachable only through email. Based on Table <ns0:ref type='table'>5</ns0:ref>, which presents the research articles and their respective datasets, it can be seen that the dataset providers receive significant attention from other universities and from industry. It is hoped that this will enhance the security of Android devices and their users over time.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.'>Reverse engineer</ns0:head><ns0:p>Static analysis is an activity that investigates the code of an application without executing it. To do so, security practitioners implement the reverse engineering method, which recovers the source code from the executable file <ns0:ref type='bibr' target='#b45'>[41]</ns0:ref>. This reverse engineering process loads the executable into a disassembler to discover what the program does. Figure <ns0:ref type='figure'>4</ns0:ref> illustrates the tools used to perform reverse engineering, which were also adopted by security practitioners to identify Android malware, and Table <ns0:ref type='table'>6</ns0:ref> lists the tools adopted in the respective articles.</ns0:p></ns0:div>
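As a minimal illustrative sketch (not drawn from the reviewed articles), the following Python fragment shows how this reverse engineering step is commonly scripted, assuming the apktool and aapt command-line tools are installed and a hypothetical sample file app.apk is available. apktool decodes the APK into smali code and a readable manifest, which later feature-extraction steps can scan.

import subprocess
from pathlib import Path

APK = "app.apk"                # hypothetical sample path
OUT = Path("app_decoded")      # output folder for decoded resources

# Decode the APK with apktool: recovers AndroidManifest.xml and smali code.
subprocess.run(["apktool", "d", APK, "-o", str(OUT), "-f"], check=True)

# Dump the declared permissions with aapt (part of the Android build tools).
perms = subprocess.run(["aapt", "dump", "permissions", APK],
                       capture_output=True, text=True, check=True).stdout
print(perms)

# The decoded smali files can now be scanned for static features
# (API calls, strings, commands, etc.) in later analysis steps.
smali_files = list(OUT.glob("smali*/**/*.smali"))
print(f"{len(smali_files)} smali files extracted")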
<ns0:div><ns0:head n='2.3.4.'>Directory path</ns0:head><ns0:p>A directory path gives access to a specific folder in the operating system (OS). Security practitioners found that attackers incorporate directory paths to sensitive folders in their malware. Several paths related to Android kernel directories were identified by another study <ns0:ref type='bibr' target='#b72'>[65]</ns0:ref>, such as 'data/local/tmp/rootshell', '/proc', and '/system/bin/su'.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.5.'>Commands</ns0:head><ns0:p>Two types of commands are relevant, namely 1) root commands and 2) botnet commands. Several root commands were identified by <ns0:ref type='bibr' target='#b72'>[65]</ns0:ref> on the Unix machine, such as 'cp', 'cat', 'kill', and 'mount'. Normally, these commands are used by administrators to execute higher-privileged actions on a Unix machine. Since the Android architecture is based on the Unix kernel, attackers include root commands in their malware to control the victim's Android device. Therefore, the identification of root commands is crucial in investigating malware.</ns0:p><ns0:p>The second type of command is the botnet command. One type of malware, known as a mobile botnet, includes botnet commands in its code, such as 'note', 'push', 'soft', 'window', 'xbox', and 'mark'. The attacker uses these commands to communicate with the command and control (C&C) server, while droidanalyzer <ns0:ref type='bibr' target='#b65'>[58]</ns0:ref> combines API, root commands, and botnet commands into a set of features to detect root exploits and mobile botnets.</ns0:p><ns0:p>Other than ad libraries, certain researchers inspect the Android Debug Bridge (adb) code. ADB <ns0:ref type='bibr' target='#b73'>[66]</ns0:ref> is a tool which provides command-line access for users or developers to communicate with Android mobile devices. This facility allows the installation of unwanted applications and the execution of various Unix commands by the attacker on the victim's device. Therefore, RODS <ns0:ref type='bibr' target='#b74'>[67]</ns0:ref> is a root exploit detection system for detecting root exploit malware using ADB features.</ns0:p></ns0:div>
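As an illustrative sketch only (the indicator lists below are assumptions drawn from the examples quoted in this section, not a complete set from any cited study), such directory paths and commands can be counted in the decoded smali code produced by the reverse engineering step above and used as numeric features.

import re
from pathlib import Path

# Indicator strings taken from the examples discussed above (illustrative only).
ROOT_COMMANDS = ["cp", "cat", "kill", "mount"]
SENSITIVE_PATHS = ["data/local/tmp/rootshell", "/proc", "/system/bin/su"]
BOTNET_COMMANDS = ["note", "push", "soft", "window", "xbox", "mark"]

def string_features(decoded_dir):
    """Count occurrences of indicator strings in const-string literals of smali files."""
    indicators = ROOT_COMMANDS + SENSITIVE_PATHS + BOTNET_COMMANDS
    counts = {s: 0 for s in indicators}
    for smali in Path(decoded_dir).glob("smali*/**/*.smali"):
        text = smali.read_text(errors="ignore")
        for literal in re.findall(r'const-string[^\n]*"(.*?)"', text):
            for s in indicators:
                if s in literal:        # substring match; a real study would be stricter
                    counts[s] += 1
    return counts

print(string_features("app_decoded"))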
<ns0:div><ns0:head n='2.3.6.'>Function call</ns0:head><ns0:p>In programming, a function call is a statement consisting of a name followed by an argument list in parentheses. The argument list may include any number of arguments separated by commas, or it may be left empty. A study by <ns0:ref type='bibr' target='#b75'>[68]</ns0:ref> involved the extraction of function calls through readelf, which were then used as features for machine learning prediction. Meanwhile, <ns0:ref type='bibr' target='#b77'>[69]</ns0:ref> extracted the function calls into a graph to identify the nodes from the start to the end of the process.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.7.'>Geographic location</ns0:head><ns0:p>Geographic location is a feature which identifies the origin of the application. A geographic detector was used as one of the features in the research by <ns0:ref type='bibr' target='#b60'>[54]</ns0:ref>. Provided that 35% of the mobile malware families appeared to originate from China, with a further 40% originating from Russia, Ukraine, Belarus, Latvia, and Lithuania, it was crucial to consider geographic location as one of the features for the detection of Android malware. For this reason, the researchers increased the risk signal for applications originating from the aforementioned countries.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.8.'>Manifest file</ns0:head><ns0:p>An Android application is built on top of the application framework, which provides an interface for the user. The program is packaged as an Android application package file in the (.apk) format, which is also used to install an application on Android-based mobile devices. It consists of the META-INF, resource, assets, and library directories, the classes.dex and resources.arsc files, and the androidmanifest.xml file. One of these files, androidmanifest.xml (the manifest file), is an essential file containing various features, such as permissions, intents, hardware components, and the components of the application (activities, services, broadcast receivers, and content providers) <ns0:ref type='bibr' target='#b78'>[70]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>a) Permission</ns0:head><ns0:p>Permission is a unique security mechanism for Android devices. To enable a permission, the user needs to allow it for the application during the installation period. However, many users accidentally enable certain permissions, which leads to access to sensitive security-relevant resources. Therefore, permission features have been examined in many studies. Several studies used permissions to measure the risk of an application and to identify it as malicious <ns0:ref type='bibr' target='#b80'>[71]</ns0:ref>, <ns0:ref type='bibr' target='#b81'>[72]</ns0:ref>. Other studies, such as <ns0:ref type='bibr' target='#b82'>[73]</ns0:ref>[63] <ns0:ref type='bibr' target='#b83'>[74]</ns0:ref>[75] <ns0:ref type='bibr' target='#b85'>[76]</ns0:ref>[77] <ns0:ref type='bibr' target='#b87'>[78]</ns0:ref>[79], used permission features as inputs for machine learning prediction.</ns0:p></ns0:div>
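A minimal sketch of how permission features are typically turned into inputs for the classifiers discussed in Section 2.4 is shown below; the small permission vocabulary and the file name app.apk are assumptions for illustration, whereas real studies use the full Android permission set over thousands of labelled samples.

import subprocess

# Illustrative permission vocabulary (an assumption; real studies use the full set).
PERMISSIONS = [
    "android.permission.INTERNET",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.RECEIVE_BOOT_COMPLETED",
]

def permission_vector(apk_path):
    """Build a binary feature vector of requested permissions for one APK."""
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    return [1 if p in out else 0 for p in PERMISSIONS]

# One row per application; the rows, together with malware/benign labels,
# form the training matrix for the machine learning step.
print(permission_vector("app.apk"))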
<ns0:div><ns0:head>b) Intent</ns0:head><ns0:p>The intent is coded in the manifest file and allows a component of the application to request certain functionality from a component of another application. For example, application A can use a component of application B for the management of photos on the device even though that component is not part of application A. Provided that this feature enables malicious activities among attackers, several experiments used intents (declared in the manifest file) as one of the features for the detection of malware, such as <ns0:ref type='bibr' target='#b89'>[80]</ns0:ref> and <ns0:ref type='bibr' target='#b90'>[81]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>c) Application components</ns0:head><ns0:p>The manifest file declares the application components, which consist of four types, namely 1) activities, 2) services, 3) broadcast receivers, and 4) content providers. Specifically, an activity represents the user interface or interactive screen shown to the users, while a service refers to an operation occurring in the background that performs long-running processes. This is followed by broadcast receivers, which respond to system-wide broadcast announcements. On the other hand, content providers manage a structured set of application data. Overall, these four components follow a life cycle model during execution. Dexteroid <ns0:ref type='bibr' target='#b91'>[82]</ns0:ref> proposed a framework which systematically guides the event sequences through reverse engineering/reconstruction of the life cycle models and the extraction of callback sequences from event sequences to detect malicious behaviours.</ns0:p></ns0:div>
<ns0:div><ns0:head>d) Hardware component</ns0:head><ns0:p>The manifest file also declares the hardware components used by an Android application. To illustrate, a developer requests access to the camera of an Android device by declaring it in the manifest file to enable the use of the camera in the application. However, attackers declare unrelated hardware components, such as the camera and data, in their game applications. As a result, security researchers were prompted to use hardware components as features in their experiments <ns0:ref type='bibr' target='#b61'>[55]</ns0:ref> to detect malware <ns0:ref type='bibr' target='#b92'>[83]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.9.'>Network address</ns0:head><ns0:p>Access to the Internet is essential for attackers to retrieve private information of the victim, change the settings, or execute malicious commands. This process requires the incorporation of the Uniform Resource Locator (URL) or network address in the malware code. The examples of sensitive URLs include the Android Market on Google Play, Gmail, Google calendar, Google documents, and XML schemas. These features were used in <ns0:ref type='bibr' target='#b62'>[56]</ns0:ref> and <ns0:ref type='bibr' target='#b93'>[84]</ns0:ref> [85] for malware detection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.10.'>Code-based</ns0:head><ns0:p>Code-based or code structure features comprise a line or set of programming language code in an application. Two studies applied code structures (code chunk grammar), which focus on the internal structure of the code units, as features for malware detection <ns0:ref type='bibr'>[86][87]</ns0:ref>. This feature enables the analysis and differentiation between malware and benign applications. Another study by <ns0:ref type='bibr' target='#b72'>[65]</ns0:ref> identified several code-based strings, namely '.exec', 'forked', 'setptywindowsize', and 'createsubprocess'. In comparison with normal applications, it was found that attackers frequently used these code-based features in the development of malware. Therefore, these features were also used in that study to detect malware. Opcode (operation code) is another code-based feature. It is the part of an instruction that informs the CPU of the task to be performed. Assembly language uses opcodes to execute instructions. Also referred to as bytecode, examples of opcodes for Android include OP_ADD_DOUBLE, OP_ADD_FLOAT, OP_ADD_INT_2ADDR, and OP_SUB_LONG <ns0:ref type='bibr' target='#b97'>[88]</ns0:ref>. Specifically, this feature was adopted in the studies by <ns0:ref type='bibr' target='#b98'>[89]</ns0:ref>[90] <ns0:ref type='bibr' target='#b100'>[91]</ns0:ref> and <ns0:ref type='bibr' target='#b101'>[92]</ns0:ref> to detect Android malware in static analysis. Further examples of the features in this section are method <ns0:ref type='bibr' target='#b102'>[93]</ns0:ref>, opcode <ns0:ref type='bibr' target='#b101'>[92]</ns0:ref>, byte stream (byte block) <ns0:ref type='bibr' target='#b100'>[91]</ns0:ref>, Dalvik code <ns0:ref type='bibr' target='#b77'>[69]</ns0:ref>, and code involving encryption <ns0:ref type='bibr' target='#b103'>[94]</ns0:ref>. The selection of the features by security practitioners is followed by classification. This process receives the features as input and differentiates between malware and benign (normal) applications.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> depicts that researchers prefer to investigate permission and API features compared to others. However, the trend in permission features declined from 2013 until 2018, while API features gained ground in the experiments, increasing from six studies (2014) to nine (2019). This indicates that the API trend is likely to continue to increase in the following years of static detection research.</ns0:p></ns0:div>
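To make the opcode feature concrete, the short sketch below (an illustration under the same assumptions as the earlier snippets, not a cited implementation) counts Dalvik instruction mnemonics such as invoke-virtual or const-string in the decoded smali files; the resulting histogram can serve as a code-based feature vector.

from collections import Counter
from pathlib import Path

def opcode_histogram(decoded_dir):
    """Count Dalvik opcode mnemonics (first token of instruction lines) in smali files."""
    counts = Counter()
    for smali in Path(decoded_dir).glob("smali*/**/*.smali"):
        for line in smali.read_text(errors="ignore").splitlines():
            line = line.strip()
            # Skip empty lines, labels, comments, and directives (.method, .line, ...).
            if not line or line.startswith((".", ":", "#")):
                continue
            counts[line.split()[0]] += 1
    return counts

hist = opcode_histogram("app_decoded")
print(hist.most_common(10))    # e.g. invoke-virtual, const-string, move-result ...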
<ns0:div><ns0:head n='2.4.'>Classification</ns0:head><ns0:p>In the classification process for static analysis, many security analysts use two types of methods: 1) machine learning (ML) and 2) graph-based methods. The following section presents the ML studies that use static features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4.1.'>Machine learning (ML)</ns0:head><ns0:p>Machine learning is a scientific discipline capable of predicting future decisions based on the experience it has gained through past inputs (a learning set), followed by a prediction of the outputs. Based on a given dataset, the learning set makes intelligent decisions according to certain algorithms. One type of machine learning is supervised learning, which uses labelled training data to create a function. Each part of the training data contains an input (features or characteristics) and an output (class label: malware or benign). This is followed by the training stage, which calculates the approximate distance between the input and output examples to create a model. This trained model can then classify unknown applications as malware or benign. Four types of ML are discussed here: 1) classical learning; 2) reinforcement learning; 3) neural networks and deep learning; and 4) ensemble methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>a) Supervised learning</ns0:head><ns0:p>Supervised learning (SL) is a process of learning from previous instances to predict future classes. Therefore, the prediction of the class label involves the construction of a concise model from previous experience. The machine learning classifier is then used to test the unknown class <ns0:ref type='bibr' target='#b104'>[95]</ns0:ref>. To detect Android malware with static features, the SL method is widely used by security practitioners. Accordingly, the previous articles adopting this method are listed in Table <ns0:ref type='table' target='#tab_2'>7</ns0:ref>.</ns0:p></ns0:div>
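A minimal supervised learning sketch is given below, assuming a feature matrix X (e.g., the permission or opcode vectors built earlier) and labels y (1 = malware, 0 = benign) are already available; the tiny hard-coded example data are purely illustrative.

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Illustrative feature matrix (rows = apps, columns = binary permission features).
X = [[1, 1, 0, 0], [1, 0, 1, 1], [0, 0, 0, 1], [1, 1, 1, 0],
     [0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0], [1, 1, 0, 1]]
y = [1, 1, 0, 1, 0, 1, 0, 1]   # 1 = malware, 0 = benign (made-up labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = SVC(kernel="linear")      # SVM, one of the classical supervised classifiers
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))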
<ns0:div><ns0:head>b) Unsupervised Learning</ns0:head><ns0:p>Unsupervised learning is another type of learning involved in machine learning. It is a clustering technique in which the data are unlabelled, and it has also been used in computer security areas, including malware detection and forensics <ns0:ref type='bibr' target='#b105'>[96]</ns0:ref>. Clustering refers to the division of a large dataset into smaller data sets sharing several similarities. It classifies a given object set into a certain number of clusters (assume k clusters) by determining the k centroids assigned to each cluster. In this case, the algorithm selects the centroids at random from the application set, takes each application from the given dataset, and assigns it to the nearest centroid. Table <ns0:ref type='table' target='#tab_2'>7</ns0:ref> tabulates the previous articles which adopted this method.</ns0:p></ns0:div> <ns0:div><ns0:head>c) Reinforcement learning</ns0:head><ns0:p>A reinforcement learning model consists of an agent (a set of actions A) and an environment (the state space S) <ns0:ref type='bibr' target='#b106'>[97]</ns0:ref>. Deep reinforcement learning was introduced through reinforcement agents as a framework to play Atari games, often exceeding human performance <ns0:ref type='bibr' target='#b108'>[98]</ns0:ref> <ns0:ref type='bibr' target='#b109'>[99]</ns0:ref>. Advances in deep learning make it possible to extract high-level features from raw sensory data, leading to breakthroughs in computer vision and speech recognition. In this case, the agent is required to learn a value function in an end-to-end way, taking raw pixels as input and predicting the output rewards for each action.</ns0:p><ns0:p>The learned value function is called deep Q-learning, in which the Q function is learned and refined over hundreds of games <ns0:ref type='bibr' target='#b111'>[100]</ns0:ref>. The Q-learning algorithm was trained in a network <ns0:ref type='bibr' target='#b108'>[98]</ns0:ref> with stochastic gradient descent to update the weights. A replay mechanism that draws random samples of previous transitions was used to smooth the training distribution over past behaviours and to overcome the problems of correlated data and non-stationary distributions. <ns0:ref type='bibr' target='#b106'>[97]</ns0:ref> propose a framework based on reinforcement learning (RL) for attacking static portable executable (PE) anti-malware engines. Meanwhile, a DQN-based mobile malware detection scheme was proposed by <ns0:ref type='bibr' target='#b112'>[101]</ns0:ref> to enhance detection performance. The simulation results show that it can increase malware detection accuracy and reduce detection delay compared with a Q-learning-based malware detection scheme.</ns0:p></ns0:div>
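As a hedged illustration of the clustering idea described above (not code from any cited study), the fragment below groups the same kind of binary feature vectors into k clusters with scikit-learn's K-means; in practice, a cluster dominated by known malware samples flags its other members as suspicious.

from sklearn.cluster import KMeans

# Binary feature vectors for unlabelled applications (illustrative values).
X = [[1, 1, 0, 0], [1, 0, 1, 1], [0, 0, 0, 1], [1, 1, 1, 0],
     [0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0], [1, 1, 0, 1]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", kmeans.labels_)
print("centroids:", kmeans.cluster_centers_)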
<ns0:div><ns0:head>d) Neural networks and deep learning</ns0:head><ns0:p>The evolution of neural networks (NN) has been associated with various challenges since the mid-20th century. McCulloch and Pitts took the first inspiration for NN in 1943 from biological neurons and proposed a computational model for the development of hypothetical nets. Although this proposal was simulated by Nathaniel Rochester at an IBM research laboratory, the attempt was unsuccessful in the end. Developed by Frank Rosenblatt at the Cornell Aeronautical Laboratory, the perceptron became the first learning machine <ns0:ref type='bibr' target='#b113'>[102]</ns0:ref>.</ns0:p><ns0:p>Following the upgrades on NNs, deep learning (DL) emerged in 2006 and has been used in almost every application. As a new variation of the classical Multilayer Perceptron (MLP), DL aims to produce high-level and flexible features from the raw pixel data to assist in generalising the classification. Furthermore, DL also operates with complex applications containing millions of data points, which require a large number of neurons and hidden layers. A few DL frameworks have been developed in recent years, such as TensorFlow <ns0:ref type='bibr' target='#b114'>[103]</ns0:ref>, Caffe <ns0:ref type='bibr' target='#b115'>[104]</ns0:ref>, and Theano <ns0:ref type='bibr' target='#b116'>[105]</ns0:ref>, to ensure an efficient implementation of Deep Network (DN) architectures and avoid unnecessary coding from scratch <ns0:ref type='bibr' target='#b113'>[102]</ns0:ref>. Additionally, the DL method extracts features layer by layer, at both high and low levels.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> depicts the differences between ML and DL. It shows that ML requires the security practitioners to extract the features manually and select the ML classifier suitable for the selected features. However, DL involves an automatic feature extraction part and malware classification. It trains the model end-to-end with the Android application package (.apk) files and their categories, each labelled as malware or benign. The DL model gains and creates a prediction model through the automatic selection of the features.</ns0:p><ns0:p>As one of the major models in deep learning, the convolutional neural network (CNN) has been widely used for image recognition <ns0:ref type='bibr' target='#b117'>[106]</ns0:ref>. In the past few years, many studies have implemented Deep Neural Networks (DNN) to classify malware <ns0:ref type='bibr' target='#b118'>[107]</ns0:ref>[108] <ns0:ref type='bibr' target='#b101'>[92]</ns0:ref>. Additionally, although recurrent neural networks have been explored since the 1980s, they were uncommercial for a long time due to several issues <ns0:ref type='bibr' target='#b118'>[107]</ns0:ref>. Several machine learning methods have addressed network or malware attacks on personal computers or mobile devices. Simultaneously, several techniques were proposed by researchers who applied DL algorithms to detect or categorise malware using static, dynamic, or hybrid approaches, to detect network intrusions and phishing/spam attacks, and to inspect website defacements <ns0:ref type='bibr' target='#b120'>[109]</ns0:ref>.</ns0:p></ns0:div>
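The sketch below is a toy illustration (with made-up data and layer sizes) of how a deep learning model such as a 1D convolutional network can be trained end-to-end on opcode index sequences; it assumes TensorFlow/Keras is installed and is not a reproduction of any cited architecture.

import numpy as np
from tensorflow.keras import layers, models

VOCAB = 32      # assumed number of distinct opcodes
SEQ_LEN = 100   # assumed fixed sequence length after padding/truncation

# Made-up opcode index sequences and labels (1 = malware, 0 = benign).
X = np.random.randint(0, VOCAB, size=(64, SEQ_LEN))
y = np.random.randint(0, 2, size=(64,))

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=8, input_length=SEQ_LEN),
    layers.Conv1D(16, 5, activation="relu"),   # learns local opcode patterns
    layers.GlobalMaxPooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # malware probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:3]))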
<ns0:div><ns0:head>e) Ensemble method</ns0:head><ns0:p>Another technique in machine learning and pattern recognition is ensemble learning. An increase in the implementation of ensemble learning methods can be seen in the computational biology field due to their unique advantages in managing small sample sizes, high dimensionality, and complex data structures <ns0:ref type='bibr' target='#b121'>[110]</ns0:ref>. The function of ensemble learning is to build a prediction model by combining the strengths of a collection of simpler base models <ns0:ref type='bibr' target='#b122'>[111]</ns0:ref>. A few approaches are applied in ensemble methods, such as bagging, boosting, and random forest. The ensemble method is also a simple device that is popular especially for improving the predictive performance of a base procedure.</ns0:p><ns0:p>The bagging procedure is a variance reduction scheme for some base procedure, while boosting methods mainly reduce the bias of the base procedure. This indicates the significant difference between the bagging and boosting ensemble methods. Compared to bagging and boosting, the random forest approach is a highly distinguished ensemble method, first proposed by Amit and Geman <ns0:ref type='bibr' target='#b123'>[112]</ns0:ref>. While the performance of random forests is on the same level as boosting, it can exhibit better performance from the perspective of prediction.</ns0:p><ns0:p>Table <ns0:ref type='table'>8</ns0:ref> shows previous works using the different types of machine learning mentioned before. From the table, we can summarise that classical learning is still valid for use in experiments, but many works now use deep learning and graph-based methods. The current trend shows a growing demand for deep learning techniques to defend against an increasing number of sophisticated malware attacks, as deep-learning-based detection has become a vital component of economic and national security. Many recent studies on Android malware detection have leveraged graph analysis, as discussed in the next section.</ns0:p></ns0:div>
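A short sketch of the three ensemble flavours named above is given below, again on the illustrative feature matrix used earlier; the classifiers and their default parameters are scikit-learn's, not those of any specific cited study.

from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X = [[1, 1, 0, 0], [1, 0, 1, 1], [0, 0, 0, 1], [1, 1, 1, 0],
     [0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0], [1, 1, 0, 1]]
y = [1, 1, 0, 1, 0, 1, 0, 1]

models = {
    "bagging": BaggingClassifier(n_estimators=10, random_state=0),
    "boosting": GradientBoostingClassifier(n_estimators=10, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=10, random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=2)   # tiny data, 2-fold for illustration
    print(name, scores.mean())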
<ns0:div><ns0:head n='2.4.2.'>Graph</ns0:head><ns0:p>The use of a graph is another method in machine learning and pattern recognition, which is performed by investigating data-flow and control-flow analysis. It is also capable of identifying unknown malware through the examination of the flow of the code. This method is preferred by security analysts because the flow remains uniform even when malware authors change the API calls to avoid intrusion detection systems. The types of analysis in the graph method include the call graph, the inter-component call graph (ICCG), the control-flow graph (CFG), and the dependence graph, while Table <ns0:ref type='table' target='#tab_4'>9</ns0:ref> lists the previous works of research on static malware detection using the graph method.</ns0:p><ns0:p>A call graph (also known as a flow graph) is a graph representing the control and data flow of the application, which captures the exchange of information between procedures. A node in the graph represents a procedure or function, and an edge from node x to node y indicates that procedure x calls procedure y. Apposcopy <ns0:ref type='bibr' target='#b57'>[52]</ns0:ref> presents its own form of call graph, known as the inter-component call graph (ICCG), to match malware signatures. ICCG is a directed graph whose nodes are the components of an application, and it is built from a call graph and the results of pointer analysis. The objective of Apposcopy is to capture the inter-component communication (ICC), calls, and flow relations.</ns0:p><ns0:p>Another graph, called the control flow graph (CFG), is also applied by many security analysts to investigate malware programs. Woodpecker <ns0:ref type='bibr' target='#b54'>[49]</ns0:ref> creates the CFG starting from each entry point (activity, service, receiver, content provider) defined in the permissions stated in the androidmanifest.xml file. Furthermore, the public interfaces or services reachable from an execution path are discovered through the flow graph. Woodpecker considers it a capability leak if such a path is not guarded by the permission requirement nor prevented from being invoked by another unrelated application. The same graph was applied in subsequent works of research, namely Flowdroid <ns0:ref type='bibr' target='#b60'>[54]</ns0:ref>, Dendroid <ns0:ref type='bibr' target='#b95'>[86]</ns0:ref>, <ns0:ref type='bibr' target='#b85'>[76]</ns0:ref>, Asdroid <ns0:ref type='bibr' target='#b59'>[53]</ns0:ref>, Anadroid <ns0:ref type='bibr' target='#b55'>[50]</ns0:ref>, Adrisk <ns0:ref type='bibr' target='#b48'>[44]</ns0:ref>, and Dexteroid <ns0:ref type='bibr' target='#b91'>[82]</ns0:ref>.</ns0:p><ns0:p>Another graph is the dependency graph, which illustrates the dependencies of several objects on each other. An example can be seen in the dead code elimination process, in which the graph identifies the dependencies between operations and variables. If no operation depends on certain variables, these variables are considered dead and should be deleted. The studies which adopted this type of graph are CHEX <ns0:ref type='bibr' target='#b124'>[113]</ns0:ref>, Dnadroid <ns0:ref type='bibr' target='#b126'>[114]</ns0:ref>, Droidlegacy <ns0:ref type='bibr' target='#b49'>[45]</ns0:ref>, and <ns0:ref type='bibr' target='#b56'>[51]</ns0:ref>.</ns0:p></ns0:div>
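To illustrate the call graph idea in a hedged, simplified form (real tools such as those cited above work on a much richer program representation), the snippet below builds a method-level call graph from the decoded smali files by linking each method to the targets of its invoke- instructions, using networkx.

import re
import networkx as nx
from pathlib import Path

METHOD_RE = re.compile(r"^\.method.* (\S+)\(")                      # method definitions
INVOKE_RE = re.compile(r"invoke-\w+.*?, (L[\w/$]+;->[\w$<>]+)\(")   # call targets

def call_graph(decoded_dir):
    """Very simplified smali call graph: caller method -> callee method."""
    g = nx.DiGraph()
    for smali in Path(decoded_dir).glob("smali*/**/*.smali"):
        current = None
        for line in smali.read_text(errors="ignore").splitlines():
            line = line.strip()
            m = METHOD_RE.match(line)
            if m:
                current = f"{smali.stem}.{m.group(1)}"
            elif line.startswith(".end method"):
                current = None
            elif current:
                call = INVOKE_RE.search(line)
                if call:
                    g.add_edge(current, call.group(1))
    return g

g = call_graph("app_decoded")
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")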
<ns0:div><ns0:head n='2.4.3'>Others</ns0:head><ns0:p>Besides machine learning and graphs, several security practitioners adopted different methods, such as the Normalized Compression Distance (NCD). Adopted in the studies by <ns0:ref type='bibr' target='#b128'>[115]</ns0:ref> and <ns0:ref type='bibr' target='#b129'>[116]</ns0:ref>, this method can measure the similarities between malware samples and represent them in the form of a distance matrix. Although many malware samples evolve over time, some of their behaviour patterns remain similar to each other. The calculation of the similarities using NCD identifies the malware samples that share the same distance.</ns0:p><ns0:p>A study known as DelDroid <ns0:ref type='bibr' target='#b130'>[117]</ns0:ref> implemented a method called the Multiple-Domain Matrix (MDM). This method refers to a complex system which models multiple domains and is based on the Design-Structure Matrix (DSM) model. Furthermore, an MDM is formed by connecting DSM models with each other. The study initialised multiple domains in the MDM to represent the architecture of an Android system for privilege analysis. To illustrate, the incorporation of certain definitions in the MDM representation of the architecture enables DelDroid to identify communication between applications which may result in an unauthorised malware attack.</ns0:p><ns0:p>Another previous static experiment was conducted on the MD5 signature of the application to detect malware <ns0:ref type='bibr' target='#b65'>[58]</ns0:ref>. In the first process, the study assigned the application level C (the lowest level of suspicion), followed by calculation and cross-referencing in the database of signatures. The application would be recorded if the result was positive. However, it would be identified as malware if the result of the suspicion was R. The system examined the files inside the application to find any matching MD5 signature.</ns0:p><ns0:p>Androsimilar <ns0:ref type='bibr' target='#b100'>[91]</ns0:ref> practised a method known as a statistical similarity digest hashing scheme, which inspects the similarity of the byte stream based on robust statistical malicious static features. It is also a foot-printing method, which identifies the regions or areas of statistical similarity with known malware. Following that, it generates variable-length signatures to detect unknown malware (zero-day).</ns0:p><ns0:p>The following study is DroidMOSS <ns0:ref type='bibr' target='#b69'>[62]</ns0:ref>, which distinguishes between repackaged (modified) and original applications. This function is important because many repackaged Android applications contain malicious activities. This study used a fuzzy hashing technique, which generated fingerprints to localise and detect any previously applied modifications to the original application. It then calculated the edit distance to measure the similarity between the applications. When the similarity result exceeds a certain value, the application is considered a modified sample.</ns0:p><ns0:p>Under another static experiment, a study by <ns0:ref type='bibr' target='#b93'>[84]</ns0:ref> adopted a method known as a risk score weight, which was performed through the calculation of a risk score based on the selected features in the code. 
When the features were identified, the score increased according to certain risky patterns of properties. In particular, the patterns were based on the different likelihoods of the given situations between normal and malware samples. Lastly, the percentage likelihood was calculated.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>9</ns0:ref> shows that both ML and graph methods were popular among security practitioners in static analysis. The graph method was found to exceed the ML method in 2011, 2012, and 2014, although ML was preferred over graphs in the other years. This situation reveals that graphs and ML are both favourable options in static experiments.</ns0:p><ns0:p>A study started to utilise DL (part of ML) in static experiments in 2019, which also combined DL (convolutional neural network, CNN) with the control flow graph (CFG). Notably, since API was the only feature utilised in that study, many future opportunities are available to combine different DL classifiers (recurrent neural network, RNN; generative adversarial network, GAN; or deep belief network, DBN) with other features besides API and with different types of graphs. It is noteworthy that DL could also be combined with NCD and MDM.</ns0:p></ns0:div>
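As a hedged sketch of the NCD approach described in Section 2.4.3 (using zlib as the compressor; the cited studies may use different compressors and inputs), the pairwise distance between two application byte strings can be computed as follows, with values near 0 indicating highly similar samples.

import zlib
from itertools import combinations

def ncd(x, y):
    """Normalized Compression Distance between two byte strings."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Illustrative "samples" (in practice: dex bytecode or opcode sequences of apps).
samples = {
    "app_a": b"invoke-virtual getDeviceId sendTextMessage" * 20,
    "app_b": b"invoke-virtual getDeviceId sendTextMessage extra" * 20,
    "app_c": b"const-string hello world print" * 20,
}
for (na, a), (nb, b) in combinations(samples.items(), 2):
    print(na, nb, round(ncd(a, b), 3))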
<ns0:div><ns0:head n='3.'>Open research issues</ns0:head><ns0:p>This section explains the issues involved in static analysis, which were derived from previous research articles. Specifically, a discussion is made on the advantages and disadvantages of the open-source operating system, which stem from the availability of the kernel application code. Another issue in static analysis is code obfuscation, which is used by malware developers to make detection more difficult, together with the research done to counter obfuscation. We then review the static analysis articles overall, covering the detection of unknown malware, the combination of static and dynamic analysis, resource consumption, future work, and features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.'>Open source advantages and disadvantages</ns0:head><ns0:p>Because Android is an open-source operating system, the amount of Android malware continues to increase. One element of this openness is the availability of the kernel application code. For instance, Samsung officially provides its kernel operating system to the public <ns0:ref type='bibr' target='#b131'>[118]</ns0:ref> for kernel enhancement or private purposes, and anyone may download it according to the mobile device version. The code is also available from Lenovo <ns0:ref type='bibr' target='#b132'>[119]</ns0:ref>, LG <ns0:ref type='bibr' target='#b133'>[120]</ns0:ref>, Sony <ns0:ref type='bibr' target='#b134'>[121]</ns0:ref>, HTC <ns0:ref type='bibr' target='#b135'>[122]</ns0:ref>, Asus <ns0:ref type='bibr' target='#b136'>[123]</ns0:ref>, Motorola <ns0:ref type='bibr' target='#b137'>[124]</ns0:ref>, and other mobile providers. Consequently, this code availability may allow malware writers to identify and exploit vulnerabilities of the kernel.</ns0:p><ns0:p>Even though open-source availability makes the code easy to access, it is equally available to security practitioners for research. These practitioners include university researchers, staff of the mobile providers, Android (Google) researchers, freelance programmers, and the wider Android community. They invent various frameworks, algorithms, and suggestions to improve the security of the operating system. The kernel is updated every year, and the mobile providers are informed regarding the kernel updates. Such studies, including static analysis, increase the confidence of Android users worldwide.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.'>Obfuscation</ns0:head><ns0:p>Static analysis involves reverse engineering, such as decompiling and disassembling, while malware developers utilise obfuscation to make the decompiling process more difficult and confusing. Obfuscation is a technique that increases the difficulty of understanding a programme, so that security analysts fail to distinguish between malware and benign applications. Notably, it is a well-known obstacle for static analysis. Figure <ns0:ref type='figure' target='#fig_8'>10</ns0:ref> illustrates the types of obfuscation, which include encryption, oligomorphic, polymorphism, and metamorphism <ns0:ref type='bibr' target='#b138'>[125]</ns0:ref> <ns0:ref type='bibr' target='#b139'>[126]</ns0:ref>.</ns0:p><ns0:p>The encryption method is extensively practised by malware writers. In this case, the important code or strings that would reveal the malware to a detector or security practitioner are identified, and the code is then encrypted and converted to ciphertext. Various algorithms can be used to encrypt the code, such as Caesar, Playfair, the Data Encryption Standard (DES), the Advanced Encryption Standard (AES), and Rivest-Shamir-Adleman (RSA).</ns0:p></ns0:div>
<ns0:div><ns0:p>Therefore, for the security practitioner to understand the behaviour of the malware, the encrypted code must be decrypted using the correct decryptor <ns0:ref type='bibr' target='#b28'>[26]</ns0:ref>.</ns0:p><ns0:p>Besides being capable of mutating or changing its decryptor, oligomorphic malware is also able to generate hundreds of different decryptors <ns0:ref type='bibr' target='#b139'>[126]</ns0:ref>. Consequently, the security practitioner may need to try different decryptors multiple times until the code is returned to its normal string. Nevertheless, this type of obfuscation does not affect the size or shape of the code. Another type of obfuscation is polymorphism, a decryptor-based obfuscation that does affect the size or shape of the code. Compared to the oligomorphic type, it is more advanced due to the incorporation of code transposition, register reassignment, dead code (nop) insertion, and armoring. Meanwhile, metamorphism goes beyond the oligomorphic and polymorphic types because no decryptor is present in its mechanism; its constant body can therefore be hidden from memory, which increases the difficulty of detecting the malware through static investigation.</ns0:p><ns0:p>The following are the obfuscation methods regularly used by polymorphic and metamorphic obfuscation <ns0:ref type='bibr' target='#b139'>[126]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>a) Code transposition</ns0:head><ns0:p>Code transposition is a method that restructures the order of the original code without affecting its behaviour. This can be performed in two ways. The first is to randomly restructure the original code by inserting jumps or unconditional branches; security practitioners can detect this obfuscation by removing those jumps or branches. The second is to produce new generations by selecting and reordering instructions that are independent of one another. The latter is more challenging for the malware writer to implement, but it is also more difficult for security practitioners to detect.</ns0:p></ns0:div>
<ns0:div><ns0:head>b) Register reassignment</ns0:head><ns0:p>Register reassignment is another obfuscation method, which switches the registers used by the code from one generation to another. It does not change the behaviour of the code, and the programme remains similar to its original state.</ns0:p><ns0:p>c) Dead-code/nop insertion Dead-code insertion, also known as nop insertion, adds several unnecessary instructions to the code while keeping its behaviour similar to the original. Nevertheless, in certain situations, security practitioners are able to detect this obfuscation by removing the inserted code.</ns0:p></ns0:div>
<ns0:div><ns0:head>d) Substitution of equivalent instruction sequences</ns0:head><ns0:p>The original code is changed through the substitution of several instructions with equivalent ones. To illustrate, a SUB instruction may be replaced by XOR, while PUSH may be replaced by MOV.</ns0:p></ns0:div>
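As a simplified illustration of the encryption-type obfuscation described at the beginning of this subsection (a hypothetical sketch, not taken from any reviewed study; real samples typically use DES, AES or RSA rather than the single-byte XOR used here for brevity), a suspicious string constant can be kept only in encrypted form and recovered at run time, so that plain string matching during static analysis does not reveal it.

```python
# Minimal sketch of string-encryption obfuscation (illustrative only).

KEY = 0x5A

def xor_bytes(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# In a real sample only the ciphertext bytes would be embedded in the APK;
# here the ciphertext is derived inline so the example stays self-contained.
ENCRYPTED_C2 = xor_bytes(b"http://example-malicious-c2.test/upload", KEY)

def get_c2_address() -> str:
    # The clear-text string only exists once the code actually runs,
    # which is why a purely static string search on the packaged binary
    # would miss it.
    return xor_bytes(ENCRYPTED_C2, KEY).decode()

if __name__ == "__main__":
    print(get_c2_address())
```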
<ns0:div><ns0:head n='3.2.1'>Research to counter obfuscation</ns0:head><ns0:p>To overcome obfuscation, many studies have explored different approaches. The study by <ns0:ref type='bibr' target='#b126'>[114]</ns0:ref> used program dependence graphs (PDG) to resist the program transformations applied in obfuscation. Droidlegacy <ns0:ref type='bibr' target='#b49'>[45]</ns0:ref> uses graph nodes to represent Java classes and detects light obfuscation. Droidanalytics <ns0:ref type='bibr' target='#b98'>[89]</ns0:ref> and Drebin <ns0:ref type='bibr' target='#b61'>[55]</ns0:ref> extract the API calls that the code invokes at execution time. To handle control-flow obfuscation, Apposcopy uses inter-component communication (ICC) to write its signatures. Research by <ns0:ref type='bibr' target='#b141'>[127]</ns0:ref> uses jadx, a reverse engineering tool, to de-obfuscate the obfuscated code. A summary of studies conducted to overcome obfuscation is shown in Table <ns0:ref type='table' target='#tab_5'>10</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.2'>Advantage of obfuscation</ns0:head><ns0:p>Although obfuscation is adopted by malware writers and attackers to evade detection, it also offers the following advantages from other points of view:</ns0:p><ns0:p>a) Reduction of the size of the application Google <ns0:ref type='bibr' target='#b142'>[128]</ns0:ref> encourages developers to enable shrinking in their release builds to remove any unused code and resources. Furthermore, because obfuscation shortens the names of the classes and members in the code, the developer is able to reduce the size of the application. Notably, application size is a significant concern on Android handheld devices (smartphones, smart glasses, and smartwatches) with limited storage and resources.</ns0:p><ns0:p>b) The difficulty for the malware writer to understand an obfuscated normal application In certain situations, malware writers need to reverse engineer a normal application in order to repackage it as malware. Obfuscation of the normal (benign) application code confuses them and makes it harder to steal private information or discover application vulnerabilities <ns0:ref type='bibr' target='#b143'>[129]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>c) Security practitioners can detect malware easily</ns0:head><ns0:p>Obfuscation can also facilitate the detection of malware by researchers <ns0:ref type='bibr' target='#b144'>[130]</ns0:ref>. For instance, in certain situations malware regularly exhibits similar obfuscation marks that are very unlikely to exist in a normal application, so security practitioners are able to detect malware from the presence of these marks. Given all of these advantages and drawbacks, continuous research on obfuscation is crucial to obtain better results in detecting malware through static analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.'>The list of all articles in the detection of malware in static analysis</ns0:head><ns0:p>To identify the trends in the detection of malware through static analysis, this section presents a list of previous works of research covering all areas (year, features, and classification). Table <ns0:ref type='table' target='#tab_0'>11</ns0:ref> lists the 2019 study DroidARA <ns0:ref type='bibr' target='#b145'>[131]</ns0:ref>, which performed an experiment that combined DL and graph methods to differentiate between malware and normal applications. It applied a call graph to extract the API features and a convolutional neural network (CNN) for classification. At the time of writing, this is a new trend in detecting Android malware, so future research is likely to see more combinations of this kind with different features.</ns0:p><ns0:p>From the lists, most researchers used API and manifest file features in their experiments to detect malware. This indicates that API calls are the code elements most commonly relied upon by malware developers when creating malware. An Android program is packaged as an Android application package (.apk) file, which is also used to install the application on Android-based mobile devices. Every app project must have an androidmanifest.xml file at the root of the project source set. This manifest file is regularly stored in binary form inside the APK; when it is opened in the APK Analyzer, it is restructured and presented in XML form, with manifest entries contributed by library dependencies merged into the final androidmanifest.xml record. The remaining package files fall into the apk, xml, and dex property features. A minimal sketch of inspecting these files inside an APK is given at the end of this subsection.</ns0:p><ns0:p>Besides the combination of DL and graph, ML and graph methods were also combined in the studies by <ns0:ref type='bibr' target='#b96'>[87]</ns0:ref> in 2016 and <ns0:ref type='bibr' target='#b85'>[76]</ns0:ref> in 2012. These studies utilised the same type of graph, the control flow graph (CFG), and indicate that combining ML and graph methods increased the detection results; future work is therefore suggested to test this combination with different static features. Other classification approaches (Multiple-Domain Matrix (MDM), MD5 signatures, similarity digest hashing, normalized compression distance (NCD), and fuzzy hashing) were also useful in detecting malware with static features, and they offer further opportunities for future combinations with ML, DL, and graph methods.</ns0:p></ns0:div>
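To make the feature sources discussed above more tangible, the following is a minimal, hypothetical Python sketch (not part of any reviewed study) that lists the dex, manifest, and resource entries inside an APK. Since an APK is a ZIP archive, the standard library suffices for this inventory; decoding the binary AndroidManifest.xml or the DEX bytecode itself would additionally require a reverse engineering tool such as those surveyed earlier.

```python
import zipfile
from collections import Counter

def summarize_apk(apk_path: str) -> Counter:
    """Count the kinds of files packaged inside an APK (a ZIP archive)."""
    kinds = Counter()
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            if name.endswith(".dex"):
                kinds["dex"] += 1              # compiled Dalvik bytecode
            elif name == "AndroidManifest.xml":
                kinds["manifest (binary XML)"] += 1
            elif name.endswith(".xml"):
                kinds["xml resource"] += 1
            else:
                kinds["other"] += 1
    return kinds

# Hypothetical usage:
# print(summarize_apk("sample.apk"))
```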
<ns0:div><ns0:head n='3.4.'>Detect unknown malware</ns0:head><ns0:p>In principle, static analysis struggles to discover new or unknown malware because it only examines the code of the application without executing it, and certain malware only executes particular parts when the application actually runs. Given this drawback of static analysis in identifying unknown malware, many security practitioners started to adopt machine learning, such as <ns0:ref type='bibr' target='#b146'>[132]</ns0:ref>[13], Drebin <ns0:ref type='bibr' target='#b61'>[55]</ns0:ref>, <ns0:ref type='bibr' target='#b63'>[57]</ns0:ref>, Droidapiminer <ns0:ref type='bibr' target='#b34'>[30]</ns0:ref>, <ns0:ref type='bibr' target='#b147'>[133]</ns0:ref>, Androsimilar <ns0:ref type='bibr' target='#b100'>[91]</ns0:ref>, <ns0:ref type='bibr' target='#b51'>[46]</ns0:ref>, <ns0:ref type='bibr' target='#b149'>[134]</ns0:ref>, <ns0:ref type='bibr' target='#b129'>[116]</ns0:ref>, <ns0:ref type='bibr' target='#b71'>[64]</ns0:ref>, and <ns0:ref type='bibr' target='#b72'>[65]</ns0:ref>. Similarly, the graph approach is also suitable for this task, as shown in <ns0:ref type='bibr' target='#b150'>[135]</ns0:ref>, Riskranker <ns0:ref type='bibr' target='#b151'>[136]</ns0:ref>, and Dendroid <ns0:ref type='bibr' target='#b95'>[86]</ns0:ref>: the study in <ns0:ref type='bibr' target='#b150'>[135]</ns0:ref> utilised a data dependence graph (DDG), while Riskranker and Dendroid employed the control-flow graph (CFG).</ns0:p></ns0:div>
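As a schematic illustration of how such supervised learning is typically applied to static features (a hedged sketch, not a reproduction of any specific study above), the hypothetical Python example below trains a random forest, one of the classifiers listed in Table 7, on binary permission/API indicator vectors and then scores previously unseen applications; the feature matrix here is randomly generated purely to keep the example runnable.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix: one row per app, one column per static
# feature (e.g., a requested permission or a sensitive API call),
# with 1 meaning the feature is present. Labels: 1 = malware, 0 = benign.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 50))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# "Unknown" samples are simply apps the classifier has never seen;
# learned feature patterns, not exact signatures, drive the decision.
print(classification_report(y_test, clf.predict(X_test)))
```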
<ns0:div><ns0:head n='3.5.'>Combination of static and dynamic analyses</ns0:head><ns0:p>It was shown in <ns0:ref type='bibr' target='#b138'>[125]</ns0:ref> that static analysis alone is inadequate for the detection of malware and should be combined with dynamic analysis to detect malware effectively. In contrast to static analysis, dynamic analysis is not defeated by obfuscation techniques. Essentially, because each type of analysis (static and dynamic) has its own advantages and drawbacks, combining static and dynamic analyses increases the effectiveness of countermeasures against malware.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6.'>Resource consumption in Android OS and deep learning</ns0:head><ns0:p>Deep learning (DL), also known as deep neural networks or deep neural learning, is a subset of machine learning in artificial intelligence (AI). Notably, DL is capable of learning from unlabeled and unstructured data and predicting the output. It imitates the human brain in processing data, developing patterns from that data, and making decisions. Current trends suggest that the DL technique has potential for further discovery: it automatically determines the most suitable features for prediction and classification. Currently, DL is widely used in almost every area, such as large-scale image recognition tasks, automated driving, new cancer cell detection, hearing and speech translation, and aerospace identification <ns0:ref type='bibr' target='#b152'>[137]</ns0:ref>.</ns0:p><ns0:p>However, DL requires substantial computing power, typically a large amount of graphics processing unit (GPU) capacity depending on the data to be processed <ns0:ref type='bibr' target='#b152'>[137]</ns0:ref>. This is an issue for detecting malware that attacks Android devices, because an Android mobile device has comparatively little computing power, which makes the adoption of DL a major concern. One option is to transfer information from the Android device to a cloud provider, execute the DL process there, and return the result to the device. Therefore, the large-scale adoption of DL remains possible as future work in static analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.7.'>Future work in static analysis</ns0:head><ns0:p>The review in the previous sections (Section 2) shows that many future opportunities exist for static analysis to detect malware that attacks Android. One opportunity is the combination of different DL classifiers (recurrent neural networks (RNN), generative adversarial networks (GAN) or deep belief networks (DBN)) with features other than API calls and with different types of graph. Section 3.3 shows that only one experiment, in 2019, began combining DL and graph methods, and it used a single API feature. Extending such combinations could increase the accuracy of detection results and lead to the identification of new malware families. It is also noteworthy that other future alternatives are available, such as combining DL with NCD and MDM.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.8.'>Popular features (API and manifest file)</ns0:head><ns0:p>Section 3.3 shows that many static analysis researchers frequently used the manifest file and API calls as features in their experiments. These popular features were examined by researchers from 2010 until 2019, as Android was officially updated and new features were added from time to time. Most malware still utilised similar features within this timespan, while some malware utilised the latest and updated features. Therefore, researchers need to keep a continuous focus on these popular features.</ns0:p><ns0:p>The manifest file is a single file containing numerous features, such as permissions, intents, hardware components, and application components, while the API is ready-made code that programmers use to develop their applications. It is therefore crucial for researchers to thoroughly scrutinise these two categories of features. Moreover, some researchers incorporated other features alongside the manifest and API for more effective malware detection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Motivated by the interest in exploring recent studies in static analysis, a review was performed of existing studies on Android malware detection by security investigators, organised by phases (reverse engineering, features, and classification). The review covered a ten-year range (2009 to 2019). The features used in static analysis were also reviewed; within this timespan, many security practitioners still preferred the API and manifest file features, indicating the continued relevance of these two feature types. Moreover, the latest trends in classification were highlighted, consisting of machine learning, deep learning, graph, and other methods; these trends have shown the relevance of the graph method in comparison with machine learning, and static analysis researchers have begun adopting deep learning in their detection. This article also discussed the open research issues in static analysis, including obfuscation as one of the weaknesses of static analysis and the methods of overcoming it, such as concentrating on the code constructs associated with obfuscation (DexClassLoader.loadClass, Crypto.spec.DESKeySpec, and Cipher.getInstance), using tools with a de-obfuscation option, and adopting graph nodes and program dependence graphs (PDGs).</ns0:p><ns0:note type='other'>Figure 2</ns0:note><ns0:p>Malware detection using static analysis: the static analysis operation consists of several steps, including dataset collection, reverse engineering, feature identification, and classification.</ns0:p></ns0:div>
<ns0:div><ns0:p>(Continuation of Table 1) ... features from native binaries of apps. <ns0:ref type='bibr' target='#b172'>[155]</ns0:ref> Static: API calls, intent actions and information flow. <ns0:ref type='bibr' target='#b173'>[156]</ns0:ref> Static: App's permissions. <ns0:ref type='bibr' target='#b174'>[157]</ns0:ref> Dynamic: Fingerprint. <ns0:ref type='bibr' target='#b175'>[158]</ns0:ref> Static: API call graphs. <ns0:ref type='bibr' target='#b176'>[159]</ns0:ref> Hybrid: API calls, code, Android Manifest, encryption or reflection. <ns0:ref type='bibr' target='#b177'>[160]</ns0:ref> Static: Minimum-Redundancy Maximum-Relevance (MRMR). <ns0:ref type='bibr' target='#b178'>[161]</ns0:ref> Static: Java Bytecode. <ns0:ref type='bibr' target='#b180'>[162]</ns0:ref> Dynamic: API call. <ns0:ref type='bibr' target='#b181'>[163]</ns0:ref> Static: API call and the names of functions and methods. <ns0:ref type='bibr' target='#b165'>[150]</ns0:ref> Static: Subgraph. <ns0:ref type='bibr' target='#b182'>[164]</ns0:ref> Static: n-opcode feature. <ns0:ref type='bibr' target='#b185'>[165]</ns0:ref> Dynamic: System call. <ns0:ref type='bibr' target='#b186'>[166]</ns0:ref> Static: Manifest file, apk file. <ns0:ref type='bibr' target='#b188'>[167]</ns0:ref> Hybrid: Malware signatures. <ns0:ref type='bibr' target='#b146'>[132]</ns0:ref> Static: Signature extraction, signature matching. <ns0:ref type='bibr' target='#b189'>[168]</ns0:ref> Dynamic: Fine-grained HTTP structural. <ns0:ref type='bibr' target='#b190'>[169]</ns0:ref> Static: API data dependency. <ns0:ref type='bibr' target='#b192'>[170]</ns0:ref> Static: API call, apk. <ns0:ref type='bibr'>[</ns0:ref></ns0:p><ns0:p>Previous static analysis research, which used the graph method. The types of analysis in the graph method include call graph, inter-component call graph (ICCG), control-flow graph (CFG), and dependence graph.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>: Previous static analysis research, which used the graph method (columns: Type of graph, Reference).</ns0:p></ns0:div>
<ns0:div><ns0:head>Call graph</ns0:head><ns0:p>Copes <ns0:ref type='bibr' target='#b53'>[48]</ns0:ref>, Leakminer <ns0:ref type='bibr' target='#b217'>[193]</ns0:ref>, Riskranker <ns0:ref type='bibr' target='#b151'>[136]</ns0:ref>, A3 <ns0:ref type='bibr' target='#b62'>[56]</ns0:ref> and <ns0:ref type='bibr' target='#b77'>[69]</ns0:ref>.</ns0:p><ns0:p>Inter-component call graph (ICCG): <ns0:ref type='bibr' target='#b57'>[52]</ns0:ref>.</ns0:p><ns0:p>Control flow graph (CFG): Woodpecker <ns0:ref type='bibr' target='#b54'>[49]</ns0:ref>, Flowdroid <ns0:ref type='bibr' target='#b60'>[54]</ns0:ref>, Dendroid <ns0:ref type='bibr' target='#b95'>[86]</ns0:ref>, <ns0:ref type='bibr' target='#b85'>[76]</ns0:ref>, Asdroid <ns0:ref type='bibr' target='#b59'>[53]</ns0:ref>, Anadroid <ns0:ref type='bibr' target='#b55'>[50]</ns0:ref>, Adrisk <ns0:ref type='bibr' target='#b48'>[44]</ns0:ref>, and Dexteroid <ns0:ref type='bibr' target='#b91'>[82]</ns0:ref>.</ns0:p><ns0:p>Dependency graph: CHEX <ns0:ref type='bibr' target='#b124'>[113]</ns0:ref>, Dnadroid <ns0:ref type='bibr' target='#b126'>[114]</ns0:ref>, Droidlegacy <ns0:ref type='bibr' target='#b49'>[45]</ns0:ref> and <ns0:ref type='bibr' target='#b56'>[51]</ns0:ref>.</ns0:p><ns0:p>Table 10 caption: Studies conducted to overcome obfuscation. To overcome obfuscation, many studies were conducted on different approaches.</ns0:p></ns0:div>
<ns0:div><ns0:p>Dendroid <ns0:ref type='bibr' target='#b95'>[86]</ns0:ref>, 2014: concentrates on the internal structure of code units (methods) to resist obfuscation.</ns0:p><ns0:p>Droidlegacy <ns0:ref type='bibr' target='#b49'>[45]</ns0:ref>, 2014: a graph node represents the Java class, which detects light obfuscation.</ns0:p><ns0:p>[127], 2017: uses Jadx (a reverse engineering tool), which provides a de-obfuscation option and is capable of de-obfuscating the obfuscated code with minimal error.</ns0:p><ns0:p>Table 11 caption: The detection of malware, which attacks Android OS, based on previous static analysis. To identify the trends in the detection of malware through static analysis, the table lists previous works of research covering all areas (year, features, and classification). The body of Table 11 (per-reference markers for the features and classification methods used, covering references [13], [45], [52]-[55], [57]-[59], [61], [65], [67], [78], [80], [82], [90], [92]-[94], [132], [135], [158], [172]-[174], [187]-[189], and [194]-[208]) was flattened during extraction; only the reference numbers are recoverable here.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>c) Application component PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 7 illustrates the ML taxonomy, which starts with classical learning. a) Supervised Learning PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>d) Neural Network and Deep Learning PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1 The</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>the API calls in methods and classes, which will be executed in the run time. Additionally, the generated signature, which is based on the analystdefined API, which have the ability to update flexibly. Apposcopy [52] 2014 Includes the predicate inter-component communication (ICC), which allows the writing of signatures, which are resilient to high-level control flow obfuscation. Drebin [55] 2014 DREBIN extracts API calls related to obfuscation and loading of code, such as DexClassLoader.loadClass() and Cipher.getInstance(). PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>PeerJ1</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)Manuscript to be reviewedComputer Science Table1: The detection of malware, which attacks Android OS, based on previous static analysis</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,229.87,525.00,116.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,229.87,525.00,216.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,229.87,525.00,339.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,220.38,525.00,416.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,229.87,525.00,414.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,220.38,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,259.62,525.00,291.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='49,42.52,229.87,525.00,279.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>1 Table 1 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Comparison malware analysis techniques</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Year</ns0:cell><ns0:cell>References</ns0:cell><ns0:cell>Analysis</ns0:cell><ns0:cell>Features</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[16]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='2'>Texture, color, text</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[138]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell>permissions,</ns0:cell><ns0:cell>API</ns0:cell><ns0:cell>calls,</ns0:cell><ns0:cell>network</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>addresses</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[139]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='3'>Assembly, Dex, Xml, Apk</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[140]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell>CFG, DFG</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[141]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell>API calls</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[142]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='3'>Repackaging and standalone</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[143]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='4'>Root node, decision nodes, and leaf nodes</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[144]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[145]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell>Code metric</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[146]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='4'>Platform-based permissions, hard-ware</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>components, and suspicious API calls</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[147]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='3'>Permissions and API calls</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[148]</ns0:cell><ns0:cell>Hybrid</ns0:cell><ns0:cell cols='4'>Manifest file (i.e., number of activities,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>permissions, receivers, filters), and the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>source code analysis</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[149]</ns0:cell><ns0:cell>Hybrid</ns0:cell><ns0:cell>API call</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[150]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='4'>Weighted-sensitive-API-call-based graph</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[151]</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell cols='4'>Enabling the recording of parameters and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>return value of an API call</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>[152]</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>transitions</ns0:cell><ns0:cell cols='2'>probabilities,</ns0:cell><ns0:cell>states</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>frequencies, and aggregated state</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>frequencies grouped</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>[153]</ns0:cell><ns0:cell>Hybrid</ns0:cell><ns0:cell cols='4'>number of activities, services and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>receivers given in the Manifest file and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>the size of the APK file</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>[154]</ns0:cell><ns0:cell>Static</ns0:cell><ns0:cell cols='4'>API usage, reflection-based features, and</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Machine learning and its classifier used in studies.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Machine learning types supervised and unsupervised with the classifier and the respective</ns0:cell></ns0:row><ns0:row><ns0:cell>articles. To detect Android malware with static features, the supervised learning method is</ns0:cell></ns0:row><ns0:row><ns0:cell>widely used by security practitioners.</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)Manuscript to be reviewedComputer Science1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Machine learning and its classifier used in studies</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Machine learning type Classifier</ns0:cell><ns0:cell /><ns0:cell>Reference</ns0:cell></ns0:row><ns0:row><ns0:cell>Supervised</ns0:cell><ns0:cell cols='2'>K-nearest neighbor</ns0:cell><ns0:cell>[181] [30]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Support</ns0:cell><ns0:cell>vector</ns0:cell><ns0:cell>[30] [76] [75] [55]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>machine (svm)</ns0:cell><ns0:cell>[181] [182] [55]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CART</ns0:cell><ns0:cell /><ns0:cell>[79]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Adaboost</ns0:cell><ns0:cell /><ns0:cell>[75] [59]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bayes</ns0:cell><ns0:cell /><ns0:cell>[13] [132] [73] [77]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>[64]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Logistic Regression</ns0:cell><ns0:cell>[78]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Prism (PART)</ns0:cell><ns0:cell>[68] [57] [64]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Voting feature interval</ns0:cell><ns0:cell>[64]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(vfi)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random forest</ns0:cell><ns0:cell>[64] [77] [30] [75]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Sequential</ns0:cell><ns0:cell>minimal</ns0:cell><ns0:cell>[77]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>optimisation (smo)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Instance-based</ns0:cell><ns0:cell>[77]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>learning</ns0:cell><ns0:cell>with</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>parameter k (ibk)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Simple logistic</ns0:cell><ns0:cell>[77]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Multilayer perceptron [65]</ns0:cell></ns0:row><ns0:row><ns0:cell>Unsupervised</ns0:cell><ns0:cell>K-means</ns0:cell><ns0:cell /><ns0:cell>[131] [47] [63] [79]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Normalised</ns0:cell><ns0:cell /><ns0:cell>[113] [114]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Compression distance</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(NCD)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>2 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 9 (on next page)</ns0:head><ns0:label>9</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 10 (on next page)</ns0:head><ns0:label>10</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>1 Table 1 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Studies conducted to overcome obfuscation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>References</ns0:cell><ns0:cell>Year Solution for the obfuscation</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>DNADroid [114] 2012 Using programme dependence graphs</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(PDGs), DNADroid can prevent typical</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>program transformations in obfuscation.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 11 (on next page)</ns0:head><ns0:label>11</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:note place='foot' n='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54261:2:0:NEW 14 Mar 2021) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Journal: PeerJ Computer Science
Manuscript Reference Number:
Title: Malware detection using static analysis in Android: A review of FeCO (Features, Classification, and Obfuscation)
Authors: Rosmalissa Jusoh, Ahmad Firdaus, Shahid Anwar, Mohd Zamri Osman, Mohd Faaizie Darmawan, Mohd Faizal Ab Razak
Document Correction Form:
Dear Editor,
We highly appreciate your thorough investigation of our paper. We would like to thank your team for reviewing the paper and making several suggestions for improvement. We would also like to thank the anonymous reviewers for their excellent suggestions and comments. Based on the reviews, we have made the required changes to the paper, which are incorporated as follows. All the reviews provided by the reviewers have been honored, and worthy suggestions have been accommodated in the revised manuscript. The revised manuscript, along with the marked revisions, can be found in the Word file.
Reviewer #2 Comment
Comment 1
Authors’ Response
Different reverse engineering tools are mentioned in Section 2.2. The studies they are used and the way they are used are expressed. However, the reverse engineering tools themselves are not cited.
Thank you very much for the insightful feedback. It is our sincere hope that the quality meets the reviewer’s requirement.
We already cite the source of the reverse engineering tools in Table 6.
Comment 2
Authors’ Response
In the description of Figure 8, the expression '(B) However, DL involves automatic feature selection and malware classification' is used. However, in the B part of the figure, 'feature extraction + classification' expression is used. This situation causes the two different terms feature extraction and feature selection to be perceived as the same thing. Likewise, there is the same confusion in the paragraph starting at line 470.
We kindly appreciate your comments. The changes have been done to the figure description. We choose the words feature extraction as stated in Figure 8 rather than feature selection. So, the new description as below:
(A) ML extract the features manually and select the ML classifier. (B) DL involves automatic feature extraction and malware classification.
The statement in the paragraph starting at line 470 (more specifically, at line 472) has been revised by adding the word “automatic”. The new statement reads as follows:
However, DL involves automatic feature extraction part and malware classification.
Comment 3
Authors’ Response
At the end of the paragraph starting on line 470, publication 106 is cited. However, I could not see the direct relationship of the cited work with DL or automatic feature extraction. In Table 8, publication 106 is cited again and the year of publication is written as 2015. However, the year of publication 106 in the references is 2019. I think there is a confusion in the citations
Thank you for your comment.
We already removed the 106 citation as it is not related to the work. In Table 8, the correct publication was cited with the correct year (2018) in references.
Comment 4
Authors’ Response
Although the last reference number is 138 in the references section, I see that there are higher numbered references in Table 11. I think there are missing references in the references table.
We kindly appreciate your comments.
Some references went missing when we refreshed the page without the tables. We have corrected the numbering of the references cited in Table 11, so the total number of references is now 208.
" | Here is a paper. Please give your review comments after reading it. |
88 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Today, increased attention is drawn towards network representation learning, a technique that maps nodes of a network into vectors of a low-dimensional embedding space. A network embedding constructed this way aims to preserve nodes similarity and other specific network properties. Embedding vectors can later be used for downstream machine learning problems, such as node classification, link prediction and network visualization.</ns0:p><ns0:p>Naturally, some networks have text information associated with them. For instance, in a citation network, each node is a scientific paper associated with its abstract or title; in a social network, all users may be viewed as nodes of a network and posts of each user as textual attributes. In this work, we explore how combining existing methods of text and network embeddings can increase accuracy for downstream tasks and propose modifications to popular architectures to better capture textual information in network embedding and fusion frameworks.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Many real-world data can be modeled as graphs: citation networks, social networks, knowledge databases.</ns0:p><ns0:p>Ability to analyze such data structures is crucial for a great variety of applications. For instance, when social networks try to get new users to subscribe, they need to solve a link prediction problem (LPP) <ns0:ref type='bibr' target='#b0'>(Backstrom and Leskovec (2011)</ns0:ref>). Telecom companies' marketing departments might want to segment users according to their behavior within a network of calls, which can be stated as a node clustering problem <ns0:ref type='bibr' target='#b36'>(Zhu et al. (2011)</ns0:ref>). Biologists need to find out the structural roles of proteins via analyzing their interaction network, requiring a solution for node classification problem <ns0:ref type='bibr' target='#b7'>Do et al. (2020)</ns0:ref>.</ns0:p><ns0:p>All problems mentioned above correspond to classic machine learning problems applied to networks, with every network represented by a graph and attributes of its components, such as nodes or edges. Solving machine learning problems on network data require vector representation for object features, including graph structure. To be able to solve these problems, one has to develop the efficient representation of a network that will preserve attribute features and graph structure, and will be feasible for existing machine learning frameworks.</ns0:p><ns0:p>Historically, the first way to represent a graph is the adjacency matrix. This representation has two significant drawbacks. Firstly, it captures only direct relationships between nodes. Secondly, for realworld networks, the adjacency matrix tends to be very sparse and does not directly represent structural features apart from first-order proximity.</ns0:p><ns0:p>Network Representation Learning (NRL) techniques were created to mitigate the problems mentioned above. The main idea of NRL is to map nodes (or edges) of a network into low-dimensional space preserving their topological structure from the network. The first NRL methods were mostly based on matrix factorization <ns0:ref type='bibr' target='#b25'>(Roweis and Saul (2000)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Belkin and Niyogi (2002)</ns0:ref>). These methods do solve the dimensionality problem but are highly computationally expensive. A more advanced approaches use random walks on networks to approximate different kinds of similarity matrices <ns0:ref type='bibr' target='#b23'>(Perozzi et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b8'>Grover and Leskovec (2016)</ns0:ref>). These methods are very scalable and, therefore, can be applied even to large networks.</ns0:p><ns0:p>Quite often, nodes of a network have different kinds of attributes associated with them. This work is concerned with one type of attributes -textual information. The problem of efficient representation of textual information is very similar to the same problem with graphs. The classic techniques, such as Bag of Words (BoW) suggested by <ns0:ref type='bibr' target='#b10'>Harris (1954)</ns0:ref> and Term Frequency -Inverse Document Frequency (TF-IDF) suggested by <ns0:ref type='bibr' target='#b26'>Salton and Buckley (1988)</ns0:ref>, encode each word as a one-hot vector and represent a document as a sum of representations of all words (using certain coefficients). These methods are straightforward but produce very sparse representations and do not consider the order of words. 
A more advanced approach, called Word2Vec <ns0:ref type='bibr' target='#b18'>(Mikolov et al. (2013)</ns0:ref>), employs a Skip-Gram model to learn semantics of words through their context. This method produces dense low-dimensional embeddings, thus gaining an advantage over the classic approaches. There are some extensions of Word2Vec like <ns0:ref type='bibr' target='#b21'>Pagliardini et al. (2017)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Mikolov and Le (2014)</ns0:ref>. Their aim is to learn document embeddings instead of embeddings for separate words. The most advanced models use bidirectional transformers <ns0:ref type='bibr' target='#b24'>(Reimers and Gurevych (2019)</ns0:ref>) to learn sensible embeddings.</ns0:p><ns0:p>The fusion of graph and text information for representation learning is still an area that is not well researched. The most straightforward approach is to learn network and text embeddings separately and then concatenate them to produce the final embedding. More sophisticated approaches include TADW <ns0:ref type='bibr'>(Yang et al. (2015)</ns0:ref>), which incorporates text attributes into a matrix factorization problem. TriDNR <ns0:ref type='bibr' target='#b22'>(Pan et al. (2016)</ns0:ref>) uses combined loss between Doc2Vec and DeepWalk algorithms. Finally, GCN <ns0:ref type='bibr' target='#b11'>(Kipf and Welling (2016)</ns0:ref>) and its variations use graph neural networks to take node attributes into account.</ns0:p><ns0:p>In this work, the following contributions are made:</ns0:p><ns0:p>1. Different combinations of network and text embeddings are studied to improve the downstream tasks quality.</ns0:p><ns0:p>2. Some modifications are proposed to existing architectures to better take into account text and graph information and the way how they are fused.</ns0:p><ns0:p>3. Comprehensive comparison of existing methods is performed on node classification, link prediction and visualization problems.</ns0:p></ns0:div>
<ns0:div><ns0:p>The paper is structured as follows. We start with a brief explanation of related work and the choice of models. Then, we describe the experiment methodology: the datasets used, the training and validation schemes for all models, and the machine learning problems on networks. Next, we explain the obtained results. Finally, we provide ideas for further enhancement of fusion techniques in the discussion and conclude our study. All sections describe their content in the following order: text embeddings, then structural network embeddings, and finally fusion models of text and network data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>In the real-life scenario, networks are often accompanied by additional information. In this work, the main focus is on one particular case, where each node of a network is associated with text information.</ns0:p><ns0:p>Below, we shortly discuss the chosen text and network embedding models, as well as several popular strategies of information fusion for the considered problem.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Text embeddings</ns0:head><ns0:p>2.1.1 Latent Dirichlet Allocation (LDA) <ns0:ref type='bibr' target='#b16'>Martínez and Kak (2001)</ns0:ref> propose the topic modeling techniques. It is a Bayesian generative probabilistic model for document clustering. Each document embedding is a vector of weights for underlying topics, where topics consist of several words with individual weights.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.2'>Word2Vec</ns0:head><ns0:p>The idea of <ns0:ref type='bibr' target='#b18'>Mikolov et al. (2013)</ns0:ref> is to predict context from a word (Skip-gram) or a word from its context (Continuous Bag of Word or just, CBoW).</ns0:p></ns0:div>
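As a brief illustration of how such word embeddings are typically trained in practice (a hedged sketch that is not part of the paper's pipeline; it assumes the gensim library, version 4.x, and a toy corpus), one can fit a Skip-gram model as follows.

```python
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a list of tokens; in a citation-network
# setting these could be tokenized paper titles or abstracts.
corpus = [
    ["graph", "embedding", "methods", "for", "node", "classification"],
    ["word", "embeddings", "capture", "semantics", "through", "context"],
    ["random", "walks", "approximate", "node", "similarity"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=64,   # embedding dimension
    window=5,         # context window size
    sg=1,             # 1 = Skip-gram, 0 = CBoW
    min_count=1,
    epochs=50,
)

print(model.wv["embedding"].shape)          # (64,)
print(model.wv.most_similar("node", topn=3))
```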
<ns0:div><ns0:head n='2.1.3'>Sent2Vec</ns0:head><ns0:p>It is an extension of Word2Vec CBoW model, which was explicitly designed to improve sentence embeddings <ns0:ref type='bibr' target='#b21'>(Pagliardini et al. (2017)</ns0:ref>). Firstly, it also learns embeddings for word n-grams. Secondly, it uses a whole sentence as a context window. Such an approach allows receiving better sentence embedding with n-gram aggregations.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.4'>Doc2Vec</ns0:head><ns0:p>Mikolov and Le (2014) extend Word2Vec approach even further to learn continuous representations for texts of variable length (starting from a short phrase to very long articles). Its main distinction from Sent2Vec is that Doc2Vec can preserve text context for very long sequences of words. Doc2Vec additionally creates a lookup table with text embeddings. When a target word is predicted, this vector is concatenated to a source word vector.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.5'>SBERT</ns0:head><ns0:p>SBERT <ns0:ref type='bibr' target='#b24'>(Reimers and Gurevych (2019)</ns0:ref>) is an extension of classic BERT <ns0:ref type='bibr' target='#b6'>(Devlin et al. (2018)</ns0:ref>). The main difference is that SBERT is trained in contrastive fashion using Siamese architecture. In comparison to bidirectional autoencoder with a self-attention mechanism, it uses more advanced pooling strategies.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.6'>Ernie</ns0:head><ns0:p><ns0:ref type='bibr' target='#b28'>Sun et al. (2020)</ns0:ref> suggest increasing the number of pretraining objectives to capture corpora's lexical, syntactic, and semantic information. The framework uses continual multi-task learning to sequentially learn new tasks without 'forgetting' the previous ones.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Network embeddings</ns0:head><ns0:p>There is a large variety of network embedding models for different cases. In the current work, we use in experiments only three models without node attributes, typically called structural embeddings because of their nature to learn graph structure independently of node attributes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.1'>DeepWalk</ns0:head><ns0:p>Invented by <ns0:ref type='bibr' target='#b23'>Perozzi et al. (2014)</ns0:ref>, the model samples random walks and learns embeddings using a skip-gram approach similar to <ns0:ref type='bibr' target='#b18'>Mikolov et al. (2013)</ns0:ref>.</ns0:p></ns0:div>
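To make the random-walk idea concrete, below is a minimal, hypothetical DeepWalk-style sketch (not the authors' implementation) that samples uniform random walks with NetworkX and feeds them to gensim's Skip-gram model as "sentences"; the graph and hyperparameters are illustrative only.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, walks_per_node=10, walk_length=20, seed=0):
    """Sample uniform random walks; each walk is a list of node ids as strings."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(G.nodes())
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

G = nx.karate_club_graph()                 # small example graph
walks = random_walks(G)
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0, workers=1)
embedding = {n: model.wv[str(n)] for n in G.nodes()}
print(len(embedding), embedding[0].shape)
```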
<ns0:div><ns0:head n='2.2.2'>Node2Vec</ns0:head><ns0:p><ns0:ref type='bibr' target='#b8'>Grover and Leskovec (2016)</ns0:ref> propose a more efficient realization of the random walk idea. It balances between breadth-first and depth-first searches to keep trade-off between local and global graph structures.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.3'>HOPE</ns0:head><ns0:p><ns0:ref type='bibr' target='#b20'>Ou et al. (2016)</ns0:ref> employ matrix factorization technique to directly reconstruct asymmetric distance measures like Katz index, Adamic-Adar or common neighbors. So it preserves asymmetric transitivity, which is important property of directed graphs.</ns0:p></ns0:div>
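The following is a simplified, hypothetical sketch of the HOPE idea: it builds the Katz proximity matrix S = (I - beta*A)^(-1) * beta*A for a small directed graph and factorizes it with a truncated SVD into separate source and target embeddings. HOPE itself uses a generalized SVD formulation to avoid materializing S, which is skipped here; the graph, beta, and dimensions are illustrative.

```python
import numpy as np
import networkx as nx

G = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)
A = nx.to_numpy_array(G)

beta = 0.05                                # decay; must be below 1 / spectral radius of A
n = A.shape[0]
S = np.linalg.inv(np.eye(n) - beta * A) @ (beta * A)   # Katz proximity matrix

d = 16                                     # embedding dimension
U, sigma, Vt = np.linalg.svd(S)
U_s = U[:, :d] * np.sqrt(sigma[:d])        # source embeddings
U_t = Vt[:d, :].T * np.sqrt(sigma[:d])     # target embeddings

# Asymmetric proximity between nodes i and j is approximated by U_s[i] @ U_t[j].
i, j = 0, 1
print(S[i, j], U_s[i] @ U_t[j])
```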
<ns0:div><ns0:head n='2.3'>Naive mixture</ns0:head><ns0:p>The most straightforward method to fuse graph and text information is to learn graph and text embeddings independently. Then combine two types of embeddings, concatenating them. This method has the following advantages:</ns0:p><ns0:p>1. Graph and text embeddings have been researched separately for quite a long time, so there are plenty of available methods/libraries etc.</ns0:p><ns0:p>2. Because embeddings for nodes and texts are learned individually, they provide a lot of freedom to choose a different dimension for graph and text embeddings, pre-train text embeddings on an entirely different corpus.</ns0:p><ns0:p>The main disadvantage is evident: text information is not taken into account while learning graph embedding and vice versa. It is essential, because two nodes might have the same distance in graph proximity but completely different semantic meaning.</ns0:p></ns0:div>
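In code, this naive fusion is just a concatenation of the two independently learned embedding matrices. The sketch below (with hypothetical dimensions and random placeholders instead of real embeddings) also L2-normalizes each part so that neither modality dominates a downstream classifier, which is a common practical choice rather than something prescribed by the paper.

```python
import numpy as np

n_nodes = 1000
graph_emb = np.random.randn(n_nodes, 128)   # e.g., from DeepWalk / Node2Vec / HOPE
text_emb = np.random.randn(n_nodes, 300)    # e.g., from Doc2Vec / Sent2Vec / SBERT

def l2_normalize(X, eps=1e-12):
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

fused = np.hstack([l2_normalize(graph_emb), l2_normalize(text_emb)])
print(fused.shape)   # (1000, 428), fed to a downstream classifier
```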
<ns0:div><ns0:head n='2.4'>Advanced mixture</ns0:head><ns0:p>Below, we describe chosen well-known fusion methods to use for our comparison of information fusion methods.</ns0:p><ns0:p>Text Attributed Deep Walk (TADW). One of the first attempts to incorporate text information into network representation learning was made in the TADW algorithm <ns0:ref type='bibr'>(Yang et al. (2015)</ns0:ref>). The main idea was to enrich ordinary DeepWalk algorithm by taking into account text attributes. The authors prove that DeepWalk performs a matrix factorization process and extend it with TF-IDF feature matrix.</ns0:p><ns0:p>Tri-Party Deep Network Representation (TriDNR). <ns0:ref type='bibr' target='#b22'>Pan et al. (2016)</ns0:ref> try to solve two issues of TADW: computational complexity of matrix factorization and missed word order in TF-IDF matrix encoding of texts. As the name suggests, the algorithm learns the network representation using three sources: graph, text and label information. DeepWalk algorithm is applied to capture graph information.</ns0:p><ns0:p>For text and label information, refined Doc2Vec is used.</ns0:p><ns0:p>Graph Convolutional Network (GCN). <ns0:ref type='bibr' target='#b11'>Kipf and Welling (2016)</ns0:ref> propose Graph Convolution Networks (GCN) as a light-weight approximation for the spectral convolution. This method provides better computational efficiency for semi-supervised tasks, such as link prediction or node classification. One of the main advantages of GCNs is their ability to account for node attributes. GCN works similarly to the fully-connected layers for neural networks. It multiplies weight matrices with the original features but masking them with an adjacency matrix. Such a method allows to account only for node neighbors and node representation from the previous layer.</ns0:p><ns0:p>Graph Attention Networks (GAT). <ns0:ref type='bibr' target='#b32'>Veličković et al. (2017)</ns0:ref> utilize the idea of the self-attention mechanism of <ns0:ref type='bibr' target='#b31'>Vaswani et al. (2017)</ns0:ref> for network data. Such an approach allows to balance the weights of neighbors in node embedding according to structure and node attributes. Because masked self-attention does not require knowing the graph structure upfront, this model could be used inductively.</ns0:p><ns0:p>Graph SAmple and aggreGatE (GraphSAGE). <ns0:ref type='bibr' target='#b9'>Hamilton et al. (2017)</ns0:ref> suggest using sampling over node neighborhood to learn final embedding. It provides more scalability and different choices for learnable aggregation functions.</ns0:p><ns0:p>Graph InfoClust (GIC). GIC <ns0:ref type='bibr' target='#b17'>(Mavromatis and Karypis (2020)</ns0:ref>) leverages the cluster-level information to any graph neural network (GNN) encoder. They propose to add a new part to the loss maximizing mutual information between node representations on both cluster and global levels. Network substructures such as clusters usually correlate with node labels, and link creation inside a cluster is more likely by their definition.</ns0:p><ns0:p>Generally, described fusion methods outperform text or network embeddings. Still, there is some room for improvement: as for now, most researchers use BoW or TF-IDF to produce input feature matrix for fusion methods, such as TADW and GCN. It is promising to see how the combination of advanced text embedding techniques with these methods might improve the accuracy of machine learning tasks.</ns0:p></ns0:div>
<ns0:div><ns0:p>Also, one might be interested in enhancing GCN architecture by adding simultaneously trainable word embeddings to the network.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>EXPERIMENTS</ns0:head><ns0:p>This section explains the experiment pipeline to determine whether the fusion of text and graph information helps improve the quality of the downstream tasks. Firstly, we describe the choice of datasets. Then, we define the process of constructing text embeddings after text preprocessing. Next, we describe our choice of network embeddings and their hyperparameters. Finally, the fusion techniques and hyperparameters are provided. In the end, we describe the training and validation scheme for node classification and link prediction tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Datasets</ns0:head><ns0:p>To be able to compare different kinds of algorithms described above, the chosen dataset should possess the following properties:</ns0:p><ns0:p>1. It should have a graph structure, i.e. it should contain entities and relations between them.</ns0:p><ns0:p>2. At least some of the nodes should have text associated with it. It is important to note that texts associated with nodes should be in raw format (e.g., not in embedding format already, such as BoW). Although it is not required for every node to have text associated with it, the more nodes have it, the better the quality is.</ns0:p><ns0:p>3. At least some nodes should be associated with labels. This property is necessary to state the node classification problem.</ns0:p><ns0:p>Below, we describe three main datasets chosen as benchmarks for network-related machine learning problems and satisfying conditions above.</ns0:p><ns0:p>Cora <ns0:ref type='bibr' target='#b27'>(Sen et al. (2008)</ns0:ref>). Cora dataset is a citation network, in which each node represents a scientific paper, and each link shows that one article cites another one. There are 2708 nodes and 5429 edges in the network. Each node has text with a short description (abstract) of the paper. Average text length in words is 130. All nodes are grouped into seven classes: Neural Networks, Rule Learning, Reinforcement Learning, Probabilistic Methods, Theory, Case-Based, Genetic Algorithms. The network does not contain any isolated nodes.</ns0:p><ns0:p>CiteSeer-M10 <ns0:ref type='bibr' target='#b12'>(Lim and Buntine (2016)</ns0:ref>). This dataset is a subset of original CiteSeer data, which contains scientific publications in different disciplines grouped into ten different classes. M10 version consists of 38 996 nodes and 76 630 edges. However, only 10 310 nodes have the text (paper title) and label information associated with them. Average text length in words is 9. In this case, text information contains only name of the paper (rather than the abstract). Some of the nodes are isolated, which makes this dataset generally more problematic than the previous one.</ns0:p><ns0:p>DBLP <ns0:ref type='bibr' target='#b34'>(Yang and Leskovec (2015)</ns0:ref>). DBLP is a bibliographic system for computer science publications.</ns0:p><ns0:p>Citations might connect publications (described by the title). In this work, we follow the comparison setting suggested by <ns0:ref type='bibr' target='#b22'>Pan et al. (2016)</ns0:ref>, and consider only subset of the network, containing 60 744 nodes (all accompanied with text and label attributes) and 52 890 edges. Average text length in words is 8.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Text embeddings</ns0:head><ns0:p>The first part of the experiments is mostly concerned with estimating whether textual information alone can efficiently solve machine learning problems on text networks. Intuitively, in case of citation networks, text description should be correlated with the target class (the topic of research), so results on text data can provide a good baseline, using which other types of embeddings are compared. Questions to be addressed in this section:</ns0:p><ns0:p>1. Whether advanced text embedding techniques (Sent2Vec, Doc2Vec, SBERT) generally outperform classic approaches (such as BoW, TF-IDF) in case of citation networks? 2. How does a share of train data (compared with test) affect the model prediction power? 3. How does average text length influence model quality?</ns0:p></ns0:div>
<ns0:div><ns0:p>4. Do models pre-trained on a vast amount of data perform better than models trained 'from scratch'?</ns0:p><ns0:p>One of the most crucial steps is text preprocessing, which we perform before any embedding algorithm is applied. We follow the standard pipeline.</ns0:p><ns0:p>Firstly, we remove all special symbols and switch the case to lower. Next, we remove stop words.</ns0:p><ns0:p>Stop words are the most frequently used words in a language, like 'also'. In addition, we filter the most frequent words for the current dataset (those appearing in more than 70% of texts) and the rarest (those appearing less than three times). Finally, each token is converted to the corresponding lemma, which is the form of the word presented in the dictionary. Bag of Words and TF-IDF models use only unigrams as input since the datasets are relatively small, and choosing a higher n-gram range would lead to poor generalization. For LDA we use the Gensim implementation. SBERT was pre-trained 2 on the SNLI dataset <ns0:ref type='bibr' target='#b4'>Bowman et al. (2015)</ns0:ref>, which consists of 570 000 sentence pairs divided into 3 classes: contradiction, entailment and neutral. We use the original pre-trained version of ERNIE by Baidu 3 . To obtain a sentence embedding from Ernie, we average the last hidden state over all its tokens.</ns0:p></ns0:div>
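A minimal sketch of this preprocessing pipeline could look as follows, assuming the NLTK stop-word and WordNet data are available and using Gensim's Dictionary for frequency filtering; note that filter_extremes counts document frequency, which only approximates the "less than three occurrences" rule stated above.

import re
from nltk.corpus import stopwords            # requires nltk stopwords/wordnet downloads
from nltk.stem import WordNetLemmatizer
from gensim.corpora import Dictionary

STOP = set(stopwords.words('english'))
LEMMA = WordNetLemmatizer()

def tokenize(text: str):
    # drop special symbols, lower-case, remove stop words, lemmatize
    text = re.sub(r'[^a-z\s]', ' ', text.lower())
    return [LEMMA.lemmatize(t) for t in text.split() if t not in STOP]

def preprocess_corpus(raw_texts):
    tokens = [tokenize(t) for t in raw_texts]
    vocab = Dictionary(tokens)
    # keep words occurring in at least 3 documents and in at most 70% of documents
    vocab.filter_extremes(no_below=3, no_above=0.7)
    keep = set(vocab.token2id)
    return [[t for t in doc if t in keep] for doc in tokens]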
<ns0:div><ns0:head n='3.3'>Network embeddings</ns0:head><ns0:p>In citation networks, papers from one field tend to cite each other more frequently than articles from other areas. Therefore, the graph structure should give significant insights for the node classification and link prediction tasks. Another critical question is how well network embeddings perform compared to text embeddings on different datasets.</ns0:p><ns0:p>Three network embedding methods were selected for the experiments with structural network embeddings: HOPE, Node2Vec and DeepWalk. The reason for this choice is straightforward: these methods tend to outperform others in most settings of structural network embeddings <ns0:ref type='bibr' target='#b15'>(Makarov et al. (2021)</ns0:ref>). We use the GEM implementation of HOPE 4 . The hyperparameter β is chosen to be 0.01 (as used in other papers). For DeepWalk, the original implementation 5 is used with the following hyperparameters: vector size = 10, number of walks per vertex = 80, window size = 10. Node2Vec also follows the original implementation 6 with the same hyperparameters as DeepWalk: vector size = 10, number of walks per vertex = 80, window size = 10.</ns0:p></ns0:div>
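For intuition, a DeepWalk-style embedding can be sketched with uniform random walks fed to Gensim's skip-gram Word2Vec. The experiments use the original authors' implementations; in the sketch below the walk length and the toy graph are placeholders of our own, while vector size, walks per vertex and window size follow the values listed above (gensim >= 4 API is assumed).

import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, walks_per_node=80, walk_length=40):
    """Generate uniform random walks, one batch of walks per starting node."""
    walks, nodes = [], list(graph.nodes())
    for _ in range(walks_per_node):
        random.shuffle(nodes)
        for start in nodes:
            walk, node = [str(start)], start
            for _ in range(walk_length - 1):
                nbrs = list(graph.neighbors(node))
                if not nbrs:
                    break
                node = random.choice(nbrs)
                walk.append(str(node))
            walks.append(walk)
    return walks

g = nx.karate_club_graph()  # placeholder graph, not one of our datasets
model = Word2Vec(random_walks(g), vector_size=10, window=10, sg=1, min_count=0)
embeddings = {n: model.wv[str(n)] for n in g.nodes()}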
<ns0:div><ns0:head n='3.4'>Fusion of text and graph information</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.4.1'>Naive Combination</ns0:head><ns0:p>Text and network embeddings are learned separately. For every node, the final embedding is the concatenation of the corresponding text embedding and network embedding. This method can be viewed as a good baseline for fusion methods. In this combination, we use DeepWalk as the network embedding, similarly to the more comprehensive TADW method, and concatenate it with BoW, also following the approach of TADW. Additionally, we test a concatenation with the Sent2Vec embedding as an advanced text encoding approach.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4.2'>TADW</ns0:head><ns0:p>Two versions of TADW were constructed with the help of TF-IDF or Sent2Vec for the feature generation.</ns0:p><ns0:p>The following hyperparameters are used: vector size = 160, number of iterations = 20, λ = 0.2. SVD is used on input feature matrix to reduce its dimension to 200 (as in the original paper).</ns0:p><ns0:p>1 https://radimrehurek.com/gensim/models/ldamodel.html 2 https://github.com/UKPLab/sentence-transformers 3 https://huggingface.co/nghuyong/ernie-2.0-en 4 https://github.com/palash1992/GEM 5 https://github.com/phanein/deepwalk 6 https://github.com/aditya-grover/node2vec</ns0:p></ns0:div>
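The text-feature side of TADW can be prepared roughly as follows (a sketch of the input construction only, not of the TADW factorization itself): a unigram TF-IDF matrix is reduced to 200 dimensions with truncated SVD, matching the dimensionality mentioned above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def tadw_text_features(raw_texts, n_components=200):
    """Build the reduced text feature matrix fed into TADW."""
    tfidf = TfidfVectorizer()                     # unigram TF-IDF, as in Section 3.2
    X = tfidf.fit_transform(raw_texts)            # (n_nodes, vocab) sparse matrix
    svd = TruncatedSVD(n_components=n_components)
    return svd.fit_transform(X)                   # (n_nodes, 200) dense features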
<ns0:div><ns0:head n='3.4.3'>TriDNR</ns0:head><ns0:p>All three sources are used: texts, network and labels to get the final embeddings. Only labels from the train set are present, while others are masked. The following hyperparameters are used: vector size = 160 (to match TADW), text weight = 0.8, passes = 50.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4.4'>Graph neural networks (GCN, GAT, GraphSAGE and GIC)</ns0:head><ns0:p>In most papers, authors use simple BoW or TF-IDF matrices as the feature matrix for GCN. It might be sensible to experiment with more advanced text embedding techniques to improve the results, as we have already seen that Sent2Vec or Word2Vec outperform BoW and TF-IDF in some settings. The model is trained for 200 epochs using the Adam optimizer. The best model (according to validation results) is saved.</ns0:p><ns0:p>The vector size is equal to 64, and the model contains two convolutional layers. It is also interesting to try a new modification of the GCN architecture. Instead of using a fixed feature matrix as input, one can replace it with a lookup table of learnable embeddings. This way, the model can simultaneously learn text embeddings as well as network embeddings. In this case, padded sequences of tokens are fed as input, and the lookup table of embeddings is applied. Next, to obtain sentence embeddings, mean and max functions are applied to the word embeddings of each sentence. After that, the rest of the network is treated the same way as an ordinary GCN.</ns0:p></ns0:div>
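A compact PyTorch sketch of this modified architecture is given below; it uses only mean pooling over the learnable word embeddings (the experiments also apply max pooling) and a dense pre-normalized adjacency matrix, so it conveys the idea rather than reproducing the training code.

import torch
import torch.nn as nn

class TextGCN(nn.Module):
    """Sketch: word embeddings are learned jointly with two GCN-style layers."""
    def __init__(self, vocab_size, word_dim, hidden_dim, n_classes, pad_idx=0):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=pad_idx)
        self.w1 = nn.Linear(word_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids, a_norm):
        # token_ids: (n_nodes, max_len) padded token indices
        # a_norm   : (n_nodes, n_nodes) symmetrically normalized adjacency with self-loops
        mask = (token_ids != self.word_emb.padding_idx).unsqueeze(-1).float()
        words = self.word_emb(token_ids) * mask
        node_feats = words.sum(1) / mask.sum(1).clamp(min=1)   # mean over real tokens
        h = torch.relu(a_norm @ self.w1(node_feats))           # first graph convolution
        return a_norm @ self.w2(h)                             # per-node class logits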
<ns0:div><ns0:head n='3.5'>Training and validation</ns0:head><ns0:p>Input network data consists of nodes of the graph with associated text information and edges between them. Before the validation procedure, text information is preprocessed using the steps described in Section 3.2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5.1'>Node classification</ns0:head><ns0:p>We start by encoding nodes using one of the embedding techniques: text, graph or their fusion. Next, we split the dataset into train and test subsets in different proportions (5%, 10%, 30% and 50% of labeled nodes). Finally, a Logistic Regression classifier is trained.</ns0:p><ns0:p>We use Logistic Regression for two reasons. Firstly, almost all learned embeddings are non-linear in nature (except for TF-IDF and BoW), so Logistic Regression is sufficient for the final classification task.</ns0:p><ns0:p>Secondly, it can be pipelined into GNN models by simply adding one fully-connected output layer. Such a technique allows training neural networks in an end-to-end fashion.</ns0:p></ns0:div>
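The evaluation of one split can be sketched with scikit-learn as follows; the stratified split and the max_iter value are assumptions of this sketch, while the solver, penalty, C and the One-Vs-Rest setting match Section 3.5.3 (here enforced explicitly with OneVsRestClassifier).

from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def node_classification_score(embeddings, labels, train_share=0.1, seed=0):
    """Train on a fraction of labeled nodes and report micro-F1 on the rest."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, train_size=train_share, random_state=seed, stratify=labels)
    clf = OneVsRestClassifier(
        LogisticRegression(solver='lbfgs', penalty='l2', C=1.0, max_iter=1000))
    clf.fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average='micro')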
<ns0:div><ns0:head n='3.5.2'>Link prediction</ns0:head><ns0:p>The edges of the graph are split randomly into train and test sets with specified train ratios (5%, 10%, 30% and 50% of edges). Then the test edges are masked (effectively removed) from the graph.</ns0:p><ns0:p>Usually, graphs are sparse, so the number of existing edges is dramatically smaller than the number of all possible links. We keep the link prediction problem (LPP) as a binary classification problem, so in general the LPP has highly imbalanced classes. One popular technique to handle this is undersampling of the dominant class. To make the final classifier more powerful, we sample non-existing links between the most similar nodes, because these are the most probable connections to appear. Existing edges are marked as 'ones' and non-existing ones as 'zeros'. The same is done for the test set.</ns0:p><ns0:p>The masked graph is then used to learn node embeddings (using text or graph information or both).</ns0:p><ns0:p>We use the simple element-wise (Hadamard) product of node embeddings as the encoding of the corresponding edge, leaving other edge encoder operators for future work (see <ns0:ref type='bibr' target='#b14'>Makarov et al. (2019)</ns0:ref>). Finally, we train Logistic Regression on the obtained vectors to classify the presence or absence of links between pairs of nodes in the graph.</ns0:p></ns0:div>
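The edge-level part of this setup can be sketched as follows; note that the text above describes sampling negatives between the most similar nodes, whereas for brevity this sketch samples unconnected pairs uniformly at random, which is a simplification.

import random
import numpy as np

def edge_features(emb, edges):
    """Hadamard (element-wise) product of the two endpoint embeddings per edge."""
    return np.array([emb[u] * emb[v] for u, v in edges])

def sample_negative_edges(nodes, existing, n_samples, rng=random.Random(0)):
    """Undersample the absent-link class: draw node pairs that are not connected."""
    existing = {tuple(sorted(e)) for e in existing}
    negatives = set()
    while len(negatives) < n_samples:
        u, v = rng.sample(nodes, 2)
        pair = tuple(sorted((u, v)))
        if pair not in existing:
            negatives.add(pair)
    return list(negatives)

# Positive edges get label 1, sampled negatives get label 0; the resulting
# edge feature matrix is then fed to the same Logistic Regression classifier.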
<ns0:div><ns0:head n='3.5.3'>Evaluation process</ns0:head><ns0:p>The procedures described above are repeated five times with random train/test splits for different train/test ratio values in both cases. The mean and standard deviation of the results are reported for the F 1 quality metric.</ns0:p><ns0:p>For Logistic Regression, the sklearn 7 implementation (in Python) is used with the lbfgs solver, L 2 penalty and C = 1. For multi-class classification (number of classes greater than 2), the One-Vs-Rest setting is applied, which means that one classifier is trained for every class, and each model uses the samples from all other classes as the 'zero' class.</ns0:p></ns0:div>
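The five-fold repetition with reported mean and standard deviation can be expressed as a small helper, reusing, for example, the hypothetical node_classification_score sketch shown earlier.

import numpy as np

def repeated_score(run_once, n_repeats=5):
    """Repeat an evaluation with different random seeds; report mean and std of micro-F1."""
    scores = [run_once(seed=s) for s in range(n_repeats)]
    return float(np.mean(scores)), float(np.std(scores))

# Example usage (emb and y are assumed to be precomputed embeddings and labels):
# mean, std = repeated_score(lambda seed: node_classification_score(emb, y, 0.1, seed))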
<ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>Firstly, we present the node classification task results, then we discuss results for link prediction and explain node visualization. Similarly to other blocks, subsections have the following order: text embedding, network embedding, their fusion. Despite being significantly simpler and easier to train, advanced models trained from scratch generally performed better than their pre-trained counterparts. That is probably because language, which is used to describe the scientific paper in computer science, significantly differs from the language used in news or (average) Wikipedia article. The best model out of advanced text embeddings is Sent2Vec (trained from scratch). It also shows consistent results concerning the different share of labeled nodes. Another insight is that advanced embeddings could not beat the Bag of Words technique when the labeled data share is high enough. The nature of the data might explain this. Abstract for scientific papers is just a set of keywords. In this case, the Bag of Words hypothesis is applied very well. However, when there is a small percentage of labeled data (which is the practical case), advanced embeddings significantly outperform Bag of Words, which means that they tend to generalize better. One can note that choosing different values for embeddings dimension does not influence the results significantly.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Node classification</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> presents the results on the CiteSeer-M10 dataset. This dataset differs from the first one because the texts are significantly shorter, but the total number of nodes is larger. One can note that although Bag of Words is still the best of the classic techniques, TF-IDF performs almost as well on CiteSeer-M10.</ns0:p><ns0:p>The quality does not degrade significantly when the percentage of labeled nodes becomes smaller. The reason for that is two-fold: firstly, the attributed network is large, so it is easier to generalize even with a smaller number of labels. Secondly, the text length is much smaller, and therefore there is less 'variation' in the data. As on Cora, LDA performed poorly, even poorer than on the first dataset, because it is much harder to extract 'topics' from only a few words.</ns0:p><ns0:p>Considering advanced text embedding techniques, all architectures except Ernie (at least in one of the configurations) outperform classic methods when the percentage of labeled nodes is small (5% or 10%). When the share of labeled data is more significant, they show performance similar to Bag of Words.</ns0:p></ns0:div>
<ns0:div><ns0:p>and perform similarly when it is higher. LDA generally shows bad performance on all three datasets. Although the pre-trained SBERT model shows decent results on CiteSeer-M10 and DBLP, it was outperformed by other architectures and even by classic approaches.</ns0:p><ns0:p>2. The Ernie framework generally shows poor performance. The reason is that it is not optimized to produce sensible sentence embeddings. An architecture similar to Sentence-BERT might be applied to improve the model quality for these kinds of tasks.</ns0:p><ns0:p>3. In general, advanced embedding techniques and LDA show very consistent results even for a small percentage of training labels. In contrast, Bag of Words and TF-IDF show degrading results when only a small share of labeled nodes is available. However, when the text information is short (only titles) and there is more training data in terms of the number of nodes (documents), this effect is mitigated.</ns0:p><ns0:p>4. TF-IDF and Bag of Words generally perform better for short texts (paper titles) because such texts are basically sets of keywords. Advanced methods show good performance in both settings (short and long texts).</ns0:p><ns0:p>5. One can see that in some cases pre-trained models perform better and in some cases worse, so it is better to experiment with both approaches.</ns0:p></ns0:div>
<ns0:div><ns0:p>the other, still both might be useful as they tend to provide complementary information (different 'views' on the target). When analyzing the results on the Cora dataset (Table <ns0:ref type='table'>5</ns0:ref>), one can note that a naive combination of textual and network features performs similarly to the more advanced approaches TADW and TriDNR. Also, all approaches except TriDNR perform better than methods that use only text or network information, so one can conclude that these two types of information complement each other. GNNs significantly outperform all the other approaches. The best variation is GAT with TF-IDF text encoding. Nevertheless, almost all other GNN approaches lead to similarly solid results. We observe an intriguing effect: the GIC model relies heavily on the train set size. This model accounts for network substructures like communities, so to learn efficiently GIC requires a substantial part of the graph for training. Another remarkable point is that text embeddings with high individual performance require a larger subsample of the graph to achieve competitive results. However, this effect is less noticeable for the GAT model because GAT relies more on the attention mechanism than on the graph structure.</ns0:p><ns0:p>Tables <ns0:ref type='table' target='#tab_6'>6 and 7</ns0:ref> show that on the Citeseer and DBLP networks, unlike the Cora dataset, the naive combination of BoW and DeepWalk significantly outperforms much more advanced algorithms. However, some GNN models still show superior performance. GraphSAGE with Sent2Vec initial features gives the best results for almost all percentages of training nodes except for 50%. This relates to the nature of GraphSAGE: it works in an inductive and scalable manner by sampling the node neighbors before GCN-style aggregation, so it performs better on networks larger than Cora. However, GCN still shows good performance for large train parts. The difference is that GCN requires the whole node neighborhood while GraphSAGE samples it with random walks. This effect seems similar to the dramatic growth of GIC performance described above. Generally, the fusion of text and graph information shows superior results compared to network or text embeddings alone. However, GNNs tend to perform better with sparse input features like TF-IDF. It could be because high-quality dense vectors are susceptible to any change, so it is hard to mix information from different domains based on dense vectors.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Link prediction</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Text methods</ns0:head><ns0:p>For the task of link prediction, one can expect text embeddings to perform more consistently with respect to the train ratio than network embeddings, because network embedding techniques 'suffer' twice when the percentage of train data is decreasing: firstly, it affects the initial graph, so it is harder to learn the embeddings themselves; secondly, it is harder to train a classifier using less data. Text embeddings have only the second problem: they do not depend on the graph structure.</ns0:p><ns0:p>Table <ns0:ref type='table'>7</ns0:ref>. Fusion methods on DBLP for node classification (micro-F 1 , metric lies between (0, 1) and higher value means better results); rows give scores for 5%, 10%, 30% and 50% of labeled nodes: (unlabeled row) 0.74±0.01 0.75±0.01 0.76±0.01 0.77±0.01; GAT -TF-IDF 0.79±0.00 0.80±0.00 0.82±0.00 0.82±0.00; GAT -Sent2Vec 0.79±0.00 0.79±0.00 0.80±0.01 0.80±0.00; GAT -Ernie 0.73±0.00 0.73±0.00 0.75±0.00 0.75±0.00; GraphSAGE -TF-IDF 0.79±0.01 0.79±0.01 0.81±0.00 0.82±0.00; GraphSAGE -Sent2Vec 0.79±0.00 0.80±0.00 0.81±0.00 0.81±0.00; GraphSAGE -Ernie 0.70±0.03 0.70±0.02 0.71±0.01 0.72±0.01; GIC -TF-IDF 0.75±0.00 0.77±0.00 0.80±0.00 0.81±0.00; GIC -Sent2Vec 0.78±0.00 0.79±0.00 0.81±0.00 0.81±0.00; GIC -Ernie 0.51±0.04 0.57±0.02 0.63±0.03 0.71±0.01. Table <ns0:ref type='table'>9</ns0:ref>. Text embeddings on Citeseer-M10 for link prediction (micro-F 1 , metric lies between (0, 1) and higher value means better results).</ns0:p><ns0:p>Generally, one can say that for link prediction the choice of the text embedding algorithm should depend on the dataset, as there is no universal best performer.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>Network methods</ns0:head><ns0:p>When one masks edges of a network, it changes the graph structure (contrary to node classification), so it might be more challenging for structural network embedding method to perform well. Table <ns0:ref type='table' target='#tab_9'>10</ns0:ref> shows how network embedding algorithms perform on Cora for link prediction task.</ns0:p><ns0:p>For Citeseer-M10 dataset (Table <ns0:ref type='table' target='#tab_9'>10</ns0:ref>), the situation is quite similar to Cora dataset in a sense that Node2Vec performs better than DeepWalk, and both of these methods significantly outperform HOPE.</ns0:p><ns0:p>Results for DBLP are omitted, but they are pretty much the same.</ns0:p><ns0:p>To sum these experiments up, for a link prediction problem (contrary to node classification), text information plays a much more critical role than graph structure.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.3'>Fusion methods</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_1'>12</ns0:ref>. Fusion embeddings on Citeseer-M10 for link prediction (micro-F 1 , metric lies between (0, 1) and higher value means better results)</ns0:p><ns0:p>SBERT and Ernie show superior performance to other text embedding techniques when used alone and in combination with Graph Neural Networks. However, TF-IDF still shows high performance in fusion tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Network visualization</ns0:head><ns0:p>The main goal of network visualization is to produce a 'meaningful' 2D plot of nodes. A meaningful visualization places nodes from one class close to each other and nodes from different classes far away from each other. Consequently, having solved the visualization problem, one automatically gets node clustering and vice versa.</ns0:p><ns0:p>To produce a 2D plot, one has to find a two-dimensional vector describing the position of each node. This problem can be solved in two ways using network embeddings:</ns0:p><ns0:p>1. Explicitly learn embeddings of size 2 using any of the methods described in the previous chapters.</ns0:p><ns0:p>2. First learn embeddings of length d, then use a dimensionality reduction method to obtain vectors of size 2.</ns0:p><ns0:p>We follow the second approach since the first one generally produces worse results, as it is a much more challenging task to learn a realistic representation of size 2.</ns0:p><ns0:p>For embedding compression, we use t-distributed Stochastic Neighbor Embedding (t-SNE) presented by <ns0:ref type='bibr' target='#b13'>Maaten and Hinton (2008)</ns0:ref>. Firstly, t-SNE initializes the projection to the two-dimensional space.</ns0:p><ns0:p>Then, it calculates similarities between all points in both spaces and converts them to joint probability distributions. Finally, it refines the projection by minimizing the Kullback-Leibler divergence between the distributions in the original and manifold spaces.</ns0:p><ns0:p>For all models, hyperparameters are chosen in the same way as in the previous experiments. For the GCN model, we take the output of the first activation layer.</ns0:p><ns0:p>When analyzing the results obtained without fusion (Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>), one can note that even the simple TF-IDF method provides a solid baseline, as most of the classes (represented by colors) are clearly separable. because it provides a very clean separation of classes. However, the classes themselves are located very close to each other, so it would be hard to apply clustering algorithms. GCN provides the best result since the classes are far away from each other and clearly separated, so it would be elementary to cluster points in the embedding space.</ns0:p></ns0:div>
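A minimal sketch of this two-step visualization (learn d-dimensional embeddings, then compress them to 2D with t-SNE) is shown below; the PCA initialization, figure size and color map are choices of the sketch, and integer class labels are assumed for coloring.

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings_2d(embeddings, labels):
    """Project d-dimensional node embeddings to 2D with t-SNE and color points by class."""
    xy = TSNE(n_components=2, init='pca', random_state=0).fit_transform(embeddings)
    plt.figure(figsize=(6, 6))
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=5, cmap='tab10')
    plt.axis('off')
    plt.show()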
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>One can see that the fusion of text and graph information shows superior results for all machine learning tasks on graphs compared to methods that use only text or network information. It shows that text information and graph structure are complementary to each other, but each component's contribution depends on the task and the dataset. It is also clear that using advanced text embedding techniques such as Sent2Vec</ns0:p><ns0:p>and SBERT can significantly boost the performance of fusion methods. The reason is that advanced text embeddings can better capture the semantics of words (synonyms, antonyms) and, therefore, generalize better. Moreover, pre-trained embeddings might be preferable when the number of training samples is low. However, Bag of Words and TF-IDF provide a substantial baseline for machine learning tasks on citation networks because a set of keywords can efficiently represent the nodes (scientific papers). Also, sparse vectors are less sensitive to minor changes, so they fit the fusion task better.</ns0:p><ns0:p>The choice of the text embedding technique should be task-dependent: SBERT works better for link prediction, whereas Sent2Vec shows good performance for node classification. This can be explained by the fact that Sent2Vec aims to preserve the overall semantics of a text. In contrast, SBERT is specifically trained to predict whether two texts describe the same thing or not, so it is no wonder that SBERT is incredibly good at solving the link prediction problem. GraphSAGE shows better performance for large networks compared to the other GNNs because it is designed for scalable inductive learning. GIC</ns0:p></ns0:div>
<ns0:div><ns0:p>works much better when the training part of the network is large. It requires many details about the graph substructure to utilize the full power of the cluster-level loss function. Also, SBERT and ERNIE perform better in fusion when the GNN model accurately selects the nodes to be aggregated. So models like GIC and GraphSAGE, which work with subgraph structures, could see a boost in performance after some modifications.</ns0:p><ns0:p>Unfortunately, our custom GCN architecture, which allows learning word and network embeddings simultaneously, does not outperform state-of-the-art algorithms. Nevertheless, there is some potential in it. As mentioned before, the GCN part could be replaced by GraphSAGE or GIC models. Furthermore, pre-training the text embedding layer with further fine-tuning can show much better results on bigger networks (millions of nodes) due to the better quality of the source text embeddings. Another possible modification of the fusion technique is to use the graph as a source of pairs for the SBERT framework.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>A comprehensive comparison of different fusion methods for machine learning problems on networks was conducted in this work. The best combinations of network and text embeddings for different machine learning tasks were outlined and compared with the traditional approaches. The new GCN architecture was proposed for learning text and network representations simultaneously.</ns0:p><ns0:p>Main conclusions of the work:</ns0:p><ns0:p>1. Fusion of text and graph information allows boosting performance on machine learning tasks significantly.</ns0:p><ns0:p>2. Usage of advanced text embeddings such as Sent2Vec and SBERT can improve the accuracy of different fusion architectures such as TADW and GCN. SBERT generally works better for link prediction, Sent2Vec for node classification.</ns0:p><ns0:p>3. There is no universal solution that fits all problems and all datasets. Different methods (and combinations of methods) might work better for different datasets.</ns0:p><ns0:p>4. Proposed GCN modification does not work well for datasets considered in this work but might show a better performance for bigger networks with more text data.</ns0:p><ns0:p>This work can be continued in the following ways. Firstly, it is promising to experiment with the proposed GCN architecture using bigger networks (ideally millions of nodes). It might show better results because a lot of data is required to learn sensible word embeddings. However, it is better to use models that work on the subgraph level (GraphSAGE or GIC) for better scalability and synergy with SBERT and ERNIE features. Another possible extension is to use a joint loss to learn network and text embeddings simultaneously. For instance, combining GNNs and BERT might present very competitive results for link prediction. Also, networks could be a source of positive pairs for contrastive learning technique (like SBERT is trained) or even provide more insights on knowledge graph related problems (like suggested by <ns0:ref type='bibr' target='#b5'>Deng et al. (2020)</ns0:ref>).</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>with following hyperparameters: number of topics (efficient embedding size) = 20, α = 0.1, β = 0.1. Word2Vec, Doc2Vec and Sent2Vec models were used with and without pretraining. Trained models are based on English Wikipedia. Local training of Word2vec and Doc2vec was performed using Gensim with following hyperparameters: window size is equal to 5, α = 0.025, ns exponent parameter equals to 0.75.</ns0:figDesc></ns0:figure>
<ns0:note place='foot' n='7'>https://scikit-learn.org/stable/modules/generated/</ns0:note>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Embeddings visualization on Cora</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Text</ns0:figDesc><ns0:table><ns0:row><ns0:cell>4.1.1 Text methods</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>% Labels</ns0:cell><ns0:cell>5%</ns0:cell><ns0:cell>10%</ns0:cell><ns0:cell>30%</ns0:cell><ns0:cell>50%</ns0:cell></ns0:row><ns0:row><ns0:cell>BoW</ns0:cell><ns0:cell>0.63±0.01</ns0:cell><ns0:cell>0.68±0.01</ns0:cell><ns0:cell>0.76±0.01)</ns0:cell><ns0:cell>0.78±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>TF-IDF</ns0:cell><ns0:cell>0.35±0.01</ns0:cell><ns0:cell>0.49±0.01</ns0:cell><ns0:cell>0.70±0.01</ns0:cell><ns0:cell>0.76±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>LDA</ns0:cell><ns0:cell>0.49±0.01</ns0:cell><ns0:cell>0.57±0.01</ns0:cell><ns0:cell>0.60±0.01</ns0:cell><ns0:cell>0.61±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>SBERT pretrained</ns0:cell><ns0:cell>0.57±0.01</ns0:cell><ns0:cell>0.61±0.01</ns0:cell><ns0:cell>0.68±0.01</ns0:cell><ns0:cell>0.70±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Word2Vec pretrained 0.34±0.01</ns0:cell><ns0:cell>0.44±0.01</ns0:cell><ns0:cell>0.59±0.01</ns0:cell><ns0:cell>0.63±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Word2Vec (d=300)</ns0:cell><ns0:cell>0.64±0.01</ns0:cell><ns0:cell>0.68±0.01</ns0:cell><ns0:cell>0.70±0.01</ns0:cell><ns0:cell>0.71±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Word2Vec (d=64)</ns0:cell><ns0:cell>0.65±0.01</ns0:cell><ns0:cell>0.68±0.01</ns0:cell><ns0:cell>0.70±0.01</ns0:cell><ns0:cell>0.72±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Doc2Vec pretrained</ns0:cell><ns0:cell>0.54±0.01</ns0:cell><ns0:cell>0.61±0.00</ns0:cell><ns0:cell>0.65±0.01</ns0:cell><ns0:cell>0.67±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Doc2Vec (d=300)</ns0:cell><ns0:cell>0.49±0.01</ns0:cell><ns0:cell>0.58±0.01</ns0:cell><ns0:cell>0.66±0.01</ns0:cell><ns0:cell>0.68±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Doc2Vec (d=64)</ns0:cell><ns0:cell>0.50±0.02</ns0:cell><ns0:cell>0.58±0.01</ns0:cell><ns0:cell>0.65±0.00</ns0:cell><ns0:cell>0.67±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Sent2Vec pretrained</ns0:cell><ns0:cell>0.63±0.02</ns0:cell><ns0:cell>0.69±0.01</ns0:cell><ns0:cell>0.74±0.01</ns0:cell><ns0:cell>0.77±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Sent2Vec (d=600)</ns0:cell><ns0:cell>0.68±0.02</ns0:cell><ns0:cell>0.72±0.01</ns0:cell><ns0:cell>0.75±0.01</ns0:cell><ns0:cell>0.77±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Sent2Vec (d=64)</ns0:cell><ns0:cell>0.68±0.02</ns0:cell><ns0:cell>0.72±0.01</ns0:cell><ns0:cell>0.75±0.01</ns0:cell><ns0:cell>0.77±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Ernie pretrained</ns0:cell><ns0:cell cols='4'>0.43 ± 0.01 0.52 ± 0.01 0.62 ± 0.01 0.65 ± 0.01</ns0:cell></ns0:row></ns0:table><ns0:note>methods on Cora for node classification (micro-F 1 , metric lies between (0, 1) and higher value means better results)Table1shows the comparison between text approaches on Cora dataset. Classic techniques show very promising metrics, especially when the percentage of labeled nodes is not very small. The best algorithm is the Bag of Words, which outperforms every other classic method. It also shows quite a good quality for different percentage of known labels. TF-IDF performs similarly on 30% and 50% of labeled nodes but degrades significantly on the lower values. Although LDA results are not very high, it shows consistent results across different shares of labeled nodes.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Text methods on Citeseer-M10 for node classification (micro-F 1 , metric lies between (0, 1) and higher value means better results). In contrast to the Cora results, one can note that the pre-trained versions of all models substantially outperform their counterparts trained from scratch. The explanation might be that the text length is relatively short, and the amount of data is not enough to recover dependencies between words.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Text methods on DBLP for node classification (micro-F 1 , metric lies between (0, 1) and higher value means better results)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Table 3 presents the results of text embedding methods on DBLP dataset. The results appear to be</ns0:cell></ns0:row><ns0:row><ns0:cell>quite similar to the ones achieved on CiteSeer-M10 dataset: Bag of Words and TF-IDF perform equally</ns0:cell></ns0:row><ns0:row><ns0:cell>well, with the former performing slightly better. Also, there is no dramatic degradation in the score when</ns0:cell></ns0:row><ns0:row><ns0:cell>the percentage of labeled nodes is small.</ns0:cell></ns0:row><ns0:row><ns0:cell>Regarding advanced methods on DBLP dataset one can see that Sent2Vec outperforms all other</ns0:cell></ns0:row><ns0:row><ns0:cell>architectures. Word2Vec also show very decent results (especially in terms of stability over different train</ns0:cell></ns0:row><ns0:row><ns0:cell>rate sample). For Doc2Vec (in opposite to Sent2Vec) pre-trained version performed far better than the one</ns0:cell></ns0:row></ns0:table><ns0:note>trained from scratch. Again, advanced embeddings outperform classic techniques for a small percentage of labeled data and perform almost as good in case of more labeled data.To sum up text embeddings experiments:1. Advanced text embedding techniques such as Sent2Vec, Doc2Vec, Word2Vec outperform classic approaches such as Bag of Words and TF-IDF when the percentage of labeled data is small (< 30%)9/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55008:2:1:NEW 28 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Network methods for node classification (micro-F 1 , metric lies between (0, 1) and higher value means better results)According to the results on Cora dataset (Table4) DeepWalk and Node2Vec show similar performance with DeepWalk being slightly better when the percentage of labeled nodes is larger than 5%. HOPE shows inferior results (near to random) for node classification task. When comparing the results with text embedding techniques (Table1), one can note that DeepWalk and Node2Vec outperform all the other algorithms at a significant margin. Moreover, the tendency holds for different values of labeled nodes. It generally means that the Cora graph structure has a higher correlation with the target.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>4.1.2 Network methods</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>% Labels</ns0:cell><ns0:cell>5%</ns0:cell><ns0:cell>10%</ns0:cell><ns0:cell>30%</ns0:cell><ns0:cell>50%</ns0:cell></ns0:row><ns0:row><ns0:cell>CORA</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DeepWalk</ns0:cell><ns0:cell cols='4'>0.72±0.01 0.77±0.00 0.81±0.00 0.82±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Node2Vec</ns0:cell><ns0:cell cols='4'>0.74 ±0.01 0.76±0.01 0.80±0.00 0.81±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>HOPE</ns0:cell><ns0:cell cols='4'>0.29±0.00 0.30±0.00 0.30±0.00 0.31±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>CITESEER</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DeepWalk</ns0:cell><ns0:cell cols='4'>0.63±0.00 0.65±0.01 0.67±0.00 0.68±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Node2Vec</ns0:cell><ns0:cell cols='4'>0.63±0.01 0.65±0.00 0.67±0.00 0.68±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>HOPE</ns0:cell><ns0:cell cols='4'>0.12±0.00 0.13±0.00 0.17±0.00 0.20±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>DBLP</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DeepWalk</ns0:cell><ns0:cell cols='4'>0.52±0.00 0.53±0.00 0.53±0.00 0.53±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Node2Vec</ns0:cell><ns0:cell cols='4'>0.52±0.00 0.53±0.00 0.53±0.00 0.53±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>HOPE</ns0:cell><ns0:cell cols='4'>0.29±0.01 0.30±0.01 0.31±0.00 0.31±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>For Citeseer-M10 dataset (Table 4), DeepWalk and Node2Vec show identical performance for all</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>labeled nodes' values, whereas HOPE again performs quite poorly. Interestingly, in contrast with Cora,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>here, one can see that text embedding techniques outperform network embeddings.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>shows the results for network embedding methods on DBLP dataset. Similar to the Citeseer-M10 dataset, we can see that DeepWalk and Node2Vec perform equally. Also, one can see that text embedding techniques severely outperform network embeddings on this dataset.</ns0:figDesc><ns0:table /><ns0:note>Generally, different datasets show different levels of importance for text and network data. For some datasets, nodes from the same class tend to link each other (the phenomenon is called homophily<ns0:ref type='bibr' target='#b1'>Barabási and Pósfai (2016)</ns0:ref>), which means that graph structure is beneficial for predicting the target. For other datasets, nodes might also tend to cite nodes from other classes. In this case, network information is less useful. Even though on some datasets, one type of information (text or network) significantly outperforms10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55008:2:1:NEW 28 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Fusion methods on Citeseer-M10 for node classification (micro-F 1 , metric lies between (0, 1) and higher value means better results)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>11/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55008:2:1:NEW 28 Mar 2021)Manuscript to be reviewed 0.49 ± 0.05 0.57 ± 0.02 0.57 ± 0.02 0.63 ± 0.00</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>shows results of text embedding techniques on Cora dataset. Again, one can see that BoW outperforms other methods, but LDA demonstrates much better performance for link prediction contrary ± 0.01 0.58 ± 0.01 0.62 ± 0.01 0.63 ± 0.01</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>to node classification. Similar to node classification problem on link prediction, advanced network embeddings perform worse when the percentage of train data is high but show better results when it gets lower.Table9show results for classic methods for link prediction problem on CiteSeer dataset. Surprisingly, here BoW and TF-IDF perform very poorly, whereas SBERT shows superior performance. SBERT performance makes more sense since it is trained to differentiate two texts from each other, so it fits well12/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55008:2:1:NEW 28 Mar 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Text embeddings on Cora for link prediction (micro-F 1 , metric lies between (0, 1) and higher value means better results)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>% Train edges</ns0:cell><ns0:cell>5%</ns0:cell><ns0:cell>10%</ns0:cell><ns0:cell>30%</ns0:cell><ns0:cell>50%</ns0:cell></ns0:row><ns0:row><ns0:cell>BoW</ns0:cell><ns0:cell>0.52±0.01</ns0:cell><ns0:cell>0.52±0.00</ns0:cell><ns0:cell>0.52±0.01</ns0:cell><ns0:cell>0.52±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>TF-IDF</ns0:cell><ns0:cell>0.52±0.01</ns0:cell><ns0:cell>0.52±0.01</ns0:cell><ns0:cell>0.53±0.01</ns0:cell><ns0:cell>0.53±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>LDA</ns0:cell><ns0:cell>0.69±0.01</ns0:cell><ns0:cell>0.69±0.01</ns0:cell><ns0:cell>0.70±0.01</ns0:cell><ns0:cell>071±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>SBERT pretrained</ns0:cell><ns0:cell>0.84±0.00</ns0:cell><ns0:cell>0.85±0.00</ns0:cell><ns0:cell>0.86±0.01</ns0:cell><ns0:cell>0.86±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Word2Vec pretrained</ns0:cell><ns0:cell>0.53±0.01</ns0:cell><ns0:cell>0.53±0.01</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.54±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Word2Vec (d=300)</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.54±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Word2Vec (d=64)</ns0:cell><ns0:cell>0.54±0.01</ns0:cell><ns0:cell>0.54±0.01</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.54±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Doc2Vec (pretrained) 0.55±0.01</ns0:cell><ns0:cell>0.55±0.00</ns0:cell><ns0:cell>0.55±0.00</ns0:cell><ns0:cell>0.55±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Doc2Vec (d=300)</ns0:cell><ns0:cell>0.77±0.01</ns0:cell><ns0:cell>0.77±0.00</ns0:cell><ns0:cell>0.78±0.00</ns0:cell><ns0:cell>0.79±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Doc2Vec (d=64)</ns0:cell><ns0:cell>0.77±0.01</ns0:cell><ns0:cell>0.77±0.01</ns0:cell><ns0:cell>0.77±0.00</ns0:cell><ns0:cell>0.78±0.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Sent2Vec pretrained</ns0:cell><ns0:cell>0.54±0.01</ns0:cell><ns0:cell>0.54±0.01</ns0:cell><ns0:cell>0.55±0.01</ns0:cell><ns0:cell>0.55±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Sent2Vec (d=600)</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.55±0.01</ns0:cell><ns0:cell>0.55±0.00</ns0:cell><ns0:cell>0.56±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Sent2Vec (d=64)</ns0:cell><ns0:cell>0.53±0.00</ns0:cell><ns0:cell>0.53±0.01</ns0:cell><ns0:cell>0.54±0.00</ns0:cell><ns0:cell>0.54 ±0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Ernie pretrained</ns0:cell><ns0:cell cols='4'>0.84 ± 0.01 0.84 ± 0.01 0.85 ± 0.01 0.85 ± 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Table</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Network embeddings for link prediction (micro-F 1 , metric lies between (0, 1) and higher value means better results). For the link prediction task, one can see (Table 11) that classic methods (BoW and TF-IDF) outperform more advanced combinations (Sent2Vec and Word2Vec). Also, TADW performs on the same level as plain text embedding techniques. It might happen because TADW relies mostly on network information rather than on text. However, the performance of the GIC and SBERT mixture for this dataset is high. It means that methods preserving node clustering in the embedding space can improve the quality of each</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Fusion embeddings on Cora for link prediction (micro-F 1 , metric lies between (0, 1) and higher value means better results) A combination of TADW and Ernie shows by far the best results on the Citeseer-M10 dataset (Table12). It becomes more evident when the percentage of training edges is high. It also follows previous results on using text embeddings, such as SBERT, on this dataset. Interestingly, usage of SBERT alone performs better than in combination with GCN. However, GraphSAGE mixture with SBERT shows close results to the pure text embedding. It means that the entire graph structure adds more noise to the fine embeddings for document clustering. Thus, careful selection of local neighbors is a crucial part to utilize all SBERT properties. Also, GNNs shows consistent results for different percentage of train edges, whereas other methods seem to degrade heavily as the percentage of train edges becomes lower.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>Generally, one can see that for the link prediction task (as opposed to node classification problem),14/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55008:2:1:NEW 28 Mar 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
</ns0:body>
" | "Dear PeerJ CS Editors,
We thank the Reviewers for their generous and fruitful comments on the manuscript. We have
placed the Reviewers' comments below and answered their concerns point by point; each answer
starts with ">>" and refers to the updated line numbers in the new Manuscript.
In particular, all the code supporting the Experiments section that was attached to the original
submission has been updated and attached to the Revised Manuscript in Supplementary files.
We have also done a thorough proofreading of the paper, as can be seen in the "tracked changes"
version of the manuscript.
We believe that the Manuscript is now suitable for publication in PeerJ CS.
Sincerely yours,
On behalf of all authors.
===================================================================
Editor comments
It is my opinion as the Academic Editor for your article - Fusion of text and graph
information for machine learning problems on graphs - that it requires a number of
Minor Revisions.
My suggested changes and reviewer comments are shown below and on your article
'Overview' screen.
Please address these changes and resubmit. Although not a hard deadline please try to
submit your revision within the next 10 days.
===================================================================
Reviewer 1 (Anonymous)
I suggest that the authors do another proofreading to improve the text. Most importantly, articles
(i.e., a, an, and the) are largely abused in this paper, please correct these grammar issues.
Here are some obvious typos in Sections 1 and 2:
Line 18: the(a) great variety of
Line 45: (A) More advanced approach
Line 73: a(remove) text information
Line 78: remove '.'
Line 107: learn to embed -> learns embeddings
Line 110: (a) more efficient, (the) random walk idea
Line 115: (an) important property
Line 129: to use them(remove)
Line 137: the algorithm learn(s)
Line 142: link prediction problem(s) or node classification(s)
Line 147: Graph Attention n(N)etworks (GAT), (the) self-attention mechanism
>> We have proofread the text and also corrected mistakes mentioned above. Please, see the
manuscript with highlighted changes over text.
===================================================================
Reviewer 2 (Anonymous)
The paper was significantly improved. I like the structure of the paper now. It is easy to read the
paper and follow the main ideas.
I have some minor remarks but in general, I think, the paper can be accepted:
- I would use a word “graph” everywhere throughout the paper instead of mixing “graph” and
“network” words.
>> We have updated the text to make the terminology clearer. We usually speak of network
representation learning or network embedding, meaning that we have a graph structure with
standalone graph embeddings and attributes with corresponding embeddings. So when we
mention the "network" concept, we mean the combination of a "graph" (the mathematical model)
and "attributes", as is common in network science terminology. Therefore, we would like to keep
both the graph and network concepts wherever each is appropriate.
- Everywhere in the paper, you use 'the loss' such as in line 56 or 155. Instead, I would use 'the
loss function'.
Line 65: and models choice -> and the choice of models;
Line 73: focus is in -> focus is on;
Line 154: GIC leverage - > GIC leverages;
Line 178: the better the quality -> the better the quality is.
Line 277: I would use either logistic regression or Logistic Regression as in the line 394.
>> Corrected, thanks.
===================================================================
" | Here is a paper. Please give your review comments after reading it. |
91 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Occlusion awareness in a dynamic scene is one of the most challenging problems, especially in realistic interaction applications in the field of multimedia, remote sensing, computer vision and computer graphics. This paper proposes a new framework to build a full 3D model reconstruction that overcomes several problems in a complex dynamic scene such as the occlusion without using sensors' data. Rather than using expensive sensors, we use popular devices such as a monocular camera to generate a suitable model for video streaming applications. The main objective is to create a smooth and accurate 3D point-cloud for a dynamic environment using cumulative information of a sequence of RGB video frames. The framework consists of two main phases. First, an unsupervised learning technique is used to predict; scene depth, camera pose, and objects' motion from RGB monocular videos. Second, uses frame-wise point cloud fusion to reconstruct a 3D model based on video frames sequence. Several evaluation metrics are performed, such as Localization error, RMSE, and Fitness between ground truth (KITTI's sparse LiDAR points) and predicted point-cloud. Besides, we compared our framework with different state-ofthe-art methods using MRE and Chamfer Distance that are more widely used metrics with point-clouds. Experimental results showed that the proposed framework surpassed the other methods and proved to be a powerful candidate in 3D model reconstruction.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Constructing a full 3D model from a complex dynamic scene data has many applications in motion capture, robot navigation, augmented reality, and autonomous driving. Moreover, it aims to provide solutions to solve realistic interaction problems such as occlusion and collision. There are many challenges to reconstruct 3D models from dynamic scenes, such as predict accurate depth from sensors or a sequence of 2D RGB. To achieve that it needs to consider not only the camera pose, but also the motion of dynamic objects during navigation. There are many techniques in computer vision that introduce different image-based 3D modeling techniques such as simultaneous location and mapping (SLAM) (Mur-Artal R. J., 2015), Multiview stereo (MVS) <ns0:ref type='bibr' target='#b25'>(Kuhn, 2019)</ns0:ref>, photo tourism <ns0:ref type='bibr' target='#b13'>(Furukawa, 2009)</ns0:ref>, and an RGB-D videobased method <ns0:ref type='bibr' target='#b22'>(Keller, 2013.)</ns0:ref>. These methods use the point cloud representation to represent a realworld scene. A point cloud representation can be used for 3D inspection as it renders detailed 3D environments accurately. The depth camera such as (RGB-D) cameras, e.g., Microsoft Kinect, is widely used to reconstruct 3D indoor scenes <ns0:ref type='bibr' target='#b3'>(Chen, 2015)</ns0:ref>. However, Kinect-like scanning devices fail to capture reliable depth images from outdoor scenes. Moreover, RGB-D cameras may not be readily available in most robotics and mobile devices, and it may also introduce sensors noise. A variant of solutions has been developed using different kinds of cameras, for example, monocular <ns0:ref type='bibr' target='#b44'>(Tateno, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b47'>(Wang J. H., 2018)</ns0:ref>, and Stereo <ns0:ref type='bibr' target='#b16'>(Hassan, 2017)</ns0:ref>. Monocular cameras are most widely used because they are cheap, least restriction, and most ubiquitous for robots and mobile devices. However, the existing monocular 3D model reconstruction methods have poor performance due to the lack of accurate scene depth and camera pose. Therefore, most reconstruction directions resort to predict depth and camera pose using learning techniques. Generating a 3D point cloud from learnt depth and learnt camera pose might be the right choice that solves the problem of using expensive sensors and gives accurate results in a dynamic scene. Recently, deep neural network has been used in learning and succeeded to predict depth from a single image <ns0:ref type='bibr' target='#b33'>(Liu F. C., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b29'>(Laina, 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref>. One advantage of deep learning approaches is that the full scale can be predicted from a single image without the need of scene-based assumptions or geometric constraints. The main objectives of our framework are: Creating a smooth and accurate 3D point-cloud for a dynamic environment using accumulative information of a sequence of RGB video frames and solve several problems such as occlusion and collision. Moreover, using unsupervised learning techniques to avoid expensive sensors by replacing them with popular devices such as a monocular camera to generate a suitable framework for video streaming applications. The framework consists of two stages. 
In the first stage, the online refinement process that adapts the new environment by integrating 3D motion of moving objects with depth and camera ego-motion. In the Second stage, a full 3D model is reconstructed using frame-wise point cloud fusion. Figure <ns0:ref type='figure'>1</ns0:ref> illustrates the proposed framework details.</ns0:p><ns0:p>The rest of the paper is organized as follows: State of the art section describes the most relevant related work. The proposed framework section presents an overview of the proposed approach. Monocular 3D Model Reconstruction section, explains the Mathematical model for 3D model reconstruction. The proposed 3D model reconstruction implementation section illustrated dataset, experiment metrics. The Experimental results section prove the accuracy of the framework and give a comparison between our method and the state-of-the-art techniques. Finally, conclusion and future work are explained in the last section. object motions. Most of the RGB-D cameras have the natural advantage of reconstructing dense models, and many exciting fusion schemes are proposed in this domain (J. Lee, 2016 ) (Z. <ns0:ref type='bibr' target='#b55'>Yan, 2017)</ns0:ref>. However, the received depth image from the depth camera contains noise or even outliers due to lighting conditions and spatially variant materials of the objects. <ns0:ref type='bibr' target='#b47'>(Wang J. H., 2018)</ns0:ref> improved this limitation by combing learning depth from RGB-D datasets with monocular SLAM and frame-wise point cloud fusion to build a dense 3D model of the scene. They can reconstruct smooth and surface-clear on various examples with a dedicated point cloud fusion scheme.</ns0:p><ns0:p>The 3D reconstruction of a dynamic scene is more challenging than the static scene. This is because it does not only need to deal with the camera pose, but it also deals with the object motion. In the past few years, there was a great interest to solve 3D scene reconstruction with moving objects using single or multiple Monocular camera RGB frames. Xingbin et al. <ns0:ref type='bibr' target='#b51'>(Yang X. L., 2020)</ns0:ref>, presented a real-time monocular 3D reconstruction system for mobile phone which used online incremental mesh generation for augmented reality application. For the 3D reconstruction process, they performed monocular depth estimation with a multi-view semi-global matching method followed by a depth refinement post-processing. Because the dynamic objects such as walking pedestrians or moving objects not support by multi-view geometry prerequisites, the authors deal with this problem by update the algorithm to remove the dynamic objects from the reconstruction mesh. On the other hand, several applications focused on 3D reconstruction for a specific category of moving objects such as full or part of the human body. <ns0:ref type='bibr' target='#b43'>(Shimada, 2020)</ns0:ref> illustrated markless 3D human motion capture from monocular videos. They concentrated on challenging 3D motion observed, such as foot sliding, foot-floor penetration, and unnatural body leaning. Also, <ns0:ref type='bibr' target='#b40'>(Peng, 2020)</ns0:ref> proposed 3D hand mesh reconstruction from a single RGB image. The authors relied on the graph convolution neural network (GCN) with two modules; hand localization and mask generation, to capture geometric details of 3D hand shape. 
On the level of a dynamic outdoor domain, <ns0:ref type='bibr' target='#b24'>(Ku, 2019.)</ns0:ref> introduced a monocular 3D object detection method that leverages proposals and shapes reconstruction. This method depends on three main processes; geometric priors, shape reconstruction, and depth prediction. The feature map is produced by image crop of the object and global context as input for the network. The orientation is predicted to estimate a proposal. Moreover, the point-cloud is generated and transformed into the global frame. <ns0:ref type='bibr' target='#b27'>(Kumar S. D., 2019)</ns0:ref> illustrates a technique to build 3D reconstruction of a complex dynamic scene using two frames by applying super-pixel over-segmentation to the image. A generically dynamic (hence non-rigid) scene with a piecewise planar and rigid approximation are presented. Moreover, they reduced the reconstruction problem to a '3D jigsaw puzzle' which takes pieces from an unorganized 'soup of super-pixels'. This work aims to create an accurate 3D scene model that recognizes every moving object from monocular RGB video frames without sensors. The 3D reconstruction process learns information (depth map, camera pose, and object motion) of the current RGB frame, previous frame, and keyframes to detect changes during objects motion. After that, the decision to add, update or remove a point from the 3D point-cloud is taken into consideration. Studying moving objects during video frames solves several problems such as occlusion and collision in a dynamic outdoor environment such as augmented realty.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Proposed Framework</ns0:head><ns0:p>In this section, a novel framework for 3D dynamic scene reconstruction is proposed. This framework consists of two stages/modules. In the first stage, unsupervised learning is used to predict scene depth, camera pose, and object motion for a complex dynamic scene. Second, during the online refinement process, the previous information is used to create a point cloud for each single frame. Figure <ns0:ref type='figure'>2</ns0:ref> explains the framework stages starting from video frames till the generation of the full scene point cloud.</ns0:p></ns0:div>
<ns0:div><ns0:head>The online refinement process</ns0:head><ns0:p>This process was inspired by <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref>. The advantage of this approach is that it not only predicts scene depth and camera pose but also considers the motion of objects in a dynamic scene. Moreover, the framework gives better performance in outdoor scenes than in indoor scenes. The purpose of this approach is to represent the learnt geometric structure in the learning process by modeling the scene and the individual objects. From monocular videos, the camera ego-motion and the object motions are recognized. Furthermore, the online refinement method is used to adapt, on the fly, to unknown domains. The depth function is a fully convolutional encoder-decoder based on the residual network (ResNet18) architecture <ns0:ref type='bibr' target='#b17'>(He, 2016)</ns0:ref>. It begins with weights pre-trained on ImageNet <ns0:ref type='bibr' target='#b6'>(Deng, 2009)</ns0:ref> and produces a dense depth map from a single RGB frame. Moreover, the camera-motion neural network takes a sequence of two RGB images as input and generates an SE3 transform between the frames. For object motion, a segmentation mask is used between two RGB images for every individual object, obtained with an R-CNN architecture initialized with weights pre-trained on the COCO dataset <ns0:ref type='bibr' target='#b31'>(Lin, 2014)</ns0:ref>. In this stage, the framework predicts the transformation vector (translation and rotation) for every single object in 3D space, which creates the detected object appearance in the respective target frame. The static background is generated by a single warp based on ego-motion.</ns0:p></ns0:div>
<ns0:div><ns0:head>Point Cloud Fusion Phase</ns0:head><ns0:p>The estimated RGB depth, camera pose, and object motion learnt in the previous phase are transformed into a frame-wise point cloud, and later an accurate 3D model is reconstructed for the dynamic environment. The predicted 6-dimensional transformation vector (ego-motion) is used to convert the estimated depth into a unified coordinate space, which is then fused into an accumulated global model. The global model is a list of 3D points with their associated attributes. For each frame, the per-frame point cloud is generated. The stability of each pixel in the current frame is checked against all co-visible keyframes. If corresponding points are found in any co-visible keyframe, the most stable point is merged with the new estimated point using weighted average insertion. If none is found, the estimated point is added to the global model as an unstable point. The global model is cleaned up over time to remove outliers according to stability and temporal constraints. The selection of co-visible keyframes depends on the total number of video frames and the positions of the co-visible keyframes. Many experiments have been conducted to select a suitable number of co-visible keyframes and their corresponding positions; as a result, it is found that five co-visible keyframes distributed over 20 frames give acceptable accuracy.</ns0:p><ns0:p>In the following sections, the proposed framework for 3D model reconstruction of a complex dynamic scene from monocular video frames is illustrated in detail. The proposed framework is divided into two main phases: an unsupervised learning phase for depth, camera pose and object motion, and a frame-wise point cloud fusion phase for a sequence of monocular video frames.</ns0:p></ns0:div>
<ns0:div><ns0:head>Learning Techniques Phase for Frame Depth, Camera Pose, and Object Motion</ns0:head><ns0:p>A sequence of three RGB frames (I_1, I_2, I_3) from the monocular camera is used for the learning process, and the camera intrinsic matrix is defined as K in R^(3x3). Depth prediction uses a single frame: a fully convolutional encoder-decoder theta: R^(HxWx3) -> R^(HxW) produces a dense depth map D_i = theta(I_i). The ego-motion network psi_E: R^(2xHxWx3) -> R^6 takes two consecutive frames and produces an SE3 transform (translation and rotation) E_i->j between them. A differentiable warping operator phi(I_i, D_j, E_i->j) -> I_hat_i->j reconstructs the j-th image from the i-th one given the corresponding depth estimate D_j and ego-motion estimate E_i->j; in practice, phi reads from transformed pixel coordinates, where the projected coordinates are [x_hat, y_hat, 1]^T = K E_i->j (D_j^(x,y) K^(-1) [x, y, 1]^T). The reconstruction loss for this approach is calculated as described in (Casser, 2019.).</ns0:p><ns0:p>The object motion model psi_M is used to predict the motion of individual objects in 3D space. Similar to the ego-motion architecture, it takes a sequence of two RGB frames. The object motion model applies a segmentation mask (R-CNN architecture) to the individual objects in the dynamic scene. The transformation vector per object is learned, which creates the detected object appearance in the respective target frame. According to the ego-motion model, the static background is generated first, and then all segmented objects are added by their warped appearance. The advantage of this approach is not only modeling objects in 3D space, but also learning their motion on the fly.</ns0:p></ns0:div>
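<ns0:p>To make the warping step above concrete, the following is a minimal NumPy sketch of the projection used by the view-synthesis warp: target pixels are back-projected with the predicted depth, transformed by the predicted ego-motion, and re-projected with the intrinsics K. This is an illustrative sketch only, not the authors' TensorFlow implementation: it uses nearest-neighbour sampling instead of the differentiable bilinear sampler, and all function and variable names are placeholders.</ns0:p>
import numpy as np

def warp_frame(src_img, tgt_depth, K, T_tgt_to_src):
    """Reconstruct the target frame by sampling the source frame.

    src_img:       (H, W, 3) source RGB frame
    tgt_depth:     (H, W) predicted depth of the target frame
    K:             (3, 3) camera intrinsic matrix
    T_tgt_to_src:  (4, 4) predicted ego-motion (SE3) from target to source
    """
    H, W = tgt_depth.shape
    # Homogeneous pixel grid [x, y, 1]^T for every target pixel.
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T   # (3, H*W)

    # Back-project to camera space: D * K^-1 * [x, y, 1]^T.
    cam = np.linalg.inv(K) @ pix * tgt_depth.reshape(1, -1)

    # Apply the SE3 ego-motion and re-project with K.
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])                   # (4, H*W)
    src_cam = (T_tgt_to_src @ cam_h)[:3]
    proj = K @ src_cam
    u = proj[0] / proj[2]
    v = proj[1] / proj[2]

    # Nearest-neighbour sampling (the learned model uses bilinear sampling).
    u = np.clip(np.round(u).astype(int), 0, W - 1)
    v = np.clip(np.round(v).astype(int), 0, H - 1)
    return src_img[v, u].reshape(H, W, 3)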
<ns0:div><ns0:head>3D Model Reconstruction with Point Cloud Fusion</ns0:head><ns0:p>After depth, camera pose and object motion are predicted in the previous stage, a full 3D scene model is reconstructed using point cloud fusion. The point cloud generation is inspired by ORB-SLAM2 (Mur-Artal R. a., 2017) and <ns0:ref type='bibr' target='#b47'>(Wang J. H., 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b47'>(Wang J. H., 2018)</ns0:ref> used point cloud fusion to generate a 3D model reconstruction suitable for augmented reality applications. The advantage of this approach is that it is easy to apply and convenient for real-time applications. Its limitation is that it is restricted to a static environment and has not been tested in a dynamic one. The proposed framework alleviates this limitation by considering object motion in dynamic environments. The per-frame point cloud is reconstructed from the estimated depth D_i of a single frame and the final motion E_F_m, which is a combination of the individual object motions psi_M_o and the camera motion psi_E_i->j. Once the intrinsic camera calibration matrix K is given, the per-frame point cloud is calculated as follows:</ns0:p><ns0:formula xml:id='formula_0'>p_i = (E_F_m)^(-1) pi(u, D_i)    (1)</ns0:formula><ns0:p>where u = (x, y, 1)^T denotes the homogeneous representation of a pixel and pi(u, D_i) = K^(-1) D_i u is the back-projection from image to camera coordinates.</ns0:p></ns0:div>
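<ns0:p>Equation (1) maps every pixel of a frame to a 3D point in the unified coordinate space. The snippet below is a small NumPy illustration of that back-projection, assuming the final motion is given as a 4x4 homogeneous matrix; variable names are illustrative and are not taken from the released code.</ns0:p>
import numpy as np

def depth_to_point_cloud(depth, K, E_fm):
    """Per-frame point cloud p_i = (E_F_m)^-1 * pi(u, D_i), Eq. (1).

    depth: (H, W) predicted depth map D_i
    K:     (3, 3) intrinsic calibration matrix
    E_fm:  (4, 4) final motion (camera ego-motion combined with object motion)
    Returns an (H*W, 3) array of 3D points in the unified coordinate space.
    """
    H, W = depth.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    u = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T    # (3, N)

    # Back-projection pi(u, D_i) = K^-1 * D_i * u (camera coordinates).
    cam_pts = np.linalg.inv(K) @ (u * depth.reshape(1, -1))

    # Transform into the global model frame with the inverse final motion.
    cam_h = np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])
    world = np.linalg.inv(E_fm) @ cam_h
    return world[:3].T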
<ns0:div><ns0:head>Point Association</ns0:head><ns0:p>The system holds a set of co-visible keyframes k_1, k_2, ..., k_n in KF, selected according to the length of the monocular video sequence during the online refinement process. The visibility of pixels is checked by mapping each pixel u_i of the current frame into all co-visible keyframes:</ns0:p><ns0:formula>u_k = f(K (E_F_m)_k pi(u_i))    (2)</ns0:formula><ns0:p>where f projects a camera-space point to pixel coordinates; the system also maintains such a mapping from every keyframe pixel to its associated model point. The confidence of a new measurement is weighted by e^(-gamma^2 / sigma^2), where gamma is the normalized radial distance of D_i from the center of the camera and sigma = 0.6. The new observation available in the latest frame i is merged into the model according to the following equations:</ns0:p><ns0:formula xml:id='formula_2'>p_i^n = (w_A p_i + w_0 (E_F_m)^(-1) pi(u_i)) / (w_A + w_0)    (3)
C_c^n = (w_A C_c + w_0 ||(E_F_m)^(-1) pi(u_i) - p_i||) / (w_A + w_0)    (4)
w_A^n = min(w_A + w_0, W_e)    (5)</ns0:formula><ns0:p>where p_i^n denotes the newly updated point, C_c is its confidence counter, w_A is the accumulated weight, w_0 is a constant equal to 1, and W_e is the truncation threshold, equal to 100 <ns0:ref type='bibr' target='#b47'>(Wang J. H., 2018)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> represents the pseudo code of the 3D model reconstruction process. The point association runs from line 8 to 25, in which the stability of each pixel in the current frame is checked against all co-visible keyframes. If corresponding points are found, the point is updated and set as stable according to the value of its confidence counter. If not, the estimated point is added to the global model as an unstable point and inserted into the point map as a new seed. A probabilistic noise filter is applied at line 27 to keep only stable points.</ns0:p></ns0:div>
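<ns0:p>The weighted-average insertion of Eqs. (3)-(5) can be summarised by the short sketch below, assuming the new observation has already been transformed into the model coordinate space. The constants follow the values stated in the text (w_0 = 1, truncation threshold W_e = 100); everything else is a hedged illustration, not the authors' code.</ns0:p>
import numpy as np

# Constants taken from the text: w_0 = 1, truncation threshold W_e = 100.
W_0 = 1.0
W_EPS = 100.0

def fuse_point(p_model, c_model, w_acc, p_obs):
    """Merge one new observation into an existing model point (Eqs. 3-5).

    p_model: (3,) current model point p_i
    c_model: scalar confidence counter C_c of that point
    w_acc:   scalar accumulated weight w_A
    p_obs:   (3,) new observation (E_F_m)^-1 * pi(u_i) in model space
    Returns the updated point, confidence counter and accumulated weight.
    """
    denom = w_acc + W_0
    p_new = (w_acc * p_model + W_0 * p_obs) / denom                               # Eq. (3)
    c_new = (w_acc * c_model + W_0 * np.linalg.norm(p_obs - p_model)) / denom     # Eq. (4)
    w_new = min(w_acc + W_0, W_EPS)                                               # Eq. (5)
    return p_new, c_new, w_new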
<ns0:div><ns0:head>Proposed Monocular 3D Model Reconstruction Implementation</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset and implementation details</ns0:head><ns0:p>The proposed framework was evaluated using the KITTI dataset <ns0:ref type='bibr' target='#b14'>(Geiger, 2013)</ns0:ref>. The KITTI dataset is widely used in different applications because it contains many different objects and is considered a complex dynamic environment. The KITTI dataset has LiDAR sensor readings for evaluation only; they are used to evaluate the predicted depth and ego-motion. Moreover, the KITTI 3D point cloud is used as ground truth to evaluate the proposed 3D model reconstruction. The number of points per scan is not constant; on average each frame has a size of ∼1.9 MB, which corresponds to ∼120,000 3D points.</ns0:p><ns0:p>The proposed framework is divided into two modules/phases as mentioned in section 3. The first module is responsible for predicting depth and ego-motion using unsupervised learning in a dynamic scene. The code is implemented using TensorFlow; the actual size of the input images is 1224 × 368, and the images are resized to 416 × 128, with the same settings as described in <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref></ns0:p></ns0:div>
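<ns0:p>For readers who want to reproduce the evaluation, the ground-truth scans mentioned above can be loaded directly from KITTI's raw Velodyne files, which store each scan as a flat float32 binary of (x, y, z, reflectance) tuples. The snippet below is a minimal loader; the file path is a placeholder and not part of the released code.</ns0:p>
import numpy as np

def load_kitti_velodyne(bin_path):
    """Load one KITTI Velodyne scan used as ground truth (~120,000 points per frame).

    Only the xyz coordinates are kept; the fourth channel (reflectance) is dropped
    because the point-cloud evaluation uses geometry only.
    """
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3]

# Example usage (placeholder path):
# gt_points = load_kitti_velodyne('kitti/velodyne/0000000000.bin')
# print(gt_points.shape)   # roughly (120000, 3)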
<ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>The evaluation process used three methods.</ns0:p><ns0:p>1- Localization accuracy error L_E, FPE (false positive error), and FNE (false negative error), which were proposed by <ns0:ref type='bibr' target='#b15'>(Hafiz, 2015)</ns0:ref>. The localization error is defined as:</ns0:p><ns0:formula xml:id='formula_3'>L_E = (1 / N_C) * sum_{j=1..N_C} dis(p_gj, p_cj)</ns0:formula><ns0:p>where N_C is the number of correctly detected points in the ground truth G_T. The FNE at localization error tolerance r is defined as:</ns0:p><ns0:formula xml:id='formula_4'>FNE(r) = 1 - N_C / N_G</ns0:formula><ns0:p>The FPE at localization error tolerance r is defined as:</ns0:p><ns0:formula xml:id='formula_5'>FPE(r) = N_F / N_p</ns0:formula><ns0:p>The number of false positives is normalized with the number of all true negatives, where N_F is the number of false positives and N_F = N_p - N_C.</ns0:p><ns0:p>2- Registration of the 3D point cloud between the output of the Velodyne laser scanner (ground truth) and the proposed technique, which generates a 3D point cloud from the predicted depth and predicted ego-motion. Global registration <ns0:ref type='bibr' target='#b57'>(Zhou, 2016)</ns0:ref> and Iterative Closest Point (ICP) point-to-point <ns0:ref type='bibr' target='#b42'>(Rusinkiewicz, 2001.)</ns0:ref> <ns0:ref type='bibr' target='#b39'>(Paul, 1992)</ns0:ref> are used. Moreover, the registration is evaluated by calculating the fitness function, which measures the overlapping area (the number of inlier correspondences divided by the number of points in the ground truth); a higher fitness value is better. For the root mean square error (RMSE) of all correspondences within range r, lower is better.</ns0:p><ns0:p>3- Accuracy reported using the mean relative error (MRE) between the ground-truth depth z_i^gt and the estimated depth z_i^est over the P reconstructed 3D points; MRE is a state-of-the-art metric used to compare the proposed framework with several monocular dynamic reconstruction methods. Another metric is the Chamfer Distance (CD) between the ground-truth point cloud P_gt in R^(Nx3) and the estimated point cloud P_est in R^(Nx3) <ns0:ref type='bibr' target='#b34'>(Lu, 2020)</ns0:ref>, a regularly used measure of similarity between two point clouds, defined as:</ns0:p><ns0:formula xml:id='formula_6'>CD = (1/N) * sum_{x_hat_i in P_est} min_{x_j in P_gt} |x_hat_i - x_j| + (1/N) * sum_{x_j in P_gt} min_{x_hat_i in P_est} |x_hat_i - x_j|</ns0:formula></ns0:div>
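<ns0:p>As a worked example of the measures above, the following NumPy/SciPy sketch computes the localization error, FNE, FPE and the Chamfer Distance for two point clouds. It is an illustration under stated assumptions: the matching rule uses the Euclidean nearest neighbour within radius r (the paper defines correct detection over a region C_r(p_g)), and each Chamfer term is normalised by its own cloud size, which is the common convention.</ns0:p>
import numpy as np
from scipy.spatial import cKDTree

def localization_metrics(gt, pred, r):
    """Localization error L_E, FNE(r) and FPE(r) between two (N, 3) point clouds."""
    d_gt_to_pred, _ = cKDTree(pred).query(gt)      # nearest predicted point per GT point
    correct = d_gt_to_pred < r                     # GT points with a detection within r
    n_c = int(correct.sum())
    l_e = float(d_gt_to_pred[correct].mean()) if n_c else float('inf')
    fne = 1.0 - n_c / len(gt)                      # false negative error
    fpe = (len(pred) - n_c) / len(pred)            # false positive error, N_F = N_p - N_C
    return l_e, fne, fpe

def chamfer_distance(est, gt):
    """Symmetric Chamfer Distance between estimated and ground-truth clouds."""
    d_est_to_gt, _ = cKDTree(gt).query(est)
    d_gt_to_est, _ = cKDTree(est).query(gt)
    return d_est_to_gt.mean() + d_gt_to_est.mean()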
<ns0:div><ns0:head>Experimental Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Localization Accuracy</ns0:head><ns0:p>In this section, the proposed framework is evaluated using different techniques. First, the localization error L_E, FNE, and FPE between the ground truth and the predicted 3D point cloud are calculated for different localization error tolerances r.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref> illustrates the average localization error, FNE, and FPE with r in the range 0.04 to 0.16 for 20 frames, using the output of the online refinement process. From Figure <ns0:ref type='figure'>4</ns0:ref>, it is found that the proposed framework succeeds in finding points close to the ground-truth points with a low localization error. Moreover, the decrease in FNE indicates that the framework catches the nearest points with a low localization error, while the rapid drop in FPE means that the framework does not return excessive interest points.</ns0:p></ns0:div>
<ns0:div><ns0:head>Point Cloud Registration</ns0:head><ns0:p>The second approach, 3D point cloud registration between the ground-truth and predicted points, is used to check the accuracy of the 3D reconstruction. As shown in Figure <ns0:ref type='figure'>5</ns0:ref>, the range and density of the predicted 3D point cloud are lower than those of the ground truth; nevertheless, the predicted 3D point cloud is close to the ground truth in the selected area.</ns0:p><ns0:p>To evaluate the registration between the ground-truth and predicted 3D point clouds, we used state-of-the-art algorithms such as Global registration <ns0:ref type='bibr' target='#b57'>(Zhou, 2016)</ns0:ref> and ICP point-to-point <ns0:ref type='bibr' target='#b42'>(Rusinkiewicz, 2001.</ns0:ref>) <ns0:ref type='bibr' target='#b39'>(Paul, 1992)</ns0:ref>. Figures 6 and 7 illustrate the average RMSE and fitness of the registration for 20 frames of the online refinement process using Global registration and ICP point-to-point with different thresholds.</ns0:p><ns0:p>As shown in Figures 6 and 7, ICP point-to-point registration obtains the lowest RMSE and the highest fitness between the ground truth and the predicted 3D point cloud. This is because the ICP point-to-point technique usually runs until convergence or until it reaches a maximum number of iterations (we used the default of 30 iterations). This indicates that the proposed framework succeeds in covering a large number of overlapping areas with a small mean square error.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref> gives more details about ICP point-to-point registration over the 20 frames, with an acceptable RMSE and standard deviation as the value of r increases.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>9</ns0:ref> illustrates the 3D point cloud after mapping onto RGB frames selected from different videos. Figure <ns0:ref type='figure'>9</ns0:ref> (b) refers to the ground-truth points, and Figure <ns0:ref type='figure'>9</ns0:ref> (c) to the points predicted by our framework. The performance of the proposed framework is compared with the state-of-the-art method <ns0:ref type='bibr' target='#b28'>(Kumar S. Y., 2017)</ns0:ref>, which reported the MRE on the KITTI dataset, and with several monocular dynamic reconstruction methods, such as the Block Matrix Method (BMM) <ns0:ref type='bibr' target='#b4'>(Dai, 2014)</ns0:ref>, Point Trajectory Approach (PTA) <ns0:ref type='bibr' target='#b0'>(Akhter, 2010)</ns0:ref>, Low-rank Reconstruction (GBLR) <ns0:ref type='bibr' target='#b12'>(Fragkiadaki, 2014)</ns0:ref>, Depth Transfer (DT) <ns0:ref type='bibr' target='#b20'>(Karsch K. C., 2012.)</ns0:ref>, and (DMDE) <ns0:ref type='bibr' target='#b41'>(Ranftl, 2016)</ns0:ref>. Note that we used the results reported in <ns0:ref type='bibr' target='#b28'>(Kumar S. Y., 2017)</ns0:ref>, as its implementation is not publicly available.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>10</ns0:ref> shows that the proposed framework delivers consistently superior reconstruction accuracy on the KITTI dataset. Using unsupervised learning to predict scene depth and camera pose is a strong point of the proposed framework for generating an accurate 3D model reconstruction. Table <ns0:ref type='table'>1</ns0:ref> shows the improvement percentages between the proposed framework and the state-of-the-art methods.</ns0:p></ns0:div>
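<ns0:p>The fitness and RMSE values discussed above can be obtained with off-the-shelf registration tooling. The sketch below shows one way to run ICP point-to-point on the predicted and ground-truth clouds with Open3D and read back the fitness and inlier RMSE; it assumes a recent Open3D release that exposes the o3d.pipelines.registration module (older releases expose the same functions under o3d.registration), and it is an illustration rather than the authors' evaluation script.</ns0:p>
import numpy as np
import open3d as o3d

def icp_point_to_point(pred_xyz, gt_xyz, threshold):
    """Register predicted points to the ground truth with ICP point-to-point.

    pred_xyz, gt_xyz: (N, 3) NumPy arrays of 3D points.
    threshold:        maximum correspondence distance r.
    Returns the fitness (overlap ratio), inlier RMSE and the 4x4 transform.
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(pred_xyz)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(gt_xyz)

    result = o3d.pipelines.registration.registration_icp(
        src, tgt, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=30))
    return result.fitness, result.inlier_rmse, result.transformation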
<ns0:div><ns0:head>Chamfer Distance</ns0:head><ns0:p>We calculate the Chamfer Distance (CD) between the predicted point cloud and the ground truth on the KITTI dataset and compare it with the state-of-the-art modules proposed in <ns0:ref type='bibr' target='#b34'>(Lu, 2020)</ns0:ref>. The main idea of that work is to predict future frames, given the past point cloud sequence, with a motion-based neural network named MoNet; two recurrent networks, LSTM and GRU, are used to predict the scene point cloud.</ns0:p><ns0:p>These methods take point clouds as input to neural networks in order to estimate future frames. MoNet integrates motion features into the prediction pipeline and combines them with content features. For this metric, we used the average CD over 5 frames to match the setting of <ns0:ref type='bibr' target='#b34'>(Lu, 2020)</ns0:ref>. As shown in Table <ns0:ref type='table'>3</ns0:ref>, the CD of our framework is slightly better than that of the MoNet (LSTM) and MoNet (GRU) methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion and Future Work</ns0:head><ns0:p>This paper proposes a novel framework for 3D model reconstruction from monocular video frames in a dynamic environment. It does not rely on depth sensors, which are costly and sometimes noisy. The proposed framework can deal with several problems in computer vision, such as occlusion awareness and collision. It is divided into two main stages. The first stage is based on an unsupervised learning technique to predict the depth map, camera pose, and object motion in 3D space. In the second stage, the 3D model reconstruction is produced by frame-wise point cloud fusion and a point association process. The results show that our framework can generate a smooth and accurate 3D point cloud for a dynamic environment using the cumulative information of a sequence of RGB video frames. Different evaluation error metrics are used: localization error, RMSE, and MRE, with average values of 0.05, 0.067, and 0.073, respectively, between the ground truth and the predicted point cloud. Furthermore, a comparison between the proposed framework and a state-of-the-art method using the Chamfer Distance shows an 11% improvement.</ns0:p><ns0:p>In the future, we will focus on improving the framework's performance so that it can handle real-time applications such as augmented reality, by applying several optimization techniques on state-of-the-art GPUs and the CUDA platform. In addition, we will test the 3D model reconstruction over long sequences of RGB frames.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:55956:1:0:NEW 14 Feb 2021)Manuscript to be reviewedComputer Science3-Accuracy is reported using mean relative error (MRE). Which defined as -truth depth and the estimated depth respectively with as 𝑧 𝑖 𝑔𝑡 , 𝑧 𝑖 𝑒𝑠𝑡 𝑃 the total number of 3D point Cloud. MRE is state of the art metric used to compare the proposed framework with several monocular dynamic reconstruction methods.Another used metric is Chamfer Distance (CD) between the ground truth 𝑃 𝑔𝑡 ∈ 𝑅𝑁 × 3 point cloud and the estimated point cloud . Chamfer Distance<ns0:ref type='bibr' target='#b34'>(Lu, 2020)</ns0:ref> is 𝑃 𝑒𝑠𝑡 ∈ 𝑅𝑁 × 3 </ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure 5 shows a selected frame from a sequence of 20 frames registered with ground-truth with two different points of view. The number of predicted 3D point cloud depends on the number of 2D pixels of the RGB frame. In our case, the input frame size is 416 x 128 (in range of 50000 points), counter to the 3D point cloud of ground truth that collected from the Velodyne laser scanner is in the average of 120000 points. As shown in Figure 5, the range PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:55956:1:0:NEW 14 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Finally, the experimental</ns0:head><ns0:label /><ns0:figDesc>results show how the proposed framework achieves an accurate point-cloud to reconstruct a 3D model from monocular RGB video frames without uses expensive sensors. Several evaluation metrics are performed, such as Localization error, RMSE, and Fitness between ground truth and predicted point-cloud. Besides, we compared our framework with different state-of-the-art methods and achieved a 46% improvement in MRE error compared with the DJP technique. And on the other hand, we obtain 11% improvement using chamfer distance between the proposed framework and MonNet (GRU) method</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,215.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,228.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,203.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,374.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,365.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,375.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,203.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,373.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Translation and Rotation) between frames. Different warping operation in one frame is used to adjacent one in sequence. It allows predicting how the scene seems like with a</ns0:figDesc><ns0:table><ns0:row><ns0:cell>(𝐼 1 ,𝐼 2 ,𝐼 3 )</ns0:cell><ns0:cell cols='2'>𝐾 ∈ 𝑅 3 × 3</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>frame to produce a depth map. The depth map encoder-decoder architecture 𝜃:𝑅 𝐻 × 𝑊 × 3 →𝑅 𝐻 × 𝑊 𝐷 𝑖 = 𝜃(𝐼 𝑖 ) (ResNet 18). In contrast, ego-motion is generated by a fully convolutional network takes sequence of two frames and produces a SE3 transform 𝜓𝐸 : 𝑅 2𝑥𝐻𝑥𝑤𝑥3 →𝑅 6</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>vector (different camera viewpoint. Using different frame warping operator ∅(𝐼 𝐼 𝑖→𝑗 𝑗</ns0:cell><ns0:cell>.,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>image into given corresponding depth estimate and an ego-motion estimate 𝐼 𝑖 𝐼 𝑗 𝐷 𝑗</ns0:cell><ns0:cell>𝐸 𝑖→𝑗</ns0:cell><ns0:cell>.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>In practice, performs the warping by reading from transformed frame pixel coordinates. The ∅</ns0:cell></ns0:row><ns0:row><ns0:cell>projected coordinates are calculated by setting</ns0:cell><ns0:cell>= 𝑖→𝑗 𝐼 𝑥𝑦 𝐼 𝑥,𝑦</ns0:cell><ns0:cell>where</ns0:cell></ns0:row></ns0:table><ns0:note>𝑖 ,𝐷 𝑗 ,𝐸 𝑖→𝑗 )→𝐼 𝑖→𝑗 where is the reconstructed -th image. This approach able to change any source RGB-𝑖→𝑗 [𝑥,𝑦,1] 𝑇 = 𝐾 𝐸 𝑖→𝑗 (𝐷 𝑥,𝑦 𝑗 . 𝐾 -1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>𝑟 (𝑝 𝑔 ) = {𝑝 𝑐 ∈ 𝑃 𝑝 | 𝑀𝑖𝑛(𝑑𝑖𝑠(𝑝 𝑔 ,𝑝 𝑐 )) < 𝑟}</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>𝐺 𝑇</ns0:cell><ns0:cell>, ground</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>truth point denoted by</ns0:cell><ns0:cell cols='2'>, 𝑝 𝑔 ∈ 𝐺 𝑇 𝑃 𝑝</ns0:cell><ns0:cell>is predicted points by the proposed technique.</ns0:cell><ns0:cell>𝑁 𝐺</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>is the number of points in , 𝐺 𝑇 𝑁 𝑝</ns0:cell><ns0:cell cols='2'>is the number of points in , and ( ) is geodesic 𝑃 𝑝 𝐶 𝑟 𝑝 𝑔</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>distance over a region, which is centered by point g and has radius of r. The data which</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>is contained in ( ) can be defined as: 𝐶 𝑟 𝑝 𝑔</ns0:cell></ns0:row><ns0:row><ns0:cell>where</ns0:cell><ns0:cell>𝑑𝑖𝑠(𝑝 𝑔 ,𝑝 𝑐 )</ns0:cell><ns0:cell cols='4'>is the Euclidian distance between the two points</ns0:cell><ns0:cell>𝑝 𝑔 𝑎𝑛𝑑 𝑝 𝑐</ns0:cell><ns0:cell>, and is 𝑟</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>the Maximum correspondence points-pair distance which controls the localization</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>error. is considered to be correctly detected if there exists a detected point 𝑝 𝑐 such that is the minimum distance between the points ⋂ 𝐶 𝑟 (𝑝 𝑔 ) 𝑝 𝑐 𝑝 𝑔 𝑎𝑛𝑑 𝑝 𝑐 . The 𝑝 𝑐 ∈ 𝑃 𝑝 𝐿 𝐸</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>defined as follows:</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>1-Localization Accuracy Error , FPE (False positive error), and FNE (False Negative 𝐿 𝐸 error) which were proposed by Refs<ns0:ref type='bibr' target='#b15'>(Hafiz, 2015)</ns0:ref>.Localization Accuracy, is defined as the amount of deviation of the detected point 𝐿 𝐸 from a ground truth point position. Let ground truth data set denoted by 𝐶</ns0:note></ns0:figure>
</ns0:body>
" | "Reviewer “1”
The authors thank Reviewer 1 for the reasonable comments, which helped us make our manuscript better and clearer.
Comments number
1
Comments and response
Reviewer:
The authors need to clearly state their motivation, goal and contributions at
least in the abstract and the introduction section of the paper.
Authors:
We made some updates in the abstract. Also, add some key features at the end of
the introduction to increase the clarity from lines (65-73).
Reviewer:
Literature review must be improved as it lacks details and in-depth
comparison to other state of the art techniques and applications.:
https://www.mdpi.com/1424-8220/19/7/1553
https://doi.org/10.2991/ijcis.d.190617.001
https://doi.org/10.1109/IECON43393.2020.9255086
https://doi.org/10.1109/JTEHM.2019.2946802
https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-018-0253-2
2
Rewrite adding a section on each item, how it works, how your solutions
differs, what has been achieved, end with how your goals correlate.
Authors:
In the related work section, we divided this section into three main components
related to our framework (depth estimation from a single view, camera pose
estimation, and 3D reconstruction using point cloud fusion). The 3D
reconstruction is used for several domains such as indoor and outdoor mapping,
self-drive cars, service robots, and reconstruct a single object. Our literature
survey is focused on approaches and applications that used 3D reconstruction to
rebuild the full scene from video frames (static or dynamic environment).
However, we added a new paragraph based on your suggestion to collect other
methods lines (111-123).
Reviewer:
Novelty should be better explained as similar solutions do exist:
https://openaccess.thecvf.com/content_CVPR_2019/html/
https://doi.org/10.1109/TVCG.2020.3023634
https://dl.acm.org/doi/10.1145/3414685.3417877
https://link.springer.com/article/10.1007/s00371-020-01908-3
Authors:
3
Thanks for your recommendation; We considered all of them in the related work
section (3D registration part) lines (133-156) in the manuscript and added the
following paragraph to explain the novelty of our work.
“This work aims to create an accurate 3D scene model that recognizes every
moving object from monocular RGB video frames without sensors. The 3D
reconstruction process learns information (depth map, camera pose, and object
motion) of the current RGB frame, previous frame, and keyframes to detect
changes during objects motion. After that, the decision to add, update or remove a
point from the 3D point-cloud is taken into consideration. Studying moving
objects during video frames solve several problems such as occlusion and
collision in a dynamic outdoor environment such as augmented realty” lines (162168) in the manuscript.
Reviewer:
4
Figure 9 is very low res and hard to see.
Authors:
Figure 9 is generated with the matplotlib.pyplot Python library, which reduces the image resolution. We improved the image resolution as much as we could to make it clearer and attached it to the updated version of the manuscript.
Reviewer:
Figure 3 should be replaced with activity diagram. Add all configuration
parameters.
Authors:
5
Thank you for your comment, but we believe that pseudo code gives a better
understanding of the implementation wise parameters such as input, size of each
used data structures and so on. Also, we present in figure 2 the workflow of
pipeline processes.
Reviewer:
6
Get rid of trivial text on the methods, a link to original source is enough. Your
reader will be familiar as the approach is not unheard.
Authors:
We agree with you; we kept only a small amount of information about the KITTI dataset and deleted the paragraph at lines (262-266) of the old manuscript.
Reviewer:
Experimental design
1-The experiments should be contextualized better (the reader should not be left
to assume that they will get their own conclusions).
2- The experiments should be described more clearly (e.g. set up and carry out
process, results in raw format, etc.).
3- How was the accuracy of the model evaluated? By what metrics? Expert
knowledge? Add full and rigorous statistical reliability analysis proving that
recreated model is accurate. Add full performance metrics.
Authors:
7
1- We add a conclusion paragraph at the end of the experiment section to
conclude our results.
2- We updated this part in Experiment section.
3- In “Proposed Monocular 3D Model Reconstruction Implementation”
section we illustrated all metrics we used in evaluation such as
localization error, FPE, FNE, RMSE, Fitness and MRE between predicted
point cloud and ground truth which prove the accuracy of our framework
(with RMSE rate between 0.02 to 0.1), we explained mathematical
formulation for all and referred to the state-of-the-art work that used the
same metrics for evaluation. In addition, we added new metric for
evaluation (Chamfer Distance based on Reviewer 2 ‘s recommendation)
lines (372-377) and lines (432-452)
Reviewer:
Validity of the findings
Article contains no direct comparison to other works with KITTI dataset (and
there has been quite a few)
8
Authors:
1- Figure 10 presents a comparison between the proposed framework and the
more relative methods to us. By calculate MRE and compared with them.
Also, we added a new metric Chamfer Distance which is widely used
when dealing with point-clouds lines (372-377) and lines (432-452).
Reviewer:
9
Code attached does not work
Authors:
The attached code is not a single executable file; it needs a specific environment, and each part needs a separate file to run. In the first stage, you should run several Python files for preprocessing, a file for training, a file for online refinement, and files for evaluation. The outputs of the online refinement are the input for the point-cloud process. There are also several files for rendering and evaluating the point cloud. For example:
To run training:
ckpt_dir='your/checkpoint/folder'
data_dir='KITTI_SEQ2_LR/' # Set for KITTI
imagenet_ckpt='resnet_pretrained/model.ckpt'
python train.py \
--logtostderr \
--checkpoint_dir $ckpt_dir \
--data_dir $data_dir \
--architecture resnet \
--imagenet_ckpt $imagenet_ckpt \
--imagenet_norm true \
--joint_encoder false
Running depth/ego-motion inference:
input_dir='your/image/folder'
output_dir='your/output/folder'
model_checkpoint='your/model/checkpoint'
python inference.py \
--logtostderr \
--file_extension png \
--depth \
--egomotion true \
--input_dir $input_dir \
--output_dir $output_dir \
--model_ckpt $model_checkpoint
Running online-refinement:
prediction_dir='some/prediction/dir'
model_ckpt='checkpoints/checkpoints_baseline/model-199160'
handle_motion='true'
size_constraint_weight='0'
data_dir='KITTI_SEQ2_LR_EIGEN/'
triplet_list_file='$data_dir/test_files_eigen_triplets.txt'
triplet_list_file_remains='$data_dir/test_files_eigen_triplets_remains.txt'
ft_name='kitti'
python optimize.py \
--logtostderr \
--output_dir $prediction_dir \
--data_dir $data_dir \
--triplet_list_file $triplet_list_file \
--triplet_list_file_remains $triplet_list_file_remains \
--ft_name $ft_name \
--model_ckpt $model_ckpt \
--file_extension png \
--handle_motion $handle_motion \
--size_constraint_weight $size_constraint_weight
Running Point-Cloud fusion:
data_dir='path/to/camera/intrinsic/file'
input_dir='path/to/output/from/online_refinement/process/'
python 'Point-Cloud Fusion.py' --data_dir $data_dir \
--input_dir $input_dir
Reviewer:
Conclusions are quite generic. Focus on the results only. Move the rest to
discussions.
10
Authors:
Your recommendation has been applied in the conclusion and experimental
sections.
Reviewer “2”
Authors thank Reviewer “2” for his/her careful and thoughtful comments about our paper
Comments number
Comments and response
Reviewer:
Improper formatting of graphs 4, 6, 7, 8; unlabeled axes; units; error bars.
1
Authors:
We updated Figures 4 and 7. Regarding Figures 6 and 7, the axes are labeled "THRESHOLD" above each figure's data table.
Reviewer:
2
Line 349 “Thus” → “This”
Authors:
We corrected it, Sorry for this mistake
Reviewer:
Dataset split is not defined, what percentage of the dataset is used as training,
testing, validation.
Authors:
3
As mentioned in the manuscript, we used the same settings described in (Casser, 2019.) for the first stage of the proposed framework. We have also updated the manuscript to state the dataset strategy we used: "The dataset is divided into training, validation, and testing (30542 monocular triplets for training, 3358 for validation, and 698 for testing)", in the Proposed Monocular 3D Model Reconstruction Implementation section, lines (319-320).
Reviewer:
Because live recording of a camera feed is used. What are real-world
applications of the framework, how viable in terms of performance (FPS) is it?
Authors:
4
The scope of this paper is the accuracy of the predicted point cloud, used to solve problems such as occlusion and collision, as mentioned in the introduction (line 66). Because the generation of the point cloud depends on cumulative matching between the current frame and the keyframes, as presented in Figure 3, generating the 3D point cloud takes up to 15 minutes per frame. However, we have an extension of this work that uses parallel programming on the GPU and the CUDA platform to bring the runtime to a level acceptable for real-time applications such as augmented reality (as mentioned in the conclusion and future work, line 468).
Reviewer:
In experimental results you specify input frames as 416x128, which corresponds
to 50000 points in point-cloud. Assumptions are made that this is the
reconstructed point-cloud density. However, this is the first time it is mentioned.
Specify it earlier as it is important to understand this information when the
method is being described.
Authors:
5
The 50000 points figure is an average for the generated point cloud; it may be higher or lower according to the change of pixels between frames. We mention this to justify why our density (blue) in Figure 5 is lower than that of the ground truth (yellow).
Reviewer:
Euler distance as point-cloud estimation is used. While it is valid estimation
metric in the specific application case that the article uses, Chamfer Distance
and Earth Movers Distance are more widely used metrics when dealing with
point-clouds. Adding comparisons using those metrics would allow researches
working in the field more easily evaluate your results
6
Authors:
Thanks for the recommendation. We added another experiment that evaluates the predicted point cloud using the Chamfer Distance and compares it with the state of the art, which uses LSTM and GRU networks to predict future frames with point clouds as input. The proposed framework obtains an 11% improvement, lines (432-452).
Reviewer:
Hard to tell what the reader is looking at in figure 5, some visual aids would be
helpful to parse the visuals.
7
Authors:
We agree with the reviewer; we have replaced it with a more detailed one.
Reviewer:
8
Substantiate your conclusion assertions using experimental results
Authors:
Your recommendation has been applied at the end of Experimental results and
conclusion sections.
" | Here is a paper. Please give your review comments after reading it. |
92 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Occlusion awareness is one of the most challenging problems in several fields such as multimedia, remote sensing, computer vision, and computer graphics. Realistic interaction applications suffer from dealing with occlusion and collision problems in a dynamic environment. Creating dense 3D reconstructions is the best solution to this issue; however, existing methods have poor performance in practical applications due to the absence of accurate depth, camera pose, and object motion. This paper proposes a new framework that builds a full 3D model reconstruction and overcomes the occlusion problem in a complex dynamic scene without using sensor data. Popular devices such as a monocular camera are used to generate a suitable model for video streaming applications. The main objective is to create a smooth and accurate 3D point cloud for a dynamic environment using the cumulative information of a sequence of RGB video frames.</ns0:p><ns0:p>The framework is composed of two main phases. The first uses an unsupervised learning technique to predict scene depth, camera pose, and objects' motion from RGB monocular videos. The second generates a frame-wise point cloud fusion to reconstruct a 3D model based on the video frame sequence. Several evaluation metrics are measured: localization error, RMSE, and fitness between the ground truth (KITTI's sparse LiDAR points) and the predicted point cloud. Moreover, the framework is compared with different state-of-the-art methods using the widely used MRE and Chamfer Distance measures. Experimental results show that the proposed framework surpasses the other methods and proves to be a powerful candidate for 3D model reconstruction.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Constructing a full 3D model from a complex dynamic scene data has many applications in motion capture, robot navigation, augmented reality, and autonomous driving. Moreover, it aims to provide solutions to solve realistic interaction problems such as occlusion and collision. There are many challenges to reconstruct 3D models from dynamic scenes, such as predict accurate depth from sensors or a sequence of 2D RGB. To achieve that it needs to consider the camera pose, and the motion of dynamic objects during navigation. There are many techniques in computer vision that introduce different image-based 3D modeling techniques such as simultaneous location and mapping (SLAM) (Mur-Artal R. J., 2015), Multiview stereo (MVS) <ns0:ref type='bibr' target='#b23'>(Kuhn, 2019)</ns0:ref>, photo tourism <ns0:ref type='bibr' target='#b12'>(Furukawa, 2009)</ns0:ref>, and an RGB-D videobased method <ns0:ref type='bibr' target='#b20'>(Keller, 2013.)</ns0:ref>. These methods use the point cloud representation to represent a realworld scene. A point cloud representation can be used for 3D inspection as it renders detailed 3D environments accurately. The depth camera such as (RGB-D) cameras, e.g., Microsoft Kinect, is widely used to reconstruct 3D indoor scenes <ns0:ref type='bibr' target='#b2'>(Chen, 2015)</ns0:ref>. However, Kinect-like scanning devices fail to capture reliable depth images from outdoor scenes. Moreover, RGB-D cameras may not be readily available in most robotics and mobile devices, and it may also introduce sensors noise. A variant of solutions has been developed using different kinds of cameras, for example, monocular <ns0:ref type='bibr' target='#b42'>(Tateno, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b46'>(Wang J. H., 2018)</ns0:ref>, and Stereo <ns0:ref type='bibr' target='#b15'>(Hassan, 2017)</ns0:ref>. Monocular cameras are most widely used because they are cheap, least restriction, and most ubiquitous for robots and mobile devices. However, the existing monocular 3D model reconstruction methods have poor performance due to the lack of accurate scene depth and camera pose. Therefore, most reconstruction directions resort to predict depth and camera pose using learning techniques. Generating a 3D point cloud from learnt depth and learnt camera pose might be the right choice that solves the problem of using expensive sensors and gives accurate results in a dynamic scene. Recently, deep neural network has been used in learning and succeeded to predict depth from a single image <ns0:ref type='bibr' target='#b32'>(Liu F. C., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b28'>(Laina, 2016)</ns0:ref>, and <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref>. One advantage of deep learning approaches is that the full scale can be predicted from a single image without the need of scene-based assumptions or geometric constraints. Nowadays, several realistic interaction applications still have limitations to deal with the occlusion problem in a real dynamic environment without using expensive sensors. one of the most effective solutions is to generate dense 3D reconstruction for the whole scene. However, the interaction in a dynamic environment requires a true depth map and explicit not only to detect the camera localization but also consider the moving objects into the scene with each other and with static objects in the background. 
The main objectives of our framework are: Creating a smooth and accurate 3D point-cloud for a dynamic environment using accumulative information from a sequence of RGB video frames. This method is used in solving several problems such as occlusion and collision. Due to the expensive cost of multi-sensors data, a monocular camera is used instead and compensated by unsupervised learning techniques to be suitable for video streaming applications. The framework consists of two stages. In the first stage, the online refinement process adapts new environment by integrating 3D motion of moving objects with depth and camera ego-motion. In the second stage, a full 3D model is reconstructed using frame-wise point cloud fusion. Figure <ns0:ref type='figure'>1</ns0:ref> illustrates the proposed framework details. The rest of the paper is organized as follows: State of the art section describes the most relevant related work. The proposed framework section presents an overview of the proposed approach. Monocular 3D Model Reconstruction section, explains the Mathematical model for 3D model reconstruction. The proposed 3D model reconstruction implementation section illustrated dataset, experiment metrics. The Experimental results section prove the accuracy of the framework and give a comparison between our method and the state-of-the-art techniques. Finally, conclusion and future work are explained in the last section.</ns0:p></ns0:div>
<ns0:div><ns0:head>State of the Art</ns0:head><ns0:p>The development of the proposed framework has involved a review of research in the various computer vision field. This section is oriented towards three main subjects: (1) Estimate depth map from a single view. (2) Detect the camera position. (3) 3D reconstruction for a single object or multiple objects (full 3D scene) in a static and dynamic environment. Depth Estimation form single view, Scene depth estimation has gained increasing attention in the field of computer vision and robotics. Recently developed deep convolutional architectures for image-to-depth prediction has appeared fueled by the availability of rich feature representations, learned from raw data <ns0:ref type='bibr' target='#b6'>(Eigen D. a., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b28'>(Laina, 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref>. These approaches outperformed classical methods in terms of depth estimation accuracy <ns0:ref type='bibr' target='#b19'>(Karsch K. C., 2014)</ns0:ref>, <ns0:ref type='bibr' target='#b31'>(Liu B. S., 2010)</ns0:ref>. Numerous methods used supervised learning to estimate depth from a single view <ns0:ref type='bibr' target='#b47'>(Wang X. D., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>(Ocal, 2020)</ns0:ref>. Despite supervised learning receives wide fame with depth prediction, it needs costly depth sensors for the training process. Therefore, many methods turn to using unsupervised learning image-to-depth techniques. Unsupervised depth prediction models have shown to be more accurate and get better performance than sensor-supervised methods <ns0:ref type='bibr' target='#b54'>(Zhan, 2018.)</ns0:ref>. Several consequent works result in good performance with the monocular setting <ns0:ref type='bibr' target='#b51'>(Yang Z. P., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b52'>(Yin, 2018.)</ns0:ref>. However, these methods are still incomplete because they didn't handle object movements in dynamic scenes. These methods lead to failure as they cannot explain object motion in complex dynamic scenes. Camera Pose Estimation, Monocular Visual SLAM or Visual Odometry (VO) methods that include feature-based methods (Mur-Artal R. J., 2015), <ns0:ref type='bibr' target='#b21'>(Klein, 2008)</ns0:ref> and direct methods <ns0:ref type='bibr' target='#b8'>(Engel, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b10'>(Forster, 2014)</ns0:ref> are considered as a key tracking method for motion estimation. However, these methods lack of accurate depth estimation and are unable to handle pure rotation motions. CNN-SLAM <ns0:ref type='bibr' target='#b42'>(Tateno, 2017)</ns0:ref> and ORB-SLAM2 (Mur-Artal R. a., 2017) solved monocular SLAM limitations by using deep neural networks to improve the scene depth. Nevertheless, these methods fail to give good performance in a dynamic scene. Casser at el <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref> have proposed a novel technique to solve the monocular depth and ego-motion problem by explicitly modeling 3D motions of moving objects, together with camera ego-motion, and adapts to new environments by learning with an online refinement of multiple frames.</ns0:p><ns0:p>3D Reconstruction, the 3D reconstruction approaches are used for several domains. 
Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> summaries the main characteristics of the most relevant publications to our proposed framework Nowadays, most state-of-the-art research used neural network techniques to reconstruct 3D objects from a single RGB image. Audrius et al. <ns0:ref type='bibr' target='#b24'>(Kulikajevas, 2019)</ns0:ref> applied hybrid neural network architecture to reconstruct polygonal meshes from a single depth frame using RGB-D sensors devices. Despite the RGB-D sensor still being capable to use. It is a lack to capture reliable depth images from the outdoor scene. Also, approach presented in <ns0:ref type='bibr' target='#b29'>(Li, 2019)</ns0:ref> used generation adversarial networks (GANs) to reconstruct a 3D object. GANs can generate 3D models by sampling from uniform noise distribution and get a significant performance. However, the authors succeeded to generate a 3D model using the GANs network but it is not tested to reconstruct full 3D scene (static or dynamic). Another approach focused on a single model in the medical domain is <ns0:ref type='bibr' target='#b49'>(Widya, 2019)</ns0:ref>. They illustrated a 3D reconstruction technique for the whole stomach. Structure-from-Motion (SfM) with a monocular endoscope is used. The authors study the combined effect of chromoendoscopy and color channel selection on SfM to increase the number of feature points and obtain more reliable reconstruction quality and completeness. Scene reconstruction (static or dynamic) from video frames is one of the most important problems in computer vision field. This is because not only needs to deal with the camera pose, but also the object motions. Most of the RGB-D cameras have the natural advantage of reconstructing dense models, and many exciting fusion schemes are proposed in this domain (J. Lee, 2016 ) (Z. <ns0:ref type='bibr' target='#b53'>Yan, 2017)</ns0:ref>. However, the received depth image from the depth camera contains noise or even outliers due to lighting conditions and spatially variant materials of the objects. <ns0:ref type='bibr' target='#b46'>(Wang J. H., 2018)</ns0:ref> improved this limitation by combing learning depth from RGB-D datasets with monocular SLAM and frame-wise point cloud fusion to build a dense 3D model of the scene. They can reconstruct smooth and surface-clear on various examples with a dedicated point cloud fusion scheme.</ns0:p><ns0:p>The 3D reconstruction of a dynamic scene is more challenging than the static scene. This is because it does not only need to deal with the camera pose, but it also deals with the object motion. In the past few years, there was a great interest to solve 3D scene reconstruction with moving objects using single or multiple Monocular camera RGB frames. Xingbin et al. <ns0:ref type='bibr' target='#b50'>(Yang X. L., 2020)</ns0:ref>, presented a real-time monocular 3D reconstruction system for mobile phone which used online incremental mesh generation for augmented reality application. For the 3D reconstruction process, they performed monocular depth estimation with a multi-view semi-global matching method followed by a depth refinement post-processing. Because the dynamic objects such as walking pedestrians or moving objects not support by multi-view geometry prerequisites, the authors deal with this problem by update the algorithm to remove the dynamic objects from the reconstruction mesh. 
On the other hand, several applications focused on 3D reconstruction for a specific category of moving objects such as full or part of the human body. <ns0:ref type='bibr' target='#b41'>(Shimada, 2020)</ns0:ref> illustrated markless 3D human motion capture from monocular videos. They concentrated on challenging 3D motion observed, such as foot sliding, foot-floor penetration, and unnatural body leaning. Also, <ns0:ref type='bibr' target='#b38'>(Peng, 2020)</ns0:ref> proposed 3D hand mesh reconstruction from a single RGB image. The authors relied on the graph convolution neural network (GCN) with two modules; hand localization and mask generation, to capture geometric details of 3D hand shape. On the level of a dynamic outdoor domain, several domains are interested in predicting future 3d scenes from existing ones. <ns0:ref type='bibr' target='#b33'>(Lu, 2020)</ns0:ref> and (Weng, 2020. )using neural networks such as LSTM and GRU to generate a full 3D point cloud from outdoor LiDAR datasets. The main idea is to use the motion-based neural network that integrates motion features between two consecutive point clouds. Further, <ns0:ref type='bibr' target='#b22'>(Ku, 2019.)</ns0:ref> introduced a monocular 3D object detection method that leverages proposals and shapes reconstruction. This method depends on three main processes; geometric priors, shape reconstruction, and depth prediction. The feature map is produced by image crop of the object and global context as input for the network. The orientation is predicted to estimate a proposal. Moreover, the point-cloud is generated and transformed into the global frame. <ns0:ref type='bibr' target='#b25'>(Kumar S. D., 2019)</ns0:ref> illustrates a technique to build 3D reconstruction of a complex dynamic scene using two frames by applying super-pixel over-segmentation to the image. A generically dynamic (hence non-rigid) scene with a piecewise planar and rigid approximation are presented. Moreover, they reduced the reconstruction problem to a '3D jigsaw puzzle' which takes pieces from an unorganized 'soup of super-pixels'. This work aims to create an accurate 3D scene model that recognizes every moving object from monocular RGB video frames without sensor data. The 3D reconstruction process learns information (depth map, camera pose, and object motion) from the current RGB frame, previous frame, and keyframes to detect changes during the object's motion. The 3D point cloud is continuously improved during this process by adding or removing some points according to a certain certainty filter. Studying moving objects from video frames solve several problems such as objects occlusion and collision in a dynamic outdoor environment such as augmented reality.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Proposed Framework</ns0:head><ns0:p>In this section, a novel framework for 3D dynamic scene reconstruction is proposed. The framework consists of two stages/modules. In the first stage, unsupervised learning is used to predict scene depth, camera pose, and object motion for a complex dynamic scene. In the second stage, during the online refinement process, this information is used to create a point cloud for each frame. Figure <ns0:ref type='figure'>2</ns0:ref> explains the framework stages, starting from the video frames until the generation of the full scene point cloud.</ns0:p></ns0:div>
<ns0:div><ns0:head>The online refinement process</ns0:head><ns0:p>This process was inspired by <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref>. The advantage of this approach is that it predicts not only scene depth and camera pose but also the motion of objects in a dynamic scene. Moreover, the framework performs better in outdoor than in indoor scenes. The purpose of this approach is to represent the learnt geometric structure in the learning process by modeling the scene and the individual objects. From monocular videos, the camera ego-motion and the object motions are recognized, and the online refinement method is used to adapt, on the fly, to unknown domains. The depth function is a fully convolutional encoder-decoder architecture based on the residual network (ResNet-18) <ns0:ref type='bibr' target='#b16'>(He, 2016)</ns0:ref>. It starts from weights pre-trained on ImageNet <ns0:ref type='bibr' target='#b5'>(Deng, 2009)</ns0:ref> and produces a dense depth map from a single RGB frame. Moreover, the camera-motion neural network takes a sequence of two RGB images as input and generates an SE(3) transform between the frames. For object motion, a segmentation mask is computed between the two RGB images for every individual object, using an R-CNN architecture initialized with weights pre-trained on the COCO dataset <ns0:ref type='bibr' target='#b30'>(Lin, 2014)</ns0:ref>. In this stage, the framework predicts the transformation vector (translation and rotation) for every single object in 3D space, which creates the detected object's appearance in the respective target frame. The static background is generated by a single warp based on the ego-motion.</ns0:p></ns0:div>
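<ns0:p>To make the shapes of these components concrete, the following minimal TensorFlow/Keras sketch outlines a depth encoder-decoder and an ego-motion head of the kind described above. It is only an illustrative sketch under our own assumptions (layer sizes, and a plain convolutional encoder instead of a full ResNet-18), not the exact networks used in the framework.</ns0:p>
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_depth_net(h=128, w=416):
    # Encoder-decoder that maps a single RGB frame to a dense, positive depth map.
    inp = layers.Input((h, w, 3))
    x1 = layers.Conv2D(32, 7, strides=2, padding='same', activation='relu')(inp)
    x2 = layers.Conv2D(64, 5, strides=2, padding='same', activation='relu')(x1)
    x3 = layers.Conv2D(128, 3, strides=2, padding='same', activation='relu')(x2)
    y = layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(x3)
    y = layers.Concatenate()([y, x2])   # skip connection
    y = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(y)
    y = layers.Concatenate()([y, x1])   # skip connection
    y = layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(y)
    depth = layers.Conv2D(1, 3, padding='same', activation='softplus')(y)
    return Model(inp, depth, name='depth_net')

def build_egomotion_net(h=128, w=416):
    # Maps a pair of RGB frames (stacked along channels) to a 6-DoF SE(3) vector.
    inp = layers.Input((h, w, 6))
    x = inp
    for f in (16, 32, 64, 128):
        x = layers.Conv2D(f, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.GlobalAveragePooling2D()(x)
    pose = layers.Dense(6)(x)  # 3 translation + 3 rotation parameters
    return Model(inp, pose, name='egomotion_net')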
<ns0:div><ns0:head>Point Cloud Fusion Phase</ns0:head></ns0:div>
<ns0:div><ns0:p>The depth, camera pose, and object motion estimated in the previous phase are transformed into a frame-wise point cloud, from which an accurate 3D model of the dynamic environment is then reconstructed. The predicted 6-dimensional transformation vector (ego-motion) is used to convert the estimated depth into a unified coordinate space, which is then fused into an accumulated global model. The global model is a list of 3D points with associated attributes. For each frame, a per-frame point cloud is generated. The stability of each pixel in the current frame is checked against all co-visible keyframes. If corresponding points are found in any co-visible keyframe, the most stable point is merged with the newly estimated point using a weighted average insertion; if not, the estimated point is added to the global model as an unstable point. The global model is cleaned up over time to remove outliers according to stability and temporal constraints. The selection of co-visible keyframes depends on the total number of video frames and on the positions of the co-visible keyframes. Many experiments were conducted to select a suitable number of co-visible keyframes and their corresponding positions; it was found that five co-visible keyframes distributed over 20 frames give acceptable accuracy (a small sketch of such a selection is shown below).</ns0:p></ns0:div>
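<ns0:p>As a purely hypothetical illustration of the keyframe selection just described (the evenly spaced placement is our own assumption; the text only states that five co-visible keyframes over 20 frames worked well), the indices could be chosen as follows:</ns0:p>
import numpy as np

def select_covisible_keyframes(num_frames=20, num_keyframes=5):
    # Spread the keyframes evenly over the frame window (assumed placement).
    return np.linspace(0, num_frames - 1, num_keyframes).astype(int).tolist()

print(select_covisible_keyframes())  # [0, 4, 9, 14, 19]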
<ns0:div><ns0:head>Proposed Monocular 3D Model Reconstruction</ns0:head><ns0:p>In this section, the proposed framework for 3D model reconstruction of a complex dynamic scene from monocular video frames is illustrated. The framework is divided into two main phases: an unsupervised learning phase for depth, camera pose, and object motion, and a frame-wise point cloud phase for a sequence of monocular video frames.</ns0:p></ns0:div>
<ns0:div><ns0:head>Learning Techniques Phase for Frame Depth, Camera Pose, and Object Motion</ns0:head><ns0:p>A sequence of three RGB frames from a monocular camera is used for the learning process. The object motion model applies a segmentation mask (R-CNN architecture) to the individual objects in the dynamic scene. The transformation vector per object is learned, which creates the detected object's appearance in the respective target frame. According to the ego-motion model, the static background is generated, and then all segmented objects are added by their appearance (a minimal sketch of this composition step is given below). The advantage of this approach is not only modeling objects in 3D space but also learning their motion on the fly.</ns0:p></ns0:div>
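<ns0:p>A minimal NumPy sketch of the composition step described above, under our own naming assumptions (it assumes the warped background and the per-object warped images have already been computed by the ego-motion and object-motion models):</ns0:p>
import numpy as np

def compose_warped_frame(warped_background, warped_objects, masks):
    # warped_background : (H, W, 3) frame warped by camera ego-motion only
    # warped_objects    : list of (H, W, 3) frames, one per moving object,
    #                     each warped by that object's predicted motion
    # masks             : list of (H, W) binary segmentation masks (same order)
    out = warped_background.copy()
    for obj, mask in zip(warped_objects, masks):
        m = mask.astype(bool)
        out[m] = obj[m]  # paste each moving object over the static background
    return out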
<ns0:div><ns0:head>3D Model Reconstruction with Point Cloud Fusion</ns0:head><ns0:p>After depth, camera pose, and object motion have been predicted in the previous stage, a full 3D scene model is reconstructed using point cloud fusion. The point cloud generation is inspired by ORB-SLAM2 (Mur-Artal R. a., 2017) and <ns0:ref type='bibr' target='#b46'>(Wang J. H., 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b46'>(Wang J. H., 2018)</ns0:ref> used point cloud fusion to generate a 3D model reconstruction suitable for augmented reality applications. The advantage of this approach is that it is easy to apply and convenient for real-time applications; its limitation is that it is restricted to a static environment and was not tested in a dynamic one. The proposed framework alleviates this limitation by considering object motion in dynamic environments. The per-frame point cloud is reconstructed from the estimated depth $D_i$ of a single frame, the final motion $E^F_m$, which is a combination of the individual moving-object motions $\psi M_o$ and the camera motion $\psi E_{i \to j}$, and the given camera intrinsic calibration matrix $K$. The per-frame point cloud is calculated as follows:</ns0:p><ns0:p>$p_i = (E^F_m)^{-1} \, \pi(u, D_i)$ (1)</ns0:p><ns0:p>where $u = (x, y, 1)^T$ denotes the homogeneous representation of a pixel and $\pi(u, D_i) = K^{-1} D_i u$ is the back-projection from image to camera coordinates.</ns0:p></ns0:div>
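<ns0:p>A minimal NumPy sketch of Eq. (1), under our own naming assumptions, back-projects a predicted depth map into a per-frame point cloud expressed in the unified coordinate space:</ns0:p>
import numpy as np

def frame_point_cloud(depth, K, E_fm):
    # Eq. (1): p_i = (E^F_m)^-1 * pi(u, D_i), with pi(u, D_i) = K^-1 * D_i * u.
    # depth : (H, W) predicted depth map D_i
    # K     : (3, 3) camera intrinsic matrix
    # E_fm  : (4, 4) final motion (camera + object motion) for this frame
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    u = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    cam = np.linalg.inv(K) @ u * depth.reshape(1, -1)               # pi(u, D_i): camera coordinates
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])            # 4 x N homogeneous points
    world = np.linalg.inv(E_fm) @ cam_h                             # apply (E^F_m)^-1
    return world[:3].T                                              # N x 3 point cloud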
<ns0:div><ns0:head>Point Association</ns0:head><ns0:p>The system holds a set of co-visible keyframes $k_1, k_2, \dots, k_n \in KF$, selected according to the length of the monocular video sequence during the online refinement process. The visibility of pixels is checked by mapping each pixel of the current frame into all co-visible keyframes:</ns0:p><ns0:p>$u_k = f(K (E^F_m)_k \, \pi(u_i))$ (2)</ns0:p><ns0:p>where $f(x) = (x/z, y/z)^T$. The system also maintains a mapping from every keyframe pixel to its corresponding 3D point, $M: p_i \to u_i \to u_k \to P$, where $P$ is the global model. The Gaussian weight applied to the current depth measurement $D_i$ is $w_A = e^{-\gamma^2 / \sigma^2}$, where $\gamma$ is the normalized radial distance of the pixel from the camera centre and $\sigma = 0.6$. A new observation available in the latest frame $i$ is merged according to the following equations:</ns0:p><ns0:p>$p^n_i = (w_A p_i + w_0 (E^F_m)^{-1} \pi(u_i)) / (w_A + w_0)$ (3)</ns0:p><ns0:p>$C^n_c = (w_A C_c + w_0 \lVert (E^F_m)^{-1} \pi(u_i) - p_i \rVert) / (w_A + w_0)$ (4)</ns0:p><ns0:p>$w^n_A = \min(w_A + w_0, W_\varepsilon)$ (5)</ns0:p><ns0:p>where $p^n_i$ is the newly updated point, $w_0$ is a constant equal to 1, and $W_\varepsilon$ is the truncation threshold, equal to 100 <ns0:ref type='bibr' target='#b46'>(Wang J. H., 2018)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> presents the pseudo-code of the 3D model reconstruction process. Point association runs from line 8 to line 25, in which the stability of each pixel in the current frame is checked against all co-visible keyframes. If corresponding points are found, the point is updated and marked as stable according to the value of its confidence counter. If not, the estimated point is added to the global model as an unstable point and inserted into the point map as a new seed. The probabilistic noise filter is applied at line 27 to keep only stable points.</ns0:p></ns0:div>
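<ns0:p>A minimal sketch of the weighted-average fusion in Eqs. (3)-(5), using the constants stated above ($w_0 = 1$, $W_\varepsilon = 100$, $\sigma = 0.6$) and otherwise our own naming assumptions:</ns0:p>
import numpy as np

W0, W_EPS, SIGMA = 1.0, 100.0, 0.6

def gaussian_weight(gamma, sigma=SIGMA):
    # w_A = exp(-gamma^2 / sigma^2); gamma is the normalized radial distance from the camera centre.
    return np.exp(-(gamma ** 2) / (sigma ** 2))

def fuse_point(p, c, w_a, p_obs):
    # p     : (3,) stored 3D point p_i
    # c     : stored confidence counter C_c
    # w_a   : stored accumulated weight w_A
    # p_obs : (3,) new observation (E^F_m)^-1 * pi(u_i) from the latest frame
    p_new = (w_a * p + W0 * p_obs) / (w_a + W0)                      # Eq. (3)
    c_new = (w_a * c + W0 * np.linalg.norm(p_obs - p)) / (w_a + W0)  # Eq. (4)
    w_new = min(w_a + W0, W_EPS)                                     # Eq. (5)
    return p_new, c_new, w_new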
<ns0:div><ns0:head>Proposed Monocular 3D Model Reconstruction Implementation</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset and implementation details</ns0:head><ns0:p>The proposed framework was evaluated on the KITTI dataset <ns0:ref type='bibr' target='#b13'>(Geiger, 2013)</ns0:ref>. The KITTI dataset is widely used in different applications because it contains many different objects and is considered a complex dynamic environment. Its LiDAR sensor readings are used for evaluation only, namely to evaluate the predicted depth and ego-motion; the KITTI 3D point cloud is also used as ground truth to evaluate the proposed 3D reconstruction model. The number of points per scan is not constant: on average, each frame has a size of ∼1.9 MB, which corresponds to ∼120,000 3D points.</ns0:p><ns0:p>The proposed framework is divided into two modules/phases, as mentioned in section 3. The first module is responsible for predicting depth and ego-motion using unsupervised learning in a dynamic scene. The code is implemented using TensorFlow; the actual size of the input images is 1224 × 368, and the images are resized to 416 × 128, following the same settings described in <ns0:ref type='bibr' target='#b1'>(Casser, 2019.)</ns0:ref></ns0:p></ns0:div>
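<ns0:p>For reference, each KITTI Velodyne scan is stored as a flat binary file of float32 values (x, y, z, reflectance), which is consistent with the ∼1.9 MB / ∼120,000-point figures above. A typical way to load one scan as the ground-truth point cloud is sketched below (the file name is purely illustrative):</ns0:p>
import numpy as np

def load_kitti_velodyne(bin_path):
    # Each point is stored as four float32 values: x, y, z, reflectance.
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3]  # keep only the 3D coordinates for evaluation

points = load_kitti_velodyne('0000000000.bin')
print(points.shape)  # roughly (120000, 3) for a full scan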
<ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>The evaluation process used three methods:</ns0:p><ns0:p>1- Localization Accuracy Error $L_E$, FPE (false positive error), and FNE (false negative error), which were proposed in <ns0:ref type='bibr' target='#b14'>(Hafiz, 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>The number of false positives is normalized by the number of all true negatives, where $N_F$ is the number of false positives, which yields $N_F = N_p - N_C$.</ns0:p><ns0:p>2- Registration of the 3D point cloud between the output of the Velodyne laser scanner (ground truth) and the proposed technique, which generates a 3D point cloud from the predicted depth and predicted ego-motion. Global registration <ns0:ref type='bibr' target='#b55'>(Zhou, 2016)</ns0:ref> and point-to-point Iterative Closest Point (ICP) <ns0:ref type='bibr' target='#b40'>(Rusinkiewicz, 2001.)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Paul, 1992)</ns0:ref> are used. The registration is evaluated with the fitness function, which measures the overlapping area (the number of inlier correspondences divided by the number of points in the ground truth; higher is better), and with the root mean square error (RMSE) of all correspondences within range $r$ (lower is better).</ns0:p><ns0:p>3- Accuracy is reported using the mean relative error (MRE), defined as $MRE = \frac{1}{P} \sum_{i=1}^{P} \frac{|z^i_{gt} - z^i_{est}|}{z^i_{gt}}$, where $z^i_{gt}$ and $z^i_{est}$ are the ground-truth depth and the estimated depth, respectively, and $P$ is the total number of 3D points. MRE is the state-of-the-art metric used to compare the proposed framework with several monocular dynamic reconstruction methods.</ns0:p><ns0:p>Another metric is the Chamfer Distance (CD) between the ground-truth point cloud $P_{gt} \in R^{N \times 3}$ and the estimated point cloud $P_{est} \in R^{N \times 3}$. The Chamfer Distance <ns0:ref type='bibr' target='#b33'>(Lu, 2020)</ns0:ref> is a regularly used metric to measure the similarity between two point clouds, defined as $CD = \frac{1}{N} \sum_{\hat{x}_i \in P_{est}} \min_{x_j \in P_{gt}} |\hat{x}_i - x_j| + \frac{1}{N} \sum_{x_j \in P_{gt}} \min_{\hat{x}_i \in P_{est}} |\hat{x}_i - x_j|$.</ns0:p></ns0:div>
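<ns0:p>A small NumPy/SciPy sketch of the MRE and Chamfer Distance computations, under our own naming assumptions (it assumes the estimated and ground-truth depths are already aligned pixel-wise, and uses a KD-tree for the nearest-neighbour terms of the Chamfer Distance):</ns0:p>
import numpy as np
from scipy.spatial import cKDTree

def mean_relative_error(z_gt, z_est):
    # MRE = (1/P) * sum(|z_gt - z_est| / z_gt) over all valid points
    z_gt, z_est = np.asarray(z_gt, float), np.asarray(z_est, float)
    valid = z_gt > 0
    return np.mean(np.abs(z_gt[valid] - z_est[valid]) / z_gt[valid])

def chamfer_distance(p_est, p_gt):
    # Symmetric average nearest-neighbour distance between two (N x 3) point clouds.
    d_est_to_gt, _ = cKDTree(p_gt).query(p_est)   # min over x_j in P_gt for each estimated point
    d_gt_to_est, _ = cKDTree(p_est).query(p_gt)   # min over x_i in P_est for each ground-truth point
    return d_est_to_gt.mean() + d_gt_to_est.mean()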
<ns0:div><ns0:head>Experimental Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Localization Accuracy</ns0:head><ns0:p>In this section, the proposed framework is evaluated using different techniques. First, the localization error $L_E$, FNE, and FPE between the ground truth and the predicted 3D point cloud are calculated for different values of the localization error tolerance $r$.</ns0:p></ns0:div>
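<ns0:p>A minimal sketch of these three quantities as defined in the Evaluation Metrics section (the nearest-neighbour search via a KD-tree and the variable names are our own assumptions):</ns0:p>
import numpy as np
from scipy.spatial import cKDTree

def localization_metrics(p_gt, p_pred, r):
    # p_gt   : (N_G, 3) ground-truth points
    # p_pred : (N_p, 3) predicted points
    # r      : localization error tolerance (maximum correspondence points-pair distance)
    dists, _ = cKDTree(p_pred).query(p_gt)      # distance from each ground-truth point to its nearest prediction
    detected = dists < r                        # ground-truth points correctly detected within tolerance r
    n_c = int(detected.sum())
    l_e = dists[detected].mean() if n_c else np.inf   # mean deviation of correctly detected points
    fne = 1.0 - n_c / len(p_gt)                       # FNE(r) = 1 - N_C / N_G
    fpe = (len(p_pred) - n_c) / len(p_pred)           # FPE(r) = N_F / N_p, with N_F = N_p - N_C
    return l_e, fne, fpe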
<ns0:div><ns0:p>The proposed framework succeeds in finding points close to the ground truth with low localization error. Moreover, the decrease in FNE indicates that the framework catches the nearest points with a low localization error, while the rapid drop in FPE means that the framework does not return excessive interest points.</ns0:p></ns0:div>
<ns0:div><ns0:head>Point Cloud Registration</ns0:head><ns0:p>The second approach uses 3D point cloud registration between the ground truth and the predicted points to check the accuracy of the 3D reconstruction. Figure <ns0:ref type='figure'>5</ns0:ref> shows a selected frame from a sequence of 20 frames registered with the ground truth, from two different points of view. The number of predicted 3D points depends on the number of 2D pixels of the RGB frame: in our case, the input frame size is 416 × 128 (around 50,000 points), whereas the ground-truth 3D point cloud collected by the Velodyne laser scanner contains on average 120,000 points. As shown in Figure <ns0:ref type='figure'>5</ns0:ref>, the range and density of the predicted 3D point cloud are lower than those of the ground truth; nevertheless, the predicted 3D point cloud is close to the ground truth in the covered area.</ns0:p><ns0:p>To evaluate the registration between the ground-truth and predicted 3D point clouds, we used state-of-the-art algorithms, namely global registration <ns0:ref type='bibr' target='#b55'>(Zhou, 2016)</ns0:ref> and point-to-point ICP <ns0:ref type='bibr' target='#b40'>(Rusinkiewicz, 2001.</ns0:ref>) <ns0:ref type='bibr' target='#b37'>(Paul, 1992)</ns0:ref>. Figures 6 and 7 illustrate the average RMSE and fitness of the registration over the 20 frames of the online refinement process, using global registration and point-to-point ICP with different thresholds. As shown in Figures 6 and 7, point-to-point ICP registration obtains the lowest RMSE and the highest fitness between the ground truth and the predicted 3D point cloud. This is because the point-to-point ICP technique usually runs until convergence or until it reaches a maximum number of iterations (we used the default of 30 iterations). This indicates that the proposed framework succeeds in covering a large number of overlapping areas with a small mean square error. Figure <ns0:ref type='figure'>8</ns0:ref> gives more details about the point-to-point ICP registration over the 20 frames, with an acceptable RMSE and standard deviation as the value of r increases.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>9</ns0:ref> illustrates the 3D point cloud after mapping onto RGB frames selected from different videos: Figure <ns0:ref type='figure'>9</ns0:ref> (b) refers to the ground-truth points, and Figure <ns0:ref type='figure'>9</ns0:ref> (c) to the points predicted by our framework. The performance of the proposed framework is compared with the state-of-the-art method <ns0:ref type='bibr' target='#b27'>(Kumar S. Y., 2017)</ns0:ref>, which reported the MRE on the KITTI dataset, and with several monocular dynamic reconstruction methods, such as the Block Matrix Method (BMM) <ns0:ref type='bibr' target='#b3'>(Dai, 2014)</ns0:ref>, the Point Trajectory Approach (PTA) <ns0:ref type='bibr' target='#b0'>(Akhter, 2010)</ns0:ref>, Low-rank Reconstruction (GBLR) <ns0:ref type='bibr' target='#b11'>(Fragkiadaki, 2014)</ns0:ref>, Depth Transfer (DT) <ns0:ref type='bibr' target='#b18'>(Karsch K. C., 2012.)</ns0:ref>, and DMDE <ns0:ref type='bibr' target='#b39'>(Ranftl, 2016)</ns0:ref>. Note that we used the results reported in <ns0:ref type='bibr' target='#b27'>(Kumar S. Y., 2017)</ns0:ref>, as its implementation is not publicly available.</ns0:p></ns0:div>
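<ns0:p>The following Open3D sketch shows the kind of point-to-point ICP evaluation described above, reporting the fitness and inlier RMSE used in Figures 6-8; the threshold value and the identity initialisation are illustrative assumptions (API as in Open3D 0.10 and later).</ns0:p>
import numpy as np
import open3d as o3d

def icp_point_to_point(pred_points, gt_points, threshold=0.05):
    # pred_points, gt_points : (N, 3) NumPy arrays
    # threshold              : maximum correspondence points-pair distance r
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(pred_points)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(gt_points)
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.fitness, result.inlier_rmse, result.transformation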
<ns0:div><ns0:p>Figure <ns0:ref type='figure'>10</ns0:ref> shows that the proposed framework delivers consistently superior reconstruction accuracy on the KITTI dataset. Using unsupervised learning to predict scene depth and camera pose is a strong point of the proposed framework for generating an accurate 3D model reconstruction. Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the improvement percentages between the proposed framework and the state-of-the-art methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>Chamfer Distance</ns0:head><ns0:p>We calculate the Chamfer Distance (CD) between the predicted point cloud and the ground truth on the KITTI dataset and compare it with the state-of-the-art modules proposed in <ns0:ref type='bibr' target='#b33'>(Lu, 2020)</ns0:ref>, which predict future frames from the past point cloud sequence using a motion-based neural network named MoNet. Two recurrent networks, LSTM and GRU, are used there to predict the scene point cloud: these methods take point clouds as input to neural networks in order to estimate future frames, and MoNet integrates motion features into the prediction pipeline and combines them with content features. For this metric, we used the average CD over 5 frames to match the setup of <ns0:ref type='bibr' target='#b33'>(Lu, 2020)</ns0:ref>. As shown in Table <ns0:ref type='table'>3</ns0:ref>, the CD of our framework is slightly better than that of the MoNet (LSTM) and MoNet (GRU) methods.</ns0:p><ns0:p>Finally, the experimental results show how the proposed framework achieves an accurate 3D reconstructed point-cloud model from monocular RGB video frames without using expensive sensors. Several evaluation metrics were measured, such as the localization error, RMSE, and fitness between the ground truth and the predicted point cloud. Moreover, we achieved a 46% improvement in MRE compared with the state-of-the-art DJP method, as well as 11% and 14% improvements in Chamfer Distance compared with MoNet (GRU) and MoNet (LSTM), respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>The success of the presented framework depends on the accuracy of the learnt parameters, namely the depth map, camera pose, and object motion. If these parameters are not estimated accurately in phase one, the 3D reconstruction will fail. The other major limitation is the overall execution time: because the generation of the 3D point cloud depends on accumulative matching between the current frame and a group of keyframes, this process takes up to 15 minutes. Moreover, the proposed framework reconstructed a point cloud from a sequence of 20 RGB video frames, which is considered a short sequence. However, this limitation could be overcome by using parallel programming to handle the most time-consuming part of the 3D point cloud reconstruction, as mentioned in the future work section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion and Future Work</ns0:head></ns0:div>
<ns0:div><ns0:p>This paper proposes a novel framework for 3D model reconstruction from monocular video frames for a dynamic environment. The framework does not use any sensor data, which is costly and sometimes noisy. The results showed that the proposed framework is capable of generating a smooth and accurate 3D point cloud for a dynamic environment using the cumulative information of a sequence of RGB video frames. Different evaluation metrics were used, such as the localization error and RMSE, with average values of 0.05 and 0.067, respectively, between the ground truth and the predicted point cloud. Moreover, the increase in the fitness value indicates that the proposed framework succeeded in covering a large number of overlapping areas with a small mean square error. Furthermore, the proposed framework was compared with state-of-the-art methods, using MRE against the DJP technique and Chamfer Distance against two MoNet techniques, with improvements of 46%, 11%, and 14%, respectively. In the future, we will focus on improving the overall execution time, so that the framework can deal with real-time applications such as augmented reality, by applying several optimization techniques on state-of-the-art GPU and CUDA platforms, and on testing the 3D model reconstruction over long sequences of RGB frames.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>The proposed framework of 3D model reconstruction from monocular KITTI video images <ns0:ref type='bibr' target='#b13'>(Geiger, 2013)</ns0:ref>. The KITTI dataset is under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License, according to this link: http://www.cvlibs.net/datasets/kitti/</ns0:p><ns0:p>Table 2: The improvement percentages in MRE error between the proposed framework and the state-of-the-art methods.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>.</ns0:head><ns0:label /><ns0:figDesc>The camera intrinsic matrix is defined as $K \in R^{3 \times 3}$. Depth prediction uses a single frame $I_i$ from the sequence $(I_1, I_2, I_3)$ to produce a depth map $D_i = \theta(I_i)$, where $\theta: R^{H \times W \times 3} \to R^{H \times W}$ is a fully convolutional encoder-decoder architecture (ResNet-18). In contrast, the ego-motion network $\psi_E: R^{2 \times H \times W \times 3} \to R^6$ takes a sequence of two frames and produces an SE(3) transform vector (translation and rotation) between the frames. A differentiable warping operator $\phi(I_i, D_j, E_{i \to j}) \to \hat{I}_{i \to j}$ warps one frame onto an adjacent one in the sequence, where $\hat{I}_{i \to j}$ is the reconstructed $j$-th image; this allows predicting how the scene looks from a different camera viewpoint. Any source RGB image can be transformed into the target frame given the corresponding depth estimate and ego-motion estimate, using the projection $[\hat{x}, \hat{y}, 1]^T = K E_{i \to j} (D^j_{x,y} \cdot K^{-1} [x, y, 1]^T)$. The reconstruction loss for this approach is calculated as mentioned in (Casser, 2019.). The object motion model $\psi_M$ is used to predict the motion of individual objects in 3D space; similar to the ego-motion architecture, it uses a sequence of two RGB frames.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>The same hyperparameter settings as in (Casser, 2019.) are used: learning rate 0.0002, L1 reconstruction weight 0.85, SSIM weight 0.15, smoothing weight 0.04, object motion constraint weight 0.0005, and batch size 4. The dataset is divided into training, validation, and testing sets (30542 monocular triplets for training, 3358 for validation, and 698 for testing). The framework has been executed on a high-performance computing (HPC) node, a Dell PowerEdge R740 (2x Intel Xeon Gold 6248, 2.5 GHz) with a Tesla V100 GPU. The second module generates a 3D point cloud to reconstruct the dynamic scene. This module is implemented in TensorFlow using Python, OpenGL, and Open3D. During the online refinement process, 20 frames are selected to generate a 3D point cloud for each frame, which are finally integrated into one 3D model of the scene.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 illustrates the average localization error, FNE, and FPE with r in the range 0.04 to 0.16 for the 20 frames output by the online refinement process. From Figure 4, it is found that the proposed framework succeeds in finding points close to the ground truth with low localization error.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,250.12,525.00,215.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,228.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,203.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,369.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,375.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,247.30,525.00,203.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,373.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>To create a smooth 3D point cloud and filter out the noise from the generated points, a probabilistic filter is used. Each 3D point in the global model P is represented by its position $p_i$, a confidence counter $C_c$, and an accumulated weight $w_A$. The confidence counter $C_c$, defined as how often the 3D point is observed in the co-visible keyframes, determines whether a 3D point evolves from an unstable to a stable state. The weighted average is calculated by applying a Gaussian weight $w_A$ to the current depth measurement.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Localization Accuracy $L_E$ is defined as the amount of deviation of a detected point from a ground-truth point position. Let the ground-truth data set be denoted by $G_T$, a ground-truth point by $p_g \in G_T$, and let $P_p$ be the set of points predicted by the proposed technique. $N_G$ is the number of points in $G_T$, $N_p$ is the number of points in $P_p$, and $C_r(p_g)$ is the geodesic distance region centred at point $p_g$ with radius $r$. The data contained in $C_r(p_g)$ is defined as $C_r(p_g) = \{ p_c \in P_p \mid \min(dis(p_g, p_c)) < r \}$, where $dis(p_g, p_c)$ is the Euclidean distance between the two points $p_g$ and $p_c$, and $r$ is the maximum correspondence points-pair distance, which controls the localization error. A ground-truth point $p_g$ is considered correctly detected if there exists a detected point $p_c \in P_p \cap C_r(p_g)$ such that $p_c$ has the minimum distance to $p_g$. The localization error is defined as $L_E = \frac{1}{N_C} \sum_{j=1}^{N_C} dis(p_{g_j}, p_{c_j})$, where $N_C$ is the number of correctly detected points in $G_T$. The FNE at localization error tolerance $r$ is defined as $FNE(r) = 1 - \frac{N_C}{N_G}$, and the FPE at localization error tolerance $r$ is defined as $FPE(r) = \frac{N_F}{N_p}$.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The main characteristics of the most relevantState-of-the-art </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Published</ns0:cell><ns0:cell>Single</ns0:cell><ns0:cell>Single</ns0:cell><ns0:cell>Static /</ns0:cell><ns0:cell>Input type</ns0:cell><ns0:cell>Methods</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>/Multiple</ns0:cell><ns0:cell>/Multiple</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>frame</ns0:cell><ns0:cell>object</ns0:cell><ns0:cell>object</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Kulikajevas, 2019)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Single object</ns0:cell><ns0:cell>Static object</ns0:cell><ns0:cell>RGB-D</ns0:cell><ns0:cell>Hyper neural</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>sensor</ns0:cell><ns0:cell>network</ns0:cell></ns0:row><ns0:row><ns0:cell>(Kulikajevas, 2019)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Single object</ns0:cell><ns0:cell>Static object</ns0:cell><ns0:cell>3D Models</ns0:cell><ns0:cell>GANs neural</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>network</ns0:cell></ns0:row><ns0:row><ns0:cell>(Widya, 2019)</ns0:cell><ns0:cell>Multiple (2</ns0:cell><ns0:cell>Single object</ns0:cell><ns0:cell>Static object</ns0:cell><ns0:cell>Monocular</ns0:cell><ns0:cell>Structure from</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>image</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>endoscope</ns0:cell><ns0:cell>motion (SfM)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>sequences)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Wang J. H., 2018)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Single object</ns0:cell><ns0:cell>Static object</ns0:cell><ns0:cell>RGB-D</ns0:cell><ns0:cell>Monocular</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>sensor</ns0:cell><ns0:cell>SLAM</ns0:cell></ns0:row><ns0:row><ns0:cell>(Yang X. 
L., 2020)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Multiple</ns0:cell><ns0:cell>Static scene</ns0:cell><ns0:cell>Monocular</ns0:cell><ns0:cell>Online</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(remove</ns0:cell><ns0:cell>RGB</ns0:cell><ns0:cell>incremental</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(full scene)</ns0:cell><ns0:cell>dynamic</ns0:cell><ns0:cell /><ns0:cell>mesh</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>objects)</ns0:cell><ns0:cell /><ns0:cell>generation</ns0:cell></ns0:row><ns0:row><ns0:cell>(Shimada, 2020)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Single object</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>Monocular</ns0:cell><ns0:cell>Markless 3D</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>object</ns0:cell><ns0:cell>RGB</ns0:cell><ns0:cell>human motion</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>capture</ns0:cell></ns0:row><ns0:row><ns0:cell>(Peng, 2020)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Single object</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>Monocular</ns0:cell><ns0:cell>GCN network</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>object</ns0:cell><ns0:cell>RGB</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Ku, 2019.)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Corp single</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>Monocular</ns0:cell><ns0:cell>geometric</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>object</ns0:cell><ns0:cell>object</ns0:cell><ns0:cell>RGB</ns0:cell><ns0:cell>priors, shape</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>reconstruction,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>and depth</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>prediction</ns0:cell></ns0:row><ns0:row><ns0:cell>(Lu, 2020)</ns0:cell><ns0:cell>Multiple (two</ns0:cell><ns0:cell>Multiple</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>LSTM and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>consecutive</ns0:cell><ns0:cell /><ns0:cell>objects</ns0:cell><ns0:cell>LiDAR</ns0:cell><ns0:cell>GRU networks</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>point-cloud)</ns0:cell><ns0:cell>(full scene)</ns0:cell><ns0:cell /><ns0:cell>datasets</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>(Weng, 2020. )</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Multiple</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>Outdoor</ns0:cell><ns0:cell>Predict next</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>objects</ns0:cell><ns0:cell>LiDAR</ns0:cell><ns0:cell>scene using</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(full scene)</ns0:cell><ns0:cell /><ns0:cell>datasets</ns0:cell><ns0:cell>LSTM</ns0:cell></ns0:row><ns0:row><ns0:cell>(Akhter, 2010)</ns0:cell><ns0:cell>Single frame</ns0:cell><ns0:cell>Multiple</ns0:cell><ns0:cell>Dynamic</ns0:cell><ns0:cell>Monocular</ns0:cell><ns0:cell>Structure from</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>objects</ns0:cell><ns0:cell>objects</ns0:cell><ns0:cell>RGB</ns0:cell><ns0:cell>motion</ns0:cell></ns0:row></ns0:table><ns0:note>1 PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:55956:2:1:NEW 6 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "Academic Editor
Authors thank the academic editor for his useful comments and constructive suggestions
Comments
number
1
2
3
Comments and response
Academic Editor:
The authors should once again check the comments of reviewers and
revise the paper more carefully while addressing each comment in a
thorough and rigorous way.
Authors:
Thank you for your support. We revise all reviewer '1' comments and try
to do our best to respond to his/her concerns about the manuscript in the
rebuttal letter.
Academic Editor:
More current research should be discussed in the related works section.
Authors:
In the related work section, we added a new table (attached in Appendix
1) that summarizes the main characteristics of the most relevant
publications to our proposed framework and we also added two recent
publications.
Line 118 “Table 1 summaries the main characteristics of the most relevant
publications to our proposed framework.”.
And line (160-164).
“On the level of a dynamic outdoor domain, several domains are
interested in predicting future 3d scenes from existing ones. (Lu, 2020)
and (Weng, 2020.) using neural networks such as LSTM and GRU to
generate a full 3D point cloud from outdoor LiDAR datasets. The main
idea is to use the motion-based neural network that integrates motion
features between two consecutive point clouds.”
Academic Editor:
Before the paper contribution in Section 1, the author should clearly
mention the research gap available in the literature and the importance
of the proposed model.
Authors:
Thanks for your recommendation. We added the research gap in
introduction section as below:
“Nowadays, several realistic interaction applications still have limitations
to deal with the occlusion problem in a real dynamic environment without
using expensive sensors. one of the most effective solutions is to generate
dense 3D reconstruction for the whole scene. However, the interaction in
a dynamic environment requires a true depth map and explicit not only to
detect the camera localization but also consider the moving objects into
the scene with each other and with static objects in the background. “.
From line (67-72) in manuscript.
Academic Editor:
The limitations of the proposed study need to be discussed before the
conclusion.
Authors:
New section is added to consider all limitations of the proposed
framework.
“The success of the presented framework depends on the accuracy of
learning parameters such as depth map, camera pose, and object motion.
In the case of the learning parameters are not processed accurately in
phase one, the 3D reconstruction will fail. The other major limitation is
the overall execution time. Because the generation of a 3D point cloud
depends on accumulative matching between the current frame and a
group of keyframes, this process takes up to 15 minutes. Moreover, the
proposed framework had reconstructed a point cloud from a sequence of
20 RGB video frames which is considered a short sequence. However,
this limitation could be overcome by using parallel programming to
handle the most time-consuming part of the 3D point cloud reconstruction
as mentioned in the future work section. “from line (470-479)
4
5
Academic Editor:
The conclusions should be based on the results of this study.
Authors
Your recommendation has been applied in the conclusion sections.
“This paper proposes a novel framework for 3D model reconstruction
from monocular video frames for a dynamic environment. The
framework didn’t use any sensor data, which is costly and sometimes
noisy. The results showed that the proposed framework is capable of
generating smooth and accurate 3D point-cloud for a dynamic
environment using cumulative information of a sequence of RGB video
frames. Different evaluation metrics are used such as Localization error
and RMSE with average values of 0.05 and 0.067 respectively between
ground truth and predicted point-cloud. Moreover, the increase in fitness
value indicates that the proposed framework succeeded to cover a large
number of overlapping areas with a small mean square error.
Furthermore, a comparison between the proposed framework and stateof-the-art method using MRE compared with the DJP technique and
Chamfer Distance compared with two MoNet techniques with an
improvement of 46% ,11%, and 14% respectively. In the future, we will
be concerned with improving the overall execution time to make it able
to deal with real-time applications such as augmented reality by applying
several optimization techniques using state-of-the-art GPU and CUDA
platforms. In addition to, test 3D model reconstruction over long
sequences of RGB frames.” From line (482-497)
Reviewer “1”
Authors thank for Reviews “1” for his reasonable comments that help us to write our manuscript
better and clearer.
Comments
number
Comments and response
Reviewer:
The authors need to clearly state their motivation, goal and
contributions at least in the abstract and the introduction section of the
paper.
Authors:
1
Abstract is rewrite as below
“Occlusion awareness is one of the most challenging problems in several
fields such as multimedia, remote sensing, computer vision, and computer
graphics. Realistic interaction applications are suffering from dealing with
occlusion and collision problems in a dynamic environment. Creating
dense 3D reconstruction methods is the best solution to solve this issue.
However, these methods have poor performance in practical applications
due to the absence of accurate depth, camera pose, and object motion. This
paper proposes a new framework that builds a full 3D model reconstruction
that overcomes the occlusion problem in a complex dynamic scene without
using sensors’ data. Popular devices such as a monocular camera are used
to generate a suitable model for video streaming applications. The main
objective is to create a smooth and accurate 3D point-cloud for a dynamic
environment using cumulative information of a sequence of RGB video
frames. The framework is composed of two main phases. First, use
unsupervised learning technique to predict; scene depth, camera pose, and
objects’ motion from RGB monocular videos. Second, generates a framewise point cloud fusion to reconstruct a 3D model based on a video frame
sequence. Several evaluation metrics are measured; Localization error,
RMSE, and Fitness between ground truth (KITTI’s sparse LiDAR points)
and predicted point-cloud. Moreover, compared the framework with
different widely used state-of-the-art evaluation methods such as MRE and
Chamfer Distance. Experimental results showed that the proposed
framework surpassed the other methods and proved to be a powerful
candidate in 3D model reconstruction.” From line (20-37) in manuscript.
Also, we added a Research gap in the introduction section.
“Nowadays, several realistic interaction applications still have limitations
to deal with the occlusion problem in a real dynamic environment without
using expensive sensors. one of the most effective solutions is to generate
dense 3D reconstruction for the whole scene.
However, the interaction in a dynamic environment requires a true depth
map and explicit not only to detect the camera localization but also consider
the moving objects into the scene with each other and with static objects in
the background. “. From line (67-72) in manuscript
Moreover, some key features at the end of the introduction to increase the
clarity.
“The main objectives of our framework are:
Creating a smooth and accurate 3D point-cloud for a dynamic environment
using accumulative information from a sequence of RGB video frames.
This method is used in solving several problems such as occlusion and
collision. Due to the expensive cost of multi-sensors data, a monocular
camera is used instead and compensated by unsupervised learning
techniques to be suitable for video streaming applications. The framework
consists of two stages. In the first stage, the online refinement process
adapts a new environment by integrating 3D motion of moving objects with
depth and camera ego-motion. In the second stage, a full 3D model is
reconstructed using frame-wise point cloud fusion.” from lines (73-81).
Reviewer:
2
Literature review must be improved as it lacks details and in-depth
comparison to other state of the art techniques and applications.:
https://www.mdpi.com/1424-8220/19/7/1553
https://doi.org/10.2991/ijcis.d.190617.001
https://doi.org/10.1109/IECON43393.2020.9255086
https://doi.org/10.1109/JTEHM.2019.2946802
https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640018-0253-2
Rewrite adding a section on each item, how it works, how your solutions
differs, what has been achieved, end with how your goals correlate.
Authors:
In the related work section, we added Table 1 (attached in Appendix 1)
which summaries the main characteristics of the most relevant publications
to our proposed framework. In addition, a description paragraph of the
listed methods/ techniques in Table1 is found in lines (120-132)
Reviewer:
Novelty should be better explained as similar solutions do exist:
https://openaccess.thecvf.com/content_CVPR_2019/html/
https://doi.org/10.1109/TVCG.2020.3023634
https://dl.acm.org/doi/10.1145/3414685.3417877
https://link.springer.com/article/10.1007/s00371-020-01908-3
Authors:
3
Thanks for your recommendation; We considered all of them in the related
work section (3D registration part)
“In the past few years, there was a great interest to solve 3D scene
reconstruction with moving objects using single or multiple Monocular
camera RGB frames. Xingbin et al. (Yang X. L., 2020), presented a realtime monocular 3D reconstruction system for mobile phones which used
online incremental mesh generation for augmented reality application. For
the 3D reconstruction process, they performed monocular depth estimation
with a multi-view semi-global matching method followed by a depth
refinement post-processing. Because the dynamic objects such as walking
pedestrians or moving objects are not supported by multi-view geometry
prerequisites, the authors deal with this problem by updating the algorithm
to remove the dynamic objects from the reconstruction mesh.
On the other hand, several applications focused on 3D reconstruction for a
specific category of moving objects such as full or part of the human body.
(Shimada, 2020) illustrated markless 3D human motion capture from
monocular videos. They concentrated on challenging 3D motion observed,
such as foot sliding, foot-floor penetration, and unnatural body leaning.
Also, (Peng, 2020) proposed 3D hand mesh reconstruction from a single
RGB image. The authors relied on the graph convolution neural network
(GCN) with two modules; hand localization and mask generation, to
capture geometric details of 3D hand shape.” lines (147-162) and
comparison table in the manuscript (attached in Appendix 1) and added
the following paragraph to explain the novelty of our work.
“This work aims to create an accurate 3D scene model that recognizes
every moving object from monocular RGB video frames without sensor
data. The 3D reconstruction process learns information (depth map, camera
pose, and object motion) from the current RGB frame, previous frame, and
keyframes to detect changes during the object's motion. The 3D point cloud
is continuously improved during this process by adding or removing some
points according to a certain certainty filter. Studying moving objects from
video frames solve several problems such as objects occlusion and collision
in a dynamic outdoor environment such as augmented reality” lines (179185) in the manuscript.
Reviewer:
Figure 9 is very low res and hard to see.
Authors:
4
Figure 9 is output from matplotlib. pyplot python library that makes some
changes to image resolution. We improved the image resolution as much
as we could to be clearer and more inserted it into the updated version of
the manuscript.
Reviewer:
Figure 3 should be replaced with activity diagram. Add all configuration
parameters.
Authors:
5
Thank you for your comment, but we believe that pseudo code gives a
better understanding of implementation wise issues such as input, size of
each used data structure and so on. Also, we present in figure 2 the
workflow of pipeline processes.
Reviewer:
Get rid of trivial text on the methods, a link to original source is
enough. Your reader will be familiar as the approach is not unheard.
6
Authors:
We try to make the manuscript self-contained to make it easy for readers to
find all information they need. However, we found that the information
according to the KITTI dataset may be useless, so we removed most of it
and left a brief information only. We deleted the paragraph from lines (262266) in the old manuscript.
7
Reviewer:
Experimental design
1-The experiments should be contextualized better (the reader should not
be left to assume that they will get their own conclusions).
2- The experiments should be described more clearly (e.g. set up and
carry out process, results in raw format, etc.).
3- How was the accuracy of the model evaluated? By what metrics?
Expert knowledge? Add full and rigorous statistical reliability analysis
proving that recreated model is accurate. Add full performance metrics.
Authors:
1- We add a conclusion paragraph at the end of the experiment
section to conclude our results.
“Finally, the experimental results show how the proposed
framework achieves an accurate 3D reconstructed point-cloud
model l from monocular RGB video frames without using
expensive sensors. Several evaluation metrics are measured, such
as Localization error, RMSE, and Fitness between ground truth
and predicted point-cloud. Moreover, we achieved 46%
improvement in MRE error compared with the state-of-the-art
method DJP. Besides, 11% and 14% improvement using chamfer
distances metric compared with MonNet (GRU) and MoNet
(LSTM) respectively.” From line (458-465) in manuscript.
2- In
“Proposed Monocular 3D Model
Reconstruction
Implementation” section we illustrated all performance metrics we
used in evaluation such as localization error, FPE, FNE, RMSE,
Fitness and MRE between predicted point cloud and ground. The
RMSE rate is between 0.02 to 0.1which highly indicates accuracy
of the proposed framework. we explained mathematical
formulation for all metrics and referred to the state-of-the-artwork
that used the same metrics for evaluation.
Evaluation Metrics
The evaluation process used three methods:
1- Localization Accuracy Error 𝐿𝐸 , FPE (False positive error),
and FNE (False Negative error) which were proposed by Refs
(Hafiz, 2015).
Localization Accuracy, 𝐿𝐸 is defined as the amount of
deviation of the detected point from a ground truth point
position. Let ground truth data set denoted by 𝐺𝑇 , ground truth
point denoted by 𝑝𝑔 ∈ 𝐺𝑇 , 𝑃𝑝 is predicted points by the
proposed technique. 𝑁𝐺 is the number of points in 𝐺𝑇 , 𝑁𝑝 is the
number of points in 𝑃𝑝 , and 𝐶𝑟 (𝑝𝑔 ) is geodesic distance over a
region, which is centered by point g and has radius of r. The
data which is contained in 𝐶𝑟 (𝑝𝑔 ) can be defined as:
𝐶𝑟 (𝑝𝑔 ) = {𝑝𝑐 ∈ 𝑃𝑝 | 𝑀𝑖𝑛 (𝑑𝑖𝑠(𝑝𝑔 , 𝑝𝑐 )) < 𝑟}
where 𝑑𝑖𝑠(𝑝𝑔 , 𝑝𝑐 ) is the Euclidian distance between the two
points 𝑝𝑔 𝑎𝑛𝑑 𝑝𝑐 , and 𝑟 is the Maximum correspondence
points-pair distance which controls the localization error. 𝑝𝑐 is
considered to be correctly detected if there exists a detected
point 𝑝𝑐 ∈ 𝑃𝑝 ⋂ 𝐶𝑟 (𝑝𝑔 ) such that 𝑝𝑐 is the minimum distance
between the points 𝑝𝑔 𝑎𝑛𝑑 𝑝𝑐 . The 𝐿𝐸 defined as follows:
1
𝑁
𝐿𝐸 =√ ∑𝑗 𝐶 𝑑𝑖𝑠 (𝑝𝑔 , 𝑝𝑐 𝑗 )
𝑁
𝑗
𝐶
where 𝑁𝐶 is the number of correctly detected points in 𝐺𝑇 .
The FNE at localization error tolerance 𝑟 is defined
as:
𝑁𝐶
𝐹𝑁𝐸(𝑟) = 1 −
𝑁𝐺
The FPE at localization error tolerance 𝑟 is defined
as:
𝑁𝐹
𝑁𝑝
The number of false positives is normalized with the number
of all true negatives, where 𝑁𝐹 is the number of false
positives, and yields to
𝑁𝐹 = 𝑁𝑝 − 𝑁𝐶
𝐹𝑃𝐸(𝑟) =
2- Registration 3D point cloud between the output of the
Velodyne laser scanner (ground truth) and the proposed
technique which generates a 3D point cloud from the predicted
depth and predicted ego-motion. Global registration (Zhou,
2016) and Iterative Closest Point ICP point-to-point
(Rusinkiewicz, 2001.) (Paul, 1992) are used. Moreover,
evaluate the registration by calculating Fitness function which
is used to measure the overlapping area (the number of inlier
correspondences/number of points in ground truth). The Higher
value of fitness is better. While for the Root Mean Square Error
RMSE of all correspondences in range of r, the lower is better.
3- Accuracy is reported using mean relative error (MRE). Which
defined as
P
𝑖
𝑖
|𝑧𝑔𝑡
− 𝑧𝑒𝑠𝑡
|
1
𝑀𝑅𝐸 = ∑
𝑖
𝑃
𝑧𝑔𝑡
𝑖=1
𝑖
𝑖
Let 𝑧𝑔𝑡
, 𝑧𝑒𝑠𝑡
are the ground-truth depth and the estimated depth
respectively with 𝑃 as the total number of 3D point Cloud. MRE is
state of the art metric used to compare the proposed framework
with several monocular dynamic reconstruction methods.
Another used metric is Chamfer Distance (CD) between the
ground truth 𝑃𝑔𝑡 ∈ 𝑅 𝑁×3 point cloud and the estimated point cloud
𝑃𝑒𝑠𝑡 ∈ 𝑅 𝑁×3 . Chamfer Distance (Lu, 2020) is a regularly used
metric to measure the similarity between two-point clouds, which
is define as:
1
𝑖
𝐶𝐷 =
∑ 𝑚𝑖𝑛𝑥 𝑗∈𝑃𝑔𝑡 |𝑥 ^ − 𝑥 𝑗 |
𝑁 ^𝑖
𝑥
∈𝑃𝑒𝑠𝑡
+
1
∑ 𝑚𝑖𝑛𝑥 ^𝑖 ∈𝑃𝑒𝑠𝑡 |𝑥 ^𝑖 − 𝑥 𝑗 |
𝑁 𝑗
𝑥 ∈𝑃𝑔𝑡
Reviewer:
Validity of the findings
Article contains no direct comparison to other works with KITTI dataset
(and there has been quite a few)
Authors:
8
Figure 10 presents a comparison between the proposed framework and the
most relevant methods to our work, by calculating MRE and comparing it
with corresponding values of MRE. Also, we added a new metric;
Chamfer Distance (according to other reviewer’s recommendation) which
is widely used when dealing with point-clouds. lines (436-471).
1- “The performance of the proposed framework is compared
with the state-of-the-art methods (Kumar S. Y., 2017) , which
reported that MRE on KITTI dataset and with several
monocular dynamic reconstruction methods, such as the Block
Matrix Method (BMM) (Dai, 2014), Point Trajectory
Approach (PTA) (Akhter, 2010), and Low-rank
Reconstruction (GBLR) (Fragkiadaki, 2014) , Depth Transfer
(DT) (Karsch K. C., 2012.), and (DMDE) (Ranftl, 2016). Note
that we used the reported result in (Kumar S. Y., 2017) as its
implementation is not available publicly.
Figure 10 shows that the proposed framework delivers
consistently superior reconstruction accuracy on the KITTI
dataset. Using unsupervised learning to predict scene depth
and camera pose is a strong point of the proposed framework
for generating an accurate 3D model reconstruction. Table 2
shows the improvement percentages between the proposed
framework and the state-of-the-art methods.”
2- “We calculate the Chamfer Distance (CD) between the point
cloud and ground truth on the KITTI dataset and compare it
with state-of-the-art modules proposed in (Lu, 2020). The main
idea of this metric is to predict future frames given the past
point cloud sequence based on a motion-based neural network
named MoNet. Two neural networks are used to predict scene
point-cloud LSTM and GRU.
The main idea of these methods using point-cloud as an input
for neural networks to estimate future frames. The MoNet
integrates motion features into the prediction pipeline and
combines them with content features. In this metric, we used
the average CD for 5 frames to match with the module (Lu,
2020). As shown in Table 3, the CD of our framework is
slightly better than Mon (LSTM) and Mon (GRU) methods.”
3- “Finally, the experimental results show how the proposed
framework achieves an accurate 3D reconstructed point-cloud
model from monocular RGB video frames without using
expensive sensors. Several evaluation metrics are measured,
such as Localization error, RMSE, and Fitness between ground
truth and predicted point-cloud. Finally, the experimental
results show how the proposed framework achieves an
accurate 3D reconstructed point-cloud model l from monocular
RGB video frames without using expensive sensors. Several
evaluation metrics are measured, such as Localization error,
RMSE, and Fitness between ground truth and predicted pointcloud. Moreover, we achieved 46% improvement in MRE
error compared with the state-of-the-art method DJP. Besides,
11% and 14% improvement using chamfer distances metric
compared with MonNet (GRU) and MoNet (LSTM)
respectively.”
Reviewer:
Code attached does not work
Authors:
The attached code is not an executable file, it needs a specific
environment and each part needs a separate file to run for example:
In the first stage, you should run several python files for preprocessing, a
file for training, a file for online refinement, and files for evaluations. The
outputs of online refinement are the input for the point-cloud process. Also,
there are several files for rendering and evaluating the point-cloud process.
For example
To run training:
ckpt_dir='your/checkpoint/folder'
data_dir='KITTI_SEQ2_LR/' # Set for KITTI
imagenet_ckpt='resnet_pretrained/model.ckpt'
9
python train.py \
--logtostderr \
--checkpoint_dir $ckpt_dir \
--data_dir $data_dir \
--architecture resnet \
--imagenet_ckpt $imagenet_ckpt \
--imagenet_norm true \
--joint_encoder false
Running depth/ego-motion inference:
input_dir='your/image/folder'
output_dir='your/output/folder'
model_checkpoint='your/model/checkpoint'
python inference.py \
--logtostderr \
--file_extension png \
--depth \
--egomotion true \
--input_dir $input_dir \
--output_dir $output_dir \
--model_ckpt $model_checkpoint
Running online-refinement:
prediction_dir='some/prediction/dir'
model_ckpt='checkpoints/checkpoints_baseline/model-199160'
handle_motion='true'
size_constraint_weight='0'
data_dir='KITTI_SEQ2_LR_EIGEN/'
triplet_list_file='$data_dir/test_files_eigen_triplets.txt'
triplet_list_file_remains='$data_dir/test_files_eigen_triplets_remains.txt'
ft_name='kitti'
python optimize.py \
--logtostderr \
--output_dir $prediction_dir \
--data_dir $data_dir \
--triplet_list_file $triplet_list_file \
--triplet_list_file_remains $triplet_list_file_remains \
--ft_name $ft_name \
--model_ckpt $model_ckpt \
--file_extension png \
--handle_motion $handle_motion \
--size_constraint_weight $size_constraint_weight
Running Point-Cloud fusion:
data_dir='path/to/camera/intrinsics/file'
input_dir='output/from/online/refinement/process/'
python "Point-Cloud Fusion.py" --data_dir $data_dir \
  --input_dir $input_dir
Reviewer:
Conclusions are quite generic. Focus on the results only. Move the rest
to discussions.
Authors:
Your recommendation has been applied in the conclusion section.
“This paper proposes a novel framework for 3D model reconstruction from
monocular video frames in a dynamic environment. The framework does not
use any sensor data, which is costly and sometimes noisy. The results
show that the proposed framework is capable of generating a smooth and
accurate 3D point-cloud for a dynamic environment using the cumulative
information of a sequence of RGB video frames. Different evaluation
metrics are used, such as Localization error and RMSE, with average
values of 0.05 and 0.067, respectively, between the ground truth and the
predicted point-cloud. Moreover, the increase in the fitness value
indicates that the proposed framework succeeds in covering a large
number of overlapping areas with a small mean square error.
Furthermore, the proposed framework is compared with state-of-the-art
methods: MRE against the DJP technique and Chamfer Distance against the
two MoNet variants, with improvements of 46%, 11%, and 14%, respectively.
In the future, we will focus on improving the overall execution time so
that the framework can handle real-time applications such as augmented
reality, by applying several optimization techniques on state-of-the-art
GPU and CUDA platforms, and on testing 3D model reconstruction over long
sequences of RGB frames.” From lines (480-495).
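For clarity, the following is a minimal sketch of how the RMSE and a fitness-style overlap ratio between the ground-truth and predicted point-clouds can be computed; it is an illustration based on nearest-neighbour matching with a distance threshold, and the 0.05 threshold is a placeholder rather than the value used in the paper:

import numpy as np
from scipy.spatial import cKDTree

def rmse_and_fitness(pred, gt, inlier_threshold=0.05):
    # Distance from each predicted point to its nearest ground-truth point.
    dists, _ = cKDTree(gt).query(pred)
    # Points closer than the threshold are treated as overlapping (inliers).
    inliers = dists < inlier_threshold
    fitness = float(inliers.mean())  # fraction of predicted points covered by the ground truth
    rmse = float(np.sqrt(np.mean(dists[inliers] ** 2))) if inliers.any() else float("nan")
    return rmse, fitness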
Reviewer “2”
The authors would like to thank Reviewer “2” for his/her careful and
thoughtful comments on our paper.
Appendix 1:
Comparison of the proposed framework with related 3D reconstruction methods:

Published | Single/Multiple frame | Single/Multiple object | Static/Dynamic object | Input type | Methods
(Kulikajevas, 2019) | Single frame | Single object | Static object | RGB-D sensor | Hyper neural network
(Kulikajevas, 2019) | Single frame | Single object | Static object | 3D Models | GANs neural network
(Widya, 2019) | Multiple (2 image sequences) | Single object | Static object | Monocular endoscope | Structure from motion (SfM)
(Wang J. H., 2018) | Single frame | Single object | Static object | Monocular RGB | Monocular SLAM
(Yang X. L., 2020) | Single frame | Multiple (full scene) | Static scene (removes dynamic objects) | Monocular RGB | Online incremental mesh generation
(Shimada, 2020) | Single frame | Single object | Dynamic object | Monocular RGB | Markerless 3D human motion capture
(Peng, 2020) | Single frame | Single object | Dynamic object | RGB-D sensor | GCN network
(Ku, 2019) | Single frame | Cropped single object | Dynamic object | Monocular RGB | Geometric priors, shape reconstruction, and depth prediction
(Lu, 2020) | Multiple (two consecutive point-clouds) | Multiple (full scene) | Dynamic objects | Outdoor LiDAR datasets | LSTM and GRU networks
(Weng, 2020) | Single frame | Multiple (full scene) | Dynamic objects | Outdoor LiDAR datasets | Predicts next scene using LSTM
(Akhter, 2010) | Single frame | Multiple objects | Dynamic objects | Monocular RGB | Structure from motion
(Fragkiadaki, 2014) | Multiple frames | Single object | Dynamic object | Monocular RGB | Non-rigid structure-from-motion (NRSfM)
(Ranftl, 2016) | Multiple frames (two consecutive) | Multiple (full scene) | Dynamic object | Monocular RGB | Segments the optical flow field into a set of motion models
(Kumar S. D., 2019) | Multiple (2 frames) | Multiple (full scene) | Dynamic objects | Monocular RGB | Superpixel over-segmentation
Proposed framework | Multiple (whole video frame sequence) | Multiple (full scene) | Dynamic objects | Monocular RGB | Unsupervised learning and point cloud fusion
" | Here is a paper. Please give your review comments after reading it. |
93 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In an interactive online learning system (OLS), it is crucial for the learners to form the questions correctly in order to be provided or recommended appropriate learning materials. The incorrect question formation may lead the OLS to be confused, resulting in providing or recommending inappropriate study materials, which, in turn, affects the learning quality and experience and learner satisfaction. In this paper, we propose a novel method to assess the correctness of the learner's question in terms of syntax and semantics. Assessing the learner's query precisely will improve the performance of the recommendation. A tri-gram language model is built, and trained and tested on corpuses of 2533 and 634 questions on Java, respectively, collected from books, blogs, websites, and university exam papers. The proposed method has exhibited 92% accuracy in identifying a question as correct or incorrect. Furthermore, in case the learner's input question is not correct, we propose an additional framework to guide the learner leading to a correct question that closely matches her intended question. For recommending correct questions, soft cosine based similarity is used. The proposed framework is tested on a group of learners' real-time questions and observed to accomplish 85% accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Online learning systems (OLSs) have brought great advantages to all kinds of formal and informal learning modes <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> [2] <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Over the years, OLSs have evolved from simple static information delivery systems to interactive, intelligent <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, and context-aware learning systems <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>, virtually incorporating real-life teaching and learning experience <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. In today's OLSs, much of the emphasis is given on designing and delivering learner-centric learning <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> in terms of the learning style, learning approaches, and progress of a particular learner <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. Like every learning process, one key aspect of an OLS is interaction, which makes learning more practical and dynamic <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. But, despite the advantages, due to high-cost and complexity, contents developed for OLSs have limited or no interaction. The basic (or one way) interaction is incorporated in most of the OLSs through demonstration or illustration, which can be useful for very elementary learning options like remembering and comprehending. To achieve advanced learning skills like analyzing, evaluating, creating, and applying, a higher level of interactions like discussion, hands-on experiments, exchanging views with experts, etc., are required <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. The best possible way of interaction in an OLS is to devise real-time interaction between the learner and the expert/trainer <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. In the absence of audio-video based interaction, the best option is to go for a question-answer based OLS <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref> since questions are the most natural and implacable way a human enquires about information.</ns0:p><ns0:p>Interacting with a computer through natural language and to make it interpret the meaning of the communicated text has many implicit challenges associated with human-computer interaction.</ns0:p><ns0:p>Existing applications like search engines, question-answering based systems <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>, chatbots <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>, etc., work over user queries to deliver the required information. Fundamentally, these systems process the input query to determine its structure and semantics to understand the intention of the query. Therefore, the correctness of the semantics of the query determines the response given by these automated systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Significance of the Correctness of the Input Question in an Interactive Learning</ns0:head></ns0:div>
<ns0:div><ns0:head>Systems</ns0:head><ns0:p>For efficient information retrieval, most of the recommendation systems focus on improving the efficiency of the recommendation engine. But, how ever efficient the recommendation engine is, if the query itself is incorrect, the search engine will not be able to retrieve the suitable information that was actually intended by the user.</ns0:p><ns0:p>Similarly, in an OLS, while interacting, if the learner inputs an incorrect question, due to the absence of the cognitive ability of the search and recommendation engine of the OLS, it will try to find the learning materials against the wrong input. This will lead to in appropriate learning material recommendations, which will in effect dissatisfy the learner, and the purpose of the OLS will not be fulfilled. Therefore, it is important that the OLS understands the learner's actual intention when she inputs a question while interacting.</ns0:p><ns0:p>Hence, in an OLS, framing the right question in terms of grammar, word use, and semantics is an absolute requirement. But often, people frame questions incorrectly, leading to ambiguous information retrieval, which misleads learners. Generally, the following are the two reasons for an incorrect question framing: a. Language incompetency: The lack of expertise in communicative language may cause a learner to frame a question with incorrect grammatical structure, spelling mistakes, and the inability to use appropriate words. For instance, the non-native English-speaking people having poor knowledge of English most often find it difficult to compose questions in English. For example, a question given by such a user, 'HTML in how Java', demonstrates the incorrect framing of the question. What is being asked is not understandable. It could be the programming of HTML script through Java language, or it could be the application of Java program on an HTML page. The question lacks adequate articulation, due to which the desired meaning cannot be recognized. This makes correct parsing of the question impossible.</ns0:p><ns0:p>b. Lack of domain knowledge: Insufficient domain knowledge also leads to frame an incorrect question. For example, the question 'how a parent class inherits a child class' is syntactically correct but semantically (or technically) incorrect. Exchanging the phrases 'parent class' and 'child class' would make the question correct. Ignorance or the lack of domain knowledge can reason these types of semantically incorrect framing of questions.</ns0:p><ns0:p>In this case, the question might be parsed successfully, but the learner will get unintended a) To assess the correctness of the learners' questions:</ns0:p><ns0:p> We built two sets of corpuses comprising 2533 (for training) and 634 (for testing) questions on core Java.</ns0:p><ns0:p> We generated a tri-gram model language.</ns0:p><ns0:p> We created a classifier to identify the correct and incorrect questions based on the tri-gram model language.</ns0:p><ns0:p> The classification is evaluated on the test corpus data.</ns0:p><ns0:p> The efficacy of the classifier was compared with other n-gram models as well as with other research works. 
b) To address the issue of incorrect question:</ns0:p><ns0:p> We proposed a framework for suggesting correct questions to the learner.</ns0:p><ns0:p> We designed a web-based client/server model to implement the framework.</ns0:p><ns0:p> The efficacy of the framework is assessed by a group of learners.</ns0:p><ns0:p> The proposed similarity model used in the framework is compared with other existing similarity measures.</ns0:p><ns0:p> The performance of the framework is assessed by Shannon's diversity and equitability indices.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.6'>Paper Organization</ns0:head><ns0:p>Section 2 mentions related work discussing the different error-checking methods and their limitation. Section 3 presents the correctness assessment methodology of the learners' questions. Guiding the learner to find the correct question is presented in Section 4. The experiments and the result analysis of both the proposed methods are discussed separately in their respective sections. Section 5 concludes the paper. </ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Related Work</ns0:head><ns0:p>Identifying the correctness of a question is related to determining the errors in the sentential text. Sentential errors are not limited to semantics of the text but to other different types of error like the wrong usage of words, spelling mistake, punctuation marks, grammatical error, etc. Madhavi et al. <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> categorized the errors in a sentence as:  Semantic error: The error that makes the sentence senseless or meaningless due to the wrong choice of words and their placing.</ns0:p><ns0:p>Among these five error types, detecting sentence structure error, syntax error, and semantic errors are the significant ones for finding the correctness of a question sentence used in a query based interactive online recommendation system. Different approaches and strategies are found in literature for detecting the different type of errors in textual sentence. These different error detection approaches can be categorized as a rule-based approach, statistical approach and hybrid approach <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. These different error detection categories that are adopted in some notable research work that have been carried out for detecting the significant errors in textual sentence is shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1. Related work categorization based on error type and resolving approach</ns0:head><ns0:p>It is seen that the rule-based approach has been quite effective in detecting sentence structure error, syntax error and punctuation error. While, the statistical approach works well to find the structure errors, spelling errors and semantic errors (word usage and placement error). Most of the research works for detecting an error in a textual sentence are limited to word ordering error, wrong usage of words, word collocation errors and grammatical errors in a sentence.</ns0:p><ns0:p>The sentence structure errors due to disarrangement of words (misplaced words) and incorrect organization of the sentence's POS components have been mitigated differently. Rule-based approach was used by Malik et al. <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> by applying POS identification and NLP production rule to check the grammatical error in the sentence. Chang et al. <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> proposed rule-based database approach to detect word error, word disorder error and missing word error. Similarly, Lee et al. <ns0:ref type='bibr' target='#b28'>[29]</ns0:ref> manually created a list of 60 rules to detect sentence structure error. In another approach, Tezcean et al. <ns0:ref type='bibr' target='#b27'>[28]</ns0:ref> proposed a rule-based dependency parser which queries a tree bank for detecting sentence structure error. In statistical approach, n-gram based <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref> and machine learning based <ns0:ref type='bibr' target='#b31'>[32]</ns0:ref> techniques are followed to determine the errors. Islam et al. <ns0:ref type='bibr' target='#b32'>[33]</ns0:ref> proposed sequence to sequence learning model which uses encoder-decoder architecture for resolving missing word error and incorrect arrangement of words in the sentence. The decoder is a recurrent neural network (RNN) along with long and short term memory (LSTM) for decoding the correct substitute for the grammatical errors. Sun et al. <ns0:ref type='bibr' target='#b53'>[54]</ns0:ref> followed a hybrid approach to resolve the sentence structure error. They used an NLP based POS tagging and parse tree to determine the features of an incorrect sentence and then classified for grammatical error using the classifiers like support vector machine (SVM) and Naïve Bayes (NB).</ns0:p><ns0:p>The syntax errors are due to wrong or inappropriate use of language grammar. Over the years, different approaches (e.g., rule-based, statistical and hybrid) have been explored in research works. For syntax error detection, rule-based techniques like NLP production rule <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref>, rulebased database approach <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref>, and rule-based dependency parser <ns0:ref type='bibr' target='#b27'>[28]</ns0:ref> have found to be applied. Othman et al. <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref> proposed a model based on a set of Arabic grammatical rule and regular expression. Among the different statistical techniques, the use of neural network found very effective in determining syntax error <ns0:ref type='bibr' target='#b39'>[40]</ns0:ref>. Different advanced variations of neural network like bi-directional RNN with bidirectional LSTM <ns0:ref type='bibr'>[38] [41]</ns0:ref>, neural sequence to sequence model with encoder and decoder <ns0:ref type='bibr' target='#b38'>[39]</ns0:ref>, etc. 
are proposed for error detection in a sentence. Sonawane et al. <ns0:ref type='bibr' target='#b43'>[44]</ns0:ref> introduced a multilayer convolution encoder-decoder model for detecting and correcting syntactical errors. Beside neural network, other machine learning technique like SVM <ns0:ref type='bibr' target='#b46'>[47]</ns0:ref> is also found to be used for detecting syntax error. The features that are considered for learning by various machine learning approaches are prefix, suffix, stem and POS of each individual token <ns0:ref type='bibr' target='#b42'>[43]</ns0:ref>. The error detection and correction are often carried out at individual token level of each sentence <ns0:ref type='bibr' target='#b41'>[42]</ns0:ref>. Besides the rule and statistical based approach, hybrid approaches are also followed for syntax error detection thereby taking the advantages of both the approaches. Kao et al <ns0:ref type='bibr' target='#b54'>[55]</ns0:ref> used NLP and statistical methods to detect collocation errors. Sentences were parsed to find the dependency and POS of every word in the sentence. Subsequently the collocation was matched through a collocation database to find errors. Similarly, Lee et al. <ns0:ref type='bibr' target='#b55'>[56]</ns0:ref> applied rulebased and n-gram based technique for judging the correctness of a Chinese sentence. A total 142 expert-made rule were used to check the potential rule violation in the sentence, while the ngram method determines the correctness of the sentence.</ns0:p><ns0:p>The semantic error detection has largely carried out by statistical approach using techniques like n-gram methods or machine learning. The use of RNN is quite popular in semantic error detection <ns0:ref type='bibr' target='#b52'>[53]</ns0:ref>. Zheng et al. <ns0:ref type='bibr' target='#b34'>[35]</ns0:ref> and Yeh et al. <ns0:ref type='bibr' target='#b35'>[36]</ns0:ref> used a LSTM-based RNN to detect error like redundant words, missing words, bad word selection and disordered words. While, Cheng et al. <ns0:ref type='bibr' target='#b50'>[51]</ns0:ref> proposed conditional random fields (CRF) models to detect word ordering error (WOE) in textual segments. Zan et al. <ns0:ref type='bibr' target='#b44'>[45]</ns0:ref> proposed syntactic and semantic error detection in Chinese language by using BERT, BiLSTM and CRF in sequence. Similarly, Agarwal et al. <ns0:ref type='bibr' target='#b45'>[46]</ns0:ref> applied LSTM neural network architecture to make error detection classifier for detecting two types of error -syntax and semantic error like repeated word error, subject-verb agreement, word ordering and missing verb are identified. For detecting grammatical error with long sentence, Rei et al. <ns0:ref type='bibr' target='#b51'>[52]</ns0:ref> proposed a neural sequence labeling framework. The authors found bi-directional LSTM outperforms other neural network architecture like convolution and bidirectional recurrent. Shiue et al. <ns0:ref type='bibr' target='#b47'>[48]</ns0:ref> claimed that among the other classifier the decision tree yields better performance for morphological error and usage error. Yu et al. <ns0:ref type='bibr' target='#b49'>[50]</ns0:ref> proposed an SVM model for error detection like an adverb, verb, subject, object ordering and usage error, prepositional phase error and pronoun, adjective ordering error. 
In <ns0:ref type='bibr' target='#b33'>[34]</ns0:ref>, it is found that supervised ensemble classifier -Random Feature space using POS tri-gram probability offers better performance for semantic error detection in comparison to other supervised classifiers. Ferraro et al. <ns0:ref type='bibr' target='#b36'>[37]</ns0:ref> saw the different grammatical error like sentence structure, syntax and semantic error as collocation errors. A collocation match in a corpus would able to detect collocation errors. Besides machine learning models, a statistical model based on sequential word pattern mining has been quite effective in detecting grammatical errors <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref>. The statistical modeling and machine learning, though easy to implement, are sometimes outperformed by rule-based techniques. In <ns0:ref type='bibr' target='#b56'>[57]</ns0:ref> [54], it is found that rule-based techniques for detecting grammatical errors yield a better result for the Chinese language.</ns0:p><ns0:p>The choice of error detection technique depends much upon the rules and science of the text language under consideration. Error detection using rule-based techniques demands human expertise in framing the rules. In a language with plethora of possibilities for sentence making, leads to difficulty in framing rules to capture the different types of error. Moreover, this technique can be specific to a domain or application context and cannot be generalized.</ns0:p><ns0:p>Unlike rule-based techniques, the error detection using machine learning demands huge dataset, which may not be available for all types of application scenarios. Recently, it is found that most of the syntax and semantic error detection in text is carried by LSTM, RNN, Sequence to Sequence modeling techniques. But these techniques require corpus with incorrect and their corresponding correct sentence data with appropriate annotation or labeling. Creation of such corpus is non-trivial task. Moreover, the models do not generalize well. This means if a sentence in the corpus is not large enough, the source sentence for error detection may appear strange to the model. Even though lot of work has been done in error detection in Chinese language, but there is a huge lacking of work for semantic error detection for English language.</ns0:p><ns0:p>Various works have been done for detecting the sentence structure, syntactical and semantic errors in a sentence, but none have been found for assessing the correctness of question framing. Questions are actually textual sentence, but the way they are interpreted in comparison to the other textual sentences requires a different approach for error checking. Comprehending a question generally requires knowing 'what is being asked', 'which key concepts are involved' and 'how the key concepts are related in context to the question'. Thus, identifying the error in question framing involves issues like identifying specific ordering of the semantic words (key concepts) and identifying the verbs. The verbs and other grammatical words which relates the key concepts orchestrate the meaning of the question. Detecting these two is important in interpreting the meaning of the question and subsequently assessing the error or wrong question framing. The characteristic features which differentiate the error checking strategy of questions from other textual sentences are given in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2. Differentiating characteristic feature of question in relation to textual sentence</ns0:head><ns0:p>Finding or detecting error in question lead to two possibilities for correction -a) automatic error correction and b) recommending correct question. The automatic error correction techniques have not reached its maturity yet. It fails to correct sentences which are complex (logical or conceptual) and furthermore it cannot align with the intent of learner. Mostly the automatic error correction fails to correct semantic errors.</ns0:p><ns0:p>The other possibility is recommending correct question. Recommending correct question is suggesting the probable correct questions to the learner against the incorrect input question. This facilitates the learner to navigate through the suggested question to choose the correct question which matches her intended question.</ns0:p><ns0:p>Most of the works on question recommendation are limited to Community Question Answer (CQA) which basically recommends the unanswered question to user to be answered correctly <ns0:ref type='bibr' target='#b57'>[58]</ns0:ref>. The question recommendation is made based on the learner's dynamic interest <ns0:ref type='bibr' target='#b58'>[59]</ns0:ref>, previous interest <ns0:ref type='bibr' target='#b59'>[60]</ns0:ref>, expertise <ns0:ref type='bibr' target='#b58'>[59]</ns0:ref> [61], load <ns0:ref type='bibr' target='#b60'>[61]</ns0:ref>, user model. Beside CQA system, the question recommendation is commonly used in frequently asked question (FAQ) based system, where question similar or related to user questions are retrieved and recommended from base. For finding similar questions, cosine similarity <ns0:ref type='bibr' target='#b61'>[62]</ns0:ref>, syntactic similarity <ns0:ref type='bibr' target='#b62'>[63]</ns0:ref>, concept similarity <ns0:ref type='bibr' target='#b62'>[63]</ns0:ref>, and TFIDF, knowledge based, Latent Dirichllet Allocation (LDA) <ns0:ref type='bibr' target='#b63'>[64]</ns0:ref>, recurrent and convolution model <ns0:ref type='bibr' target='#b64'>[65]</ns0:ref> are commonly used. Despite our best effort, we did not find work on correct question recommendation for a given incorrect question.</ns0:p><ns0:p>The only work which is close to our framework is the work done by Giffle et al. <ns0:ref type='bibr' target='#b65'>[66]</ns0:ref>. It is a question answering system developed with much focus given on completeness of the user input question. Mostly factoid-based questions like 'wh' questions and true or false questions are accepted in the system. Every time a user inputs a question it is lexically and syntactically analyzed to find the named entities -what is being asked and what is the subject of the question. The input question strength is calculated as score based on its completeness. If the score is high, suitable answers are recommended from the base. When the score is less than a threshold, the user is given feedback on restructuring the question and the entire process cycle is repeated until the input score is high than the threshold. The system has two big shortcomings as:  It does not check whether the input question is correct or not. It considers only the question is complete or not.</ns0:p><ns0:p> Based on the question score, the system give feedback. This put forwards a big issue. If the learner lacks knowledge and language skill, she will not be able to frame logical or conceptual questions completely or correctly. 
This leads to different answers which the PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science learner may not agree with.</ns0:p><ns0:p>To address the issue of checking the question correctness we have proposed a methodology which is more precise and practical. Further, an automatic navigation system is proposed that allows the learner to select the correct question nearly matching with her intent.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Assessing the Correctness of the Learners' Input Questions</ns0:head><ns0:p>In this section, we present the proposed work for assessing whether the learner's input questions to the query-based learning system are correct or not.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Theoretical Background</ns0:head><ns0:p>The fundamental concepts that we adopted to assess the correctness of a question are the n-gram and sequential pattern mining. Basics of these concepts are briefed below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.1'>N-gram</ns0:head><ns0:p>The n-gram is a sequence of n items adjacent to each other in a string of tokens (text). The items in the string could be letter, syllables, or words. The size of n can be 1 (uni-gram), 2 (bi-gram), 3 (tri-gram) and so on. For example, in the string 'the world is a beautiful place', the possible bigrams are 'the world', 'world is', 'is a', 'a beautiful', and 'beautiful place'. Similarly, for a sentence 'a document consists of many sentences', the word-based tri-grams will be 'a document consists', 'of many sentences'. The tri-grams can also be overlapping like 'a document consists', 'document consists of', 'consists of many', and 'of many sentences'. Same applies for the other higher-level n-grams.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.2'>Sequential Pattern Mining</ns0:head><ns0:p>The sequential pattern is a set of items which occur in a specific order <ns0:ref type='bibr' target='#b66'>[67]</ns0:ref> <ns0:ref type='bibr' target='#b67'>[68]</ns0:ref>. Sequential data patterns reflect the nature and situation of data generation activity over time. The existence of frequent subsequence totally or partially ordered are very useful to get insight knowledge. These patterns are common and natural, for example, genome sequence, computer network, and characters in a text string <ns0:ref type='bibr' target='#b68'>[69]</ns0:ref>.</ns0:p><ns0:p>Sequential pattern mining (SPM) is the process for extracting items of a certain sequential pattern from base or repository <ns0:ref type='bibr' target='#b66'>[67]</ns0:ref>. Additionally, it helps to find the sequence of events that has occurred and the relationship between them and the specific order of occurrences. Formally, the problem of subsequence in SPM is described as, for a sequence is an ordered list of events, denoted < α 1 α 2 … α n >. Given two sequences P = < x 1 x 2 … x n > and Q = < y 1 y 2 … y m >, then P is called a subsequence of Q, denoted as P⊆ Q, if there exist integers 1≤ j 1 < j 2 <…< j n ≤m such that x 1 ⊆ y j1 , x 2 ⊆ y j2 , …, and x n ⊆ y jn <ns0:ref type='bibr' target='#b67'>[68]</ns0:ref> [70].</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Need for Using Tri-gram based Pattern Matching</ns0:head><ns0:p>In this section, we justified the application of n-gram pattern matching and specifically the trigram for assessing the correctness of a learner question. Manuscript to be reviewed Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.1'>N-gram based Pattern Matching for Question's Correctness Assessment</ns0:head><ns0:p>Typically, the faults in an ill-framed user question lie in the sentence structure (missing subject or verb/phrase error), syntactic structure (grammatical error like subject-verb agreement, error related to the article, plural, verb form, preposition) and semantic errors (incorrect usage and placement of word). Domain-specific questions are interrogative sentences which specify entities, concepts and relations (between themselves) in a particular sequence. The sequential pattern focuses on how the concepts and entities are related and what interrogative meaning can be inferred from them (the question intention). Word collocation like words around the entities, concepts, relations together make word clusters. The link between the different word clusters in sentence subsequences would enable to get insight on the structural and semantic aspects of a question. In this direction pattern match for finding the correct word clusters and their sequences could be a prospective approach in the assessment of a question.</ns0:p><ns0:p>The n-gram language model allows for pattern matching and probability estimation of n-words in a sentence. The high probability of n-gram pattern similarity match could lead us to assume that n-word cluster for a subsequence in a sentence is correct for their syntactic structure and semantic composition. If the entire sentence is split into ordered sequence of n-gram subsequences, the aggregated probability estimation of correctness for each n-gram could lead us to assume the correctness of the entire question. Hypothetically, if we consider the probability estimation of the correctness is a cumulative assessment of individual n-gram sequences in the question, then which n-gram should be chosen for optimum result? We shall try to find the answer of this in the next subsection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.2'>Tri-gram: The Preferred Choice for Language Modeling</ns0:head><ns0:p>In n-gram, increasing the n value would result in clustering increased number of words as a sequence and thus decreasing the total number of subsequences in a sentence. This leads to an increase in biasness toward similarity pattern matching and thereby decreases similarity matching probability of diverse sequence patterns. Whereas decreasing n increases the number of subsequences in sentence thereby increasing the probability of similarity match at smaller sentences, but fails to find cohesion among word clusters and hence decreases the probability of accuracy for the larger sentences.</ns0:p><ns0:p>A tri-gram is a perfect capture for the desired features of the sentences and at the same time maintaining the optimum complexity factor of the program. While resoluting the sense from a group of words in sequence, it is observed that tri-gram (given one word on either side of the word) is more effective than two words on either side <ns0:ref type='bibr'>(5-gram)</ns0:ref>. It is also found that increasing or reducing the word on either side of a given word does not significantly make it better or worse in n-gram sequencing <ns0:ref type='bibr' target='#b70'>[71]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Question's Correctness Assessment Using Tri-gram Approach</ns0:head><ns0:p>In this section, we present the proposed approach for assessing the correctness of the learner question using tri-gram. The method includes building a tri-gram language model that is trained to assess the correctness of a question on Java, and devising a classification method to separate correctly and incorrectly framed questions. The details are described in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1'>Tri-gram Language Model Generation</ns0:head><ns0:p>The specific procedures for generating the tri-gram based language model are explained in the following. The process flow of the language model generation is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 2.</ns0:head><ns0:p>Steps for language model generation</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1.1'>Data Collection and Corpus Preparation</ns0:head><ns0:p>The language model is designed, trained and tested on a corpus of sentences. To build the needed corpus, we collected a total number of 2533 questions on the various topics of Java from books (available as hardcopy and softcopy), blogs, websites, and university exam papers. We adopted both manual and automatic approaches to extract and collect the questions. A group of four experts in Java language were involved in the manual collection of questions. For automatic extraction, we used a web crawler with a question parser. The crawler, a HTML parsing program designed in Python language, reads the webpage and spawns across other inbound webpages. Using the appropriate regular expression, the expected question sentences were extracted from the parsed pages. The returned texts were then manually verified and corrected, if required, to obtain meaningful questions.</ns0:p><ns0:p>To test the efficiency of the proposed method in rightly identifying correct and incorrect question, we needed a set of wrong questions as well. A number of incorrectly framed questions were collected from learners' interaction with the online learning portals and institutional online learning system, and questions asked by the students in the class. The incorrect questions contain grammatical errors (sentence structure and syntactic errors) and semantic errors. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to develop a suitable language model for questions. Text preprocessing typically includes steps like stopword removal, lemmatization, etc. Stopwords are frequently used words like 'I', 'the', 'are', 'is', 'and', etc., which provide no useful information. Removing these from a question optimizes the text for further analysis. However, sometimes certain domain-specific keywords coincide with the stopwords, removal of which may result in a loss of information from the questions. Therefore, we modified the list of stopwords by removing the domain-specific keywords from the Natural Language Toolkit (NLTK 1 ) stopword list to avert eliminating the required stopwords. The modified NLTK stopword list is used to remove stopwords from the questions excluding those which are meant for Java language.</ns0:p><ns0:p>Each question is broken down in the form of tokens using the regular expression tokenizer, which is present in the NLTK library. Each of these tokens is converted into their stem (root word) form using the Wordnet Lemmatizer to reduce any inflectional form of words. The steps for preprocessing a input question is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3. Typical steps for preprocessing a question</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.1.3'>Language Modeling</ns0:head><ns0:p>The preprocessed questions are broken down into sets of distinct uni-, bi-and tri-gram sequences. The uni-gram set is built on individual tokens in the questions. Whereas, the bi-and tri-grams are formed using overlapping word sequences sentence, along with the usual two-and three-token sequences, respectively, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 4. Generating uni-gram, bi-gram and tri-gram sequences from a question</ns0:head><ns0:p>The respective count of each n-gram occurrences is obtained from the question corpus. Along with the count, based on the relative occurrences in the corpus, the unconditional log probabilities of each uni-gram, as represented by Eq. 1, and conditional log probabilities of each bi-and tri-gram, as represented by Eq. 2 and Eq. 3 respectively, are calculated.</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>𝑃(𝑤 1 ) = log ( 𝐶(𝑤 1 ) 𝐶(𝑤 𝑛 ) )</ns0:formula><ns0:p>Where w n represents the words in the corpus and c(w n ) returns the count of total number words in the corpus.</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>𝑃(𝑤 2 |𝑤 1 ) = log ( 𝐶(𝑤 1 ,𝑤 2 ) 𝐶(𝑤 1 ) ) (3) 𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = log ( 𝐶(𝑤 1 ,𝑤 2 ,𝑤 3 ) 𝐶(𝑤 1 ,𝑤 2 ) )</ns0:formula><ns0:p>The log probabilities in Eq. 1 and Eq. 2 allow transforming higher fractional probability values to lower ones, which are easy to be used in the computation. A sample representation of the language model is shown in Table <ns0:ref type='table'>3</ns0:ref>. The entire language model derived from question corpus is saved in ARPA 2 format. Table <ns0:ref type='table'>3</ns0:ref>. Uni-gram, bi-gram and tri-gram probabilities for a question</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2'>Classifying Correct and Incorrect Questions</ns0:head><ns0:p>The correctness of a question is estimated based on its syntactical and semantic aspects and accordingly is classified as correct or incorrect. The complete process of identifying correct and incorrect questions is pictorially shown in Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2.1'>Preprocessing the Learners' Input Questions</ns0:head><ns0:p>The input questions from the learner are preprocessed to remove the stopwords and the irrelevant words. Also, lemmatization is carried over the input question.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2.2'>Probability Estimation for Question Correctness based on the Syntactical Aspect</ns0:head><ns0:p>After preprocessing, the question is broken down into overlapping tri-gram sequences. Each trigram sequence is estimated for probability by maximum likelihood estimation (MLE) from the language model. If a tri-gram sequence of the question is not present in the language model, it will lead to zero estimation. However, though the entire tri-gram sequence may not occur in the language model, a partial word sequence, a lower-order n-gram (bi-gram) of it could be valid. The Backoff approach <ns0:ref type='bibr' target='#b71'>[72]</ns0:ref> [73] is considered for tri-grams to take into account of sequence which counts to zero. The tri-gram sequences which estimate to zero are further estimated for their bigrams. The probability of a tri-gram is depicted in Eq. 4 (4)</ns0:p><ns0:formula xml:id='formula_2'>𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = { 𝑐(𝑤1, 𝑤2, 𝑤3) 𝑐(𝑤1,𝑤2) , if 𝑐(𝑤1, 𝑤2, 𝑤3) > 0 0.5 × ( 𝐶(𝑤 1 , 𝑤 2 ) 𝐶(𝑤 1 ) + 𝐶(𝑤 2 , 𝑤 3 ) 𝐶(𝑤 2 ) ) , if 𝑐(𝑤1, 𝑤2, 𝑤3) = 0</ns0:formula><ns0:p>The probability of each tri-gram ranges from 0<=P<=1, higher probability refers to more correctness and higher occurrence. The entire probability of syntactic correctness of the sentence can be obtained as the addition of probability of each tri-gram in the question in Eq. 5, where k is the number of tri-grams in the question and P i is the probability of the i th tri-gram sequence in the sentence.</ns0:p><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_3'>𝐸 𝑠𝑦 = 1 𝑘 ∑ 𝑘 𝑖 = 1 𝑃 𝑖</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.3.2.3'>Probability Estimation for Question Correctness based on Semantic Aspect</ns0:head><ns0:p>The correctness of question semantic is assessed by estimating the validity of individual overlapping tri-gram sequences of the sentence. The validity of the tri-gram is assessed by the probability estimation of each tri-gram sequence in question found matches in the language model, as shown in Eq. 6. The semantic correctness of a question is estimated on the full similarity match of each tri-gram sequence. More the number of subsequences of the question sentence matches to language model, more is the chance of the question being semantically correct. The overlapping tri-gram sequences reflect the cohesion among words in the sentence subsequences. Thus, increasing the number of matching of the tri-gram sequences establishes higher probability of semantic accuracy of the question. The semantic correctness of the question is calculated as the summative average of probabilities of each tri-gram sequence in the sentence is shown in Eq. 7.</ns0:p><ns0:p>(6)</ns0:p><ns0:formula xml:id='formula_4'>𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = { 1, if 𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) > 0 0, if 𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = 0 (7) 𝐸 𝑠𝑚 = 1 𝑘 ∑ 𝑘 𝑖 = 1 𝑃 𝑖</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.3.2.4'>Classification</ns0:head><ns0:p>The correctness of a question is calculated by Eq. 8, where E sy and E sm are the probability estimates of syntactical and semantic correctness of the sentence, respectively. A syantatically correct question has E sy = 1, and E sm = 1 for semantically correct. Hence, the standard score for a correct question is 1 + 1 = 2. Thus the degree of correctness (C d ) of the question with respect to the complete correctness (i.e., 2) is assessed by adding the calculated probability estimates E sy and E sm and subtracting from 2. We considered the question is correctly structured, if C d ≤ 20; otherwise, the framing of the question is not correct. Manuscript to be reviewed Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Experiment and Performance Evaluation for Question's Correctness Assessment</ns0:head><ns0:p>The evaluation of the performance measure of the proposed approach for assessing the correctness of the learner question is done on a corpus of 634 annotated questions, where 52% of questions are correctly framed. The performance of the tri-gram approach for classifying questions as correct or incorrect is measured based on the metrics: true positive, true negative, false negative, and false positive, and the performance measures: Accuracy, Precision, Recall, F1-Score, as shown in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4. Performance measures of the proposed approach</ns0:head><ns0:p>In the experiment, we attempted to distinguish between correct and incorrect questions based on the probabilistic calculation proposed by our approach. The experimental results show that our method fails to classify 50 of these questions correctly. Out of these 50 questions, 32 were correct questions but are identified as incorrect. Further analysis of these false-negative questions reveals that after preprocessing and stopword removal, the length of most of the questions are reduced to less than three. These questions fail to generate any tri-grams to perform the probabilistic calculation. So, these questions by convention get marked as incorrect. Some of these false-negative questions even belong to domains that are not present in the training dataset. As a result, the proposed method fails to identify these questions correctly. The other set of incorrectly classified questions comprises of incorrect questions which are marked as correct. The false-positive questions primarily have misplaced punctuation marks which results in the structure of the incorrect question identical to the correct questions in the training set. They form tri-grams or bi-grams which perfectly match the tri-grams or bi-grams from the language model and render a high probabilistic score for the question. A margin of 8% error shows the efficiency of the proposed approach.</ns0:p><ns0:p>The efficacy of the tri-gram model approach was compared with other n-grams. The models were trained over the same question dataset for keeping the experiment bias-free. Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> shows a comparison of the accuracy measures obtained for each n-gram approach over the same statistical calculation. It is evidently seen that the accuracy of tri-gram is far better than other ngrams. The accuracy decreases with increasing value of n in n-gram. It leads to biased higherorder word sequence pattern search and less option for pattern assessment at lower orders. This causes restricted pattern search and decrease in accuracy. Similarly decreasing n leads to word sequence pattern search at lower order, which restricts the probability of correctness of the word sequences at higher orders. This typically reduces the accuracy. The comparative experiment thus concludes that use of the tri-gram model for question assessment leads to better assessment result. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The result of the proposed approach is compared with the result other research work <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref>. In <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref> Ganesh et al. applied tri-gram based approach to detect error in English language sentence. Table <ns0:ref type='table'>5</ns0:ref> shows the result comparison in terms of four assessment metrics. From the table, it is evident that the accuracy of our proposed approach is much better. While the precision of the both the approaches is same. This establishes the true positive and true negative identification cases are better in our approach for detecting the errors and thus the correctness or incorrectness of the question sentences.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Comparative results of the proposed approach and the solution given in <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref> 4 Guiding the Learner to the Probable Correct Question In the previous section (Section 3), we checked if the question given as input by the learner to the query-based learning system is syntactically and semantically correct or not. If the question is not correct, we guide the learner to the probable correct question that she actually intended to ask through one or multiple steps of question suggestions. The details methodology and framework of the proposed work are discussed in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Similarity-based Recommendation for Mitigating Incorrect Learner Question</ns0:head><ns0:p>Computationally auto-correcting the incorrectly framed question is one of the acclaimed ways followed in literature. But the success is limited and restricted to correcting only a few types of errors or mistakes. The typical mistakes a learner commits while articulating a question are shown in Fig. <ns0:ref type='figure'>7</ns0:ref>. For instance, inappropriate word selection may not reflect the exact intention of the learner. Similarly, insufficient keywords may not express the intended concept.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 7. Typical mistakes made by the learner in a question</ns0:head><ns0:p>In regard to these, except for grammatical and sequential ordering errors, auto-correction for other types of errors is not possible. The other way round the problem is suggesting correct questions to the learner which are near to what she intended to ask. Suggesting correct questions which are similar to information and morphological structure to the given question could lead to having a chance that learner may found the right question which she intends to ask. Considering the information like the concepts and functional words which are used in compiling the question is best of her knowledge in the current information seeking situation, the learner could be recommended appropriate questions which are aligned to/with the information they are seeking for. Thus, suggesting correct questions in contrast to the incorrect question imposed by learner is through similarity-based recommendation is an effective way to overcome the incorrect question </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Issues in Similarity-Based Recommendation of Questions</ns0:head><ns0:p>Cosine and Jaccard similarity techniques are the two text-based similarity approach which has been widely incorporated for finding similar text <ns0:ref type='bibr' target='#b73'>[74]</ns0:ref> <ns0:ref type='bibr' target='#b74'>[75]</ns0:ref>. But these approaches, when applied to question-based corpus for identifying similar question text, lead to the recommendation issues, as discussed in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Information Overload</ns0:head><ns0:p>Text similarity based on word match searches for similarity for every occurring word in the source sentence-incorrect question text for an exact match in the questions present in the question corpus. The needful comparison based on matching words occurrence among the sentences returns similar text. Since, the question framing is incorrect, taking a part of the entire sentence which seemingly founds to be correct and convey the learner intent could lead to a better similarity match. However, the prevailing constraint and limitations of NLP fail it to analyze and identify the parts of the source sentence, which are correct as per learner intention. Failing to determine this leads to ambiguity in identifying the parts of a sentence that are to be taken correctly for similarity match. Without this knowledge, the similarity search is made for each occurring words (assuming they are correct as per the learner intent) in the question against the questions in corpus lead to a huge set of information. For example, a learner questions on Java with incorrect word ordering and missing words like 'What different are interface implement', when runs for similarity match like Jaccard similarity on a question corpus returns a lot of information, as shown in Table <ns0:ref type='table' target='#tab_0'>6</ns0:ref>. With this amount of information, the learner may get confused and lost. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>Diverse Information</ns0:head><ns0:p>A learner when composing a question, intends to seek information limited to a particular topic(s). Text similarity based on word match searches for similarity for every occurring word in the source sentence for an exact match into the question corpus. For similarity measurement weightage is given to word occurrence frequency rather than on their subject domain relevancy. No consideration is given to individual token belonging to a topic of a domain. Since a question is made up of functional words (noun or verb) along to concepts (domain keywords), the word match found for every functional word in the corpus leads to different questions having different topics which the learner does not intends to seek. This result in questions which are beyond the search topic boundary, leading to diversification of information. For example, the similarity search for an incomplete question like 'access modifier in Java' using Jaccard similarity returns questions of different topics, as shown in Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_5'>8</ns0:ref> Manuscript to be reviewed Computer Science questions belonging to different topics for the given similarity recommendation. A large number of questions are on a different topic than that of the input question. This may put the learner in jeopardy and confusion. Conclusively, the similarity match on functional words of the source question in the corpus may result in diversification instead of convergence.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>. Recommended list of question and their topic retrieved using Jaccard similarity for the incorrect input question 'access modifier in Java' </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.3'>Biased to Exact Word Match</ns0:head><ns0:p>While framing a question, keywords and functional words are integrated and sequenced in an appropriate manner to make meaning out of the question. The use of these words by learner is the natural outcome of learner's knowledge and communication skill. And as a reason, lacking in learner's expertise does not assure correctness of question framing. The similarity assessment technique performs exact word match. This will return only those questions the words of which are exactly matched (word-by-word) with the learner's input question. This results in obscuring other many similar questions which are having different words but similar or near to similar meaning. And thus, many of the questions having similar meaning but having different word construction are ignored, resulting in poor efficiency.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Proposed Framework for Correct Question Suggestion to the Learner</ns0:head><ns0:p>Considering the above mentioned three problems, we have adopted the soft cosine technique to find similar sentences. The similarity matching is augmented by question selection and iteration pass. We propose a similarity assessment framework for suggesting the correct question for a given incorrect question on a particular domain. The framework consists of three phases of working, as discussed below. The framework is shown in Fig. <ns0:ref type='figure' target='#fig_12'>9</ns0:ref>, while the process flow is shown in Fig. <ns0:ref type='figure' target='#fig_20'>10</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Selecting Questions with Similar Concepts</ns0:head><ns0:p>Selecting only the questions that share concepts with the input limits the search boundary and thereby addresses the diverse-information issue. Learners pose questions using the best of their knowledge, which makes them use concepts aligned with the information they are trying to seek. Even though not every concept articulated in the question is rightly chosen, there is a fair probability that the required concept is present. Retrieving all corpus questions that contain the same concept(s) as the source question therefore increases the likelihood of finding the intended question, and it reduces the probability of recommending questions on topics or concepts unrelated to those in the source question. Concept-wise selection of questions thus reduces the diversification of the recommended information.</ns0:p></ns0:div>
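A minimal sketch of this concept-wise pre-selection, assuming a hand-made concept lexicon; the lexicon, helper names, and corpus strings are illustrative and are not the paper's actual data structures:

# Illustrative concept-based pre-selection of candidate questions.
DOMAIN_CONCEPTS = {"interface", "inheritance", "garbage", "collection",
                   "access", "modifier", "constructor"}   # assumed concept lexicon

def extract_concepts(question):
    # keep only the tokens that are known domain concepts
    return {w for w in question.lower().split() if w in DOMAIN_CONCEPTS}

def select_candidates(corpus, question):
    concepts = extract_concepts(question)
    # keep the corpus questions that share at least one concept with the input
    return [q for q in corpus if extract_concepts(q) & concepts]

corpus = ["explain garbage collection in java programming",
          "what is an interface in java",
          "why java is important to the internet"]
print(select_candidates(corpus, "java not have destroy and how garbage collect"))

Only the corpus question sharing the 'garbage' concept survives the pre-selection, so the subsequent similarity ranking works on a topic-restricted candidate set.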
<ns0:div><ns0:head n='4.3.2'>Similarity Assessment and Correct Question Recommendation</ns0:head><ns0:p>A learner may compose an incorrect question for three main reasons: a) insufficient keywords are used to express the question; b) an insufficient number of words is used to express the question;</ns0:p><ns0:p>c) the words selected, or the way they are used, may be incorrect.</ns0:p><ns0:p>In all these cases, we need to find the alternative questions closest to the learner's intended question. To estimate the similarity, we look for questions that have the same or similar word features as the learner's question. A hard (word-to-word) match of word features between the incorrect question and the alternatives reduces the chance of finding an accurate alternative; moreover, in a hard similarity search over the word-feature space of the correct questions, the inappropriate words of the source question contribute nothing. A soft similarity match (over synonyms or closely related words) instead gives a high probability of finding questions that are meaningfully aligned with the learner's intent. To address this matching problem and find the correct question, we apply the soft cosine measure, which finds questions that are significantly similar in terms of semantics, irrespective of exact word matches.</ns0:p><ns0:p>The similarity term sim (f i , f j ) in the soft cosine measure captures the synonymy or relatedness between features f i and f j of the vectors under consideration. Here, a vector is a question, and the words of the question are its features. A dictionary-based approach such as WordNet::Similarity is used to calculate the similarity (relatedness) among the features <ns0:ref type='bibr' target='#b75'>[76]</ns0:ref>.</ns0:p><ns0:p>From the perspective of the n-dimensional vector space model, the soft cosine measures the semantic comparability between two vectors by capturing their orientation (the angle between them). Unlike the ordinary cosine similarity, the features are projected into the n-dimensional space in such a way that similar features lie close together with a very small angular difference. Meaningfully similar words (features) of the vectors (questions) therefore have minimal angle differences <ns0:ref type='bibr' target='#b76'>[77]</ns0:ref>, as shown in Fig. <ns0:ref type='figure' target='#fig_20'>11</ns0:ref>. The soft cosine is given in Eq. 9, where S ij is the similarity between features i and j, and p and q are the input question and the candidate correct question, respectively.</ns0:p></ns0:div>
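As a rough illustration of Eq. 9, the sketch below evaluates the soft cosine for two toy bag-of-words vectors with NumPy; the word-relatedness values in S are invented for the example, whereas the framework derives them from WordNet::Similarity:

import numpy as np

def soft_cosine(p, q, S):
    # Eq. 9: p and q are term-frequency vectors over the same vocabulary,
    # S[i][j] is the relatedness between vocabulary terms i and j.
    num = p @ S @ q
    den = np.sqrt(p @ S @ p) * np.sqrt(q @ S @ q)
    return float(num / den) if den else 0.0

# vocabulary: [difference, different, interface, implement]
S = np.array([[1.0, 0.8, 0.0, 0.0],   # 'difference' ~ 'different' (assumed relatedness 0.8)
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.1],
              [0.0, 0.0, 0.1, 1.0]])
p = np.array([1, 0, 1, 1])   # question using 'difference', 'interface', 'implement'
q = np.array([0, 1, 1, 1])   # question using 'different', 'interface', 'implement'
print(round(soft_cosine(p, q, S), 3))

The two questions share no exact match on their first terms, yet the soft cosine is about 0.94, while the plain cosine on the same vectors is only about 0.67, which is exactly the behaviour the framework relies on.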
<ns0:div><ns0:head n='4.3.3'>Iteration and Question Selection</ns0:head><ns0:p>To avoid information overload, only the questions whose similarity to the source question exceeds 50% are listed, up to ten of them, for the learner to choose from. This lets the learner focus on what is actually being sought rather than being overwhelmed by the much larger set of questions that would otherwise be recommended. Since the approach is probabilistic, the list may contain no question close to the learner's intention. In such a case, the learner selects from the recommended list the question nearest to the intended one, which gives the system better-informed data: the selected question acts as a seed for a further similarity search. Treating the selected (seed) question as the new input for the next search pass converges the search boundary and increases the homogeneity of the retrieved information, thereby reducing diversification. With every recommendation pass, the degree of concept-wise similarity increases, which in turn increases the range of similar questions and shifts the question suggestions closer to the learner's intention. The complete process is presented in Algorithm 1.</ns0:p></ns0:div>
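A minimal sketch of a single recommendation pass, with a simple word-overlap measure standing in for the soft cosine so the snippet stays self-contained (the candidate questions and the stand-in measure are illustrative only):

# Illustrative single pass of the iterative recommendation (threshold, top ten).
def recommend(question, candidates, similarity, threshold=0.5, top_n=10):
    scored = sorted(((similarity(question, q), q) for q in candidates), reverse=True)
    return [(round(s, 2), q) for s, q in scored if s > threshold][:top_n]

def overlap(a, b):
    # stand-in similarity for this demo only; the framework uses soft cosine (Eq. 9)
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

candidates = ["explain garbage collection in java",
              "does java have a destroy method",
              "what is an interface in java"]
query = "java not have destroy and how garbage collect"
# the 0.2 threshold mirrors the value the experiment uses for the word-overlap measures
print(recommend(query, candidates, overlap, threshold=0.2))
# if the intended question is absent, the learner's chosen seed becomes the new query
# and recommend() is called again for the next pass, narrowing the topic space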
<ns0:div><ns0:head>Algorithm 1: Finding the correct question as per learner intent</ns0:head><ns0:p>Input: incorrect question Wq; question corpus crp. Output: the intended question.</ns0:p><ns0:p>Label 1:
  concepts[] = get_concept(Wq)
  Selected_question[] = search_question(crp, concepts)
  Similar_correct_question[] = soft_cosine_similarity(Selected_question, Wq)
  for q in Similar_correct_question do
      similarity = score_similarity(q)
      if similarity > 0.50 then print q end if
  end for
  print 'input the question and abort/search'
  input q, status
  if status == 'Abort' then
      print q, 'is the intended question'
  else
      Wq = q
      goto Label 1
  end if</ns0:p><ns0:p>The server hosts the web application (WebApp) with the requisite HTML and Python files, the Flask framework and Python (version 3.8). Flask is a web application microframework that serves web pages over the network and handles the learner's input requests; it sits as a layer over Python for executing the processes. The model is implemented in Python and deployed in the WebApp as a Python file. The learner's interactions with the system are stored as experimental data in an SQLite database, which ships with Python by default.</ns0:p><ns0:p>The web server is connected to the client devices over the Internet or a LAN to exchange HTTP requests and HTTP responses, and the learner (client) interacts with the model through the webpage, as shown in Fig. <ns0:ref type='figure' target='#fig_20'>13</ns0:ref>.</ns0:p><ns0:p>Selection of the questions based on their concepts, followed by the similarity assessment, is carried out on the server. Three similarity assessment techniques, namely soft cosine, Jaccard and cosine similarity, are used to find the intended correct questions from the corpus; the three are run in parallel so that their performance on the given incorrect input questions can be compared. For this experiment, we used the complete training corpus (i.e., 2533 questions).</ns0:p><ns0:p>To select the probable correct questions from the recommended similarity list, a minimum similarity score of 0.5 is used as the threshold for soft cosine, while 0.2 is used for Jaccard and cosine. The Jaccard and cosine similarity techniques were found to return either no or very few (one or two) similar questions, which was not suitable for carrying out the experiment; moreover, in some cases the same question was returned on every consecutive pass when searching for questions similar to a given incorrect question. For these reasons, the similarity-score threshold for Jaccard and cosine was reduced to the lower value of 0.2, which produced enough output to carry out the experiment and to compare against the results of soft cosine.</ns0:p></ns0:div>
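For illustration, one recommendation pass could be exposed by the web application roughly as in the Flask sketch below; the route name, payload fields, and the placeholder recommend() are assumptions for the sketch, not the paper's actual code:

# Minimal Flask sketch of the server side (illustrative; not the paper's implementation).
from flask import Flask, request, jsonify

app = Flask(__name__)

CORPUS = ["explain garbage collection in java",
          "what is an interface in java"]      # stands in for the 2533-question corpus

def recommend(question, corpus):
    # placeholder ranking: the framework instead performs concept-based selection
    # followed by soft cosine scoring with the 0.5 threshold (see Algorithm 1)
    words = set(question.lower().split())
    return [q for q in corpus if words & set(q.split())][:10]

@app.route("/recommend", methods=["POST"])
def recommend_route():
    data = request.get_json(silent=True) or {}
    question = data.get("question", "")
    return jsonify({"suggestions": recommend(question, CORPUS)})

if __name__ == "__main__":
    app.run()   # served to learners over LAN/Internet, as in the web model of Fig. 12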
<ns0:div><ns0:head>Figure 13. User interface for learner interaction</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.4.2'>Learner Verification</ns0:head><ns0:p>The performance of the similarity-based recommendation framework in finding the intended question was verified by manual assessment carried out by a group of learners. A total of 34 students of the CSE department at Bengal Institute of Technology, studying Java in the 6th semester of their B.Tech degree program, were selected. The students chosen were low scorers in the subject. The rationale was to select learners who are aware of the Java language and its terminology but are neither experts nor strong in the subject, which made them suitable candidates because they were likely to compose incorrect questions.</ns0:p><ns0:p>Each student was instructed to input approximately three incorrect questions, totaling 100. For each question, three recommendations were made using the soft cosine, Jaccard and cosine similarity techniques, as shown in Fig. <ns0:ref type='figure' target='#fig_20'>13</ns0:ref>. If the student found the correct intended question, the iteration was stopped for the respective similarity technique. If the intended question was not found in the recommended list, the student chose from the list a question close to the intended one as a seed question, and another pass was run. If the intended question was still not found within three passes, the recommendation process for that similarity technique was stopped. Three similarity techniques were used so that their performance could be compared and the best of the three identified.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Result and Analysis</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.5.1'>Accuracy</ns0:head><ns0:p>The learner input and feedback on a total of 100 incorrect questions are shown in Table <ns0:ref type='table'>8</ns0:ref>. The learners' acceptance of the intended correct question found for each incorrect input question is summarized in Fig. <ns0:ref type='figure' target='#fig_16'>14</ns0:ref>; the summary records, for each of the three similarity-based recommendations, whether the learner found the intended question or not. Based on the learner input and the system feedback, the framework is evaluated with the accuracy metric. Accuracy is an intuitive performance measure: the ratio of correct observations to the total observations made, expressed as a percentage in Eq. 10. The accuracy results for learners accepting the recommended question show that the soft cosine similarity outperforms the cosine and Jaccard similarities. In the given experimental data set, the soft cosine based recommendation returned the correct result in two or more passes for 12 input questions, while for another 73 input questions it returned the result in a single pass. Thus, although the soft cosine similarity-based recommendation returns the intended question in one pass for most questions, needing two or more passes is sometimes unavoidable. Input questions lacking sufficient information cause the recommendation system to require multiple passes of learner interaction to reach the intended question, and the large size of the corpus may be another reason for the increased number of passes.</ns0:p></ns0:div>
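As a worked example of Eq. 10 for the soft cosine technique, using the counts reported above: the learners accepted a recommended question for A = 73 + 12 = 85 of the B = 100 input questions, so Accuracy = (85 / 100) x 100 = 85%.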
<ns0:div><ns0:head>Table 8. Similarity recommendation against learner questions</ns0:head><ns0:p>The results also show that for 15 input questions the soft cosine similarity-based recommendation fails to find the correct question matching the learner's intent. In the few cases where the words of the input question are highly scrambled or out of sequence, the soft cosine measure may fail to find the correct questions; in such cases the Jaccard similarity outperforms the soft cosine. Another contributing factor is the string length of the input question: if it is reduced to one or two words after stopword removal during preprocessing, the soft cosine based recommendation cannot isolate the exact intended question from the huge number of candidates within the limited number (three) of learner-interaction passes, and more interactions would perhaps have been needed. Besides these two structural issues with the input question, the soft cosine measure has some inherent limitations that cause the recommendation to miss questions close to the learner's intention. Although soft cosine is claimed to work well on word similarity, it does not handle multiple synonyms well when matching for similarity, and it fails to infer the common-sense meaning of a sequence of words or phrases when judging semantic similarity.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5.2'>Diversity and Evenness</ns0:head><ns0:p>With every iteration, the soft cosine technique converges the search towards questions on a particular topic, so the recommended questions become increasingly focused on the intent of the input question. To assess this effect, the recommended question lists produced in each pass by the three similarity assessment techniques are analysed for diversity and evenness. Diversity specifies how diverse the questions in the recommended list are in terms of topic, while evenness specifies how evenly the topic information (concepts) is distributed over the recommended list. The diversity and evenness of the recommended list in each pass are calculated with Shannon's diversity index (H) and Shannon's equitability (E H ), given by Eq. 11 and Eq. 12, respectively:</ns0:p><ns0:formula>H = -\sum_{i=1}^{n} P_i \ln P_i \quad (11)</ns0:formula><ns0:formula xml:id='formula_5'>E_H = \frac{H}{\ln S} \quad (12)</ns0:formula><ns0:p>where n is the number of topic categories, P i is the proportion of the i-th topic relative to the total count of topic occurrences over all questions in the recommended list, and S is the number of topic categories observed in the list. The evenness value lies between 0 and 1, with 1 denoting a completely even distribution. In the ideal situation, H ≈ 0 indicates that the topics in the recommended question list are not diverse and all recommended questions focus on one topic; similarly, E H ≈ 0 indicates zero dispersion of topics in the recommended list.</ns0:p><ns0:p>The changes in the diversity and equitability indices across passes for the incorrect question 'java not have destroy and how garbage collect' are discussed below. a) Each keyword in the source question denotes a concept, which in turn relates to a topic. The keywords of the question are used to select and group corpus questions belonging to the same topic domains, and the incorrect question is matched against the grouped questions with the soft cosine measure. The set of suggested questions returned by the soft cosine similarity in the first pass is shown in Table <ns0:ref type='table'>9</ns0:ref>. Each keyword in the recommended list reflects a concept, which contributes to the count of the respective topic; H and E H computed from these counts are given in Table <ns0:ref type='table'>10</ns0:ref>.</ns0:p><ns0:p>b) The learner chooses from the recommended list the question closest to her intent, 'explain garbage collection in java programming', as the seed question for further searching. c) In the second pass, questions on the same topics are again selected and grouped from the corpus based on the keywords of the seed question. The set of suggested questions returned by the soft cosine similarity for the selected seed question is shown in Table <ns0:ref type='table' target='#tab_1'>11</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>9</ns0:ref>. Suggested similar questions from the first iteration (pass 1) Table <ns0:ref type='table'>10</ns0:ref>. Diversity and evenness measures from pass 1</ns0:p></ns0:div>
<ns0:div><ns0:p>Based on the individual topic counts and the total topic count, H and E H are calculated for this list as well, as given in Table <ns0:ref type='table'>12</ns0:ref>. The diversity index falls from H = 1.02 in pass 1 to H = 0.85 in pass 2, which means that the diversity of topic information in the recommended list decreases from pass to pass: the search space converges, letting the learner stay focused and giving better options for selecting a question from the list. Likewise, the evenness falls from E H = 0.985 in pass 1 to E H = 0.781 in pass 2, so the unevenness of the topic distribution among the questions increases; the intended topic occupies a growing share of the list, which raises the probability of finding the right question.</ns0:p></ns0:div>
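The two indices can be computed directly from the topic labels of a recommended list; a minimal sketch follows, where the topic labels are invented and S is taken as the number of distinct topic categories, which is consistent with the values reported in Tables 10 and 12:

import math
from collections import Counter

def shannon_diversity(topics):
    # H = -sum(P_i * ln P_i), Eq. 11, over the topic labels of a recommended list
    counts = Counter(topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def shannon_equitability(topics):
    # E_H = H / ln S, Eq. 12, with S taken as the number of distinct topic categories;
    # a single-topic list is reported as 0, matching the convention used in Table 13
    s = len(set(topics))
    return shannon_diversity(topics) / math.log(s) if s > 1 else 0.0

# hypothetical topic labels of the questions recommended in one pass
pass_topics = ["garbage collection", "garbage collection", "finalize", "destructor"]
print(round(shannon_diversity(pass_topics), 2), round(shannon_equitability(pass_topics), 2))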
<ns0:div><ns0:head>Table 12. Diversity and evenness measures from pass 2</ns0:head><ns0:p>The keyword-based selection and grouping of questions from the corpus eliminates otherwise irrelevant questions and thereby restricts the search to a reduced topic space. The soft cosine based similarity then shrinks the search further to the questions that are meaningfully close to the learner's intent, decreasing the diversity.</ns0:p><ns0:p>From the results, a sample of nine questions that went through two iterations with the soft cosine similarity was considered. Table <ns0:ref type='table'>13</ns0:ref> shows the diversity and evenness calculated on the topic information of the recommended question list obtained after each pass, for each of the three similarity assessment techniques, for a given question. Here, diversity and evenness equal to 0 indicate that the suggested questions all belong to the same topic. For some questions the similarity-based search led the learner to the intended question in the first pass, making the second pass not applicable (NA). The table makes clear that with every pass the diversity of the list recommended by soft cosine decreases in comparison with the other techniques. We therefore conclude that as the search iterations progress the search space becomes narrower, i.e., the search converges, keeping the results focused on the intended topic and helping the learner reach the intended question quickly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 13. Diversity index and equitability on recommended questions</ns0:head></ns0:div><ns0:div><ns0:head n='5'>Conclusions</ns0:head><ns0:p>In online learning, much emphasis is placed on developing and structuring content so that it is attractive and motivating to learners, while learner-expert interaction is less encouraged because of its high cost and the difficulty of managing peer-to-peer support. Questions are one of the key forms of natural language interaction with computers and give the learner a broader way of interacting with them, so composing the question correctly is essential. A rightly composed question allows a clear understanding of what the learner wants to know, whereas an incorrectly composed question raises ambiguity and diversions and results in incorrect information being retrieved, which often misleads the learner. To determine the intent and objective, and hence the semantics, of a question, it is important to know whether the question is composed correctly. Verifying whether a question is framed correctly, in both syntax and semantics, before it is used for information retrieval therefore increases the accuracy of the retrieval.</ns0:p><ns0:p>This paper proposes an approach for assessing the validity of the framing of a question and of its semantics. A tri-gram based language model is used to assess question correctness in terms of syntax and semantics. The model outperforms the other n-gram approaches, confirming that the tri-gram performs best for assessing the questions: it exhibits an accuracy of 92%, well above the accuracy of the 2-gram, 4-gram and 5-gram models on the same test data.</ns0:p><ns0:p>The work also proposes an interactive framework for correct question recommendation, which uses a soft cosine based similarity technique to recommend the correct question to the learner. The proposed framework is assessed on learner questions and compared with other similarity assessment techniques, viz. cosine and Jaccard. The soft cosine similarity technique recommends the correct question considerably better than the other two, achieving an accuracy of 85%. In the case of multi-pass interaction, as the number of passes increases the information diversity is reduced and the search converges quickly to the intended question.</ns0:p><ns0:p>In conclusion, incorporating the presented work in an interactive OLS will not only improve the performance of the system significantly but will also enhance learner satisfaction and learning focus, leading to a boosted quality of learning. </ns0:p></ns0:div>
<ns0:div><ns0:head>Rule-based approach</ns0:head><ns0:p>The rule-based approach applies linguistic rules devised by a linguistic expert to assess a sentence and find the errors. It includes NLP techniques, tree parsing, etc. [27]</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical approach</ns0:head><ns0:p>The statistical approach uses statistical and modelling techniques to learn the patterns present in text and infer knowledge from them. It includes techniques such as machine learning, pattern matching and mining.</ns0:p><ns0:p>Each approach has its own advantages and shortcomings in detecting errors in text. Since no single technique is competent enough on its own to identify all the errors, the techniques are often combined in a hybrid approach so that they compensate for each other's limitations <ns0:ref type='bibr' target='#b53'>[54]</ns0:ref> [55] <ns0:ref type='bibr' target='#b55'>[56]</ns0:ref></ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>Sentence structure error: The error in a sentence generates due to different organization of POS components in a sentence.  Spelling error: The error which generates due to wrong spelling of words or meaningless strings in sentence.  Syntax error: The error in sentence due to wrong/violation of grammar. The syntax error is of the following types: o Subject-verb error o Article or determiner error o Noun number error o Verb tense or verb form error o Preposition error  Punctuation error: The error in sentence which generates due to misplacing or missing of punctuation marks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The details of the question datasets are as following:Number of questions in training dataset: 2533 (all correct)  Number of questions in testing dataset: 634  Number of correct questions in testing dataset: 334  Number of incorrect questions in testing dataset: 300 3.3.1.2 Data Preprocessing for Language Model Generation As the collected questions consisted of many redundancies and anomalies, we preprocessed them PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The flow diagram for identifying correct and incorrect questions</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>( 8 )</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>𝐶 𝑑 = (2 -(𝐸 𝑠𝑦 + 𝐸 𝑠𝑚 )) × 50 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Accuracy comparison of the four n-gram approaches</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021) Manuscript to be reviewed Computer Science problem.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>shows the share of the number of PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Returned similar questions belonging to different topics by Jaccard similarity</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. The proposed framework for correct question suggestion to the learner</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:1:1:NEW 11 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>( 9 )</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>𝑆𝑜𝑓𝑡_𝑐𝑜𝑠𝑖𝑛𝑒(𝑝,𝑞) = ∑ 𝑁 𝑖,𝑗 𝑆 𝑖𝑗 𝑝 𝑖 𝑞 𝑗 ∑ 𝑁 𝑖𝑗 𝑆 𝑖𝑗 𝑝 𝑖 𝑝 𝑗 ∑ 𝑁 𝑖𝑗 𝑆 𝑖𝑗 𝑞 𝑖 𝑞 𝑗</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 11 . 4 . 4</ns0:head><ns0:label>1144</ns0:label><ns0:figDesc>Figure 11. Comparison between cosine and soft cosine A) orientation between vectors in cosine similarity B) orientation between vectors in soft cosine measure</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>3Label 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>https://flask.palletsprojects.com/en/1.1.x/ concepts[] = get_concept(Wq) Selected_question[] = search_question(crp, concepts) Similar_correct_question[] = soft_cosine_similarity(Selected_question, Wq)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. The web (client/server) model used to implement the proposed framework</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Comparing the correct question recommendation based on three similarity metrics A) Soft-cosine B) Cosine C) Jaccard</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>A is the number of observations made where learner finds the correct intended question. B is the total number of questions taken for observation.The overall accuracy result of the framework corresponding to the soft cosine, Jaccard and cosine similarity techniques is shown in Fig.15.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Accuracy comparison for similar question recommendation of three similarity measures</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 1 Layout</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,166.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,250.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,525.00,217.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,240.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,525.00,128.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,419.25,125.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,178.87,525.00,175.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,178.87,525.00,351.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,179.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,178.87,525.00,159.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,178.87,525.00,213.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='49,42.52,178.87,525.00,117.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Similar questions returned by Jaccard similarity for the learner question 'what different are interface implement'</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Suggested similar questions from the second iteration (pass 2)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Comparison with the result of [31]; n-gram probabilities for the word sequence 'what different type operator use Java'; and similar questions on 'interface' (continuation of Table 6).</ns0:figDesc><ns0:table>
Comparison with the result of [31]:
  Metric    | Proposed approach (%) | Result of [31] (%)
  Accuracy  | 92.11                 | 83.33
  Precision | 94.00                 | 94.11
  Recall    | 89.80                 | 80.00
  F1-Score  | 91.88                 | 86.48

N-gram probabilities for 'what different type operator use Java':
  Unigram  : what 0.069, different 0.007, type 0.008, operator 0.006, use 0.008, Java 0.042
  Bi-gram  : what different 0.034, different type 0.157, type operator 0.023, operator use 0.067, use Java 0.024
  Tri-gram : what different type 0.294, different type operator 0.117, type operator use 0.333, operator use Java 0.166

Similar questions on 'interface' (continuation of Table 6):
  23 What interface must an object implement before it can be written to a stream as an object?
  24 What is applet stub interface?
  25 How interface is different from a class.
  26 What is an interface?
  29 What is interface?
  30 How interface is different from class?
  52 What is the interface of legacy?
  53 What is different between iterator and listiterator?
  54 What are different collection views provided by map interface?
  55 What is comparable and comparator interface?
  56 What will happen if one of the members in the class doesn't implement serializable interface?
  57 What is serializable interface in Java?
  58 What is externalizable interface?
</ns0:table></ns0:figure>
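The unigram, bigram, and tri-gram probabilities shown above are maximum-likelihood estimates of the kind sketched below; the toy corpus is made up, so the numbers do not reproduce the table, and the actual model is trained on the full 2533-question corpus with a backoff scheme:

from collections import Counter

def mle_trigram_prob(corpus_tokens, w1, w2, w3):
    # P(w3 | w1 w2) = count(w1 w2 w3) / count(w1 w2), estimated from the corpus
    trigrams = Counter(zip(corpus_tokens, corpus_tokens[1:], corpus_tokens[2:]))
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)] if bigrams[(w1, w2)] else 0.0

corpus = "what is different type operator use java what is interface".split()
print(mle_trigram_prob(corpus, "is", "different", "type"))   # 1.0 in this toy corpus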
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Recommended list of questions and their topics retrieved using Jaccard similarity for the incorrect input question 'access modifier in Java'. Associated row fragment: 'Why java is important to the internet' 1(0.81) 1(0.66) 1(0.66). Legend: I S : insufficient information; E G : grammatical error; E S : sequential error; NF: not found.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='1'>https://www.nltk.org/</ns0:note>
<ns0:note place='foot' n='2'>http://www.speech.sri.com/projects/srilm/manpages/ngram-format.5.html</ns0:note>
</ns0:body>
" | "22nd Feb, 2021
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments.
We thank you and all the reviewers for the generous comments on the manuscript and valuable suggestions for improvement.
We have edited the manuscript significantly to address all the concerns. The point-by-point response to the editor’s and reviewers’ comments are given below in a tabular format.
We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
Best regards,
Pijush Kanti Dutta Pramanik
Dept. of Computer Science & Engineering
National Institute of Technology, Durgapur, India
(On behalf of all authors)
Response to Editor’s Comment
Editor’s comments
Response
A more comprehensive and clearer conclusion is expected.
Conclusion is updated.
The paper can be further improved and proper analysis can be shown to prove the strength of the approach.
The experiments are discussed and analysed in detail and compared with other methods/works wherever applicable.
More description of the technical details will help to improve the quality.
The paper is significantly revised, addressing the editor’s suggestion.
Improving the presentation to emphasize the author's goal will help to improve the quality of the final paper in the final camera-ready version.
The Introduction is significantly rewritten to explicitly specify the scope of the work.
Including some discussions with existing method which could prove the proposed method's effectiveness will improve the quality of the paper.
The experiments are elaborately discussed and compared with other method/work wherever applicable.
The approach can be discussed with some other existing techniques.
Sentences / English polishing will help to improve the quality of the final paper in the final camera-ready version.
The manuscript is thoroughly rechecked for English and grammatical mistakes.
The references in this manuscript are somewhat out-of-date. Include more recent research in this field.
Several recent references are included.
Response to Reviewer’s Comments:
Reviewer 1:
Reviewer comments
Response
Basic reporting
The first thing I noticed in the paper is its main objective, which is not specific and so general to study. As well, the researcher does not determine the scope of the study. For example: In this paper, propose a novel method to assess the correctness of the user query, in terms of syntax and semantics. The question is: “Where is this paper conducting?”
The abstract is modified. The purpose of the work is written clearly.
Secondly, the researcher did not mention any of the references at the end of the paragraph in the first chapter (introduction), and it is so important to mention the name followed with the year of the reference, which these information are taken from it. For example: Online learning systems (OLSs) have brought great advantages to all kind of formal and informal learning modes. Over the years, OLSs have evolved from simple static information delivery systems to interactive, intelligent, and context-aware learning systems, virtually incorporating real-life teaching and learning experience. In today's OLSs, much of the emphasis is given on designing and delivering learner-centric learning, in terms of the learning style, learning approaches, and progress of a particular learner.
Suitable references are included in the Introduction section.
Experimental design
A shortage of information and data regarding the participated sample conducted in this study including its sample number or any specific information related to them. I did not find that in the methodology section. And in the abstract section, the researcher only briefly mentions it. For example: A trigram language model is built and trained for assessing the correctness of learners' queries on Java.
The details of the data collection and corpus creation are presented in Section 3.3.1.1. It is mentioned in the abstract as well.
The details of the experiment of the proposed method to guide the learner to a correct question (if she entered an incorrect question) along with the sample size of learners and the tested incorrect questions are given in Section 4.4.
The researcher does not explain the web model (client or server) well, as this model is used in the experimental procedure in this paper. For example, the proposed framework for the correct question is implemented as a web model (client/server) for experimentation and performance analysis. And this paper also does not explain the reasons behind choosing this model for this paper.
The web model used in the experiment and the reason for choosing the model are explained in Section 4.4.1.
There are concepts in this paper that do not have a specific definition or explanation that makes this paper hard to understand and appears too complex to read. Such as the Softcosine technique. This technique does not have enough information about how it works, and how to benefit from it to achieve the paper objective
The soft cosine is elaborated in Section 4.3.2. Some references where soft cosine is used are also included.
Validity of the findings
A lot of significant information mentioned in this paper but without a clear and specific sequence, which makes the paper hard to understand and the information are not found effectively. And that means there is no benefits from most of the results of this paper.
The layout of the scope of the proposed work and its interaction with the query-based learning system has been shown in Figure 1. This should clarify the scope of the work. In addition to this, newly introduced Figure 5 and Figure 10 provide the details flow of the works presented in Section 3 and Section 4.
Every section and subsection are introduced with the purpose and contents.
The research questions, objectives do not clearly formulate in this paper.
The research objectives and motivations are explicitly mentioned in Introduction section.
No discussion section for the results found and comparing to what was discovered in the previous studies regarding the paper issue.
We have compared our correctness assessment method with a recent work (reference given). The comparative analysis proves the superiority of our method.
However, despite our best efforts, we could not find any related work on recommending correct questions that could be compared with our proposed work. Nevertheless, the result analysis of the experiment is discussed in detail in Section 4.5.
Comments for the Author
The whole paper should be rearranged with specific main titles and sub-titles. And each title must completely explain all the significant information that may benefit in this section.
The titles and subtitles are revised wherever applicable. The arrangement of the sections and subsections are rechecked.
The research question, objective, and sub-objectives and (hypothesis of the research paper if needed) should formulate.
The research objectives and motivations are explicitly mentioned in Introduction section.
More in-depth information about the introduction and background of the research issue in the first chapter should be mentioned, to give the reader more information and data about the research issue.
The Introduction has been rewritten significantly, addressing the reviewer’s concerns.
The layout of the scope of the proposed work and its interaction with the query-based learning system has been shown in Figure 1.
The sample method and model used in this research paper should discuss and explain in what way can be beneficial in achieving the research paper objectives.
This paper has two main objectives: a) to assess the correctness of a learner’s input question and b) if the question is not correct then suggest the learner some probable correct question.
The methods and models to attain both the objectives are elaborately presented in Section 3.3 and 4.3, respectively.
It should mention all the models used in this paper in the methodology section and the framework of this research paper should be explained in more detail by determining the dependent and in-depended variables for the research paper.
Every section and subsection are introduced with the purpose and contents. Suitable figures and flow-diagrams are given to make the contents more readable and clearer. The proposed frameworks are illustrated with suitable figures.
The researcher must discuss the results with the previous outcomes found in past related studies.
We found only one work that somehow matches with the complete work of this paper. The limitations of this work are mentioned in details in Section 2.
We have compared our correctness assessment method with a recent work (reference given). The comparative analysis proves the superiority of our method.
However, despite our best efforts, we could not find any related work on recommending correct questions that could be compared with our proposed work. Nevertheless, the result analysis of the experiment is discussed in detail in Section 4.5.
It is so important to determine the scope of the study to make the research more beneficial.
The scope of the study is discussed in Introduction section.
All references must be cited in the paper, if there is a piece of information it must be cited for easy checkout.
All the reference are cited.
It must formulate some recommendations before the conclusion section, to contribute in improving the educational system using the results found in this paper.
The use and the need of the proposed work is mentioned in Introduction as well as in Conclusion sections.
Long sentences:
• Page 6 line 52: “Over the years, OLSs have evolved from simple static information delivery systems to interactive, intelligent, and context-aware learning systems, virtually incorporating real-life teaching and learning experience.”
• Page 6 line 61: “To achieve advanced learning skills like analyzing, evaluating, creating, and applying, a higher level of interactions like discussion, hands-on experiments, exchanging views with experts, etc. are required.”
• Page 16 line 399: “Further analysis of these false-negative questions reveals that most of them after preprocessing and stopword removal the question length is reduced to less than three.”
• Page 16 line 417: “Similarly decreasing N leads to word sequence pattern search at lower order, and this restricts the probability of correctness of the word sequences at higher orders. This typically lowers accuracy.”
• Page 18 line 459: “Failing to determine which parts of the question and thus their semantics are aligned to learner's intent lead to ambiguity in identifying the parts of a sentence to be taken correctly for similarity matching.”
The sentences are checked and restructured wherever doable.
Better words:
• Page 6 line 57: “one key aspect of an OLS is interactivity”
o One key aspect of an OLS is interaction.
• Page 6 line 58: “But, despite the advantages, due to high-cost factor and complexity, contents developed for OLSs have limited or no interactivity.”
o Limited or no interaction.
• Page 6 line 66: “The best option is to opt for a question-answer based OLS”
o The best option is to choose a question-answer based OLS.
• Page 7 line 73: Fundamentally, these systems process the input query to parse its structure and semantics to understand the intention of the query.
o Fundamentally, these systems process the input query to determine its structure and semantics to understand the intention of the query.
• Page 7 line 82: “For instance, more often than not, the non-native English-speaking people having poor knowledge of English find it difficult to compose queries in English.”
o Most often.
• Page 9 line172: Error checking in the text is a problem since long back.
o Error testing the text has been a problem for a long time ago.
• Page 14 line 345: “The correctness of a question is estimated based on its syntactical and semantic aspects and accordingly is classified as correct or wrong.”
o As right or wrong.
We sincerely thank the reviewer for suggesting the corrections/alternatives. All the sentences are corrected as per suggestion.
World abbreviation:
• Page 9 line 158: what’s POS refer to?
Thanks for pointing out. The full form is given in the text.
Citation:
• It need more citation, there is no citation on most of the paper.
• Page 9 line 196: “Pattern recognition is one of the successful ways of detecting errors in the textual sentence. In [3], a learning model is generated from labelled sequential patterns of both correct and wrong sentences.”
o better citation.
• Page 10 line 205: “To detect the word usage error bidirectional long short-term memory (LSTM) model is proposed in [5].”
o better citation.
• Page 17 line 448: “Cosine and Jaccard similarity techniques are the two text-based similarity approach which has been widely incorporated for finding similar text.”
o Need citation.
As suggested, references are added where applicable.
The Related Work section is completely rewritten.
Reviewer 2 (Ammar Abdallah):
Reviewer comments
Response
Basic Reporting
Improve English:
1. Who is 'her' in this paragraph?
'The proposed model has exhibited 92% accuracy while assessed on the test data. Furthermore, in case the query is not correct, we also propose an approach to guide the user leading to a correct question complying her intent.'
Here, ‘her’ refers to ‘learner’.
2. Not recommended to use 'more often than not' in a scientific paper.
Thanks for pointing out. Corrected.
3. In the context of this paper: 'interactivity' is not clear and fuzzy. Provide a coherent definition. 'Like every learning process, one key aspect of an OLS is interactivity,'
Corrected.
4. 'Insufficient domain knowledge also leads to frame a wrong question' INSTEAD OF 'Insufficient domain knowledge also leads to wrong framing of a question.'
Thanks for the suggestion. Corrected.
5. The title of this section should be rephrased '1.2 Plausible Way Outs and their Limitations'. It is not suitable for a scientific paper.
The title is renamed.
Experimental design
Add references:
This section '1.2 Plausible Way Outs and their Limitations' is describing some limitations in NLP and Pattern matching. However, no references that support these limitations.
Suitable references are added to support the limitations.
'1.3 Proposed Solution' needs to be designed based on facts supported with references.
The limitations of the existing technologies and works are mentioned in Introduction section. The Proposed Solution Approach is completely rewritten.
The Related Work section is also completely rewritten.
It would be interesting to add references in 2020
References from 2020 are added wherever applicable. The Related Work section is completely rewritten.
Is 'Table 1' a research result of another research paper? If yes, then add the reference. If no, then explain how this table was found. Example: did the author conduct a content analysis? what is the methodology so that the results in this table can be supported.
Table 1 summarizes the prominent works for detecting the sentence structure error, syntax error and semantic errors along with the approaches adopted. We analyzed each paper and categorized them based on error type and resolving approach.
Validity of the findings
no comments
Comments for the Author
no comments
Reviewer 3:
Reviewer comments
Response
Basic reporting
In the abstract there is a confusion of terms like method, approach, or framework.
The method and approach are used in different contexts. The term ‘framework’ is used to denote the proposed model for suggesting the learner the correct question.
In the abstract there is an ambiguity in the word 'Java'.
Clarified.
Abstract does not include the source of dataset used in the paper.
The details of the data collection are presented in Section 3.3.1.1. It is mentioned in the abstract as well.
The problem statement and the objective is not defined clearly.
The research objectives and motivations are explicitly mentioned in Introduction section.
The missing of the phase model training of the proposed model in the proposed method.
In the proposed method (Section 3) we did not use any such machine learning based training model, rather the corpus building procedure can be considered as the training phase.
The measure of the correctness of the query is not enough and not clear.
The correctness measure of a question is explained further in Section 3.3.2.4.
Experimental design
In line 299 the sentence 'we corrected a total number of 2533 questions of core-Java from 300 different websites and books'; it is not clear how the questions are corrected.
Thanks for pointing out the mistake. It should be ‘collected’. The sentence is corrected.
In line 351, the phrase 'backof model'; there is no such model in the literature!
It is Backoff. Appropriate references are given.
In the section 4.3 line 525 to 533 the justification of the using of soft-cosine similarity is unclear.
The justification of the using of soft-cosine similarity is elaborated in Section 4.3.2.
The authors must refer to the following reference 'Soft similarity and soft cosine measure: Similarity of features in vector space model'
Thanks for the suggestion. The paper is cited.
Section 4.4: there is no justification of using thresholds 0.5, 0.2, 0.2 for soft-cosine, Jaccard and Cosine respectively.
The justification is given in Section 4.4.1.
Table 8, line 658; need more justifications and discussions on how the passes changes the values of the diversity and evenness measures. and the meaning of these values.
The meanings of the diversity and evenness values are specified. The changes in these values across the passes are extensively explained in Section 4.5.
Validity of the findings
No comment
" | Here is a paper. Please give your review comments after reading it. |
94 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In an interactive online learning system (OLS), it is crucial for the learners to form their questions correctly in order to be provided or recommended appropriate learning materials. An incorrectly formed question may confuse the OLS, resulting in the provision or recommendation of inappropriate study materials, which in turn affects the learning quality, the learning experience, and learner satisfaction. In this paper, we propose a novel method to assess the correctness of the learner's question in terms of syntax and semantics. Assessing the learner's query precisely will improve the performance of the recommendation. A tri-gram language model is built and then trained and tested on corpora of 2533 and 634 questions on Java, respectively, collected from books, blogs, websites, and university exam papers. The proposed method has exhibited 92% accuracy in identifying a question as correct or incorrect. Furthermore, in case the learner's input question is not correct, we propose an additional framework to guide the learner towards a correct question that closely matches her intended question. For recommending correct questions, soft cosine based similarity is used. The proposed framework is tested on a group of learners' real-time questions and observed to achieve 85% accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Online learning systems (OLSs) have brought great advantages to all kinds of formal and informal learning modes <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> [2] <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Over the years, OLSs have evolved from simple static information delivery systems to interactive, intelligent <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, and context-aware learning systems <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> that virtually incorporate the real-life teaching and learning experience <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. In today's OLSs, much of the emphasis is placed on designing and delivering learner-centric learning <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> in terms of the learning style, learning approaches, and progress of a particular learner <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. As in every learning process, one key aspect of an OLS is interaction, which makes learning more practical and dynamic <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. Despite these advantages, because of the high cost and complexity involved, the content developed for OLSs offers limited or no interaction. Basic (one-way) interaction is incorporated in most OLSs through demonstration or illustration, which is useful for elementary learning goals such as remembering and comprehending. Achieving advanced learning skills such as analyzing, evaluating, creating, and applying requires a higher level of interaction, such as discussion, hands-on experiments, and exchanging views with experts <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>. The best possible form of interaction in an OLS is real-time interaction between the learner and the expert/trainer <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref> <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. In the absence of audio-video based interaction, the best option is a question-answer based OLS <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>, since questions are the most natural way a human enquires about information.</ns0:p><ns0:p>Interacting with a computer through natural language and making it interpret the meaning of the communicated text carries many implicit challenges of human-computer interaction. To address them, this work pursues two main objectives. a) To assess the correctness of the learners' questions:</ns0:p><ns0:p> We built two sets of corpora comprising 2533 (for training) and 634 (for testing) questions on core Java.</ns0:p><ns0:p> We generated a tri-gram language model.</ns0:p><ns0:p> We created a classifier to identify the correct and incorrect questions based on the tri-gram language model.</ns0:p><ns0:p> The classification is evaluated on the test corpus data.</ns0:p><ns0:p> The efficacy of the classifier was compared with other n-gram models as well as with other research works.</ns0:p><ns0:p>
<ns0:p>b) To address the issue of incorrect question:</ns0:p><ns0:p>• We proposed a framework for suggesting correct questions to the learner.</ns0:p><ns0:p>• We designed a web-based client/server model to implement the framework.</ns0:p><ns0:p>• The efficacy of the framework is assessed by a group of learners.</ns0:p><ns0:p>• The proposed similarity model used in the framework is compared with other existing similarity measures.</ns0:p><ns0:p>• The performance of the framework is assessed by Shannon's diversity and equitability indices.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.6'>Paper Organization</ns0:head><ns0:p>Section 2 reviews related work, discussing the different error-checking methods and their limitations. Section 3 presents the methodology for assessing the correctness of the learners' questions. Guiding the learner to the correct question is presented in Section 4. The experiments and result analyses of both proposed methods are discussed separately in their respective sections. Section 5 concludes the paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Related Work</ns0:head><ns0:p>Identifying the correctness of a question is related to determining the errors in the sentential text. Sentential errors are not limited to the semantics of the text; they also include other types of errors such as the wrong usage of words, spelling mistakes, misplaced punctuation marks, grammatical errors, etc. Madhavi et al. <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref> categorized the errors in a sentence as:</ns0:p><ns0:p>• Sentence structure error: The error that arises due to an incorrect organization of the POS components in a sentence.</ns0:p><ns0:p>• Spelling error: The error which is generated due to the wrong spelling of words or meaningless strings in a sentence.</ns0:p><ns0:p>• Syntax error: The error caused by wrong or inappropriate use of the language grammar.</ns0:p><ns0:p>• Punctuation error: The error caused by missing, extra, or misplaced punctuation marks.</ns0:p><ns0:p>• Semantic error: The error that makes the sentence senseless or meaningless due to the wrong choice of words and their placement.</ns0:p><ns0:p>Among these five error types, detecting sentence structure errors, syntax errors, and semantic errors is the most significant for judging the correctness of a question sentence used in a query-based interactive online recommendation system. Different approaches and strategies are found in the literature for detecting the different types of errors in a textual sentence. These error detection approaches can be categorized as rule-based, statistical, and hybrid approaches <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. The categories adopted by notable research works for detecting the significant errors in a textual sentence are shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1. Related work categorization based on error type and resolving approach</ns0:head><ns0:p>It is seen that the rule-based approach has been quite effective in detecting sentence structure error, syntax error, and punctuation error. While, the statistical approach works well to find the structure errors, spelling errors, and semantic errors (word usage and placement error). Most of the research works for detecting an error in a textual sentence are limited to word ordering error, wrong usage of words, word collocation errors, and grammatical errors in a sentence.</ns0:p><ns0:p>The sentence structure errors due to the disarrangement of words (misplaced words) and incorrect organization of the sentence's POS components have been mitigated differently. A rulebased approach was used by Malik et al. <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref> by applying POS identification and NLP production rule to check the grammatical error in the sentence. Chang et al. <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref> proposed a rulebased database approach to detect word error, word disorder error, and missing word error. Similarly, Lee et al. <ns0:ref type='bibr' target='#b27'>[28]</ns0:ref> manually created a list of 60 rules to detect sentence structure errors. In another approach, Tezcean et al. <ns0:ref type='bibr' target='#b28'>[29]</ns0:ref> proposed a rule-based dependency parser that queries a treebank for detecting sentence structure error. In the statistical approach, n-gram based <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref> and machine learning based <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref> techniques are followed to determine the errors. Islam et al. <ns0:ref type='bibr' target='#b31'>[32]</ns0:ref> proposed sequence to sequence learning model which uses encoder-decoder architecture for resolving missing word error and incorrect arrangement of words in the sentence. The decoder is a recurrent neural network (RNN) along with long and short-term memory (LSTM) for decoding the correct substitute for grammatical errors. Sun et al. <ns0:ref type='bibr' target='#b32'>[33]</ns0:ref> followed a hybrid approach to resolve the sentence structure error. They used an NLP-based POS tagging and parse tree to determine the features of an incorrect sentence and then classified for grammatical error using the classifiers like support vector machine (SVM) and Naïve Bayes (NB).</ns0:p><ns0:p>The syntax errors are due to wrong or inappropriate use of language grammar. Over the years, different approaches (e.g., rule-based, statistical, and hybrid) have been explored in research works. For syntax error detection, rule-based techniques like the NLP production rule <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref>, rulebased database approach <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref>, and rule-based dependency parser <ns0:ref type='bibr' target='#b28'>[29]</ns0:ref> have been found to be applied. Othman et al. <ns0:ref type='bibr' target='#b33'>[34]</ns0:ref> proposed a model based on a set of Arabic grammatical rules and regular expressions. Among the different statistical techniques, the use of neural networks was found very effective in determining syntax error <ns0:ref type='bibr' target='#b34'>[35]</ns0:ref>. 
Different advanced variations of a neural network like bi-directional RNN with bidirectional LSTM <ns0:ref type='bibr'>[36] [37]</ns0:ref>, neural sequence to sequence model with encoder and decoder <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref>, etc., are proposed for error detection in a sentence. Sonawane et al. <ns0:ref type='bibr' target='#b39'>[39]</ns0:ref> introduced a multilayer convolution encoder-decoder model for detecting and correcting syntactical errors. Besides neural networks, another machine learning technique like SVM <ns0:ref type='bibr' target='#b40'>[40]</ns0:ref> is also found to be used for detecting syntax errors. The features that are considered for learning by various machine learning approaches are prefix, suffix, stem, and POS of each individual token <ns0:ref type='bibr' target='#b41'>[41]</ns0:ref>. The error detection and correction are often carried out at the individual token level of each sentence <ns0:ref type='bibr' target='#b42'>[42]</ns0:ref>. Besides the rule and statistical-based approach, hybrid approaches are also followed for syntax error detection, thereby taking the advantages of both approaches. Kao et al. <ns0:ref type='bibr' target='#b43'>[43]</ns0:ref> used NLP and statistical methods to detect collocation errors. Sentences were parsed to find the dependency and POS of every word in the sentence. Subsequently, the collocation was matched through a collocation database to find errors. Similarly, Lee et al. <ns0:ref type='bibr' target='#b44'>[44]</ns0:ref> applied rule-based and n-gram based techniques for judging the correctness of a Chinese sentence. A total of 142 expert-made rules were used to check the potential rule violation in the sentence, while the n-gram method determines the correctness of the sentence.</ns0:p><ns0:p>The semantic error detection has largely carried out by statistical approach using techniques like n-gram methods or machine learning. The use of RNN is quite popular in semantic error detection <ns0:ref type='bibr' target='#b45'>[45]</ns0:ref>. Zheng et al. <ns0:ref type='bibr' target='#b46'>[46]</ns0:ref> and Yeh et al. <ns0:ref type='bibr' target='#b47'>[47]</ns0:ref> used an LSTM-based RNN to detect errors like redundant words, missing words, bad word selection, and disordered words. While, Cheng et al. <ns0:ref type='bibr' target='#b48'>[48]</ns0:ref> proposed conditional random fields (CRF) models to detect word ordering error (WOE) in textual segments. Zan et al. <ns0:ref type='bibr' target='#b49'>[49]</ns0:ref> proposed syntactic and semantic error detection in the Chinese language by using BERT, BiLSTM, and CRF in sequence. Similarly, Agarwal et al. <ns0:ref type='bibr' target='#b50'>[50]</ns0:ref> applied LSTM neural network architecture to make an error detection classifier for detecting two types of error -syntax and semantic error like repeated word error, subject-verb agreement, word ordering, and missing verb are identified. For detecting a grammatical error with a long sentence, Rei et al. <ns0:ref type='bibr' target='#b51'>[51]</ns0:ref> proposed a neural sequence labeling framework. The authors found bi-directional LSTM outperforms other neural network architecture like convolution and bidirectional recurrent. Shiue et al. <ns0:ref type='bibr' target='#b52'>[52]</ns0:ref> claimed that among the other classifier, the decision tree yields better performance for morphological error and usage error. Yu et al. 
<ns0:ref type='bibr' target='#b53'>[53]</ns0:ref> proposed an SVM model for detecting errors such as adverb, verb, subject, and object ordering and usage errors, prepositional phrase errors, and pronoun and adjective ordering errors. In <ns0:ref type='bibr' target='#b54'>[54]</ns0:ref>, it is found that a supervised ensemble classifier -Random Feature space using POS tri-gram probability -offers better performance for semantic error detection in comparison to other supervised classifiers. Ferraro et al. <ns0:ref type='bibr' target='#b55'>[55]</ns0:ref> treated the different grammatical errors like sentence structure, syntax, and semantic errors as collocation errors; a collocation match in a corpus would then be able to detect such errors. Besides machine learning models, a statistical model based on sequential word pattern mining has been quite effective in detecting grammatical errors <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref>. Statistical modeling and machine learning, though easy to implement, are sometimes outperformed by rule-based techniques. In <ns0:ref type='bibr' target='#b56'>[56]</ns0:ref> [33], it is found that rule-based techniques for detecting grammatical errors yield better results for the Chinese language.</ns0:p><ns0:p>The choice of error detection technique depends much upon the rules and structure of the language under consideration. Error detection using rule-based techniques demands human expertise in framing the rules. A language with a plethora of possibilities for sentence construction makes it difficult to frame rules that capture the different types of error. Moreover, such rules tend to be specific to a domain or application context and cannot be generalized.</ns0:p><ns0:p>Unlike rule-based techniques, error detection using machine learning demands a huge dataset, which may not be available for all types of application scenarios. Recently, most syntax and semantic error detection in text has been carried out with LSTM, RNN, and sequence-to-sequence modeling techniques. But these techniques require a corpus of incorrect sentences and their corresponding correct sentences with appropriate annotation or labeling. The creation of such a corpus is a non-trivial task. Moreover, the models do not generalize well: if the corpus is not large and varied enough, a source sentence presented for error detection may appear strange to the model. Although a lot of work has been done on error detection in the Chinese language, there is a notable lack of work on semantic error detection for the English language.</ns0:p><ns0:p>Various works have been done for detecting sentence structure, syntactical, and semantic errors in a sentence, but none have been found for assessing the correctness of question framing. Questions are textual sentences, but the way they are interpreted in comparison to other textual sentences requires a different approach for error checking. Comprehending a question generally requires knowing 'what is being asked', 'which key concepts are involved,' and 'how the key concepts are related in context to the question'. Thus, identifying errors in question framing involves issues like identifying the specific ordering of the semantic words (key concepts) and identifying the verbs. The verbs and other grammatical words which relate to the key concepts orchestrate the meaning of the question. Detecting these two is important in interpreting the meaning of the question and subsequently assessing the error or wrong question framing. The characteristic features which differentiate the error-checking strategy of questions from other textual sentences are given in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2. Differentiating characteristic feature of question in relation to textual sentence</ns0:head><ns0:p>Finding or detecting an error in a question leads to two possibilities for correction: a) automatic error correction and b) recommending a correct question. Automatic error correction techniques have not reached maturity yet. They fail to correct sentences that are complex (logical or conceptual) and, furthermore, cannot align with the intent of the learner. In particular, automatic error correction mostly fails to correct semantic errors.</ns0:p><ns0:p>The other possibility is recommending the correct question, i.e., suggesting probable correct questions to the learner against the incorrect input question. This facilitates the learner in navigating through the suggested questions to choose the correct question which matches her intended question.</ns0:p><ns0:p>Most of the works on question recommendation are limited to Community Question Answer (CQA) systems, which basically recommend unanswered questions to users who can answer them correctly <ns0:ref type='bibr' target='#b57'>[57]</ns0:ref>. The question recommendation is made based on the learner's dynamic interest <ns0:ref type='bibr' target='#b58'>[58]</ns0:ref>, previous interest <ns0:ref type='bibr' target='#b59'>[59]</ns0:ref>, expertise <ns0:ref type='bibr' target='#b58'>[58]</ns0:ref> [60], load <ns0:ref type='bibr' target='#b60'>[60]</ns0:ref>, and user model. Besides CQA systems, question recommendation is commonly used in frequently asked question (FAQ) based systems, where questions similar or related to user questions are retrieved and recommended from the base. For finding similar questions, cosine similarity <ns0:ref type='bibr' target='#b61'>[61]</ns0:ref>, syntactic similarity <ns0:ref type='bibr' target='#b62'>[62]</ns0:ref>, concept similarity <ns0:ref type='bibr' target='#b62'>[62]</ns0:ref>, TFIDF, knowledge-based approaches, Latent Dirichlet Allocation (LDA) <ns0:ref type='bibr' target='#b63'>[63]</ns0:ref>, and recurrent and convolution models <ns0:ref type='bibr' target='#b64'>[64]</ns0:ref> are commonly used. Despite our best effort, we did not find work on recommending a correct question for a given incorrect question.</ns0:p><ns0:p>The only work which is close to our framework is the work done by Giffle et al. <ns0:ref type='bibr' target='#b65'>[65]</ns0:ref>. It is a question answering system developed with much focus given to the completeness of the user input question. Mostly factoid-based questions like 'wh' questions and true or false questions are accepted in the system. Every time a user inputs a question, it is lexically and syntactically analyzed to find the named entities -what is being asked and what is the subject of the question. The input question strength is calculated as a score based on its completeness. If the score is high, suitable answers are recommended from the base. When the score is less than a threshold, the user is given feedback on restructuring the question, and the entire process cycle is repeated until the input score is higher than the threshold. The system has the following two big shortcomings:</ns0:p><ns0:p>• It does not check whether the input question is correct or not; it considers only whether the question is complete or not.</ns0:p><ns0:p>• Based on the question score, the system gives feedback. This raises a significant issue: if the learner lacks knowledge and language skills, she will not be able to frame logical or conceptual questions completely or correctly. This leads to answers that the learner may not agree with.</ns0:p><ns0:p>To address the issue of checking a question's correctness, we have proposed a methodology that is more precise and practical. Further, an automatic navigation system is proposed that allows the learner to select the correct question nearly matching her intent.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Assessing the Correctness of the Learners' Input Questions</ns0:head><ns0:p>In this section, we present the proposed work for assessing whether the learner's input questions to the query-based learning system are correct or not.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Theoretical Background</ns0:head><ns0:p>The fundamental concepts that we adopted to assess the correctness of a question are the n-gram and sequential pattern mining. The basics of these concepts are briefed below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.1'>N-gram</ns0:head><ns0:p>The n-gram is a sequence of n items adjacent to each other in a string of tokens (text). The items in the string could be letters, syllables, or words. The size of n can be 1 (uni-gram), 2 (bi-gram), 3 (tri-gram), and so on. For example, in the string 'the world is a beautiful place', the possible bigrams are 'the world', 'world is', 'is a', 'a beautiful', and 'beautiful place'. Similarly, for the sentence 'a document consists of many sentences', the word-based tri-grams will be 'a document consists', 'of many sentences'. The tri-grams can also be overlapping like 'a document consists', 'document consists of', 'consists of many', and 'of many sentences'. The same applies to the other higher-level n-grams.</ns0:p></ns0:div>
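<ns0:p>As a minimal illustration of the above, the overlapping word-based n-grams of a sentence can be produced with NLTK's n-gram utility (a plain token-window loop would give the same result):</ns0:p>

```python
# Overlapping word-based n-grams for the example sentence used above.
from nltk.util import ngrams

tokens = "a document consists of many sentences".split()

bigrams = list(ngrams(tokens, 2))   # ('a', 'document'), ('document', 'consists'), ...
trigrams = list(ngrams(tokens, 3))  # ('a', 'document', 'consists'), ('document', 'consists', 'of'), ...

print(trigrams)
```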
<ns0:div><ns0:head n='3.1.2'>Sequential Pattern Mining</ns0:head><ns0:p>The sequential pattern is a set of items that occur in a specific order <ns0:ref type='bibr' target='#b66'>[66]</ns0:ref> <ns0:ref type='bibr' target='#b67'>[67]</ns0:ref>. Sequential data patterns reflect the nature and situation of data generation activity over time. The existence of frequent subsequence totally or partially ordered is very useful to get insight knowledge. These patterns are common and natural, for example, genome sequence, computer network, and characters in a text string <ns0:ref type='bibr' target='#b68'>[68]</ns0:ref>.</ns0:p><ns0:p>Sequential pattern mining (SPM) is the process of extracting items of a certain sequential pattern from a base or repository <ns0:ref type='bibr' target='#b66'>[66]</ns0:ref>. Additionally, it helps to find the sequence of events that have occurred and the relationship between them, and the specific order of occurrences. Formally, the problem of subsequence in SPM is described as, for a sequence is an ordered list of events, denoted < α 1 α 2 … α n >. Given two sequences P = < x 1 x 2 … x n > and Q = < y 1 y 2 … y m >, then P is called a subsequence of Q, denoted as P⊆ Q, if there exist integers 1≤ j 1 < j 2 <…< j n ≤m such that x 1 ⊆ y j1 , x 2 ⊆ y j2 , …, and x n ⊆ y jn <ns0:ref type='bibr' target='#b67'>[67]</ns0:ref> [69].</ns0:p></ns0:div>
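<ns0:p>For single-item events, the subsequence relation P ⊆ Q defined above can be checked with a short sketch such as the following (illustrative only; itemset events would additionally need a set-containment test at each position):</ns0:p>

```python
# True if the events of p occur in q in the same relative order (not necessarily contiguously).
def is_subsequence(p, q):
    it = iter(q)
    return all(any(x == y for y in it) for x in p)

Q = ["a", "document", "consists", "of", "many", "sentences"]
print(is_subsequence(["document", "many"], Q))   # True: order preserved
print(is_subsequence(["many", "document"], Q))   # False: order violated
```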
<ns0:div><ns0:head n='3.2'>Need for Using Tri-gram based Pattern Matching</ns0:head><ns0:p>In this section, we justified the application of n-gram pattern matching and specifically the trigram for assessing the correctness of a learner question.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.1'>N-gram based Pattern Matching for Question's Correctness Assessment</ns0:head><ns0:p>Typically, the faults in an ill-framed user question lie in the sentence structure (missing subject or verb/phrase error), syntactic structure (grammatical error like subject-verb agreement, error related to the article, plural, verb form, preposition), and semantic errors (incorrect usage and placement of word). Domain-specific questions are interrogative sentences that specify entities, concepts, and relations (between themselves) in a particular sequence. The sequential pattern focuses on how the concepts and entities are related and what interrogative meaning can be inferred from them (the question intention). Word collocation, like words around the entities, concepts, relations together, makes word clusters. The link between the different word clusters in sentence subsequences would enable us to get insight into the structural and semantic aspects of a question. In this direction, pattern match for finding the correct word clusters and their sequences could be a prospective approach in the assessment of a question.</ns0:p><ns0:p>The n-gram language model allows for pattern matching and probability estimation of n-words in a sentence. The high probability of n-gram pattern similarity match could lead us to assume that n-word cluster for a subsequence in a sentence is correct for their syntactic structure and semantic composition. If the entire sentence is split into an ordered sequence of n-gram subsequences, the aggregated probability estimation of correctness for each n-gram could lead us to assume the correctness of the entire question. Hypothetically, if we consider the probability estimation of the correctness is a cumulative assessment of individual n-gram sequences in the question, then which n-gram should be chosen for the optimum result? We shall try to find the answer to this in the next subsection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.2'>Tri-gram: The Preferred Choice for Language Modeling</ns0:head><ns0:p>In n-gram, increasing the n value would result in clustering an increased number of words as a sequence and thus decreasing the total number of subsequences in a sentence. This leads to an increase in biasness toward similarity pattern matching and thereby decreases the similarity matching probability of diverse sequence patterns. Whereas decreasing n increases the number of subsequences in a sentence, thereby increasing the probability of similarity match at smaller sentences, but fails to find cohesion among word clusters and hence decreases the probability of accuracy for the larger sentences.</ns0:p><ns0:p>A tri-gram is a perfect capture for the desired features of the sentences and, at the same time, maintaining the optimum complexity factor of the program. While resoluting the sense from a group of words in sequence, it is observed that tri-gram (given one word on either side of the word) is more effective than two words on either side <ns0:ref type='bibr'>(5-gram)</ns0:ref>. It is also found that increasing or reducing the word on either side of a given word does not significantly make it better or worse in n-gram sequencing <ns0:ref type='bibr' target='#b70'>[70]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Question's Correctness Assessment Using Tri-gram Approach</ns0:head><ns0:p>In this section, we present the proposed approach for assessing the correctness of the learner question using tri-gram. The method includes building a tri-gram language model that is trained to assess the correctness of a question on Java, and devising a classification method to separate correctly and incorrectly framed questions. The details are described in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1'>Tri-gram Language Model Generation</ns0:head><ns0:p>The specific procedures for generating the tri-gram based language model are explained in the following. The process flow of the language model generation is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 2.</ns0:head><ns0:p>Steps for language model generation</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1.1'>Data Collection and Corpus Preparation</ns0:head><ns0:p>The language model is designed, trained, and tested on a corpus of sentences. To build the needed corpus, we collected a total of 2533 questions on various topics of Java from books (available as hardcopy and softcopy), blogs, websites, and university exam papers. We adopted both manual and automatic approaches to extract and collect the questions. A group of four experts in the Java language was involved in the manual collection of questions. For automatic extraction, we used a web crawler with a question parser. The crawler, an HTML parsing program written in Python, reads a webpage and follows links to other inbound webpages. Using appropriate regular expressions, the expected question sentences were extracted from the parsed pages. The returned texts were then manually verified and corrected, if required, to obtain meaningful questions.</ns0:p><ns0:p>To test the efficiency of the proposed method in correctly identifying correct and incorrect questions, we also needed a set of wrong questions. A number of incorrectly framed questions were collected from learners' interactions with online learning portals and the institutional online learning system, as well as from questions asked by students in class. The incorrect questions contain grammatical errors (sentence structure and syntactic errors) and semantic errors.</ns0:p></ns0:div>
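<ns0:p>The crawler itself is not reproduced here; the following hypothetical sketch only illustrates the regular-expression step that pulls candidate question sentences out of a parsed page before manual verification (the pattern shown is illustrative, not the one actually used):</ns0:p>

```python
# Hypothetical sketch of the question-extraction step: strip HTML tags from a fetched
# page and keep sentences that look like questions, for later manual verification.
import re

def extract_candidate_questions(html):
    text = re.sub(r"<[^>]+>", " ", html)   # drop HTML tags
    text = re.sub(r"\s+", " ", text)       # normalise whitespace
    # naive question pattern: an interrogative opener running up to a question mark
    pattern = re.compile(
        r"\b(?:what|why|how|when|where|which|who|can|does|do|is|are|explain)\b[^.?!]*\?",
        re.IGNORECASE)
    return [m.group(0).strip() for m in pattern.finditer(text)]

page = "<p>What is an interface in Java? It defines a contract.</p>"
print(extract_candidate_questions(page))
# ['What is an interface in Java?']
```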
<ns0:div><ns0:head n='3.3.1.2'>Data Preprocessing for Language Model Generation</ns0:head><ns0:p>As the collected questions consisted of many redundancies and anomalies, we preprocessed them to develop a suitable language model for questions. Text preprocessing typically includes steps like stopword removal, lemmatization, etc. Stopwords are frequently used words like 'I', 'the', 'are', 'is', 'and', etc., which provide no useful information. Removing these from a question optimizes the text for further analysis. However, sometimes certain domain-specific keywords coincide with the stopwords, removal of which may result in a loss of information from the questions. Therefore, we modified the list of stopwords by removing the domain-specific keywords from the Natural Language Toolkit (NLTK 1 ) stopword list to avert eliminating the required stopwords. The modified NLTK stopword list is used to remove stopwords from the questions, excluding those which are meant for the Java language.</ns0:p><ns0:p>Each question is broken down in the form of tokens using the regular expression tokenizer, which is present in the NLTK library. Each of these tokens is converted into their stem (root word) form using the Wordnet Lemmatizer to reduce any inflectional form of words. The steps for preprocessing an input question are shown in Fig. <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3. Typical steps for preprocessing a question</ns0:head></ns0:div>
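<ns0:p>A sketch of this preprocessing pipeline using the NLTK components named above is shown below; it assumes the NLTK stopword and WordNet corpora are installed, and the set of Java keywords retained in the stopword list is an illustrative example rather than the exact list used:</ns0:p>

```python
# Preprocessing sketch: modified stopword removal, tokenization, and lemmatization.
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.stem import WordNetLemmatizer

# Illustrative Java-relevant words that also appear in the English stopword list.
JAVA_KEYWORDS = {"for", "while", "if", "do", "this", "not"}
STOPWORDS = set(stopwords.words("english")) - JAVA_KEYWORDS   # modified stopword list

tokenizer = RegexpTokenizer(r"\w+")
lemmatizer = WordNetLemmatizer()

def preprocess(question):
    tokens = tokenizer.tokenize(question.lower())
    tokens = [t for t in tokens if t not in STOPWORDS]
    return [lemmatizer.lemmatize(t) for t in tokens]

print(preprocess("What are the different ways of implementing an interface?"))
# e.g. ['different', 'way', 'implementing', 'interface']
```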
<ns0:div><ns0:head n='3.3.1.3'>Language Modeling</ns0:head><ns0:p>The preprocessed questions are broken down into sets of distinct uni-, bi-, and tri-gram sequences. The uni-gram set is built on the individual tokens in the questions, whereas the bi- and tri-grams are formed as overlapping two- and three-token sequences over the question sentences, respectively, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 4. Generating uni-gram, bi-gram, and tri-gram sequences from a question</ns0:head><ns0:p>The respective count of each n-gram occurrence is obtained from the question corpus. Along with the count, based on the relative occurrences in the corpus, the unconditional log probabilities of each uni-gram, as represented by Eq. 1, and conditional log probabilities of each bi-and tri-gram, as represented by Eq. 2 and Eq. 3, respectively, are calculated.</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>𝑃(𝑤 1 ) = log ( 𝐶(𝑤 1 ) 𝐶(𝑤 𝑛 ) )</ns0:formula><ns0:p>Where w n represents the words in the corpus and c(w n ) returns the count of the total number of words in the corpus.</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>𝑃(𝑤 2 |𝑤 1 ) = log ( 𝐶(𝑤 1 ,𝑤 2 ) 𝐶(𝑤 1 ) ) (3) 𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = log ( 𝐶(𝑤 1 ,𝑤 2 ,𝑤 3 ) 𝐶(𝑤 1 ,𝑤 2 ) )</ns0:formula><ns0:p>The log probabilities in Eq. 1 and Eq. 2 allow transforming higher fractional probability values to lower ones, which are easy to be used in the computation. A sample representation of the language model is shown in Table <ns0:ref type='table'>3</ns0:ref>. The entire language model derived from the question corpus is saved in ARPA 2 format. Table <ns0:ref type='table'>3</ns0:ref>. Uni-gram, bi-gram, and tri-gram probabilities for a question</ns0:p></ns0:div>
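<ns0:p>The counting and log-probability estimation of Eq. 1-3 can be sketched as follows, where corpus stands for the preprocessed training questions represented as lists of tokens (saving the result in ARPA format is omitted):</ns0:p>

```python
# Relative-frequency estimates of Eq. 1-3, stored as log probabilities.
import math
from collections import Counter

def build_counts(corpus):
    uni, bi, tri = Counter(), Counter(), Counter()
    for tokens in corpus:
        uni.update(tokens)
        bi.update(zip(tokens, tokens[1:]))
        tri.update(zip(tokens, tokens[1:], tokens[2:]))
    return uni, bi, tri

def log_probabilities(uni, bi, tri):
    total = sum(uni.values())
    p_uni = {w: math.log(c / total) for w, c in uni.items()}          # Eq. 1
    p_bi = {wb: math.log(c / uni[wb[0]]) for wb, c in bi.items()}     # Eq. 2
    p_tri = {wt: math.log(c / bi[wt[:2]]) for wt, c in tri.items()}   # Eq. 3
    return p_uni, p_bi, p_tri

corpus = [["explain", "garbage", "collection", "java"],
          ["garbage", "collection", "java", "work"]]
uni, bi, tri = build_counts(corpus)
print(log_probabilities(uni, bi, tri)[2])
```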
<ns0:div><ns0:head n='3.3.2'>Classifying Correct and Incorrect Questions</ns0:head><ns0:p>The correctness of a question is estimated based on its syntactical and semantic aspects and accordingly is classified as correct or incorrect. The complete process of identifying correct and incorrect questions is pictorially shown in Fig. <ns0:ref type='figure'>5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 5. The flow diagram for identifying correct and incorrect questions</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.2.1'>Preprocessing the Learners' Input Questions</ns0:head><ns0:p>The input questions from the learner are preprocessed to remove the stopwords and the irrelevant words. Also, lemmatization is carried over the input question.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2.2'>Probability Estimation for Question Correctness based on the Syntactical Aspect</ns0:head><ns0:p>After preprocessing, the question is broken down into overlapping tri-gram sequences. Each trigram sequence is estimated for probability by maximum likelihood estimation (MLE) from the language model. If a tri-gram sequence of the question is not present in the language model, it will lead to zero estimation. However, though the entire tri-gram sequence may not occur in the language model, a partial word sequence, a lower-order n-gram (bi-gram) of it, could be valid. The Backoff approach <ns0:ref type='bibr' target='#b71'>[71]</ns0:ref> [72] is considered for tri-grams to take into account of sequence which counts to zero. The tri-gram sequences which estimate to zero are further estimated for their bigrams. The probability of a tri-gram is depicted in Eq. 4 ( <ns0:ref type='formula'>4</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_2'>𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = { 𝑐(𝑤1, 𝑤2, 𝑤3) 𝑐(𝑤1,𝑤2) , if 𝑐(𝑤1, 𝑤2, 𝑤3) > 0 0.5 × ( 𝐶(𝑤 1 , 𝑤 2 ) 𝐶(𝑤 1 ) + 𝐶(𝑤 2 , 𝑤 3 ) 𝐶(𝑤 2 ) ) , if 𝑐(𝑤1, 𝑤2, 𝑤3) = 0</ns0:formula><ns0:p>The probability of each tri-gram ranges from 0<=P<=1. A higher probability refers to more correctness and higher occurrence. The entire probability of syntactic correctness of the sentence can be obtained as the addition of probability of each tri-gram in the question in Eq. 5, where k is the number of tri-grams in the question and P i is the probability of the i th tri-gram sequence in the sentence.</ns0:p><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_3'>𝐸 𝑠𝑦 = 1 𝑘 ∑ 𝑘 𝑖 = 1 𝑃 𝑖</ns0:formula></ns0:div>
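<ns0:p>A sketch of the back-off estimate of Eq. 4 and the syntactic score of Eq. 5 is given below; uni, bi, and tri are the raw n-gram Counter objects built from the training corpus as in the earlier sketch, and a question too short to yield any tri-gram is given a zero score here, which sends it to the incorrect class in line with the convention mentioned in Section 3.4:</ns0:p>

```python
# Eq. 4: MLE tri-gram probability with a bi-gram back-off when the tri-gram is unseen.
def trigram_backoff_prob(w1, w2, w3, uni, bi, tri):
    if tri[(w1, w2, w3)] > 0:
        return tri[(w1, w2, w3)] / bi[(w1, w2)]          # MLE estimate
    p12 = bi[(w1, w2)] / uni[w1] if uni[w1] else 0.0     # back off to the two bi-grams
    p23 = bi[(w2, w3)] / uni[w2] if uni[w2] else 0.0
    return 0.5 * (p12 + p23)

# Eq. 5: average back-off probability over the k overlapping tri-grams of the question.
def syntactic_score(tokens, uni, bi, tri):
    trigrams = list(zip(tokens, tokens[1:], tokens[2:]))
    if not trigrams:                  # too short to form a tri-gram
        return 0.0
    probs = [trigram_backoff_prob(*t, uni, bi, tri) for t in trigrams]
    return sum(probs) / len(probs)
```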
<ns0:div><ns0:head n='3.3.2.3'>Probability Estimation for Question Correctness based on Semantic Aspect</ns0:head><ns0:p>The correctness of question semantic is assessed by estimating the validity of individual overlapping tri-gram sequences of the sentence. The validity of the tri-gram is assessed by the probability estimation of each tri-gram sequence in question found matches in the language model, as shown in Eq. 6. The semantic correctness of a question is estimated on the full similarity match of each tri-gram sequence. More the number of subsequences of the question sentence matches the language model, more is the chance of the question being semantically correct. The overlapping tri-gram sequences reflect the cohesion among words in the sentence subsequences. Thus, increasing the number of matching of the tri-gram sequences establishes a higher probability of semantic accuracy of the question. The semantic correctness of the question is calculated as the summative average of probabilities of each tri-gram sequence in the sentence is shown in Eq. 7.</ns0:p><ns0:p>(6)</ns0:p><ns0:formula xml:id='formula_4'>𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = { 1, if 𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) > 0 0, if 𝑃(𝑤 3 |𝑤 1 ,𝑤 2 ) = 0 (7) 𝐸 𝑠𝑚 = 1 𝑘 ∑ 𝑘 𝑖 = 1 𝑃 𝑖</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.3.2.4'>Classification</ns0:head><ns0:p>The correctness of a question is calculated by Eq. 8, where E sy and E sm are the probability estimates of the syntactical and semantic correctness of the sentence, respectively. A syntactically correct question has E sy = 1, and a semantically correct question has E sm = 1. Hence, the standard score for a fully correct question is 1 + 1 = 2. Thus, the degree of correctness (C d ) of the question with respect to complete correctness (i.e., 2) is assessed by adding the calculated probability estimates E sy and E sm and subtracting the sum from 2. We consider the question to be correctly structured if C d ≤ 20, i.e., if the question deviates from complete correctness by no more than 20%; otherwise, the framing of the question is not correct.</ns0:p></ns0:div>
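<ns0:p>Continuing the same sketch, the semantic score of Eq. 6-7 and the final decision can be written as follows; interpreting C d as the percentage deviation from the complete-correctness score of 2 (so that the threshold of 20 means a 20% deviation) is our reading of the text:</ns0:p>

```python
# Eq. 6-7: a tri-gram counts as valid if the model assigns it a non-zero
# (possibly backed-off) probability; the semantic score is the hit rate.
def semantic_score(tokens, uni, bi, tri):
    trigrams = list(zip(tokens, tokens[1:], tokens[2:]))
    if not trigrams:
        return 0.0
    hits = [1 if trigram_backoff_prob(*t, uni, bi, tri) > 0 else 0 for t in trigrams]
    return sum(hits) / len(hits)

# Eq. 8 (as interpreted): deviation of E_sy + E_sm from the ideal score of 2, in percent.
def classify_question(tokens, uni, bi, tri, threshold=20.0):
    e_sy = syntactic_score(tokens, uni, bi, tri)
    e_sm = semantic_score(tokens, uni, bi, tri)
    c_d = (2.0 - (e_sy + e_sm)) / 2.0 * 100.0
    return "correct" if c_d <= threshold else "incorrect"
```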
<ns0:div><ns0:head n='3.4'>Experiment and Performance Evaluation for Question's Correctness Assessment</ns0:head><ns0:p>The evaluation of the performance measure of the proposed approach for assessing the correctness of the learner question is done on a corpus of 634 annotated questions, where 52% of questions are correctly framed. The performance of the tri-gram approach for classifying questions as correct or incorrect is measured based on the metrics: true positive, true negative, false negative, and false positive, and the performance measures: Accuracy, Precision, Recall, F1-Score, as shown in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4. Performance measures of the proposed approach</ns0:head><ns0:p>In the experiment, we attempted to distinguish between correct and incorrect questions based on the probabilistic calculation proposed by our approach. The experimental results show that our method fails to classify 50 of these questions correctly. Out of these 50 questions, 32 were correct questions but are identified as incorrect. Further analysis of these false-negative questions reveals that after preprocessing and stopword removal, the length of most of the questions is reduced to less than three. These questions fail to generate any tri-grams to perform the probabilistic calculation. So, these questions by convention get marked as incorrect. Some of these false-negative questions even belong to domains that are not present in the training dataset. As a result, the proposed method fails to identify these questions correctly. The other set of incorrectly classified questions comprises incorrect questions which are marked as correct. The false-positive questions primarily have misplaced punctuation marks which results in the structure of the incorrect question identical to the correct questions in the training set. They form tri-grams or bi-grams, which perfectly match the tri-grams or bi-grams from the language model and render a high probabilistic score for the question. A margin of 8% error shows the efficiency of the proposed approach.</ns0:p><ns0:p>The efficacy of the tri-gram model approach was compared with other n-grams. The models were trained over the same question dataset to keep the experiment bias-free. Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref> shows a comparison of the accuracy measures obtained for each n-gram approach over the same statistical calculation. It is evidently seen that the accuracy of tri-gram is far better than other ngrams. The accuracy decreases with the increasing value of n in n-gram. It leads to biased higher-order word sequence pattern search and fewer options for pattern assessment at lower orders. This causes restricted pattern search and a decrease in accuracy. Similarly, decreasing n leads to word sequence pattern search at lower order, which restricts the probability of correctness of the word sequences at higher orders. This typically reduces the accuracy. The comparative experiment thus concludes that the use of the tri-gram model for question assessment leads to better assessment results. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>The result of the proposed approach is compared with the result of other research work <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref>. Ganesh et al. <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref> applied a tri-gram based approach to detect errors in English language sentences. Table <ns0:ref type='table'>5</ns0:ref> shows the result comparison in terms of four assessment metrics. From the table, it is evident that the accuracy of our proposed approach is much better, while the precision of both approaches is the same. This establishes that the true positive and true negative identification cases are better in our approach for detecting the errors and thus the correctness or incorrectness of the question sentences.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref>. Comparative results of the proposed approach and the solution given in <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref></ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>Guiding the Learner to the Probable Correct Question</ns0:head><ns0:p>In the previous section (Section 3), we checked whether the question given as input by the learner to the query-based learning system is syntactically and semantically correct. If the question is not correct, we guide the learner to the probable correct question that she actually intended to ask through one or multiple steps of question suggestions. The detailed methodology and framework of the proposed work are discussed in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Similarity-based Recommendation for Mitigating Incorrect Learner Question</ns0:head><ns0:p>Computationally auto-correcting the incorrectly framed question is one of the acclaimed ways followed in literature. But the success is limited and restricted to correcting only a few types of errors or mistakes. The typical mistakes a learner commits while articulating a question are shown in Fig. <ns0:ref type='figure'>7</ns0:ref>. For instance, inappropriate word selection may not reflect the exact intention of the learner. Similarly, insufficient keywords may not express the intended concept.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 7. Typical mistakes made by the learner in a question</ns0:head><ns0:p>With regard to these mistakes, auto-correction is not possible for error types other than grammatical and sequential ordering errors. The alternative is to suggest correct questions to the learner which are close to what she intended to ask. Suggesting correct questions that are similar in information content and morphological structure to the given question increases the chance that the learner will find the right question she intends to ask. Considering that the concepts and functional words used in composing the question represent the best of her knowledge in the current information-seeking situation, the learner can be recommended appropriate questions aligned with the information she is seeking. Thus, suggesting correct questions through similarity-based recommendation, in contrast to the incorrect question imposed by the learner, is an effective way to overcome the incorrect question problem.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Issues in Similarity-Based Recommendation of Questions</ns0:head><ns0:p>Cosine and Jaccard similarity techniques are the two text-based similarity approach which has been widely incorporated for finding similar text <ns0:ref type='bibr' target='#b73'>[73]</ns0:ref> <ns0:ref type='bibr' target='#b74'>[74]</ns0:ref>. But these approaches, when applied to question-based corpus for identifying similar question text, lead to the recommendation issues, as discussed in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Information Overload</ns0:head><ns0:p>Text similarity based on word match searches for similarity for every occurring word in the source sentence-incorrect question text for an exact match in the questions present in the question corpus. The needful comparison based on matching word occurrence among the sentences returns similar text. Since the question framing is incorrect, taking a part of the entire sentence which seemingly founds to be correct and conveys the learner's intent, could lead to a better similarity match. However, the prevailing constraint and limitations of NLP fail to analyze and identify the parts of the source sentence, which are correct as per learner intention. Failing to determine this leads to ambiguity in identifying the parts of a sentence that are to be taken correctly for similarity match. Without this knowledge, the similarity search is done for each occurring word (assuming they are correct as per the learner intent) in the question against the questions in the corpus lead to a huge set of information. For example, a learner questions on Java with incorrect word ordering and missing words like 'What different are interface implement', when runs for similarity match like Jaccard similarity on a question corpus returns a lot of information, as shown in Table <ns0:ref type='table' target='#tab_0'>6</ns0:ref>. With this amount of information, the learner may get confused and lost. </ns0:p></ns0:div>
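<ns0:p>The word-overlap behaviour described above can be seen in a small Jaccard sketch: every literal word match contributes to the score, so an incorrect question made of common functional words clears a low threshold against many loosely related corpus questions (the candidate questions below are illustrative):</ns0:p>

```python
# Jaccard similarity over word sets: the basis of the word-match retrieval discussed above.
def jaccard(q1, q2):
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

source = "what different are interface implement"
candidates = ["what are the different ways to implement an interface",
              "what are the different types of inheritance",
              "how do we implement an interface in java"]
for c in candidates:
    print(round(jaccard(source, c), 2), c)
```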
<ns0:div><ns0:head n='4.2.2'>Diverse Information</ns0:head><ns0:p>A learner, when composing a question, intends to seek information limited to a particular topic or topics. Text similarity based on word match searches for an exact match of every occurring word in the source sentence within the question corpus. For similarity measurement, weightage is given to word occurrence frequency rather than to subject-domain relevancy; no consideration is given to whether individual tokens belong to a topic of the domain. Since a question is made up of functional words (nouns or verbs) along with concepts (domain keywords), matching every functional word in the corpus leads to questions on different topics which the learner does not intend to seek. This results in questions that are beyond the search topic boundary, leading to diversification of information. For example, a similarity search for an incomplete question like 'access modifier in Java' using Jaccard similarity returns questions of different topics, as shown in Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_3'>8</ns0:ref> shows the questions belonging to different topics for the given similarity recommendation. A large number of questions are on a different topic than that of the input question. This may put the learner in jeopardy and confusion. Conclusively, the similarity match on functional words of the source question in the corpus may result in diversification instead of convergence.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>7</ns0:ref>. Recommended list of questions and their topics retrieved using Jaccard similarity for the incorrect input question 'access modifier in Java'</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.3'>Biased to Exact Word Match</ns0:head><ns0:p>While framing a question, keywords and functional words are integrated and sequenced in an appropriate manner to make meaning out of the question. The use of these words by the learner is the natural outcome of the learner's knowledge and communication skill. And as a reason, lack of a learner's expertise does not assure the correctness of question framing. The similarity assessment technique performs an exact word match. This will return only those questions, the words of which are exactly matched (word-by-word) with the learner's input question. This results in obscuring many other similar questions, which are having different words but similar or near to similar meanings. And thus, many of the questions having similar meanings but having different word construction are ignored, resulting in poor efficiency.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Proposed Framework for Correct Question Suggestion to the Learner</ns0:head><ns0:p>Considering the above-mentioned three problems, we have adopted the soft cosine technique to find similar sentences. The similarity matching is augmented by question selection and iteration pass. We propose a similarity assessment framework for suggesting the correct question for a given incorrect question on a particular domain. The framework consists of three phases of working, as discussed below. The framework is shown in Fig. <ns0:ref type='figure' target='#fig_9'>9</ns0:ref>, while the process flow is shown in Fig. <ns0:ref type='figure' target='#fig_18'>10</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Selecting Questions with Similar Concepts</ns0:head><ns0:p>The selection of questions with similar concepts limits the search boundary, and hence the diverse information issue can be addressed. Learners pose questions using the best of their knowledge. This makes them use concepts that are more aligned with the information they are trying to seek. Though not all the concepts articulated in the question are rightly chosen, there remains a good probability that the required concept is present in the question. Thus, retrieving all questions from the corpus that contain the same concept(s) as the source question could increase the likelihood of finding the right intended question. This also reduces the probability of recommending questions that are on completely different topics or concepts not related to the concept(s) present in the source question. For this reason, the concept-wise selection of questions will reduce the diversification of the information recommendation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.2'>Similarity Assessment and Correct Question Recommendation</ns0:head><ns0:p>A learner may compose an incorrect question due to the following three reasons: a) There are insufficient keywords to express the question. b) Insufficient number of words used to express the question.</ns0:p><ns0:p>c) The selection of words and their usage may be incorrect.</ns0:p><ns0:p>In all the cases, we need to find the alternative questions closest to the learner's intended question. For estimating the similarity, we suggested looking for the questions that have the same or similar word features as the learner's question. A hard similarity (word to word) match for word features between the incorrect and alternative question reduces the chances of getting a more accurate alternative. Moreover, conducting a hard similarity search in the word feature space of the correct question, the source question's inappropriate words would be of no use. Rather a soft similarity (synonym or close related words) match would give a high probability of finding the questions that are meaningfully aligned to the learner's intent. To address the similarity match problem and to find the correct question, we applied soft cosine measures. Soft cosine allows finding the questions that are significantly similar in terms of the semantic matching, irrespective of the exact word match.</ns0:p><ns0:p>The similarity measure sim (f i , f j ) in soft cosine calculates the similarity for synonym or relatedness between the features f i and f j of the vectors under consideration. Here, the vector is a question, and the words of the question represent its features. A dictionary approach like WordNet::Similarity is being used to calculate the similarity (or relatedness) among the features <ns0:ref type='bibr' target='#b75'>[75]</ns0:ref>.</ns0:p><ns0:p>From the n-dimensional vector space model's perspective, the soft cosine measures the semantic comparability between two vectors. It captures the orientation (the angle) between the two vectors. But unlike cosine similarity, the features are projected in an n-dimensional space so that similar features are close by with very less angle difference. This causes the meaningfully similar words (features) of vectors (questions) to have minimal angle differences <ns0:ref type='bibr' target='#b76'>[76]</ns0:ref>, as shown in Fig. <ns0:ref type='figure' target='#fig_18'>11</ns0:ref>. The equation for soft cosine is given in Eq. 9. Where, S ij is the similarity between the features i and j, and p and q are the input question and the correct question, respectively.</ns0:p></ns0:div>
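<ns0:p>A bag-of-words sketch of the soft cosine measure is given below; the word-to-word similarity S ij is approximated here with WordNet path similarity from NLTK (standing in for WordNet::Similarity), and the quantity computed is the standard soft cosine, sim(p, q) = Σ ij S ij p i q j / (√(Σ ij S ij p i p j ) · √(Σ ij S ij q i q j )):</ns0:p>

```python
# Soft cosine between two questions represented as term-frequency vectors,
# with S_ij taken from WordNet path similarity (requires the NLTK wordnet corpus).
import math
from nltk.corpus import wordnet as wn

def word_similarity(w1, w2):
    if w1 == w2:
        return 1.0
    s1, s2 = wn.synsets(w1), wn.synsets(w2)
    if not s1 or not s2:
        return 0.0
    return max((a.path_similarity(b) or 0.0) for a in s1 for b in s2)

def soft_inner(p, q, sim):
    n = len(p)
    return sum(sim[i][j] * p[i] * q[j] for i in range(n) for j in range(n))

def soft_cosine(tokens_p, tokens_q):
    vocab = sorted(set(tokens_p) | set(tokens_q))
    p = [tokens_p.count(w) for w in vocab]
    q = [tokens_q.count(w) for w in vocab]
    sim = [[word_similarity(a, b) for b in vocab] for a in vocab]   # S_ij
    denom = math.sqrt(soft_inner(p, p, sim)) * math.sqrt(soft_inner(q, q, sim))
    return soft_inner(p, q, sim) / denom if denom else 0.0

print(soft_cosine(["garbage", "collection", "java"],
                  ["garbage", "collector", "java"]))
```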
<ns0:div><ns0:head n='4.3.3'>Iteration and Question Selection</ns0:head><ns0:p>To overcome the issue of information overload, ten questions whose similarity to the source question text is found to be more than 50% are listed for the learner to choose from. This allows the learner to focus on what she is actually seeking rather than being overwhelmed by the huge amount of information which would have been recommended otherwise. Since the approach is probabilistic, there is a chance that no right question close to the learner's intention is found in the list. In such a case, selecting from the recommended list the question nearest to the question the learner intends to ask provides the system with better-informed data. The learner-selected question, in turn, acts as a seed for a further similarity search. Considering the selected question (seed question) as new input for the next similarity search converges the search boundary and increases the homogeneity of the information, which reduces diversification. With every recommendation pass, the degree of concept-wise similarity increases, which, in turn, increases the range of similar questions. This shifts the question suggestions closer to the learner's intention. The complete process is presented in Algorithm 1; the recovered core of the recommendation loop is:</ns0:p><ns0:p>for q in similar_correct_question then
  similarity = score_similarity(q)
  if similarity > 0.50 then print q end if
end for
print 'input the question and abort/search'
input q, status
if status == 'Abort' then
  print q, 'is the intended question'
else
  Wq = q
  goto Label 1
end if</ns0:p><ns0:p>The server contains the web application (WebApp) with the requisite HTML and Python files, the Flask 3 framework, and Python (version 3.8). Flask is a web application microframework that allows web pages to be delegated over the network and handles the learner's input requests. The framework is glued as a layer to Python for executing the processes. The model is implemented in Python and is deployed in the WebApp as a Python file. Further, the learner's different interactions with the system are stored as the experimental data in an SQLite database, which comes by default with Python.</ns0:p><ns0:p>The web server is connected to the client devices over the internet or LAN to exchange HTTP requests and HTTP responses, and the learner (client) interacts with the model through the webpage, as shown in Fig. <ns0:ref type='figure' target='#fig_18'>13</ns0:ref>. The reason behind choosing this web model for the experiment is that the selection of the questions based on the concept, followed by the similarity assessment, is carried out on the server. Three similarity assessment techniques -soft cosine, Jaccard, and cosine similarity -are used to find the intended correct questions from the corpus. These three techniques are run in parallel to assess their performance on the given incorrect input questions. For this experiment, we used the complete training corpus (i.e., 2533 questions).</ns0:p><ns0:p>To select the probable correct question from the recommended similarity list, a threshold of 0.5 is considered as the minimum similarity score for soft cosine, while 0.2 is considered for Jaccard and cosine. It was found that the Jaccard and cosine similarity techniques returned either no or very few (one or two) similar questions, which were not suitable for carrying out the experiment. Further, in some cases, while searching for questions similar to the given incorrect question, the same question was iteratively returned on each consecutive pass. 
For this reason, in the cases of Jaccard and cosine, the threshold for the similarity score was reduced to the lower value of 0.2. This gave enough output to carry out the experiment and to compare with the results of soft cosine.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 13. The user interface for learner interaction</ns0:head></ns0:div>
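<ns0:p>A hypothetical sketch of the client/server deployment described in Section 4.3.3 is shown below; it reuses the preprocessing, classification, and soft cosine functions from the earlier sketches, and the route name, database schema, and corpus variable are illustrative rather than the authors' actual implementation:</ns0:p>

```python
# Hypothetical Flask endpoint: check the learner's question and, if incorrect,
# recommend up to ten corpus questions with soft cosine similarity above 0.5.
import sqlite3
from flask import Flask, request, jsonify

# `preprocess`, `classify_question`, `soft_cosine`, and the counters (uni, bi, tri) come
# from the earlier sketches; `corpus_questions` is assumed to hold the training questions.

app = Flask(__name__)

def init_db():
    with sqlite3.connect("interactions.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS interactions "
                   "(question TEXT, verdict TEXT, suggestions TEXT)")

@app.route("/ask", methods=["POST"])
def ask():
    question = request.form["question"]
    tokens = preprocess(question)
    verdict = classify_question(tokens, uni, bi, tri)
    suggestions = []
    if verdict == "incorrect":
        scored = [(soft_cosine(tokens, preprocess(q)), q) for q in corpus_questions]
        suggestions = [q for s, q in sorted(scored, reverse=True) if s > 0.5][:10]
    with sqlite3.connect("interactions.db") as db:   # log the interaction
        db.execute("INSERT INTO interactions VALUES (?, ?, ?)",
                   (question, verdict, "; ".join(suggestions)))
    return jsonify({"verdict": verdict, "suggestions": suggestions})

if __name__ == "__main__":
    init_db()
    app.run()
```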
<ns0:div><ns0:head n='4.4.2'>Learner Verification</ns0:head><ns0:p>The performance of the framework for similarity-based recommendation in finding the intended question was verified by manual assessment. The assessment was carried out by a group of learners. A total of 34 students of the CSE department at Bengal Institute of Technology, studying Java in the 6th semester of their B.Tech degree program, were selected. The students chosen were low scorers in the subject. The rationale behind choosing these students was that we wanted learners who are aware of the Java language and its terminology but are neither experts nor proficient in the subject. This made them suitable candidates, as they were likely to compose incorrect questions.</ns0:p><ns0:p>Each student was instructed to input approximately three incorrect questions, totaling 100. For each question, three recommendations were made using the soft cosine, Jaccard, and cosine similarity techniques, as shown in Fig. <ns0:ref type='figure' target='#fig_18'>13</ns0:ref>. If the student found the correct intended question, the iteration was stopped for the respective similarity technique. If the intended question was not found in the recommended list, the student chose from the list a question close to the intended one as a seed question, and another iteration or pass followed. If the intended question was not found within three passes, the recommendation process for the respective similarity technique was stopped. The purpose of using three similarity techniques is to compare them and find the best performer among the three.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Result and Analysis</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.5.1'>Accuracy</ns0:head><ns0:p>The learner input and feedback on a total of 100 incorrect questions are shown in Table <ns0:ref type='table'>8</ns0:ref>. The learner acceptance result of finding the intended correct question against the incorrect input question is summarized and shown in Fig. <ns0:ref type='figure' target='#fig_14'>14</ns0:ref>. The summarization is made on the basis of whether or not the learner finds the intended question for each of the three similarity-based recommendations. Based on the learner input and the system feedback, the framework is evaluated for the accuracy metric. The accuracy is an intuitive performance measure: the ratio of the correct observations made to the total observations made, expressed as a percentage by Eq. 10.</ns0:p><ns0:p>(10)</ns0:p><ns0:formula>Accuracy = (A / B) × 100</ns0:formula><ns0:p>Where:</ns0:p><ns0:p>• A is the number of correct observations, i.e., the questions for which the intended correct question was found.</ns0:p><ns0:p>• B is the total number of questions taken for observation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 8. Similarity recommendation against learner questions</ns0:head><ns0:p>The overall accuracy result of the framework corresponding to the soft cosine, Jaccard, and cosine similarity techniques is shown in Fig. <ns0:ref type='figure' target='#fig_16'>15</ns0:ref>. The accuracy results for learners accepting the recommended question show that soft cosine similarity outperforms the cosine and Jaccard similarities. In the given experimental data set, the soft cosine based recommendation returns the correct result in two or more passes for 12 input questions. While, for the other 73 input questions, it returns the result in one pass. Therefore, it can be concluded that though the soft cosine similarity-based recommendation returns the intended question in one pass for the maximum number of questions, recommending results in two or more passes is unavoidable. It is observed that input questions lacking sufficient information cause the recommendation system to iterate multiple passes of learner's interaction to reach the intended question. The hefty size of the corpus might be another reason for the increased number of passes.</ns0:p><ns0:p>The results also show that for 15 input questions, the soft cosine similarity-based recommendation fails to find the correct question matching to learner's intent. It is observed that in very few cases where the words in the input question are highly scrambled or out of sequence, it may cause the soft cosine to fail to find the correct questions. In this case, the Jaccard similarity outperforms the soft cosine. The other reason which contributes to soft cosine failing is the string length of the input question. If the string length is reduced to one or two words after stopword removal in question preprocessing, the soft cosine based recommendation is unable to find the exact intended question from the huge number of questions within a limited number (three passes) of learner's interaction. Perhaps a greater number of interactions were needed.</ns0:p><ns0:p>Besides these two structural issues on input questions, the soft cosine has some inherent limitation which causes the recommendation set to fail in retrieving the appropriate questions near to learner intention. Even though it is claimed that soft cosine works well on word similarity, actually, it does not do well for multiple synonyms while matching for similarity. The other inherent issue is that the soft cosine fails to infer the common-sense meaning from a sequence of words or phrases to find semantical similarity.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5.2'>Diversity and Evenness</ns0:head><ns0:p>With every iteration, the soft cosine technique converges the search towards questions on a particular topic. This causes the recommended questions to be closely focused on the intent of the input question. To assess the effectiveness of soft cosine in each pass, the recommended question list obtained in each iteration by the three similarity assessment techniques is analyzed for diversity and evenness. Diversity specifies how varied the questions in the recommended list are in terms of topic, whereas evenness specifies how evenly the topic information (concepts) is distributed across the recommended list. The diversity and evenness of information in the recommended list of questions in each pass are calculated by Shannon's diversity index (H) and Shannon's equitability (E_H), respectively, as given by Eq. 11 and Eq. 12.</ns0:p><ns0:formula>H = -∑_{i=1}^{n} P_i ln P_i<ns0:label>11</ns0:label></ns0:formula><ns0:p>where n is the number of topic categories and P_i is the proportion of the count of the i-th topic relative to the total count of individual topics over all questions in the recommended list.</ns0:p><ns0:formula xml:id='formula_5'>E_H = H / ln S<ns0:label>12</ns0:label></ns0:formula><ns0:p>where S is the total count of individual topics over all questions in the recommended list. The evenness value lies between 0 and 1, with 1 denoting a completely even distribution. In the ideal situation, H ≈ 0 indicates that the topics in the recommended question list are not diverse and all recommended questions focus on one topic; similarly, E_H ≈ 0 indicates zero dispersion of topics in the recommended question list.</ns0:p><ns0:p>The changes in the diversity and equitability indices across passes for the incorrect question 'java not have destroy and how garbage collect' are discussed below. a) Each keyword in the source question denotes a concept, which in turn relates to a topic.</ns0:p><ns0:p>The keywords in the question are used to select and group questions from the corpus belonging to the same topic domains. The incorrect question is matched with the grouped questions using the soft cosine measure. The set of suggested questions returned by the soft cosine similarity measure in the first pass is shown in Table <ns0:ref type='table'>9</ns0:ref>. Each keyword in the recommended similar question list reflects a concept, which accounts for a count of the respective topic. Based on these counts, H and E_H are calculated for the list as given in Table <ns0:ref type='table'>10</ns0:ref>.</ns0:p><ns0:p>b) The learner chooses the question 'explain garbage collection in java programming' from the recommended list, the one closest to her intent, as the seed question for further searching. c) In the second pass, again based on the keywords from the source question, the questions on the same topic are selected and grouped from the corpus. The set of suggested questions returned by the soft cosine similarity for the selected seed question is shown in Table <ns0:ref type='table' target='#tab_1'>11</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>9</ns0:ref>. Suggested similar questions from the first iteration (pass 1) Table <ns0:ref type='table'>10</ns0:ref>. Diversity and evenness measures from pass 1</ns0:p></ns0:div>
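As an illustration of Eq. 11 and Eq. 12, the short sketch below computes H and E_H from the topic counts of one recommended list. The topic names and counts are hypothetical, and S is taken here as the number of topic categories present in the list, which is an assumption on our part given the terse wording of the definition.

```python
# Sketch of the diversity (H, Eq. 11) and evenness (E_H, Eq. 12) computation
# for one recommended question list; topic counts below are illustrative.
import math

def diversity_evenness(topic_counts):
    total = sum(topic_counts.values())
    props = [c / total for c in topic_counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in props)      # H = -sum(P_i * ln P_i)
    s = len(props)                                # assumption: S = number of topic categories
    e_h = h / math.log(s) if s > 1 else 0.0       # E_H = H / ln S
    return h, e_h

pass1_counts = {"garbage collection": 4, "destructor": 3, "memory management": 3}
H, E_H = diversity_evenness(pass1_counts)
print(round(H, 3), round(E_H, 3))   # lower values on later passes indicate convergence
```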
<ns0:div><ns0:p>Based on the individual topic counts and the total topic count, H and E_H are calculated for the list, as given in Table <ns0:ref type='table'>12</ns0:ref>. The diversity index drops from H = 1.02 in pass 1 to H = 0.85 in pass 2, which shows that the diversity of topic information in the recommended list decreases across the passes. In other words, the search information space converges, giving the learner more focused and better options when selecting a question from the list. Further, the evenness drops from E_H = 0.985 in pass 1 to E_H = 0.781 in pass 2, which means that the topic distribution among the questions becomes more uneven: the share of the intended topic among the questions increases, giving a high probability of finding the right question.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 12. Diversity and evenness measures from pass 2</ns0:head><ns0:p>The keyword-based selection and grouping of questions from the corpus eliminates otherwise irrelevant questions and thereby restricts the search to a reduced topic space. Further, the soft cosine based similarity measure shrinks the search to more meaningful questions close to the learner's intent, thereby decreasing the diversity.</ns0:p><ns0:p>From the results, a sample of nine questions that went through two iterations with the soft cosine similarity was considered. Table <ns0:ref type='table'>13</ns0:ref> shows the diversity and evenness calculated on the topic information of the recommended question list obtained after each pass for the three similarity assessment techniques for a given question. Here, diversity and evenness equal to 0 indicate that the suggested questions all belong to the same topic. Some question searches using the similarity-based technique led the learner to find the intended question in the first pass, which made the second pass a not applicable (NA) case. From the table, it is quite clear that with every pass the diversity in the recommended question list obtained by soft cosine decreases compared with the others. This leads us to conclude that as the search iterations progress, the search space becomes narrower; in other words, the search converges. This keeps the search results focused on the intended topic, which helps the learner reach the intended question quickly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 13. Diversity index and equitability on recommended questions 5 Conclusions and Further Scope</ns0:head><ns0:p>Much emphasis is placed on developing and structuring content so that it is attractive and motivating to learners. Due to the high cost and the difficulty of managing peer-to-peer support, learner-expert interaction is less encouraged in online systems. Questions are one of the key forms of natural language interaction with computers and give the learner an upper hand in interacting with computers more broadly. Composing correct questions is essential from this perspective. A correctly composed question allows a clear understanding of what the learner wants to know, whereas an incorrectly composed question raises ambiguity and diversions, which leads to incorrect information and often misleads the learner. To determine the intent and objective, and hence the semantics, of a question, it is important to know whether the question is composed correctly. Determining whether the input question is correctly or incorrectly composed would increase the accuracy of information retrieval. This makes it an absolute requirement to verify that a question is framed correctly, both syntactically and semantically, before it is used for information retrieval. This paper proposes an approach for assessing the validity of a question's framing and its semantics. A tri-gram based language model is used for assessing the question's correctness in terms of syntax and semantics. The model outperforms the other n-gram approaches and establishes that the tri-gram performs optimally in assessing the questions. The tri-gram language model exhibits an accuracy of 92%, which is considerably higher than the accuracy shown by the 2-gram, 4-gram, and 5-gram models on the same test data.</ns0:p><ns0:p>The work also proposes an interactive framework for correct question recommendation. The framework uses a soft cosine based similarity technique for recommending the correct question to the learner. The proposed framework is assessed on learner questions and compared with other similarity assessment techniques, viz. cosine and Jaccard. The soft cosine similarity technique recommends the correct question considerably better than the other two, achieving an accuracy of 85%. In the case of multi-pass interaction, as the number of passes increases, the information diversity is reduced and the search converges quickly to the intended question.</ns0:p><ns0:p>In conclusion, incorporating the presented work in an interactive OLS will not only improve the performance of the system significantly but will also enhance learner satisfaction and learning focus, leading to a boosted quality of learning. The proposed approach can also be used for precise personalized learning recommendations and for mitigating the associated cold start problem.</ns0:p><ns0:p>However, this work has a couple of limitations which open up further research scope. Since we used a tri-gram based approach, it cannot assess the correctness of a question that has fewer than three words. It also fails to assess informal questions that typically comprise compound and multiple sentences. Techniques like graphs (semantic networks), machine learning (LSTM), etc., can be explored to solve these issues.</ns0:p></ns0:div>
<ns0:div><ns0:head>Rule-based approach</ns0:head><ns0:p>The rule-based approach calls for the application of linguistic rules devised by linguistic experts to assess a sentence and find errors. The rule-based approach includes NLP techniques, tree parsing, etc. [27]</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical approach</ns0:head><ns0:p>The statistical approach uses different statistical and modelling techniques to learn existing patterns and infer knowledge from them. The statistical approach includes techniques like machine learning, pattern matching and mining.</ns0:p><ns0:p>Each of the approaches has shortcomings and advantages relative to the other for detecting errors in text. Since neither technique on its own is competent enough to identify all errors, the techniques are often combined as a hybrid approach to overcome each other's limitations. <ns0:ref type='bibr' target='#b54'>[54]</ns0:ref> [55] <ns0:ref type='bibr'>[</ns0:ref></ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>Syntax error: The error in sentence due to wrong/violation of grammar. The syntax error is of the following types: o Subject-verb error o Article or determiner error o Noun number error o Verb tense or verb form error o Preposition error  Punctuation error: The error in a sentence, which is generated due to misplacing or missing punctuation marks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>The details of the question datasets are as following:Number of questions in training dataset: 2533 (all correct)  Number of questions in testing dataset: 634  Number of correct questions in testing dataset: 334  Number of incorrect questions in testing dataset: 300 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>( 8 )</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>𝐶 𝑑 = (2 -(𝐸 𝑠𝑦 + 𝐸 𝑠𝑚 )) × 50PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Accuracy comparison of the four n-gram approaches</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science question problem.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>shows the share of the number of PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Returned similar questions belonging to different topics by Jaccard similarity</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. The proposed framework for correct question suggestion to the learner</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>( 9 )</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>𝑆𝑜𝑓𝑡_𝑐𝑜𝑠𝑖𝑛𝑒(𝑝,𝑞) = ∑ 𝑁 𝑖,𝑗 𝑆 𝑖𝑗 𝑝 𝑖 𝑞 𝑗 ∑ 𝑁 𝑖𝑗 𝑆 𝑖𝑗 𝑝 𝑖 𝑝 𝑗 ∑ 𝑁 𝑖𝑗 𝑆 𝑖𝑗 𝑞 𝑖 𝑞 𝑗</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Algorithm 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Finding the correct question as per learner intent Incorrect question Wq Input: Corpus crp Output: The intended question</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 11 . 4 . 4</ns0:head><ns0:label>1144</ns0:label><ns0:figDesc>Figure 11. Comparison between cosine and soft cosine A) orientation between vectors in cosine similarity B) orientation between vectors in soft cosine measure</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>3Label 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>https://flask.palletsprojects.com/en/1.1.x/ concepts[] = get_concept(Wq) Selected_question[] = search_question(crp, concepts) Similar_correct_question[] = soft_cosine_similarity(Selected_question, Wq)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. The web (client/server) model used to implement the proposed framework</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Comparing the correct question recommendation based on three similarity metrics A) Soft-cosine B) Cosine C) Jaccard</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head></ns0:head><ns0:label /><ns0:figDesc>A is the number of observations made where the learner finds the correct intended question.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Accuracy comparison for similar question recommendation of three similarity measures</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 1 Layout</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,166.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,250.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,525.00,217.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,240.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,525.00,128.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,419.25,125.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,178.87,525.00,175.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,178.87,525.00,351.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,178.87,525.00,179.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,178.87,525.00,159.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,178.87,525.00,213.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='49,42.52,178.87,525.00,117.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Similar questions returned by Jaccard similarity for the learner question 'what different are interface implement'</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Suggested similar questions from the first iteration (pass 2)</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>metric Value Performance measure Value</ns0:head><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b56'>56]</ns0:ref> </ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='5'>Unigram interface? Unigram probability True positive Bi-gram Accuracy Proposed approach (%) Result of [31] (%) Bigram probability 282 Accuracy 0.9211 92.11 83.33 Tri-gram 52 What is the interface of legacy? probability Tri-gram</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>what 23 What interface must an object implement 0.069 what different 0.034 False positive 18 Precision Precision 94.00 53 What is different between iterator and what different type 0.294 0.9400 94.11</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>different 0.007 True negative different type 0.157 302 Recall Recall 89.80 before it can be written to a stream as an</ns0:cell><ns0:cell>different type operator 0.117 0.8980 80.00 listiterator?</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>type</ns0:cell><ns0:cell cols='4'>0.008 False negative type operator 0.023 32 F1-Score F1-Score 91.88 object?</ns0:cell><ns0:cell>type operator use 0.9188 86.48</ns0:cell><ns0:cell>0.333</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell cols='3'>operator 0.006 24 What is applet stub interface? operator use</ns0:cell><ns0:cell>0.067</ns0:cell><ns0:cell>operator use Java 54 What are different collection views 0.166</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>use</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>use Java</ns0:cell><ns0:cell>0.024</ns0:cell><ns0:cell>provided by map interface?</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>Java 25 How interface is different from a class. 0.042</ns0:cell><ns0:cell>55 What is comparable and comparator</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>interface?</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>26 What is an interface?</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>56 What will happen if one of the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>members in the class doesn't</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>implement serializable interface?</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>29 What is interface?</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>57 What is serializable interface in Java?</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>30 How interface is different from class?</ns0:cell><ns0:cell /><ns0:cell>58 What is externalizable interface?</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>1 Performance 1 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Recommended list of question and their topic retrieved using Jaccard similarity for the incorrect input question 'access modifier in Java' Why java is important to the internet 1(0.81) 1(0.66) 1(0.66) I S : Insufficient information E G : Grammatical error E S : Sequential error NF: Not found</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54966:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:note place='foot' n='1'>https://www.nltk.org/</ns0:note>
<ns0:note place='foot' n='2'>http://www.speech.sri.com/projects/srilm/manpages/ngram-format.5.html</ns0:note>
</ns0:body>
" | "10th April, 2021
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the minor issues as per the reviewer’s comments.
We thank you and all the reviewers for the generous comments on the manuscript and valuable suggestions for improvement.
We have edited the manuscript appropriately to address all the concerns. The point-by-point response to the reviewer's comments is given below in a tabular format.
We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
Best regards,
Pijush Kanti Dutta Pramanik
Dept. of Computer Science & Engineering
National Institute of Technology, Durgapur, India
(On behalf of all authors)
Response to Reviewer’s Comments:
Reviewer 1:
Reviewer comments
Response
Basic reporting
No comment
NA
Experimental design
No comment
NA
Validity of the findings
No comment
NA
Comments for the Author
Paper format:
Text alignment seems to be off.
As per PeerJ guidelines, the texts are left aligned.
The references citation doesn't appear in order, it seems like the ones missing were just injected in the middle without considering the sequence although it is considered in the rest of the paper.
Corrected.
In the appendix, table 4 caption says 'Table 1. Performance measures of the proposed approach'.
Corrected.
Sub-sections title font like the one in line 122 (1.3.1) seems to be too small and unrecognizable which makes it hard to follow through and identify the beginning of the section.
Updated.
Content:
I still believe the paper lacks consideration for future work to improve the proposed framework or other further work recommended for e-learning experiment enhancement.
The other possible applications of the proposed work and further research scope are briefly mentioned in the Conclusion section.
Language:
Minor grammatical mistakes like missing articles such as in line 333 (an error instead of error), 343 (should be a/the user instead of user), line 906 (the same instead of same), 905 (the corpus instead of corpus), and many others.
Thanks for identifying the mistakes. All are resolved. The manuscript is further revised for possible grammar mistakes.
Grammatical mistakes like subject/verb disagreement like in line363 (should be the system gives instead of giving) Also (this puts instead of put.
Line 966 says 'Questions or questions', I believe that is meant to be 'questions and answers'. Also, line 968 says question or question, not sure if that's intended but it doesn't make sense to me
Wrong usage of [prepositions such as in line 883 (should be along with instead of along to).
" | Here is a paper. Please give your review comments after reading it. |
95 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Technological developments have a significant effect on the evolution of smart devices, whose extensive capabilities have made their use widespread. The Android operating system is preferred on smart devices because of its open source structure, which also makes it the main target of malware. Advances in Android malware hiding and detection-avoidance methods have rendered traditional malware detection methods inadequate.</ns0:p><ns0:p>Methods. In this study, AndroAnalyzer, a model that uses static analysis and deep learning, is proposed. Tests were carried out on an original dataset consisting of 7622 applications. In addition, tests were conducted with traditional machine learning techniques on the same feature vectors to compare them with the deep learning method.</ns0:p><ns0:p>Results. An accuracy of 98.16% was reached, outperforming traditional machine learning techniques. Recall, precision and F-measure values were 98.78, 99.24 and 98.90, respectively. The results showed that deep learning models using trace-based feature vectors outperform current state-of-the-art approaches.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>In recent years, smart devices have become the main means of communication among people. Phones that once offered only voice communication have now become smart devices. This rich technological equipment leads users to rely on these devices more every day <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. In the past, phones were used to send SMS messages and make calls; today they are used in many areas, primarily for web services, but also as a camera, music player and tablet PC. These devices are equipped with advanced memory, processing power and various sensors, just like computers, which makes them very convenient to customize <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. An operating system is required to use a smart device. At present, different operating systems such as iOS, Android, Windows and BlackBerry OS are available for smart devices <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref>. Android offers its users rich media support, an optimized graphics infrastructure and powerful browser support. Along with support for different APIs, it also provides features such as sensor access and real-time map services. The fact that the Android operating system offers such a wide range of free features as open source has made it widely preferred by users <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. Applications that give users additional features can be developed and distributed both via the Play Store and through third-party channels <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref>. Since these applications provide free benefits to users, Android has become the target of malware developers <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. Android applications can be repackaged by adding hidden malicious code to the files in the binary package, so users are likely to be exposed to these undesirable situations without even realizing it. For this reason, researchers are trying to develop malware detection tools that detect such activities and reveal applications prepared for malicious purposes. In the Android operating system, to use an application on a smart device, the application is first downloaded from the relevant platform, the permissions required for the application to run are then accepted, and finally the installation is performed. It is not possible to install and run an application without accepting all the permissions it claims to require. Many malicious developers turn this situation to their advantage and request permissions that the application does not need, such as camera access, access to text messages or reading of private information. Many Android users who do not have sufficient computer knowledge accept these permissions and install the applications, unaware of the malicious purpose, and thus become clear targets of malicious activity <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. Various techniques such as static and dynamic analysis, and derivatives of these techniques, have been proposed for detecting malware and protecting end users <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. 
Static analysis, which is signature-based, relies on observations about the application and its expected behavior that are presented explicitly and implicitly in the binary source code. It is a fast and effective method; however, developers use other methods, such as code hiding or detection bypass, to circumvent this analysis. Dynamic analysis, also called behavior-based analysis, is the collection of information about the application at runtime, such as system calls, network access requests, and changes to files and memory, with the application installed in a real environment or in a sandbox <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref>. Selecting meaningful features from Android applications using static analysis and modelling them well enables the development of a powerful malware detection tool. Thus, unlike dynamic analysis approaches, applications do not have to be downloaded to devices, even temporarily, for detection. The method suggested in this study is a new approach whose purpose is to learn the permissions requested by applications with the proposed model and thereby detect the malicious activities of new applications. The recommended method can be used for applications produced for all Android versions, including Android 11 (API 30). The following improvements have been made in this study, with the aim of detecting malware more accurately. Contributions of this work are:</ns0:p><ns0:p>• Development of an advanced deep learning-based network for analyzing and detecting malware for all Android versions. • Preparation of dedicated malicious and benign application datasets for training the network structure. • Running this process before installation, so that users are not exposed to any dangerous activity, even for a short time. • Warning users about the detected malware and presenting a model that, through continuous learning, makes increasingly accurate detections and keeps False Positive (FP) and False Negative (FN) values to a minimum.</ns0:p><ns0:p>The rest of this article is organized as follows. Section 2 reviews recent studies performed between 2018 and 2020 using static, dynamic and machine learning techniques. Section 3 describes the methodological infrastructure of the proposed model, the pre-processing steps, the Android application structure and the details of the dataset prepared specifically for the tests. The evaluation scheme, experimental parameters and performance results of the proposed method are presented comparatively in Section 4. Section 5 gives an overall evaluation of the study and recommendations for future work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>LITERATURE REVIEW</ns0:head><ns0:p>In this section, recent studies related to Android malware detection, feature generation and selection, and static, dynamic and machine learning approaches are discussed.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.1.</ns0:head><ns0:p>Static Analysis Using static analysis to determine whether Android applications are malicious is based on inspection of the application code, and it remains popular. With the static analysis approach, solutions have been produced using permissions, API calls, command systems and features related to the purpose of use. Although static analysis approaches allow more comprehensive code inspection, malware developers are able to use different techniques to avoid static analysis and to hide malicious code; data encryption, obfuscation, update attacks and polymorphic techniques are examples of such hiding techniques <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. DAPASA is a graph-based malware detection tool that calculates the sensitivity of API calls using a data mining technique called TF-IDF <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. Detection is performed based on two assumptions that indicate how sensitive API calls are invoked. In <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref> a model was proposed to detect malicious software by analysing the permissions requested by the mobile application. In the first stage, statistics on the permissions requested by most of the malware were produced and their usage intensities were determined. In the second stage, the use of the permissions with high usage intensity was investigated in order to determine malicious behaviour. In <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> a code-analysis-based malware detection tool was developed to determine whether permissions are requested by the application and whether these permissions are actually used. Classification was made according to a statistically determined threshold value, and a 92% success rate in Android malware detection was achieved.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b19'>[18]</ns0:ref> a study was presented based on extracting properties including the permissions, intents and API calls of applications and classifying these extractions with K-nearest neighbors (KNN) derivative algorithms. It is a similarity-based approach, and malware detection was achieved with an average accuracy of 90% with the proposed algorithm. AppPerm analyzer <ns0:ref type='bibr' target='#b20'>[19]</ns0:ref> is a malware detection tool based on examining the code together with the manifest file, creating double and triple permission groups and determining the risk scores of the applications accordingly. It achieved 95.50% in TP value and 96.88% in specificity.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.2.</ns0:head><ns0:p>Dynamic analysis Dynamic analysis is another method used to detect security vulnerabilities in Android applications <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. It involves a more complex process than the static analysis approach. Since dynamic analysis is based on tracking the behavior of the application at runtime, it is not easy for malicious application developers to evade this type of analysis. Researchers often rely on the dynamic analysis approach to overcome the problems they encounter with static analysis. Many studies propose a dynamic analysis model for Android malware detection; a number of current studies are mentioned in this section. MADAM is a malware detection tool that uses a signature- and behavior-based detection approach. It uses different properties to evaluate and classify applications; features at four levels (application, kernel, user and package) were extracted and used in the study <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, a malware detection mechanism based on tracking behavioral system call traces was proposed. Malicious activities were predicted by examining the frequency of behavioral system calls with a previously trained classifier. Both static and dynamic analysis approaches have their advantages and disadvantages: static analysis is capable of finding malicious components, while dynamic analysis is capable of observing the application's behavior at runtime. For this reason, some studies suggest using both methods together in order to benefit from the strengths of each <ns0:ref type='bibr' target='#b16'>[16,</ns0:ref><ns0:ref type='bibr' target='#b17'>17]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.3.</ns0:head><ns0:p>Machine learning Machine learning is the approach of allowing algorithms to learn the necessary parameters from data in order to predict whether Android applications are malware. Machine learning techniques <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>, which are successfully applied to many problems today, have also been applied to the field of mobile security, and in this study they are applied together with deep learning. In this section, some recent studies that use machine learning techniques are mentioned; successful results were obtained in all of them. AspectDroid <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> is a study that monitors suspicious activities such as dynamic class loading at runtime and allows them to be reviewed afterwards by writing them to log files. Activity tracking code is added to the source code of the applications for this purpose. NTPDroid <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> is another study using Android permissions and network traffic-based features. Its aim is to determine the probability of malicious behaviour, and it reduced the FP value and thus the rate at which benign applications are evaluated as malicious.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> an approach was proposed for the detection of malware on the Internet of Things. The in-depth analysis of malware is based on visualization as color images and classification with a convolutional neural network (CNN). According to the experimental results, it produces more successful results than machine learning and traditional static and dynamic analysis approaches.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> a hybrid malware detection model was created for Android devices. In this model, features obtainable by static analysis, such as requested permissions, used permissions, hardware components, intent filters and suspicious system calls, were first extracted. In addition, network usage and file read-write statistics were obtained by dynamic analysis of the application. Applications described by these extracted features were classified with the support vector machine (SVM) technique. In <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> a deep learning-based detection method using system call sequences and an LSTM-based classifier was proposed; a 96.6% recall value was obtained with a 9.3% FP value. In <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> a model using permission patterns (required and used permissions) extracted from Android applications was proposed. While achieving a classification performance of up to 94%, an FP value of 5% and an FN value of 1% were obtained. In <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> a model was proposed that extracts features by static analysis from the manifest files of APK files, selects them with KNN and classifies them with SVM. With the proposed method, a TP rate of 70% and above and an FP value close to zero were obtained; thanks to feature selection with KNN and classification with SVM, recognition performance close to that of deep learning classification models was achieved. In <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> sensitive API calls, permissions, permission usage rates and properties obtained from system events were extracted, and an ensemble Rotation Forest classification model was proposed. 
As a result, accuracy of 88.26%, sensitivity of 88.40% and precision of 88.16% were obtained, an improvement of 3.33% over classification with SVM. In <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> a CNN-based network model was proposed to detect malware attacks on the Internet by visualizing them as colour images; improved performance against cyber security threats was obtained in the models where CNNs were used. In <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> the AutoDroid tool, which automatically detects malware according to API calls extracted using static analysis, was developed. The system was built using different types of deep neural networks (deep belief networks, etc.); with the DBN-based design, a 95.98% success rate was achieved in tests performed using 2500 benign and 2500 malicious applications. Since Android is an open source and extensible platform, it allows us to extract and use as many application features as we want. The method proposed in this study has a robust and scalable design and uses a deep learning structure. In this manner, the method achieves successful detection together with low resource consumption, and it is more successful than current deep learning-based Android malware detection tools. Moreover, the study is based on real devices rather than emulators. Thanks to the 349 features extracted from the applications in a dataset consisting of 7622 applications in total, more successful results were obtained compared with existing models. It is a study that extensively investigates Android malware on real devices and comparatively evaluates different methods to measure the impact of the code.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>MATERIAL AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1.'>Methodology</ns0:head><ns0:p>The studies mentioned in the second section showed that there are problems with both the static analysis approach and dynamic analysis. In the model proposed in this study, static analysis and machine learning techniques were used together for malware detection. In this manner, it has been possible to classify with a better success rate and to create a safer Android environment for users. The whole process is carried out without the APK files being installed on the user's mobile device. The flow chart of the proposed model is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>. In the first stage, the application datasets were created. Both malicious and benign datasets are needed for training the model. For some of the applications used in creating these datasets there may be problems accessing the source code, and for others there may be problems accessing a readable manifest.xml file. For this reason, these applications are first identified and removed from the dataset before the feature extraction stage. This process is applied to both malicious and benign applications. After this preparation, the application Java code and application package files must be accessed in order to obtain the properties of the applications through static analysis. In the second stage, these operations were performed using the Aapt, dex2jar, Xxd and jdcli.jar tools. In this manner, access was provided to the Java code files of the applications and to the manifest file containing the permissions, intent filters and manufacturer information. Then, a word-based parsing operation was performed on the Manifest.xml file, and feature vectors containing meaningful information about the application, to be used for training, were obtained. At this point, separate programs/frameworks are used to access the application code and the manifest.xml file, because while the manifest file can be accessed with the aapt library, it is not possible to access the Java code with the same program; similarly, the Java code can only be accessed with dex2jar, xxd or jdcli.jar. The features used in the deep learning model were extracted by reading data from both the code and the manifest file, and a vector is produced from these features. The classification of test applications was made with the model obtained as a result of the training. Details of the classification success are given comparatively in section 4.</ns0:p></ns0:div>
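For readers who wish to reproduce the extraction stage, the sketch below shows one possible way to drive the tools mentioned above from Python. It assumes aapt (from the Android SDK build tools) and dex2jar are installed and on the PATH; the file names are placeholders and this is not the author's original script.

```python
# Simplified sketch of the static-extraction step (aapt + dex2jar assumed installed).
import re
import subprocess

def extract_permissions(apk_path):
    # 'aapt dump permissions' lists the permissions declared in AndroidManifest.xml
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    return sorted(set(re.findall(r"android\.permission\.[A-Z_0-9]+", out)))

def dump_manifest(apk_path):
    # Human-readable manifest tree (components, intent filters, sdk versions, ...)
    return subprocess.run(["aapt", "dump", "xmltree", apk_path, "AndroidManifest.xml"],
                          capture_output=True, text=True, check=True).stdout

def decompile_to_jar(apk_path, jar_path):
    # dex2jar converts the classes.dex inside the APK into a .jar for code inspection
    subprocess.run(["d2j-dex2jar.sh", apk_path, "-o", jar_path], check=True)

# e.g. perms = extract_permissions("sample.apk")  -> ['android.permission.INTERNET', ...]
```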
<ns0:div><ns0:head>3.2.</ns0:head><ns0:p>Android Application Basics Android applications are mainly written in the Java language and then compiled, together with data and resource files, into an archive file called an Android application package (APK). The APK is the file that is distributed in the application market and used to install the application. There are four different types of application components: activities, services, broadcast receivers, and content providers. Communication among these components is provided by a messaging object called an intent. On Android, applications must declare all of their components in an XML manifest file within the APK. Intent filters define the limits of each component's capabilities and are included in the package. Additional information declared in the XML file includes the user permissions required by the application (CALL_PHONE, SEND_SMS, etc.), the minimum API level, and the hardware and software features to be used by the application (GPS, camera, multi-touch screen) <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. As shown in Fig. <ns0:ref type='figure'>2</ns0:ref>, in addition to the XML file, the APK contains one or more dex files holding all classes that will run on their own virtual machine (the Dalvik Virtual Machine), a lib folder containing files compiled specifically for the processor architecture, application certificates, resource lists, a META-INF folder which contains the SHA-1 digests of these resources, and a resources.arsc file that contains the precompiled resources.</ns0:p></ns0:div>
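Because an APK is simply a ZIP archive, the package layout described above can be inspected directly with the Python standard library; the file name below is a placeholder.

```python
# An APK is a ZIP archive, so its layout (AndroidManifest.xml, classes.dex,
# META-INF/, lib/, resources.arsc, ...) can be listed directly.
import zipfile

with zipfile.ZipFile("sample.apk") as apk:   # illustrative file name
    for name in apk.namelist():
        if name.endswith((".dex", ".xml", ".arsc")) or name.startswith(("META-INF/", "lib/")):
            print(name)
```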
<ns0:div><ns0:head>3.3.</ns0:head><ns0:p>Dataset Description One of the biggest challenges in Android malware detection research is that it is not easy to access an up-to-date, reliable data set that is large enough. In this study, the Drebin <ns0:ref type='bibr' target='#b37'>[35]</ns0:ref> and Genome <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref> malicious application datasets were used to obtain the malicious application set. The Drebin dataset contains 5560 malicious applications. It was created between August 2010 and October 2012 and contains applications from 179 different families <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. The benign application dataset, on the other hand, was created specifically for this study. The benign data set contains applications from different categories such as books and reference, communication, finance, music and entertainment, news and magazines, medio, sports, tools, transportation, and weather; a homogeneous distribution was sought.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> shows the details of the data set used for the study. Applications labelled as unknown were not used in this study. Of the 6739 applications in the Drebin and Genome datasets, 6661 were determined to be malicious and were used in the training processes. For the benign dataset, 961 out of 1073 applications were determined to be truly benign and were used in this study. Benign refers to applications identified after verifying that they are not malicious and do not have potentially unwanted functionality. To verify this, the website https://www.virustotal.com <ns0:ref type='bibr' target='#b38'>[36]</ns0:ref>, which aggregates more than one antivirus program and can scan simultaneously with different engines, was used. Producing this kind of benign label is a difficult and costly process. Malicious applications, on the other hand, are applications that pose a potential danger to users. This application set was tagged automatically, as it was taken from the Drebin and Genome datasets, which have been used in many previous studies. Unknown is the label given to instances assigned to neither the malicious nor the benign tag. These samples could not be processed and could not be examined manually by accessing their code. All existing samples were first placed in this category and, after examination, were transferred to the other two labels (benign, malicious). Most of these applications can be expected to be harmless, but some of them are likely to have malicious purposes that could not be detected. For this reason, they were not directly included in either of the other two groups.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4.'>Feature Extraction and Preprocessing</ns0:head><ns0:p>To develop an effective Android malware detection model, it is critical to obtain robust and broadly representative features such as user permissions, manufacturer and user information, intent filters, process names and binary files. This information makes it possible to detect malware. In this study, the source code and binary files of the applications were obtained by reverse engineering, and the application APK files were converted to Java code. The xxd tool was used to extract the dex files from the APK files. Access to the Java source code of the applications was provided with the Dex2jar and jdcli.jar libraries. In addition, the aapt dump AndroidManifest.xml command offered by the Android SDK was used to obtain the XML files. The keywords and permissions obtained from the AndroidManifest.xml file were used in the training of the deep learning model proposed in this study. A total of 349 features were extracted and used in model training.</ns0:p></ns0:div>
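A minimal sketch of how the extracted permission and keyword strings could be turned into the binary feature vector used for training is given below; the short FEATURES list is a placeholder standing in for the study's full set of 349 features.

```python
# Sketch: turning extracted manifest strings into a binary feature vector.
import numpy as np

FEATURES = ["android.permission.INTERNET", "android.permission.SEND_SMS",
            "android.permission.READ_CONTACTS", "android.intent.action.BOOT_COMPLETED"]
# ... in the real setting this list would hold all 349 selected permission/keyword features

def to_feature_vector(extracted_strings, features=FEATURES):
    present = set(extracted_strings)
    return np.array([1 if f in present else 0 for f in features], dtype=np.float32)

x = to_feature_vector(["android.permission.INTERNET", "android.permission.SEND_SMS"])
print(x)   # -> [1. 1. 0. 0.]
```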
<ns0:div><ns0:head n='4.'>Results</ns0:head><ns0:p>In this section, the testing process of the proposed model is explained in detail, covering both the search for the best deep learning structure and its comparison with other classification methods and similar studies. Deep learning classifier results: performance measurements according to the number of hidden layers in the deep learning model are given in Table <ns0:ref type='table'>2</ns0:ref>. Results were obtained using an input vector containing 349 input parameters. Deep learning models with 2, 3, 4 and 5 layers were used in 7 different test configurations, with the aim of creating the best performing deep learning model. Although the results were very close to each other, the best results were achieved with a 4-layer model containing 300, 300 and 300 neurons. An average of 1 minute 49 seconds was required for this training, and 286202 parameters were generated. Results were obtained on data divided into an 80% training and 20% test set, and training was run for 50 epochs. Both recall and precision values were above a certain level, which indicates that the model is not one-sided (detecting only malicious or only benign applications) but is successful in both cases. In classification problems, when the class sizes are not evenly distributed, simply measuring the accuracy of the model is often inadequate. For this reason, the performance of the proposed model has been analyzed with precision, recall and f-measure values. The precision value was 99.24%. In malware detection, labelling benign applications as malicious can cause serious problems, so a high precision value shows that the model keeps FP markings low. In addition, an even better recall value of 98.78% was achieved, which shows that the model also gives good results in detecting malware. The f-measure, in which precision and recall are evaluated together and unbalanced class distributions can be observed, was 98.9%. Successful results were thus obtained on this measure, in which all costs are evaluated. On the other hand, quite successful results were produced even with the much simpler 2-layer neural network with 50 neurons; depending on the intended use, a simpler neural network could therefore be used. In this study, however, details are reported for the model with the highest values. When the tests were performed with a 70% training and 30% test split, the precision, recall, accuracy and f-measure values were 0.979, 0.992, 0.980 and 0.986, respectively. Compared with the tests performed with the 80% training and 20% test split, this is a decrease of about 1% in some measurement metrics, which shows that increasing the number of applications assigned to the training set leads to more successful classification of the tested applications.</ns0:p></ns0:div>
<ns0:div><ns0:p>The scheme of the model with the best results is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. According to this scheme, 349 parameters obtained from the features of the mobile applications are given as inputs, and a binary result is produced by 1 input, 3 hidden and 1 output layer. The complexity of the model is moderate and it completes the training process quickly. The input vector with 349 features is reduced to 300 in the first layer, training is carried out with 300 neurons per hidden layer up to the output layer, and the output layer is reduced to 2 units for binary classification. The model includes 286202 trainable parameters. ReLU was used as the activation function in the hidden layers, and softmax was used as the activation function in the output layer. The optimizer used in the error back-propagation phase is Adam. Increasing the complexity of the model slowed down the learning process; however, it did not provide a noticeable increase in classification performance. For this reason, a model with more hidden layers and more neurons was not designed. In addition, considering that this model will work on devices with limited resources, such as mobile devices, it was thought beneficial to work with simpler models. Fig. <ns0:ref type='figure'>4</ns0:ref> shows the change in accuracy on the training and test data over 50 epochs. As can be seen from the graph, the proposed method overcomes the over-fitting problem. After roughly the first 10 epochs the model had effectively reached a certain level, but the results in both the training and testing phases were not yet stable; for this reason, the number of epochs was gradually increased with the aim of reaching more stable results. The ROC curve is shown in Fig. <ns0:ref type='figure'>5</ns0:ref>. The area under the curve (AUC) quantifies the quality of the classification; a value of 0.9 and above is generally regarded as an excellent classification <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>, and this value was found to be 0.9515 in this study. This result shows that the model performs a very successful classification in malware detection. The confusion matrix of the test is shown in Fig. <ns0:ref type='figure'>6</ns0:ref>. High classification success was achieved in the TP and TN values. High FP and FN counts, by contrast, would indicate a very dangerous situation for end users: they would cause users to avoid some useful and safe applications for no reason and, even worse, put users at risk because some malicious applications would be considered safe. In this study, FP and FN outcomes occurred in only 29 of the total 1525 tests performed, which demonstrates the success of the model.</ns0:p></ns0:div>
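For clarity, the following is a minimal Keras sketch of the architecture described above (349 inputs, three hidden layers of 300 ReLU units, a 2-unit softmax output, Adam optimizer). It is an illustrative reconstruction rather than the author's code; the data-loading lines are placeholders. Note that these layer sizes reproduce the 286202 trainable parameters reported in the text.

```python
# Minimal sketch of the 349-300-300-300-2 network (not the author's original code).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(349,)),               # one binary feature per permission/keyword
    layers.Dense(300, activation="relu"),
    layers.Dense(300, activation="relu"),
    layers.Dense(300, activation="relu"),
    layers.Dense(2, activation="softmax"),   # benign vs. malicious
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # reports 286,202 trainable parameters, matching the text

# Placeholder usage (X: (n_samples, 349) binary matrix, y: 0 = benign, 1 = malicious):
# from sklearn.model_selection import train_test_split
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))
# y_prob = model.predict(X_test)[:, 1]
# from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score
# y_pred = (y_prob >= 0.5).astype(int)
# print(precision_score(y_test, y_pred), recall_score(y_test, y_pred),
#       f1_score(y_test, y_pred), roc_auc_score(y_test, y_prob))
```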
<ns0:div><ns0:head>4.4.</ns0:head><ns0:p>Comparison of the performance of the Deep Learning with other Machine Learning Algorithms In this section, the accuracy of the proposed deep learning model is compared with the results of traditional machine learning algorithms. Nine different classification algorithms, all widely used techniques, were selected as a result of several pre-tests and examinations. As shown in Table <ns0:ref type='table'>3</ns0:ref>, the deep learning model gave better results than the other classification models. Although the results were generally close to each other, the overall results of the deep learning model were more successful: among the other classification algorithms the weighted F-measure reached at most 0.9223, while it was 0.9890 for the deep learning model. Apart from the deep learning model, the most successful classification algorithms were ExtraTree, Random Forest (RF) and SVM. Achieving successful results with both the machine learning algorithms and the deep learning model shows that application features obtained with static analysis can produce quite successful results in detecting malware.</ns0:p></ns0:div>
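A sketch of how such a baseline comparison could be run on the same 349-dimensional feature vectors is given below. The classifier list mirrors the strongest baselines named above; the hyper-parameters are scikit-learn defaults and are not necessarily those used in the study.

```python
# Sketch of the traditional machine learning baselines on the same feature vectors.
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

baselines = {
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "SVM": SVC(kernel="rbf"),
}

# X, y as in the deep learning sketch above (placeholders):
# for name, clf in baselines.items():
#     scores = cross_val_score(clf, X, y, cv=5, scoring="f1_weighted")
#     print(name, scores.mean())
```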
<ns0:div><ns0:head n='4.5.'>Discussion</ns0:head><ns0:p>The comparison of the model proposed in this study with studies using similar deep learning or machine learning techniques is shown in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref>. It was observed that artificial intelligence modelling was used in almost all studies in 2019 and 2020. One of the main distinguishing differences among these studies is the dataset used, and the second is the feature vectors obtained from the applications in that dataset. In some studies, only the static properties obtained from the manifest.xml file are used, while in other studies intent filters, activities, services, and API calls are also used. The richness and homogeneity of the dataset is another factor with a direct effect on the results. In addition, the use of different classification methods contributes to the differences among the results. When the results were evaluated comparatively according to similar parameters, it was seen that the model proposed in this study produces successful results with respect to other studies. While the results were much better than those of some studies, the improvement over studies using similar modelling was obtained with slight differences. Very good performance results were obtained compared to studies with similar dataset sizes. This shows that existing classification performance values were taken one step further with the model proposed in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Conclusions</ns0:head><ns0:p>The Android platform is the target of malicious mobile application developers and black hat hackers. Many antimalware tools aim to fight against these applications and protect users. In this study, a model for Android malware detection was proposed. Models with high classification accuracy are needed in the development of such detection tools. In these structures, there are two stages: selection of the features that best represent the problem, and classification with high accuracy. In this study, a deep neural network structure with 3 hidden layers was proposed for classification, with the permissions requested by applications used to represent them. Applications were reverse engineered to obtain feature vectors, and a vector containing 349 features was obtained. The permissions requested by applications are among the most important parameters that reveal their purpose. The features used in this study were obtained by the static analysis method, which has the advantages of low cost, high efficiency, and low risk. A total of 6661 malicious samples taken from the Drebin and Gnome datasets and 961 originally collected benign application samples were used. In the experimental results, a 0.9924 precision value, 0.9878 recall value, 0.9816 accuracy, and 0.9890 F-measure value were obtained. The results showed that Android permissions contain very good information for understanding the purposes of applications. The high classification performance obtained with this dataset showed that the deep learning structure combined with Android permissions is an approach that could be adopted in the development of malware detection tools.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The model proposed in this study can easily be developed further, and better performance results can be obtained with feature vectors that also use the properties obtained by dynamic analysis. This will be part of my future work.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>Overview of the proposed model for Android Malware Detection</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56293:1:1:NEW 25 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>4.1.</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Experimental setup and parameters. In order to measure the performance and efficiency of the deep neural network model proposed in this study, experiments with different parameters were conducted. A laptop with a CORE I7 LZ5070 CPU and 8GB RAM was used for training the proposed model. The Windows 10 64-bit operating system and an x64-based processor were used to create the presented malware detection model. The training time of the proposed model varies depending on the complexity of the deep learning model, and it has 0.2MB/sec memory usage. Python with the scikit-learn, pandas, and numpy packages was used for the experiments, and the proposed model was evaluated with TensorFlow. Many experiments were carried out to create the most successful model in classifier design for malware detection with a deep learning model. An 80%-20% split was made to use the model with 349 features in the training and testing stages, respectively. After this separation, under-sampling or over-sampling procedures were not used to balance the training data. In addition, although many permissions available on the Android OS are not used in most applications, no feature selection process was performed, to ensure objectivity in future tests. In the end, the original feature set and the original data vector were used in the training and testing phases without any synthetic data generation or feature selection. In order to obtain the most successful DNN model, many different DNN models with different numbers of layers and nodes in each layer were created and tested; thus, the best model was obtained. The Softmax function is used in the output layer, while ReLU is used as the activation function in the input and hidden layers. The Adam function is preferred for optimization in the output layer. Since the model completed the learning process in approximately 50 epochs, the training stage was terminated at this point. As a result, best-practice examples and the best values obtained as a result of numerous tests were used in the selection of parameters. 4.2. Performance measure. The aim of this study is to create a deep learning-based model for classifying malicious and benign applications and detecting malicious applications. In our experiments, our deep learning models were trained on a binary classification problem, benign versus malicious. The created model included an original deep learning architecture. The effectiveness of the model was evaluated and demonstrated by creating a confusion matrix. As a result of these tests, the performance values of different popular machine learning techniques were compared with the proposed model. In addition, different test sets were created and the results were observed in a repeated manner. The results of these tests are given in detail in section 4.4 according to the performance measurement methods given below. The true positive ratio (1) (also named recall), the true negative ratio (2), false positive ratio (3), false negative ratio (4), and the precision value (5) are calculated as follows: TP stands for the number of true positive samples, TN for the number of true negative samples, FP for the number of false positive samples, and FN for the number of false negative samples. The precision value P is the ratio of applications classified as malicious that are truly malicious. The F-measure is computed according to equation (6) separately for both the malicious and the benign classes, and these two values are combined according to the weighted FM equation (7): F-measure (FM) = (2 * recall * precision) / (recall + precision) (6); WFM = (F_benign * N_benign + F_malware * N_malware) / (N_benign + N_malware) (7).</ns0:figDesc></ns0:figure>
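The following is a minimal sketch of these metrics in Python (the language used for the experiments); the helper names are illustrative, and `y_true`/`y_pred` are assumed to be 0/1 arrays with 1 denoting malware.

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Return TP, TN, FP, FN counts for 0/1 labels (1 = malware)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp, tn, fp, fn

def f_measure(precision, recall):
    # equation (6)
    return 2 * precision * recall / (precision + recall)

def weighted_f_measure(f_benign, n_benign, f_malware, n_malware):
    # equation (7): class F-measures weighted by the number of samples per class
    return (f_benign * n_benign + f_malware * n_malware) / (n_benign + n_malware)
```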
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>to observe the best results in permission-based classification. The ROC curve is used to measure the malware detection rate. It shows the effect of the learning model on the change in the malware detection rate and on the increase or decrease of the false positive value. The curve shows the trade-off between the TP rate and the FP rate; an increase in one value brings a decrease in the other. The fact that the ROC curve is close to the upper-left corner, as shown in the figure, indicates that the model gives the best results.</ns0:figDesc></ns0:figure>
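A minimal sketch of how such a ROC curve and the reported AUC could be produced with scikit-learn is given below; `model`, `X_test`, and `y_test` are assumed to come from the training step, and the plotting details are illustrative rather than the authors' exact figure code.

```python
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

probs = model.predict(X_test)[:, 1]          # softmax probability of the malware class
fpr, tpr, _ = roc_curve(y_test, probs)
auc = roc_auc_score(y_test, probs)           # reported as 0.9515 in the text
plt.plot(fpr, tpr, label=f"AUC = {auc:.4f}")
plt.plot([0, 1], [0, 1], linestyle="--")     # chance line for reference
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```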
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56293:1:1:NEW 25 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,333.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The comparison of classification performance among former methods and proposed</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Similar Works Selected Features</ns0:cell><ns0:cell>Num of</ns0:cell><ns0:cell>Num of</ns0:cell><ns0:cell>Num of Neurons</ns0:cell><ns0:cell cols='3'>Precision Recall Accuracy</ns0:cell><ns0:cell>F-measure</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Benign</ns0:cell><ns0:cell>Malwar</ns0:cell><ns0:cell>or Classification</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Apps</ns0:cell><ns0:cell>e Apps</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ASAEF[24]</ns0:cell><ns0:cell>Metadata,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='4'>37224 33259 N-gram, signature 96.4%</ns0:cell><ns0:cell cols='2'>96.1% 97.2%</ns0:cell><ns0:cell>96.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>permissions, intent</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>filter,</ns0:cell><ns0:cell cols='2'>activity,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>services</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>FingerPrinting</ns0:cell><ns0:cell cols='2'>Family DNA</ns0:cell><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>928</ns0:cell><ns0:cell>Signature</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell>85%</ns0:cell></ns0:row><ns0:row><ns0:cell>[23]</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DroidChain[2</ns0:cell><ns0:cell cols='2'>Permissions,</ns0:cell><ns0:cell>API</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1260</ns0:cell><ns0:cell>Warshall</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>92%</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>N/A</ns0:cell></ns0:row><ns0:row><ns0:cell>2]</ns0:cell><ns0:cell>Call,</ns0:cell><ns0:cell cols='2'>Behaviour</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Chain</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Shhadat [21]</ns0:cell><ns0:cell cols='3'>Heuristic Strategy,</ns0:cell><ns0:cell>172</ns0:cell><ns0:cell>984</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>96.4%</ns0:cell><ns0:cell cols='2'>87.3% 97.8%</ns0:cell><ns0:cell>91.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>dynamic analysis</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DroidDet[8]</ns0:cell><ns0:cell cols='3'>Permissions, System</ns0:cell><ns0:cell>1065</ns0:cell><ns0:cell>1065</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell cols='3'>88.16% 88.40% 88.26%</ns0:cell><ns0:cell>N/A</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>events,</ns0:cell><ns0:cell cols='2'>Sensitive</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>API and URL</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell 
/><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>DL-Droid[20] Application</ns0:cell><ns0:cell /><ns0:cell cols='3'>11505 19620 300,100,300</ns0:cell><ns0:cell cols='3'>94.08% 97.78% 94.95%</ns0:cell><ns0:cell>95.89%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Attributes, Actions,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>Events, Permissions</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>SRBM[37]</ns0:cell><ns0:cell cols='3'>Static and Dynamic</ns0:cell><ns0:cell cols='3'>39931 40923 RBM</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.804</ns0:cell><ns0:cell>84.3%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Feature</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Luo[38]</ns0:cell><ns0:cell>API Calls</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>1400</ns0:cell><ns0:cell>1400</ns0:cell><ns0:cell>Correntropy,</ns0:cell><ns0:cell>95.0%</ns0:cell><ns0:cell cols='2'>76.0% 84.25%</ns0:cell><ns0:cell>84.0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>ProDroid[39] API Calls</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>500</ns0:cell><ns0:cell>1500</ns0:cell><ns0:cell>HMM</ns0:cell><ns0:cell>93.0%</ns0:cell><ns0:cell cols='2'>95.0% 94.5%</ns0:cell><ns0:cell>93.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell cols='2'>Application</ns0:cell><ns0:cell /><ns0:cell>961</ns0:cell><ns0:cell>6661</ns0:cell><ns0:cell cols='2'>300,300,300,300 98.9%</ns0:cell><ns0:cell cols='2'>99.1% 98.03%</ns0:cell><ns0:cell>99.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Permissions</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DL(376502(30</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>0,300,300,300</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>))</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>method PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56293:1:1:NEW 25 Mar 2021) Manuscript to be reviewed Computer Science 1 PeerJ Comput. Sci. reviewing PDF | (CS-2020:12:56293:1:1:NEW 25 Mar 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "REVISION DETAILS
Dear Editor and Reviewers,
Thank you for the reviewers’ comments concerning our manuscript entitled “AndroAnalyzer: Android malicious software detection based on deep learning”. Those comments are all valuable and very helpful for revising and improving our manuscript, and they are of important guiding significance to our studies. We have studied the comments carefully and have made all the corrections. The responses to the reviewer comments are enclosed below.
Best regards,
Recep Sinan ARSLAN
Q1: The paper needs improvements in terms of experimental results. The authors should state clearly the experiment parameters. Also, the comments of reviewers should be considered.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
A1: A detailed elaboration of the experimental results and methodology has been carried out.
“
2.1. Experimental setup and parameters
In order to measure the performance and efficiency of the deep neural network model proposed in this study, experiments with different parameters were conducted. A laptop with CORE I7 LZ5070 /, 8GB RAM memory was used for training of the proposed model. Windows 10 64-bit operating system and x64-based processor were used to create the presented malware detection model. The training time of the proposed model varies depending on the complexity of the deep learning model and has 0.2MB / sec memory usage. And Python with the scikit-learn, pandas and numpy packages was used for the experiments. The proposed model was evaluated with Tensorflow.
Many experiments have been done to create the most successful model in classifier design for malware detection with a deep learning model. A distinction of 80%-20% was made respectively, to use the model with 349 features in the training and testing stages. After this separation, under-sampling or over-sampling procedures were not used to balance the training data. In addition, although many permissions available on the Android OS are not used in most applications, no feature selection process has been performed to ensure objectivity in future tests. After all, the original feature set and the original data vector were used in the training and testing phase without using any synthetic data generation or feature selection in the data. In order to obtain the most successful model in DNN model, many different DNN model with different layers and nodes in each layer were created and tests were carried out. Thus, the best model was obtained. While the Softmax function is used in the output layer, ReLu is used as the activation function in the input and hidden layers. Adam function is preferred for optimization in the output layer. Since the model completed the learning process in approximately 50 Epoch, the training stage was terminated at this point. As a result, best practice examples and the best values obtained as a result of numerous tests were used in the selection of parameters.
“
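As a rough illustration of the protocol quoted above (80%-20% split, no resampling or feature selection, roughly 50 epochs of training), a minimal sketch is given below; `X`, `y`, and `build_model()` are assumed to come from the feature-extraction and model-definition steps described elsewhere, and the batch size is an illustrative choice not stated in the manuscript.

```python
from sklearn.model_selection import train_test_split

# 80%-20% split of the 349-feature vectors, with no under-/over-sampling afterwards
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = build_model()               # the 349 -> 300 -> ... -> 2 network sketched earlier
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=50,                 # training stopped at ~50 epochs in the text
                    batch_size=32)             # illustrative batch size
test_loss, test_acc = model.evaluate(X_test, y_test)
```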
Q2: We notice that the author affiliation you have provided in the system is slightly different from that in the document.
A2: Necessary corrections have been made.
Q3: In the reference section, please provide the full author name lists for any references with 'et al.' including, but not limited to, these references:
R.Taheri et al.
H. Zhu et al.
A3: Necessary corrections have been made with tracked changes.
REVIEWER-1
REVISION DETAILS
Basic reporting
Although the idea is interesting, I recommend to improvement article with some comments as below:
Q1: More description for experiments should be done.
A1: Since similar criticism was received from other referees on this issue, the relevant section was expanded and more detailed information was given about the experiments as shown below.
“
2.2. Experimental setup and parameters
In order to measure the performance and efficiency of the deep neural network model proposed in this study, experiments with different parameters were conducted. A laptop with CORE I7 LZ5070 /, 8GB RAM memory was used for training of the proposed model. Windows 10 64-bit operating system and x64-based processor were used to create the presented malware detection model. The training time of the proposed model varies depending on the complexity of the deep learning model and has 0.2MB / sec memory usage. And python with scikit-learn, panda and numpy packages were used for the experiments. The proposed model was valued with Tensorflow.
Many experiments have been done to create the most successful model in classifier design for malware detection with a deep learning model. A distinction of 80%-20% was made respectively, to use the model with 349 features in the training and testing stages. After this separation, under-sampling or over-sampling procedures were not used to balance the training data. In addition, although many permissions available on the Android OS are not used in most applications, no feature selection process has been performed to ensure objectivity in future tests. After all, the original feature set and the original data vector were used in the training and testing phase without using any synthetic data generation or feature selection in the data. In order to obtain the most successful model in DNN model, many different DNN model with different layers and nodes in each layer were created and tests were carried out. Thus, the best model was obtained. While the Softmax function is used in the output layer, ReLu is used as the activation function in the input and hidden layers. Adam function is preferred for optimization in the output layer. Since the model completed the learning process in approximately 50 Epoch, the training stage was terminated at this point. As a result, best practice examples and the best values obtained as a result of numerous tests were used in the selection of parameters.
“
Q2: The method can be explained in a clearer way.Try to explain the theory more detailed in discussion.
A2: The methodology section has been transformed into a separate section and organized in more detail in the light of the given flow chart.
“
2.3. Methodology
In the studies we mentioned in the second section, it was shown that there are some problems related with both static analysis approach and dynamic analysis. Due to the model proposed in this study, static analysis and machine learning techniques were used together for malware detection. In this manner, it has been possible to make a classification with a better success rate and to create a safer Android environment for users. All of this process is carried out without the apk files being uploaded to the user's mobile devices.
Flow chart of the proposed model is shown in Fig. 1. In the first stage, the process of creation of application datasets were performed. Both malicious and benign datasets are need for training of model. In some of the applications used in the process of creating these datasets, there may be problems in accessing the source code, and in some others, there may be problems in accessing the readable manifest.xml file. For this reason, first of all, these applications are determined and removed from the dataset before going to the feature extraction stage. This process is applied for both malicious and benevolent applications.
After this preparation, application Java codes and package files should be accessed in order to obtain the properties of applications with static analysis. In the second stage, these operations were performed by using Aapt, dex2jar, Xxd, jdcli.jar libraries. In this manner, access to manifest file including the Java code files of the applications, permissions, intent filters, and manufacturer information was provided. Then, a word-based parsing operation was performed on the manifest.xml file and feature vectors that would be used for training and that contained meaningful information about the application were obtained. At this point, separate programs/frameworks are used to access application codes and manifest.xml file. The reason for this is that while the manifest file can be accessed with the aapt library, it is not possible to access java codes with the same program. Similarly, java codes can be only accessed with dex2jar, xxd or jdcli.jar. The features to be used in the deep learning model have been extracted by reading the data on both the code and the manifest file. A vector is produced from these features. Classification of test applications was made with the model obtained as a result of the training.
“
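As an illustration of the feature-extraction step described above, the following is a minimal sketch that dumps the requested permissions of an APK with the Android SDK's aapt tool. The exact output format differs between aapt versions, so the parsing shown here is an assumption rather than the authors' implementation (which also uses dex2jar, xxd, and jdcli.jar for the Java-code side).

```python
import subprocess

def extract_permissions(apk_path):
    """Return the set of permission names requested in an APK's manifest."""
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    perms = set()
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("uses-permission:"):
            # typically: uses-permission: name='android.permission.INTERNET'
            # (older aapt versions print the bare permission name instead)
            value = line.split(":", 1)[1].strip()
            perms.add(value.split("'")[1] if "'" in value else value)
    return perms
```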
Experimental design
Q3: The reason of using certain parameters in the experiments should be discussed in more details.
A3: Since similar criticism was received from other referees on this issue, the relevant section was expanded and more detailed information was given about the experiments as shown below.
“
2.4. Experimental setup and parameters
In order to measure the performance and efficiency of the deep neural network model proposed in this study, experiments with different parameters were conducted. A laptop with CORE I7 LZ5070 /, 8GB RAM memory was used for training of the proposed model. Windows 10 64-bit operating system and x64-based processor were used to create the presented malware detection model. The training time of the proposed model varies depending on the complexity of the deep learning model and has 0.2MB / sec memory usage. And python with scikit-learn, panda and numpy packages were used for the experiments. The proposed model was valued with Tensorflow.
Many experiments have been done to create the most successful model in classifier design for malware detection with a deep learning model. A distinction of 80%-20% was made respectively, to use the model with 349 features in the training and testing stages. After this separation, under-sampling or over-sampling procedures were not used to balance the training data. In addition, although many permissions available on the Android OS are not used in most applications, no feature selection process has been performed to ensure objectivity in future tests. After all, the original feature set and the original data vector were used in the training and testing phase without using any synthetic data generation or feature selection in the data. In order to obtain the most successful model in DNN model, many different DNN model with different layers and nodes in each layer were created and tests were carried out. Thus, the best model was obtained. While the Softmax function is used in the output layer, ReLu is used as the activation function in the input and hidden layers. Adam function is preferred for optimization in the output layer. Since the model completed the learning process in approximately 50 Epoch, the training stage was terminated at this point. As a result, best practice examples and the best values obtained as a result of numerous tests were used in the selection of parameters.
“
Validity of the findings
no comment
Comments for the author
Q4: The references in this manuscript are somewhat out-of-date.In clude more recent researches in this field.
A4: The articles were generally selected from the studies conducted in 2020. For this reason, 3 new studies on mobile security in 2021 were examined and added to the table.
Table 4 The comparison of classification performance among former methods and proposed method.
Similar Works | Selected Features | Num of Benign Apps | Num of Malware Apps | Num of Neurons or Classification Method | Precision | Recall | Accuracy | F-measure
ASAEF[24] | Metadata, permissions, intent filter, activity, services | 37224 | 33259 | N-gram, signature | 96.4 | 96.1 | 97.2 | 96.2
FingerPrinting[23] | Family DNA | 100 | 928 | Signature | 89 | 84 | - | 85
DroidChain[22] | Permissions, API Call, Behaviour Chain | - | 1260 | Warshall | 91 | 92 | 93 | N/A
Shhadat [21] | Heuristic Strategy, dynamic analysis | 172 | 984 | RF | 96.4 | 87.3 | 97.8 | 91.2
DroidDet[8] | Permissions, System events, Sensitive API and URL | 1065 | 1065 | SVM | 88.16 | 88.40 | 88.26 | N/A
DL-Droid[20] | Application Attributes, Actions, Events, Permissions | 11505 | 19620 | 300,100,300 | 94.08 | 97.78 | 94.95 | 95.89
SRBM[37] | Static and Dynamic Feature | 39931 | 40923 | RBM | - | - | 0.804 | 0.843
Luo[38] | API Calls | 1400 | 1400 | Correntropy, CNN | 0.95 | 0.76 | 0.8425 | 0.84
ProDroid[39] | API Calls | 500 | 1500 | HMM | 0.93 | 0.95 | 94.5 | 0.939
Proposed Model DL(376502(300,300,300,300)) | Application Permissions | 961 | 6661 | 300,300,300,300 | 0.989 | 0.991 | 0.9803 | 0.990
[37] Z. Liu, R. Wang, N. Japkowicz, D. Tang, W. Zhang, J. Zhao, “Research on unsupervised feature learning for Android malware detection based on Restricted Boltzmann Machines”, Future Generation Computer Systems, 120, 91-108, 2021.
[38] X. Lu, J. Li, W. Wang, Y. Gao, W. Zha, “Towards improving detection performance for malware with correntropy-based deep learning method”, Digital Communications and Networks, 2021.
[39] S. Sasidharan, C. Thomas, “ProDroid-An android malware detection framework based on profile hidden Markov model”, Pervasive and Mobile Computing, 72, 1-16, 2021.
Q5: The manuscript has not been carefully written.
A5: Proofreading by a native speaker has been done.
Q6: There are many grammar mistakes as well.
A6: The article was checked by a native speaker and necessary corrections were made in terms of grammar.
Q7: The reason of using certain parameters in the experiments should be discussed in more details.
A7: More details on the test parameters are given in the relevant section as shown below.
“
2.5. Experimental setup and parameters
In order to measure the performance and efficiency of the deep neural network model proposed in this study, experiments with different parameters were conducted. A laptop with CORE I7 LZ5070 /, 8GB RAM memory was used for training of the proposed model. Windows 10 64-bit operating system and x64-based processor were used to create the presented malware detection model. The training time of the proposed model varies depending on the complexity of the deep learning model and has 0.2MB / sec memory usage. And python with scikit-learn, panda and numpy packages were used for the experiments. The proposed model was valued with Tensorflow.
Many experiments have been done to create the most successful model in classifier design for malware detection with a deep learning model. A distinction of 80%-20% was made respectively, to use the model with 349 features in the training and testing stages. After this separation, under-sampling or over-sampling procedures were not used to balance the training data. In addition, although many permissions available on the Android OS are not used in most applications, no feature selection process has been performed to ensure objectivity in future tests. After all, the original feature set and the original data vector were used in the training and testing phase without using any synthetic data generation or feature selection in the data. In order to obtain the most successful model in DNN model, many different DNN model with different layers and nodes in each layer were created and tests were carried out. Thus, the best model was obtained. While the Softmax function is used in the output layer, ReLu is used as the activation function in the input and hidden layers. Adam function is preferred for optimization in the output layer. Since the model completed the learning process in approximately 50 Epoch, the training stage was terminated at this point. As a result, best practice examples and the best values obtained as a result of numerous tests were used in the selection of parameters.
“
REVIEWER-2
REVISION DETAILS
Basic reporting
A clear view of the contribution is given in the paper.
Experimental design
Q1: The sample size differs from that used in related works. The authors, for instance, used 6661 Malware App, while the others used 33259, 928,... , and 19620. The distinction between these samples needs to be illustrated, so we can ensure that the authors have not compared the accuracy of basic malware apps with sophisticated ones used in similar works.
A1: The results obtained in this study have been compared with the most up-to-date and highest-performing applications possible. Moreover, it is a successful project in general, with performance of 99% and above. While making the comparison, I tried to show in the table that the applications use datasets that differ from each other. I do not claim to have obtained better results than all other studies by working with exactly the same dataset. This is why I tried to choose the most recent and at least minimally similar applications when comparing. For this reason, I unfortunately could not make any further improvements in the paper for this point.
Validity of the findings
Q2: The accuracy values written for the work in Table 4 are in the wrong format. (e.g., 0.989 rather than %98.9).
A2: Necessary corrections have been made.
Similar Works | Selected Features | Num of Benign Apps | Num of Malware Apps | Num of Neurons or Classification Method | Precision | Recall | Accuracy | F-measure
ASAEF[24] | Metadata, permissions, intent filter, activity, services | 37224 | 33259 | N-gram, signature | 96.4% | 96.1% | 97.2% | 96.2%
FingerPrinting[23] | Family DNA | 100 | 928 | Signature | 89% | 84% | N/A | 85%
DroidChain[22] | Permissions, API Call, Behaviour Chain | - | 1260 | Warshall | 91% | 92% | 93% | N/A
Shhadat [21] | Heuristic Strategy, dynamic analysis | 172 | 984 | RF | 96.4% | 87.3% | 97.8% | 91.2%
DroidDet[8] | Permissions, System events, Sensitive API and URL | 1065 | 1065 | SVM | 88.16% | 88.40% | 88.26% | N/A
DL-Droid[20] | Application Attributes, Actions, Events, Permissions | 11505 | 19620 | 300,100,300 | 94.08% | 97.78% | 94.95% | 95.89%
SRBM[37] | Static and Dynamic Feature | 39931 | 40923 | RBM | - | - | 0.804 | 84.3%
Luo[38] | API Calls | 1400 | 1400 | Correntropy, CNN | 95.0% | 76.0% | 84.25% | 84.0%
ProDroid[39] | API Calls | 500 | 1500 | HMM | 93.0% | 95.0% | 94.5% | 93.9%
Proposed Model DL(376502(300,300,300,300)) | Application Permissions | 961 | 6661 | 300,300,300,300 | 98.9% | 99.1% | 98.03% | 99.0%
Q3: Also, the header for this row is confused 'Proposed Model DL(376502(300,300,300,300))'
A3: Proposed Model DL(376502(300,300,300,300)) shows the model parameters decided on in the prepared deep learning structure, mentioned in the 'Deep learning classifier results' section, in which the DL model is explained. The best results were obtained in a 4-layer deep learning structure containing 300 neurons in each layer. 376502 parameters are processed with this model. Instead of directly saying 'DL model', it was preferred to show the details of the model in the table.
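For reference, the parameter count in this label is consistent with a fully connected 349 → 300 → 300 → 300 → 300 → 2 network (349 input features, four hidden layers of 300 neurons, two output classes), counting weights plus biases. The following is a sketch of that arithmetic under the standard dense-layer assumption:

```python
# Parameter-count check for a 349 -> 300 -> 300 -> 300 -> 300 -> 2 dense network
# (weights + biases per layer); the total matches the 376502 in the model label.
params = (349*300 + 300) + 3*(300*300 + 300) + (300*2 + 2)
print(params)  # 105000 + 270900 + 602 = 376502
```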
Comments for the author
None.
REVIEWER-3
REVISION DETAILS
Basic reporting
The paper describes a model, AndroAnalyzer, in which static analysis and a deep learning system are proposed. The model is tested on numerous applications, and additional tests were conducted with machine learning. The authors claim that a 98.16% accuracy value was reached compared to traditional machine learning techniques, where recall, Precision, and F-measure were 98.78, 99.24, and 98.90, respectively.
The paper is generally good. The motivation and research problem are well defined. The methodology is clear and the results are discussed and compared with some similar tools.
Q1: The paper needs more linguistic and grammatical improvement. It should proofread.
A1: Proofreading was done by native speaker.
Q2: In the paper structure at the end of the Introduction Section, preplace the word “Chapter” with the word “Section”.
A2: Necessary corrections have been made. “Chapter” words have been changed to 'section'.
Q3: Heading and Sub-Heading are not numbered.
A3: Necessary corrections were made and all sections and subsections were numbered.
“The rest of this article is organized as follows. In section 2, current studies performed in between 2018-2020 using static, dynamic and machine learning techniques are mentioned. In section 3, the methodological infrastructure of the model proposed in this study, pre-processing processes, Android application structure and details of the dataset prepared originally to be used in tests are given. The evaluation scheme, experimental parameters and performance results of the proposed method are presented comparatively in section 4. In section 5, the study has been evaluated in general and recommendations for the future are given.”
Q4: The Literature Review should be a standalone section named related.
A4: “Literature Review” section was turned into an independent section as the 2nd section and was taken out of the materials and methods section. Accordingly, arrangements were made in the numbering of other sections.
Q5: Many papers are surveyed and classified, however, the studies reviewed are in between 2018-2020, why no mention of the studies contacted before 2018.
A5: In reviews I received for previous studies on a similar topic, the criticism was that studies conducted before 2018 were included in the references. For this reason, in this study I have only evaluated studies from the last two years. Nevertheless, the other referee reviewing this paper stated that even the studies from the last two years are old and that I should only include studies from 2020-2021. Since the two referees' requests are opposite, I will make the necessary corrections according to the editor’s decision.
For this reason, I have not been able to make a correction regarding this item yet.
Q6: Change the sentence “In the [13] numbered study performed by Shahriar et al., a model was proposed to detect malicious software ….” to be “In [13], a model was proposed to detect malicious software …. ” and use this throughout the paper.
A6: The relevant sentence has been corrected as you have stated, and this writing style has been used for all articles in the paper.
“ …. In [13] a model was proposed to detect malicious software by analyzing the permissions requested in the mobile application. In the first stage, statistics of the permissions requested by most of the malware were produced and their usage intensities were determined. In the second stage, the application status of the permissions with high usage intensity was investigated in order to determine malicious behavior.
In [14] a malware detection tool with code analysis base was developed to determine whether the permissions are requested by the application or not and whether these permissions are used or not. Classification was made according to a statistically determined threshold value. 92% success rate in Android malware detection was achieved.
In [18] a study based on the extraction of properties including permission, purpose and API calls of applications and classification of these extractions with K-nearest neighbors (KNN) derivative algorithms. It is a similarity-based approach. Malware detection was made with an average accuracy of 90% with the proposed algorithm.
….”
Q7: The description of the existing studies is not enough to shed light on the research gap. The existing work has to be compared and critically evaluated.
A7: Since a similar criticism was received from other referees, 3 new studies on a similar subject in 2021 were added and evaluated in more detail.
Similar Works | Selected Features | Num of Benign Apps | Num of Malware Apps | Num of Neurons or Classification Method | Precision | Recall | Accuracy | F-measure
ASAEF[24] | Metadata, permissions, intent filter, activity, services | 37224 | 33259 | N-gram, signature | 96.4% | 96.1% | 97.2% | 96.2%
FingerPrinting[23] | Family DNA | 100 | 928 | Signature | 89% | 84% | N/A | 85%
DroidChain[22] | Permissions, API Call, Behaviour Chain | - | 1260 | Warshall | 91% | 92% | 93% | N/A
Shhadat [21] | Heuristic Strategy, dynamic analysis | 172 | 984 | RF | 96.4% | 87.3% | 97.8% | 91.2%
DroidDet[8] | Permissions, System events, Sensitive API and URL | 1065 | 1065 | SVM | 88.16% | 88.40% | 88.26% | N/A
DL-Droid[20] | Application Attributes, Actions, Events, Permissions | 11505 | 19620 | 300,100,300 | 94.08% | 97.78% | 94.95% | 95.89%
SRBM[37] | Static and Dynamic Feature | 39931 | 40923 | RBM | - | - | 0.804 | 84.3%
Luo[38] | API Calls | 1400 | 1400 | Correntropy, CNN | 95.0% | 76.0% | 84.25% | 84.0%
ProDroid[39] | API Calls | 500 | 1500 | HMM | 93.0% | 95.0% | 94.5% | 93.9%
Proposed Model DL(376502(300,300,300,300)) | Application Permissions | 961 | 6661 | 300,300,300,300 | 98.9% | 99.1% | 98.03% | 99.0%
Q8: The methodology needs more clarification about its stages, and the figure should be linked with the stages described in detail in numbered subheadings and bearing the numbers of the stages in the illustrated figure.
A8: The methodology section was organized as a separate section. The section with the methodology scheme is explained in more detail.
“
2.1. Methodology
In the studies we mentioned in the second section, it was shown that there are some problems related with both static analysis approach and dynamic analysis. Due to the model proposed in this study, static analysis and machine learning techniques were used together for malware detection. In this manner, it has been possible to make a classification with a better success rate and to create a safer Android environment for users. All of this process is carried out without the apk files being uploaded to the user's mobile devices.
Flow chart of the proposed model is shown in Fig. 1. In the first stage, the process of creation of application datasets were performed. Both malicious and benign datasets are need for training of model. In some of the applications used in the process of creating these datasets, there may be problems in accessing the source code, and in some others, there may be problems in accessing the readable manifest.xml file. For this reason, first of all, these applications are determined and removed from the dataset before going to the feature extraction stage. This process is applied for both malicious and benevolent applications.
After this preparation, application Java codes and package files should be accessed in order to obtain the properties of applications with static analysis. In the second stage, these operations were performed by using Aapt, dex2jar, Xxd, jdcli.jar libraries. In this manner, access to manifest file including the Java code files of the applications, permissions, intent filters, and manufacturer information was provided. Then, a word-based parsing operation was performed on the manifest.xml file and feature vectors that would be used for training and that contained meaningful information about the application were obtained. At this point, separate programs/frameworks are used to access application codes and manifest.xml file. The reason for this is that while the manifest file can be accessed with the aapt library, it is not possible to access java codes with the same program. Similarly, java codes can be only accessed with dex2jar, xxd or jdcli.jar. The features to be used in the deep learning model have been extracted by reading the data on both the code and the manifest file. A vector is produced from these features. Classification of test applications was made with the model obtained as a result of the training.
“
Q9: As well as clarify what is meant by some of the acronyms and not leave an indulgence for the reader to devise them
A9: The entire work has been thoroughly reviewed. For all acronyms, their expanded forms were written where they were first used. Thus, the problem of acronyms not being understood at later usages was eliminated.
“Android application package (APK), Application Programming Interface (API), False Positive (FP), False Negative (FN), …”
Experimental design
The experiments are conducted, explained, discussed, and evaluated with other related approaches
Validity of the findings
Results are explained, discussed, and evaluated with other related approaches
" | Here is a paper. Please give your review comments after reading it. |
96 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Technological developments have a significant effect on the development of smart devices. The use of smart devices has become widespread due to their extensive capabilities. The Android operating system is preferred in smart devices due to its open-source structure. This is the reason for its being the target of malware. The advancements in Android malware hiding and detection avoidance methods have overridden traditional malware detection methods.</ns0:p><ns0:p>Methods. In this study, a model employing AndroAnalyzer that uses static analysis and deep learning system is proposed. Tests were carried out with an original dataset consisting of 7622 applications. Additional tests were conducted with machine learning techniques to compare it with the deep learning method using the obtained feature vector.</ns0:p><ns0:p>Results. Accuracy of 98.16% was achieved by presenting a better performance compared to traditional machine learning techniques. Values of recall, precision, and F-measure were 98.78, 99.24, and 98.90, respectively. The results showed that deep learning models using trace-based feature vectors outperform current cutting-edge technology approaches.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>In recent years, smart devices have become the main medium of communication among people. While phones used to offer only verbal communication, they are now smart devices. This rich technological equipment enables users to make increasing use of these devices <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. In the past, phones were used to send SMS messages and make phone calls, but today they are used in many areas primarily in web services and as a camera, music service provider, and tablet PC. These devices are equipped with hardware and various sensors with advanced memory and processing power, just like computers. For this reason, they are very easy to customize <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. An operating system is required for users to use smart devices. At present, a variety, such as IOS, Android, Windows, and Blackberry OS, are available for smart devices <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref>. Android offers rich media support, optimized graphics infrastructure, and powerful browser support to its users. Along with working with different APIs, it also supports structures such as sensor usage and real-time map systems. The fact that the Android operating system offers such a wide range of free features that are open source has made it widely preferred among users <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. Applications that allow users to use more features can be developed and distributed via both Play Store and third-party environments <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref>. Since these applications provide free benefit to users, Android has become the target of malware developers <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. It is possible to repackage Android applications by adding different hidden and malicious codes to the development files in the binary source structure. Thus, users are likely to be exposed to these undesirable outcomes without even realizing it. For this reason, researchers are trying to develop different malware detection tools to detect these activities and to reveal applications prepared for malicious purposes. In the Android operating system, if an application is desired to be used on a smart device, first the application is downloaded from the relevant platform, then the permissions required for the application to run are accepted, and finally the installation is performed. It is not possible to install and run an application without accepting all the permissions claimed to be required to install the application. Many malicious developers turn this situation to their advantage and request permissions such as camera access, access to text messages, and reading private information that the application does not need. Many Android users who do not have sufficient knowledge of computers accept these permissions and install the application unaware of this malicious purpose. Thus, they become the clear target of malicious activities <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. Various techniques such as static and dynamic analysis and derivatives of these techniques have been proposed for the detection of malware and protection of end users <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. Information about static analysis, signature-based analysis, application, and expected behavior includes observations presented explicitly and implicitly in binary source code. It is a fast and effective method. 
However, there are other methods such as hiding the software code or detection bypass that developers use to circumvent this analysis method. Dynamic analysis, which is also known as behavior-based analysis, is the collection of information about the runtime of the application, such as system calls, network access requests, information changes in files, and memory installed in the real environment or on a sandbox <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref>. Selecting meaningful features from Android applications using static analysis and modeling them in a better manner enable the development of a powerful malware tool. Thus, unlike dynamic analysis approaches, downloading of applications to devices is prevented even on a temporary basis for detection. The method suggested in the present study is new, with the purpose of learning the requested permissions required by the application with the proposed model and thus detecting the malicious activities of new applications. The recommended method can be used in applications produced for all Android versions, including Android 11 API 30. The following improvements have been made in the present study and the aim was to detect malware more accurately: The contributions of the present work are as follows:</ns0:p><ns0:p> Development of an advanced deep learning-based network for analyzing and developing malware for all Android versions.  Preparation of specific malicious and benign application datasets for training of network structure.  Via running of this process before installation, users are prevented from being exposed to any dangerous activities, even for a short time.  Ensuring users are warned about these detected malwares and presenting a model that enables more perfect detection due to continuous learning and converges false positive (FP) and false negative (FN) values to the minimum.</ns0:p><ns0:p>The rest of this article is organized as follows. In section 2, recent studies performed between 2018 and 2020 using static, dynamic, and machine learning techniques are mentioned. In section 3, the methodological infrastructure of the model proposed in the present study, preprocessing processes, the Android application structure, and details of the original dataset prepared for use in the tests are given. The evaluation scheme, experimental parameters, and performance results of the proposed method are presented comparatively in section 4. In section 5, the study is evaluated in general and recommendations for the future are given.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>LITERATURE REVIEW</ns0:head><ns0:p>In this section, recent studies related to Android malware detection, feature generation, and selection, static, dynamic, and machine learning approaches are discussed.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.1.</ns0:head><ns0:p>Static Analysis The use of static analysis to determine whether Android applications are malicious or not is based on inspection of the application code and it remains popular. Using the static analysis approach, solutions were produced using permissions, API calls, command systems, and features based on the purpose of use. Although static analysis approaches allow more comprehensive code inspection, malware developers are able to use different techniques to avoid static analysis and to hide purposeful code. Data encryption, hiding, update attacks, or polymorphic techniques are examples of these hiding techniques <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. DAPASA is a graphical malware detection tool that calculates the sensitivity of API calls using a data mining technique called TF-IDF <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. Detection is performed based on two assumptions that indicate how sensitive API calls are called. In <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref> a model was proposed to detect malicious software by analyzing the permissions requested in the mobile application. In the first stage, statistics of the permissions requested by most of the malware were produced and their usage intensities were determined. In the second stage, the application status of the permissions with high usage intensity was investigated in order to determine malicious behavior. In <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> a malware detection tool with a code analysis base was developed to determine whether the permissions are requested by the application or not and whether these permissions are used or not. Classification was performed according to a statistically determined threshold value. A 92% success rate in Android malware detection was achieved.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b19'>[18]</ns0:ref> a study was reported based on the extraction of properties including permission, purpose, and API calls of applications and classification of these extractions with K-nearest neighbors (KNN) derivative algorithms. It is a similarity-based approach. Malware was detected with an average accuracy of 90% with the proposed algorithm. AppPerm analyzer <ns0:ref type='bibr' target='#b20'>[19]</ns0:ref> is a malware detection tool based on examining the codes together with the manifest file, creating double and triple permission groups, and determining the risk scores of the applications accordingly. A TP value of 95.50% and a specificity value of 96.88% were achieved.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.2.</ns0:head><ns0:p>Dynamic analysis Dynamic analysis is another method used to detect security vulnerabilities in Android applications <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>. It involves a more complex process compared to the static analysis approach. Since the dynamic analysis approach is based on tracking the behavior of the application during runtime, it is not easy for malicious application developers to prevent this analysis approach. Researchers often use the dynamic analysis approach to overcome the problems they encounter during the static analysis approach. There are many studies that suggest a dynamic analysis model for Android malware detection. In this section, a number of current studies are mentioned. MADAM is a malware detection tool that uses a signature-and behavior-based malware detection approach. It uses different properties to evaluate and classify applications. Features at four levels, i.e., application, kernel, user, and package, were extracted and used in the study <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>. In <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, a malware detection mechanism using tracking behavioral system call traces was proposed. Malicious activities were predicted by examining the frequency of behavioral system calls with a previously trained classifier. There are some points in which both static and dynamic analysis approaches are advantageous and disadvantageous. Static analysis is capable of finding malicious components and dynamic analysis is capable of observing the application's status at runtime. For this reason, some studies suggest both methods be used together in order to benefit from their advantages <ns0:ref type='bibr' target='#b16'>[16,</ns0:ref><ns0:ref type='bibr' target='#b17'>17]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.3.</ns0:head><ns0:p>Machine learning Machine learning is the approach of allowing algorithms to self-learn the necessary parameters from the data to predict malware detection in Android applications. Machine learning techniques <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>, which are successfully applied in many problems today, have also been implemented in the field of mobile security with deep learning in the present study. In this section, some of the studies that have been performed in recent years and that have used machine learning techniques are mentioned. Successful results were obtained in all of the studies. AspectDroid <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> is a system that allows the monitoring of suspicious activities such as dynamic class loading during runtime and review of them afterwards by writing them to log files. Activity tracking code has been added to the source code of the applications for this process. NTPDroid <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> is a model using Android permissions and network traffic-based features. The aim is to determine the probability of malicious behavior. It is possible to decrease the FP value and thus the level of evaluating benign practices as malicious practices. In <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> a suggestion was made for the detection of malware on the Internet of Things. The indepth analysis of malware is based on visualization by color image and classification with convolutional neural network (CNN) methodology. According to the experimental results, more successful results are produced compared to machine learning and old-style static and dynamic analysis approaches.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> a hybrid malware detection model was created for Android devices. In this model structure, in the first step, features to be obtained by static analysis such as requested permissions, permissions used, hardware components, intent filters, and suspicious system calls were extracted. In addition, network usage and file read-write statistics were obtained by dynamic analysis of the application. Applications with these extracted features were classified by support vector machine (SVM) technique. In <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> a deep learning-based detection method using system call sequences and created with an LSTM-based classifier was proposed. It yielded a 96.6% recall value with a 9.3% FP value. In <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> a model using permission patterns extracted from Android applications was proposed. Required and used permissions were used. While achieving the highest 94% classification performance, an FP value of 5% and FN value of 1% were obtained. In <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> a model based on extraction of features based on static analysis by using manifest files of apk files and selection by KNN and classification by SVM algorithms of these features was proposed. With that method, a TP ratio of 70% and above and an FP value close to zero were obtained. Due to feature selection by KNN and classification by SVM, recognition performance close to that of classification models with deep learning was achieved. In <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref> sensitive API calls, permissions, permission usage rates, and properties obtained from system events were extracted. 
An ensemble rotation forest classification model was proposed, yielding 88.26% accuracy, 88.40% sensitivity, and 88.16% precision; compared to classification by SVM, the proposed model improved results by 3.33%.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, a CNN-based network model was proposed to detect malware attacks on the Internet by visualizing samples as color images, and improved performance against cyber security threats was obtained with the CNN-based models. In <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>, the AutoDroid tool, which automatically detects malware according to API calls extracted using static analysis, was described. The system was developed using different types of deep neural networks (deep belief networks, etc.); with the DBN-based design, a 95.98% success rate was achieved in tests on 2500 benign and 2500 malicious applications. Since Android is an open-source and extensible platform, it allows us to extract and use as many application features as we want. The method proposed in the present study is robust and scalable and uses a deep learning structure, giving it strong detection ability together with low resource consumption. It is more successful than current deep learning-based Android malware detection tools. Moreover, it is based on real devices rather than emulators. Using the 349 features extracted from the dataset of 7622 applications, more successful results were obtained compared to the existing models. The method extensively investigates Android malware on real devices and comparatively evaluates different approaches to measure the impact of application code.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1.'>Methodology</ns0:head><ns0:p>The studies discussed in the second section show that both static analysis and dynamic analysis have their own limitations. In the model proposed in the present study, static analysis and machine learning techniques are used together for malware detection. In this manner, classification with a better success rate becomes possible and a safer Android environment is created for users, all without the apk files being loaded onto the user's mobile device.</ns0:p><ns0:p>A flow chart of the proposed model is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>. In the first stage, the application datasets were created. Both malicious and benign datasets are needed for training the model. For some of the applications used in creating these datasets there may be problems in accessing the source code, and for others there may be problems in accessing a readable manifest.xml file. For this reason, such applications are identified and removed from the dataset before the feature extraction stage. This process is applied to both malicious and benign applications. After this preparation, the application Java codes and application package files must be accessed in order to obtain the properties of the applications with static analysis. In the second stage, these operations were performed using the Aapt, dex2jar, Xxd, and jdcli.jar tools. In this manner, access was provided to the manifest file, which includes the permissions, intent filters, and manufacturer information, as well as to the Java code files of the applications. Then a word-based parsing operation was performed on the manifest.xml file, and feature vectors that contain meaningful information about the application and that are used for training were obtained. At this point, separate programs/frameworks are used to access the application codes and the manifest.xml file: while the manifest file can be accessed with the aapt library, it is not possible to access the Java codes with the same program. Similarly, the Java codes can only be accessed with dex2jar, xxd, or jdcli.jar. The features to be used in the deep learning model were extracted by reading the data in both the code and the manifest file, and a feature vector was produced from these features. Classification of the test applications was performed with the model obtained as a result of the training. Details of the classification success are compared in Section 4.</ns0:p></ns0:div>
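<ns0:p>As a concrete illustration of this stage, the following minimal Python sketch shows how the manifest and the Java code can be pulled out of an APK before feature extraction. It is not the authors' original tooling: it assumes the aapt and d2j-dex2jar command-line tools are installed and on the PATH, and all file names are hypothetical.</ns0:p>
# Minimal sketch of the reverse-engineering stage described above (not the
# authors' original scripts). Assumes the Android SDK build tool `aapt` and
# the `d2j-dex2jar` wrapper script are installed and on the PATH; file and
# directory names are illustrative only.
import subprocess
from pathlib import Path

def dump_manifest(apk_path: str, out_dir: str) -> Path:
    """Dump a readable form of AndroidManifest.xml with aapt."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    out_file = Path(out_dir) / (Path(apk_path).stem + "_manifest.txt")
    with open(out_file, "w", encoding="utf-8", errors="ignore") as fh:
        subprocess.run(["aapt", "dump", "xmltree", apk_path, "AndroidManifest.xml"],
                       stdout=fh, check=True)
    return out_file

def decompile_to_jar(apk_path: str, out_dir: str) -> Path:
    """Convert the dex code inside the APK to a jar for code inspection."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    out_jar = Path(out_dir) / (Path(apk_path).stem + ".jar")
    subprocess.run(["d2j-dex2jar", apk_path, "-o", str(out_jar)], check=True)
    return out_jar

if __name__ == "__main__":
    print(dump_manifest("sample.apk", "work"))
    print(decompile_to_jar("sample.apk", "work"))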
<ns0:div><ns0:head>3.2.</ns0:head><ns0:p>Android Application Basics Android applications are mainly written in the Java language and then compiled, together with data and resource files, into an archive file called an Android application package (APK). The APK is the file that is distributed in the application market and used for installation of the application. There are four different types of application component: activity, service, broadcast receiver, and content provider. Communication among these components is provided by a messaging object called an intent. In Android, applications must declare all of their components in an XML manifest file within the APK. Intent filters define the limits of each component's capabilities and are included in the package. Additional information declared in the XML file includes the user permissions required by the application (CALL_PHONE, SEND_SMS, etc.), the minimum API level, and the hardware and software features to be used by the application (GPS, camera, multitouch screen) <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. As shown in Fig. <ns0:ref type='figure'>2</ns0:ref>, in addition to the XML file, the APK contains one or more dex files containing all the classes that will run on the Dalvik Virtual Machine, a lib folder containing files compiled specifically for the processor architecture, application certificates, resource lists, a META_INF folder that contains the SHA-1 digests of these resources, and a resources.arsc file that contains the precompiled resources.</ns0:p></ns0:div>
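<ns0:p>To make the role of the manifest concrete, the sketch below reads the declared permissions and intent-filter actions from a decoded AndroidManifest.xml (assumed to have been produced beforehand, e.g. with apktool or a similar decoder; the tag and attribute names follow the standard Android manifest schema, while the input path is a placeholder).</ns0:p>
# Sketch: read declared permissions and intent-filter actions from a decoded
# (plain-text XML) AndroidManifest.xml. The manifest is assumed to have been
# decoded already (e.g. by apktool); the input path is a placeholder.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def manifest_features(xml_path):
    root = ET.parse(xml_path).getroot()
    permissions = [el.attrib.get(ANDROID_NS + "name", "")
                   for el in root.iter("uses-permission")]
    intent_actions = [el.attrib.get(ANDROID_NS + "name", "")
                      for el in root.iter("action")]  # actions inside intent-filters
    return permissions, intent_actions

perms, actions = manifest_features("work/AndroidManifest.xml")
print(len(perms), "permissions,", len(actions), "intent actions")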
<ns0:div><ns0:head>3.3.</ns0:head><ns0:p>Dataset Description One of the biggest challenges in Android malware detection research is that it is not easy to access an up-to-date and reliable dataset that is large enough. In the present study, the Drebin <ns0:ref type='bibr' target='#b37'>[35]</ns0:ref> and Genome <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref> malicious application datasets were used to obtain the malicious application set. The Drebin dataset contains 5560 malicious applications, collected between August 2010 and October 2012, from 179 different application families <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. On the other hand, the benign application dataset was created specifically for this study. It contains applications from different categories such as books/references, communication, finance, music/entertainment, news and magazines, media, sports, tools, transportation, and weather, and a homogeneous distribution across these categories was attempted. Table <ns0:ref type='table'>1</ns0:ref> shows the details of the dataset used for the study. Applications labeled as unknown were not used. Out of the total 6739 applications in the Drebin and Genome datasets, 6661 were confirmed as malicious and used in the training processes. For the benign dataset, 961 out of the total 1073 applications were determined to be truly benign and were used in the present study. Benign refers to applications verified as neither malicious nor having potentially unwanted functionality. For this verification, the website https://www.virustotal.com <ns0:ref type='bibr' target='#b38'>[36]</ns0:ref>, which aggregates more than one antivirus program and can scan a sample through the different programs simultaneously, was used. Creating this kind of benign label is a difficult and costly process. On the other hand, malicious applications are those that pose a potential danger to users; this application set was automatically tagged, as it was taken from the Drebin and Genome datasets, which have been used in many studies before. Unknown is the label given to instances for which neither the malicious nor the benign tag could be assigned; these samples could not be processed and could not be examined manually by accessing their codes. All existing samples were first placed in this category and, after examination, were transferred to the other two labels (benign, malicious). Most of the remaining unknown applications can be expected to be harmless, but some of them are likely to have malicious purposes that could not be detected. For this reason, they were not directly included in either of the other two groups.</ns0:p></ns0:div>
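<ns0:p>Purely as an illustration of the labelling logic described above, the sketch below routes applications into the benign, malicious and unknown groups from an assumed local scan report; the file names, column names and hash list are hypothetical and do not reproduce the actual verification pipeline of the study.</ns0:p>
# Illustrative sketch of the labelling step: apps listed in the known-malware
# hash list stay malicious, apps with zero detections in an (assumed, locally
# stored) scan report are kept as benign, and everything else is left as
# "unknown" and excluded. All file and column names are placeholders.
import pandas as pd

scans = pd.read_csv("scan_report.csv")                     # hypothetical columns: sha256, detections
known_malware = set(open("malware_hashes.txt").read().split())   # hypothetical hash list

def label(row):
    if row["sha256"] in known_malware:
        return "malicious"
    if row["detections"] == 0:
        return "benign"
    return "unknown"

scans["label"] = scans.apply(label, axis=1)
dataset = scans[scans["label"] != "unknown"]
print(dataset["label"].value_counts())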
<ns0:div><ns0:head n='3.4.'>Feature Extraction and Preprocessing</ns0:head><ns0:p>To develop an effective Android malware model, it is critical to obtain robust and broadly representative features such as user permissions, manufacturer and user information, intent filters, process names, and binary files. With this information, it becomes possible to detect malware.</ns0:p><ns0:p>In the present study, the source codes and binary files of the applications were obtained by reverse engineering, converting the application APK files to Java code. The xxd tool was used to extract the Dex files from the APK files. Access to the Java source codes of the applications was provided by the Dex2jar and jdcli.jar libraries. Moreover, the aapt dump AndroidManifest.xml tool offered by the Android SDK was used to obtain the xml files. The keywords and permissions obtained from the AndroidManifest.xml file were used in the training of the deep learning model proposed in the present study. A total of 349 features were extracted and used in model training.</ns0:p></ns0:div>
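<ns0:p>The way the extracted keywords and permissions can be turned into a model input is sketched below as a simple binary presence vector over a fixed vocabulary (the study uses 349 such features; the vocabulary file and example terms here are placeholders, not the exact feature list of the paper).</ns0:p>
# Sketch: encode each application as a binary vector over a fixed vocabulary
# of permissions/keywords (the study uses 349 such features). The vocabulary
# file and example terms are placeholders.
import numpy as np

def build_vector(app_terms, vocabulary):
    """1 if the permission/keyword occurs in the application, else 0."""
    term_set = set(app_terms)
    return np.array([1 if term in term_set else 0 for term in vocabulary],
                    dtype=np.float32)

vocabulary = [line.strip() for line in open("feature_vocabulary.txt")]  # 349 entries (placeholder file)
app_terms = ["android.permission.SEND_SMS", "android.permission.INTERNET"]
x = build_vector(app_terms, vocabulary)
print(x.shape)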
<ns0:div><ns0:head n='4.'>Results</ns0:head><ns0:p>In this section, the proposed model testing process is explained in detail. Both the search for the best deep learning structure and its comparison with other classification methods and similar studies are reported in the tables. The true positive ratio (1), true negative ratio (2), false positive ratio (3), false negative ratio (4), and precision value (5) are calculated as follows (the true positive ratio is also referred to as recall):</ns0:p><ns0:formula xml:id='formula_0'>TPR = \frac{TP}{TP + FN} \quad (1) \qquad TNR = \frac{TN}{TN + FP} \quad (2) \qquad FPR = \frac{FP}{FP + TN} \quad (3) \qquad FNR = \frac{FN}{FN + TP} \quad (4)</ns0:formula><ns0:formula xml:id='formula_1'>P = \frac{TP}{TP + FP} \quad (5)</ns0:formula><ns0:p>TP stands for the true positive sample count, TN for the true negative sample count, FP for the false positive sample count, and FN for the false negative sample count. P, the precision value, is the proportion of applications classified as malicious that are truly malicious.</ns0:p><ns0:p>The F-measure value is computed according to equation (6) separately for both the malicious and benign classes. These two values are then combined according to the weighted FM equation (7).</ns0:p><ns0:formula xml:id='formula_2'>F\text{-}measure(FM) = \frac{2 \cdot recall \cdot precision}{recall + precision} \quad (6) \qquad WFM = \frac{F_{benign} \cdot N_{benign} + F_{malware} \cdot N_{malware}}{N_{benign} + N_{malware}} \quad (7)</ns0:formula></ns0:div><ns0:div><ns0:head>4.3.</ns0:head><ns0:p>Deep learning classifier results Performance measurement results for different numbers of hidden layers in the deep learning model are given in Table <ns0:ref type='table'>2</ns0:ref>. The results were obtained using an input vector containing 349 input parameters. In 7 different test configurations, 2-, 3-, 4-, and 5-layered deep learning models were used, with the aim of finding the best performing model. Accordingly, although the results were very close to each other, the best results were achieved with a 4-layer model containing 300, 300, 300 neurons in its hidden layers. An average of 1 minute 49 seconds was required for this training, and in total 286202 parameters were generated. The results were obtained from data divided into 80% training and 20% test sets, and training was run for 50 epochs. Both recall and precision values were high, which indicates that the model was not limited to one-way detection (detecting only malicious or only benign applications) but was successful in both cases. In classification problems where the class sizes are not evenly distributed, simply measuring the accuracy of the model is often an inadequate metric. For this reason, the performance of the proposed model was analyzed with precision, recall, and F-measure values. The precision value was 99.24%. In malware detection, labeling benign applications as malicious can cause serious problems, so a high precision value shows that the model is successful in avoiding FP marking. Furthermore, a recall value of 98.78% was obtained, which shows that the model also gives good results in detecting malware. The F-measure, in which precision and recall are evaluated together and unbalanced class distributions can be observed, was 98.9%. Successful results were obtained in this measurement, in which all costs are evaluated.</ns0:p>
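<ns0:p>For reference, the metrics of Equations (1)-(7) above can be computed directly from raw confusion-matrix counts, as in the minimal sketch below (the example counts are made up and used only to show the calculation).</ns0:p>
# Sketch: compute the metrics of Eq. (1)-(7) from raw confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    tpr = tp / (tp + fn)            # Eq. (1), also called recall
    tnr = tn / (tn + fp)            # Eq. (2)
    fpr = fp / (fp + tn)            # Eq. (3)
    fnr = fn / (fn + tp)            # Eq. (4)
    precision = tp / (tp + fp)      # Eq. (5)
    f_measure = 2 * tpr * precision / (tpr + precision)   # Eq. (6)
    return tpr, tnr, fpr, fnr, precision, f_measure

def weighted_f(f_benign, n_benign, f_malware, n_malware):
    # Eq. (7): class-size weighted F-measure
    return (f_benign * n_benign + f_malware * n_malware) / (n_benign + n_malware)

print(metrics(tp=1300, tn=196, fp=10, fn=19))   # illustrative counts only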
<ns0:p>On the other hand, quite successful results were produced even by the much simpler 2-layer neural network with 50 neurons. For this reason, modeling can be performed with a simpler neural network depending on the intended use; in the present study, however, details are given for the model with which the highest values were obtained. When the tests were performed with 70% training and 30% test sets, the precision, recall, accuracy, and F-measure values were 0.979, 0.992, 0.980, and 0.986, respectively. Compared to the tests performed with 80% training and 20% test sets, this is a decrease of about 1% in some measurement metrics, which shows that increasing the number of applications assigned to the training set leads to more successful classification of the tested applications. The scheme of the model with the best results is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. According to this scheme, 349 parameters obtained from the features of the mobile applications are given as inputs, and a binary result is produced by 1 input layer, 3 hidden layers, and 1 output layer. The complexity of the model is moderate and the model quickly completes the training process. The input vector with 349 features is reduced to 300 in the first layer, and training is carried out with 300 neurons per layer up to the output layer, where it is reduced to 2 for binary classification. The model includes 286202 trainable parameters. ReLU was used as the activation function in the hidden layers, and softmax was used as the activation function in the output layer. The optimizer used in the error back-propagation phase was Adam. Increasing the complexity of the model slowed down the learning process without providing a noticeable increase in classification performance; for this reason, a model with more hidden layers and more neurons was not designed. In addition, considering that this model will work on devices with limited resources such as mobile devices, it was thought to be beneficial to work with simpler models. Fig. <ns0:ref type='figure'>4</ns0:ref> shows the change in accuracy on the training and test data over 50 epochs. As can be seen from the graphic, the proposed method overcame the over-fitting problem. After approximately the first 10 epochs, the model had essentially reached a certain level; however, the training and testing phases did not yet produce stable results, so the number of epochs was gradually increased with the aim of obtaining a more stable structure for the results. The ROC curve is shown in Fig. <ns0:ref type='figure'>5</ns0:ref> to observe the best results in the permission-based classification. The ROC curve is used to measure the malware detection rate: it shows the effect of the learning model on the change in the malware detection rate and on the increase or decrease in the false positive value. The curve shows the trade-off between the TP value and the FP value, where an increase in one causes a decrease in the other. The fact that the ROC curve is close to the upper-left corner, as shown in the figure, indicates that the model gives very good results. The area under the curve (AUC) is measured to summarize this behavior; a value of 0.9 and above is generally regarded as an excellent classification <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>, and it was 0.9515 in the present study.</ns0:p>
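<ns0:p>A minimal TensorFlow/Keras sketch of the architecture described above is given below. It is an interpretation of the text rather than the authors' original training script, and random placeholder data stands in for the real feature matrix and labels.</ns0:p>
# Sketch of the described network: 349 inputs -> three hidden layers of 300
# ReLU units -> 2-unit softmax output, trained with Adam for 50 epochs on an
# 80/20 split. X and y are random placeholders standing in for the real
# 349-dimensional feature matrix and 0/1 labels built in Section 3.4.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 349).astype("float32")   # placeholder features
y = np.random.randint(0, 2, size=1000)            # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(349,)),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test), verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))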
<ns0:p>This result shows that the model was very successful in terms of classification in malware detection. The confusion matrix of the test is shown in Fig. <ns0:ref type='figure'>6</ns0:ref>. High classification success was achieved in the TP and TN values. High FP and FN counts, in contrast, would indicate a very dangerous situation for end users: they would cause users to avoid some useful and safe applications for no reason and, even worse, would put users at risk because some malicious applications would be considered safe. FP and FN errors occurred in only 29 of the total 1525 tests performed in the present study, demonstrating the success of this model.</ns0:p></ns0:div>
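<ns0:p>The ROC curve and confusion matrix discussed above can be reproduced for any fitted classifier along the lines of the following scikit-learn sketch, which continues from the training sketch given earlier.</ns0:p>
# Sketch: ROC curve data, AUC and confusion matrix for the trained model
# (continuing from the training sketch above; `model`, `X_test`, `y_test`
# are the objects created there).
import numpy as np
from sklearn.metrics import roc_curve, auc, confusion_matrix

probs = model.predict(X_test)[:, 1]            # probability of the malicious class
fpr, tpr, _ = roc_curve(y_test, probs)
print("AUC:", auc(fpr, tpr))

preds = np.argmax(model.predict(X_test), axis=1)
print(confusion_matrix(y_test, preds))         # rows: true class, columns: predicted class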
<ns0:div><ns0:head>4.4.</ns0:head><ns0:p>Comparison of the performance of Deep Learning with other Machine Learning Algorithms In this section, the accuracy of the proposed deep learning model is compared with the results of traditional machine learning algorithms. Nine different classification algorithms, all widely used techniques, were selected after several pre-tests and examinations. As shown in Table <ns0:ref type='table'>3</ns0:ref>, the deep learning model gave better results than the other classification models. Although the results were close to each other in general, the overall results of the deep learning model were better. Among the other classification algorithms, the weighted F-measure value was at most 92.23%, while it was 98.90% for the deep learning model. Apart from the deep learning model, the most successful classification algorithms were ExtraTree, Random Forest (RF), and SVM. The successful results achieved with both the machine learning algorithms and the deep learning model show that application features obtained with static analysis can produce quite favorable results in detecting malware.</ns0:p></ns0:div>
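<ns0:p>The comparison with traditional classifiers can be sketched with scikit-learn as below; only four of the nine algorithms are shown and default hyper-parameters are used, so the numbers it prints are illustrative rather than those of Table 3.</ns0:p>
# Sketch: baseline comparison with classical classifiers on the same features
# (default hyper-parameters, not the tuned settings of the study; X_train etc.
# come from the earlier training sketch).
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

baselines = {
    "ExtraTrees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "NaiveBayes": GaussianNB(),
}
for name, clf in baselines.items():
    clf.fit(X_train, y_train)
    score = f1_score(y_test, clf.predict(X_test), average="weighted")
    print(f"{name}: weighted F-measure = {score:.4f}")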
<ns0:div><ns0:head n='4.5.'>Discussion</ns0:head><ns0:p>The model proposed in the present study is compared in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref> with similar deep learning or machine learning techniques used in previous studies. Artificial intelligence modeling was used in almost all of the studies from 2019 and 2020. One of the main distinguishing differences among these studies is the dataset used; the second is the feature vectors obtained from the applications in that dataset. In some studies, only static properties obtained from the manifest.xml file are used, while in others intent filters, activities, services, and API calls are used. The richness and homogeneity of the dataset is another factor with a direct effect on the results, and the use of different classification methods is a further source of differences. When the results are evaluated comparatively according to similar parameters, the model proposed in the present study produces successful results with respect to the other studies. While a few studies report better results in some respects, the proposed model outperforms, by small margins, the studies that use similar modeling, and it performs very well compared to studies with similar dataset sizes. This shows that existing classification performance values were taken one step further by the model proposed in the present study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Conclusions</ns0:head><ns0:p>The Android platform is the target of malicious mobile application developers and black hat hackers. Many antimalware tools aim to combat these applications and protect users. In the present study, a model for Android malware detection was proposed. Developing such tools requires models with high classification accuracy, built in two stages: selection of the features that best represent the problem and classification with high accuracy. In the current study, a deep neural network structure with 3 hidden layers was proposed to classify applications represented by the permissions they request. Reverse engineering of the applications was used to obtain the feature vectors, and a vector containing 349 features was obtained. The permissions requested by an application are among the most important parameters that reveal its purpose. The features used in the present study were obtained by static analysis, which has the advantages of low cost, high efficiency, and low risk. A total of 6661 malicious samples taken from the Drebin and Genome datasets and 961 original benign application samples were used. In the experimental results, a 0.9924 precision value, 0.9878 recall value, 0.9816 accuracy, and 0.9890 F-measure value were obtained. The results showed that Android permissions contain very good information for understanding the purposes of applications. The high classification performance obtained with this dataset shows that the deep learning structure and Android permissions can be adopted in the development of malware detection tools. The model proposed in the present study can be extended easily, and better performance results can be obtained with feature vectors that also include properties obtained by dynamic analysis. This will be part of my future work.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>Overview of the proposed model for Android Malware Detection</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>4.1.</ns0:head><ns0:label>4.1</ns0:label><ns0:figDesc>Experimental setup and parameters In order to measure the performance and efficiency of the proposed deep neural network model, experiments with different parameters were conducted. A laptop with a Core i7 LZ5070 CPU and 8 GB of RAM was used for training of the proposed model, running the Windows 10 64-bit operating system on an x64-based processor. The training time of the proposed model varies depending on the complexity of the deep learning model, and memory usage is 0.2 MB/sec. In addition, Python with the scikit-learn, pandas, and NumPy packages was used for the experiments, and the proposed model was evaluated with TensorFlow. Many experiments were conducted to create the most successful classifier design for malware detection with a deep learning model. The data, described by 349 features, was split 80%/20% into training and testing sets. After this separation, under-sampling or over-sampling procedures were not used to balance the training data. In addition, although many permissions available in the Android OS are not used in most applications, no feature selection process was performed, in order to ensure objectivity in future tests. In short, the original feature set and the original data vectors were used in the training and testing phases without any synthetic data generation or feature selection. In order to obtain the most successful DNN model, many different DNN models with different numbers of layers and nodes in each layer were created and tested, and the best model was selected. While the softmax function is used in the output layer, ReLU is used as the activation function in the input and hidden layers, and the Adam optimizer is used. Since the model completed the learning process in approximately 50 epochs, the training stage was terminated at this point. Best-practice examples and the best values obtained as a result of numerous tests were used in the selection of parameters. 4.2. Performance measure The aim of the present study was to create a deep learning-based model for classifying applications as malicious or benign and thereby detecting malicious applications. In the experiments, our deep learning models were trained on a binary classification problem, benign or malicious. The created model included an original deep learning architecture. The effectiveness of the model was evaluated and demonstrated by creating a confusion matrix. As a result of these tests, the performance of the proposed model was compared with that of different popular machine learning techniques. Moreover, different test sets were created and the results were observed repeatedly. The results of these tests are given in detail in section 4.4 according to the performance measurement methods given below.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,333.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The comparison of classification performance among former methods and the proposed method</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Similar Works</ns0:cell><ns0:cell>Selected Features</ns0:cell><ns0:cell>Num of Benign Apps</ns0:cell><ns0:cell>Num of Malware Apps</ns0:cell><ns0:cell>Num of Neurons or Classification Method</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>F-measure</ns0:cell></ns0:row><ns0:row><ns0:cell>ASAEF[24]</ns0:cell><ns0:cell>Metadata, permissions, intent filter, activity, services</ns0:cell><ns0:cell>37224</ns0:cell><ns0:cell>33259</ns0:cell><ns0:cell>N-gram, signature</ns0:cell><ns0:cell>96.4%</ns0:cell><ns0:cell>96.1%</ns0:cell><ns0:cell>97.2%</ns0:cell><ns0:cell>96.2%</ns0:cell></ns0:row><ns0:row><ns0:cell>FingerPrinting[23]</ns0:cell><ns0:cell>Family DNA</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>928</ns0:cell><ns0:cell>Signature</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>84%</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell>85%</ns0:cell></ns0:row><ns0:row><ns0:cell>DroidChain[22]</ns0:cell><ns0:cell>Permissions, API Call, Behaviour Chain</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1260</ns0:cell><ns0:cell>Warshall</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>92%</ns0:cell><ns0:cell>93%</ns0:cell><ns0:cell>N/A</ns0:cell></ns0:row><ns0:row><ns0:cell>Shhadat [21]</ns0:cell><ns0:cell>Heuristic Strategy, dynamic analysis</ns0:cell><ns0:cell>172</ns0:cell><ns0:cell>984</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>96.4%</ns0:cell><ns0:cell>87.3%</ns0:cell><ns0:cell>97.8%</ns0:cell><ns0:cell>91.2%</ns0:cell></ns0:row><ns0:row><ns0:cell>DroidDet[8]</ns0:cell><ns0:cell>Permissions, System events, Sensitive API and URL</ns0:cell><ns0:cell>1065</ns0:cell><ns0:cell>1065</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>88.16%</ns0:cell><ns0:cell>88.40%</ns0:cell><ns0:cell>88.26%</ns0:cell><ns0:cell>N/A</ns0:cell></ns0:row><ns0:row><ns0:cell>DL-Droid[20]</ns0:cell><ns0:cell>Application Attributes, Actions, Events, Permissions</ns0:cell><ns0:cell>11505</ns0:cell><ns0:cell>19620</ns0:cell><ns0:cell>300,100,300</ns0:cell><ns0:cell>94.08%</ns0:cell><ns0:cell>97.78%</ns0:cell><ns0:cell>94.95%</ns0:cell><ns0:cell>95.89%</ns0:cell></ns0:row><ns0:row><ns0:cell>SRBM[37]</ns0:cell><ns0:cell>Static and Dynamic Features</ns0:cell><ns0:cell>39931</ns0:cell><ns0:cell>40923</ns0:cell><ns0:cell>RBM</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>80.4%</ns0:cell><ns0:cell>84.3%</ns0:cell></ns0:row><ns0:row><ns0:cell>Luo[38]</ns0:cell><ns0:cell>API Calls</ns0:cell><ns0:cell>1400</ns0:cell><ns0:cell>1400</ns0:cell><ns0:cell>Correntropy, CNN</ns0:cell><ns0:cell>95.0%</ns0:cell><ns0:cell>76.0%</ns0:cell><ns0:cell>84.25%</ns0:cell><ns0:cell>84.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>ProDroid[39]</ns0:cell><ns0:cell>API Calls</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>1500</ns0:cell><ns0:cell>HMM</ns0:cell><ns0:cell>93.0%</ns0:cell><ns0:cell>95.0%</ns0:cell><ns0:cell>94.5%</ns0:cell><ns0:cell>93.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Model DL(376502(300,300,300,300))</ns0:cell><ns0:cell>Application Permissions</ns0:cell><ns0:cell>961</ns0:cell><ns0:cell>6661</ns0:cell><ns0:cell>300,300,300,300</ns0:cell><ns0:cell>98.9%</ns0:cell><ns0:cell>99.1%</ns0:cell><ns0:cell>98.03%</ns0:cell><ns0:cell>99.0%</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "REVISION DETAILS
Dear Editor and Reviewers,
Thank you for the reviewers’ comments concerning our manuscript entitled “AndroAnalyzer: Android malicious software detection based on deep learning”. The comments are all valuable and very helpful for revising and improving our manuscript, and they provide important guidance for our studies. We have studied the comments carefully and made all the corrections. The responses to the reviewer comments are enclosed below.
Best regards,
Recep Sinan ARSLAN
Editor
Q1: As mentioned in the first review, the paper needs proofreading since it has grammatical errors.
[# PeerJ Staff Note: The Academic Editor has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at copyediting@peerj.com for pricing (be sure to provide your manuscript number and title) #]
A1: A general and detailed English proofreading process was carried out by a native speaker. All necessary changes are shown with tracked changes.
Reviewer 2
Q1: The contribution is clear and significant, but still the Language needs to be improved. For example, the following statement in the result section “ Both obtaining the best deep learning structure ” is confused.
A1: A general and detailed English proofreading process was carried out by a native speaker. All necessary changes are shown with tracked changes. For example, the corrected version of the sentence you criticized is as follows. In this way, corrections have been made throughout the article.
“ … In this section, the proposed model testing process is explained in detail. Both the obtaining of the best deep learning structure and comparison of it with other classification methods or similar studies are shown in the table. …”
" | Here is a paper. Please give your review comments after reading it. |
97 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Crop classification in early phenological stages has been a difficult task due to spectrum similarity of different crops. For this purpose, low altitude platforms such as drones have great potential to provide high resolution optical imagery where Machine Learning (ML) applied to classify different types of crops. In this research work, crop classification is performed at different phenological stages using optical images which are obtained from drone. For this purpose, gray level co-occurrence matrix (GLCM) based features are extracted from underlying gray scale images collected by the drone. To classify the different types of crops, different ML algorithms including Random Forest (RF), Naive Bayes (NB), Neural Network (NN) and Support Vector Machine (SVM) are applied. The results showed that the ML algorithms performed much better on GLCM features as compared to gray scale images with a margin of 13.65% in overall accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Remote sensing has been found to be a valuable tool in evaluating, monitoring and management of land, water and crop resources. Satellite imagery and aerial imagery have wide applications in the field of agriculture, monitoring snow cover trends, wild fire trends, water level rising and forestry. In agriculture, crop health monitoring, yield estimation, classification of crops based on land cover and monitoring of droughts are some common applications of remote sensing <ns0:ref type='bibr' target='#b22'>[Navalgund, Jayaraman, and Roy, 2007</ns0:ref><ns0:ref type='bibr' target='#b23'>, Seelan, Laguette, Casady, and Seielstad, 2003</ns0:ref><ns0:ref type='bibr' target='#b14'>, Hufkens, Melaas, Mann, Foster, Ceballos, Robles, and Kramer, 2019</ns0:ref><ns0:ref type='bibr' target='#b24'>, Sivakumar, Roy, Harmsen, and Saha, 2004]</ns0:ref>. Among these applications, crop classification is quite challenging due to texture and colour similarity of crops in initial stages. For this purpose, satellite data is commonly used which provide free access to the data. The data obtained from such platforms is coarse in resolution which makes it difficult to classify the different types of crops. Apart from coarse resolution of satellite data, the effect of atmospheric particles and cloud cover percentage in image, if greater than 90 % will result in discarding the images as no valuable information can be extracted from the satellite for these images.</ns0:p><ns0:p>The low cost Un-manned Aerial Vehicles (UAV) are substitute of the satellite platforms which provide high resolution data with flexibility in data collection. After high resolution images acquisition, several Machine / Deep Learning (ML/DL) algorithms are applied to classify the different types of crops. A lot of applications use texture information as features which are further given as input to the ML classification algorithms. The texture features provide useful insights about the color, its spatial arrangement and PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science intensities.</ns0:note><ns0:p>In <ns0:ref type='bibr' target='#b17'>[Kwak and Park, 2019]</ns0:ref>, the crop classification based on texture features is performed on the data collected by a drone mounted with multi-spectral camera. The acquired images are up scaled to 25 cm resolution and mosaiced later to obtain complete field of view. To extract texture features, Gray Level Co-occurrence Matrix (GLCM) at different kernel sizes is used including 3 x 3, 15 x 15 and 31 x 31. The mosaiced images act as an input to classification algorithm, such as Random Forest and Support Vector Machine (SVM). It is seen that using textural features obtained from larger kernel size showed improvement in classification results by 7.72% in overall accuracy rather than only using spectral information for classification.</ns0:p><ns0:p>Similarly in <ns0:ref type='bibr' target='#b1'>[Böhler, Schaepman, and Kneubühler, 2018]</ns0:ref>, texture base classification of crops is performed at pixel and parcel-based level where the crops in study are maize, bare soil, sugar beat, winter wheat and grassland. The images are acquired by eBee UAV in four flights of 30 minute each on 26 June 2015.</ns0:p><ns0:p>Textural features are extracted from the obtained UAV images. 
Random forest algorithm is applied after extracting the texture features which obtained the overall accuracy of 86.3%.</ns0:p><ns0:p>In this study, we performed classification of four different types of crops including wheat, soybean, rice, maize. The main objective of this research is to investigate the texture feature based crop classification of different crops having same spatial texture and colors. The high resolution optical images are acquired by the drone. The multiple texture features are extracted including contract, homogeneity, dimensionality, angular second moment, energy and correlation. To perform classification, Support Vector Machine, Naive Bayes, Random forest and Neural Network are applied on the gray scale images and the texture features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK Crop Classification Traditional Techniques</ns0:head><ns0:p>Over the two decades, a lot of research have been done in agriculture domain to perform different agricultural activities,particularly in crop disease detection, crop health monitoring, crop yield estimation, crop classification and others <ns0:ref type='bibr' target='#b19'>[Latif, 2018]</ns0:ref>. To perform these activities; machine learning or deep learning techniques are applied on the data collected from satellite, drone or IoT sensors which are discussed in the sections below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Crop Classification Using Satellite Data</ns0:head><ns0:p>An analysis on crop classification and land cover is presented in <ns0:ref type='bibr' target='#b16'>[Khaliq, Peroni, and Chiaberge, 2018]</ns0:ref>, in which Sentinel-2 is used to capture the multi-spectral imagery. The phonological cycle of crops is analyzed by computing NDVI of time series spectral imagery data. The 'Random Forest' classifier is used to classify the land cover where NDVI values are used as feature vectors. The Random Forest shows 91.1 % classification accuracy i.e. predicted land cover match with the actual ground cover. In <ns0:ref type='bibr' target='#b4'>[Deng, Zhu, He, and Tang, 2019]</ns0:ref>, land cover classification is using Random Forest as classifier. The images are acquired from two satellites including Landsat 8 and MODIS. These images are fused based on Enhanced Spatial In <ns0:ref type='bibr' target='#b21'>[Luciani, Laneve, Jahjah, and Collins, 2017]</ns0:ref>, an analysis on crop classification is presented in which Landsat-8 OLI is used to capture the multispectral imagery at coarse spatial resolution of 30m. The acquired images are resampled to 15m spatial resolution using pan sharpening technique.The phenological profile of crop are extracted by processing NDVI of time series spectral imagery data. The phenological profile are extracted based on pixel level and interpolation is used for reconstruction of missing NDVI value at a particular pixel. The univariate decision tree is applies on the data where feature vector consists of NDVI values. Results show that univariate decision tree achieved an accuracy of 92.66%.</ns0:p><ns0:p>There are a lot of datasets which are publicly available for land classification. In <ns0:ref type='bibr'>[Helber, Bischke, Dengel, and Borth, 2018]</ns0:ref>, land classification is performed using publicly available dataset 'EuroSAT' which Manuscript to be reviewed Computer Science of 98,18 % and 98.57 % respectively. In <ns0:ref type='bibr' target='#b14'>[Hufkens, Melaas, Mann, Foster, Ceballos, Robles, and Kramer, 2019]</ns0:ref>, health of wheat crop is monitored using near surface imagery captured by smart phone. Images are collected from 50 fields by smart phone during the complete life cycle of wheat crop. Each day, farmers captured images three times and captured images are transmitted to the cloud where green level is assessed by green chromatic coordinates. Crop is classified as healthy or un-healthy based on the green level. Subsequently, the classification result is compared with Landsat 8 imagery in which classification of healthy and un-healthy crops is performed based on Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) values. Results show that there is small deviation between the classification results based on smart phone imagery and satellite imagery.</ns0:p></ns0:div>
<ns0:div><ns0:head>Crop Classification Using Drone Data</ns0:head><ns0:p>Textural features from an image help to extract useful information from the images. In <ns0:ref type='bibr' target='#b20'>[Liu, Shi, Duan, and Wu, 2018]</ns0:ref>, experimental area is selected in Minzhu Township, Daowai District, Harbin, where variety of crops are planted. The 12 types of cropland cover in study includes rice, unripe wheat, ripe wheat, harvested wheat, soybean, corn, trees, grassland, bare land, houses, greenhouses and roads. The measurement and marking of Ground Control Points (GCP) is conducted on 3 August 2017 and data is collected on 4 August 2017 using fixed-wing UAV with a Sony Digital Camera. Digital Surface Model(DSM) and Digital Orthophoto Map (DOM) are produced with the help of POS data and GCP.</ns0:p><ns0:p>Texture features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation are extracted using ENVI software for RGB and DSM bands. SVM is used to perform classification of crops with RBF kernel. The combination of different features is performed to see impact of each feature. By using RGB resulted in a classification accuracy of 72.94% and combination of RGB, DSMs, Second Moment of green band, DSMs variance (27 * 27), DSMs contrast (27 * 27) acheived an accuracy of 94.5%. The results show that the hard to differentiate classes in color space became separable by adding altitude as a spatial feature where height for each tree, crop and grass differs.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b12'>[Hu, Zhong, Luo, and Wei, 2018]</ns0:ref>, hyper-spectral imaging sensor is mounted on a UAV to offer images at a higher spatial and higher spectral resolution at same interval. The study area chosen is a field in southern farm in Honghu city, located in China. The images are taken from the altitude of 100m at a spatial resolution of 4cm with 274 spectral bands. To fully utilize the potential of spatial and spectral resolution of the image, a combination of CNN-CRF model is proposed, to classify crops accurately.</ns0:p><ns0:p>For this to work, in preprocessing phase, the Principal Component Analysis (PCA) is performed for dimensionality reduction of the data while in meantime preserving spectral information. Each patch on image will be passed to CNN as input, to get rule image from the PCA. The rule image, which is the output of CNN will be passed to CRF model to generate classification map of the output. The CNN-CRF model achieved an accuracy of 91.79% in classifying different crop types.</ns0:p><ns0:p>Image fusion between satellite and UAV can help in classification of crops at detail level. In <ns0:ref type='bibr' target='#b31'>[Zhao, Shi, Liu, Hovis, Duan, and Shi, 2019]</ns0:ref>, fusion between Sentinel-2A satellite and images acquired from fixed wing Agristrong UAV drone is performed to get image at high spatial, high spectral and high temporal resolution. For this purpose, experimental area covering around 750 ha is selected in Harbin city, Heilongjiang province, China. The crop types in the current study includes rice, soybean, corn, buckwheat, other vegetation, bareland, greenhouses, waters, house and roads. The images are acquired using UAV drone for 14 September 2017 at 0.03 m resolution and Sentinel-2A images for 16 September 2017 are downloaded. The high resolution 0.03 m images are sub-sampled at lower resolution (0.10 m, 0.50 m, 1.00 m and 3.00 m). 
The fusion between UAV images at different resolutions and Sentinel-2A images is performed using Gram-Schmidt transformation <ns0:ref type='bibr' target='#b18'>[Laben and Brower, 2000]</ns0:ref>. Random forest algorithm performed better crop classification for fused image at 0.10 m with accuracy at 88.32%, whereas without fusion the accuracy is at 76.77% and 71.93% for UAV and Sentinel-2A images respectively.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b1'>[Böhler, Schaepman, and Kneubühler, 2018]</ns0:ref>, classification of crops is done at pixel and parcel-based level. The study area covering 170 hectare is selected in Canton of Zurich, Switzerland. The crops in study are maize, bare soil, sugar beat, winter wheat and grassland. The images are acquired by eBee UAV in four flights of 30 minutes each on 26 June 2015. Subsequently, the textural features are extracted from the obtained UAV images. The random forest algorithm is applied on the extracted features and crop maps are generated where object based classification resulted in overall accuracy of 86.3% for overall set of crop classes.</ns0:p></ns0:div>
<ns0:div><ns0:head>Deep Learning for Crop Classification</ns0:head><ns0:p>In <ns0:ref type='bibr' target='#b27'>[Trujillano, Flores, Saito, Balcazar, and Racoceanu, 2018]</ns0:ref>, a deep learning network is used to classify the corn crop in region of Peru, Brazil. The images are acquired for two locations where first location contained corn plots, trees, river and other crops situated in a mountainous region, where flight is conducted at 100 and 180 m respectively. The second location is a coastal area where images are acquired at altitude of 100 m, area consists of corn crop and some nearby houses. The multi-spectral camera mounted on the UAV acquired images in 5 different bands, at a spatial resolution of 8 cm. Photoscan tool is used to generate the mosaic of image. The image is divided into a patch size of 28 x 28, covering two rows of the corn fields. The patch is labelled as corn or no corn field. Four datasets are generated from the acquired images where dataset #1 and dataset #2 covered classes with images acquired at an altitude of 100m and 180m. The dataset #3 merged the corn classes from different altitude flight images whereas in the dataset #4, the dataset #1 is augmented which included rotation and flipping of images. Each dataset containing 28 x 28 patches of images is trained using the LeNet model, in which the dataset number two achieved an accuracy of 86.8% on test set.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b32'>[Zhou, Li, and Shao, 2018]</ns0:ref>, the various types of crop classification methods are proposed using CNN and SVM algorithms. For this purpose, Yuanyang Country, in the province of Henan, China is selected as study area where the main crops in the region are rice, peanut and corn. The Sentinel-2A images are acquired for two dates, where all the bands data has been resampled to 10 m resolution and the resultant stack of 26-dimensional image is generated. A ground survey is conducted in the month of August 2017, for labelling of different types of crops. Around 1200 pixels are selected for training and rests of the pixels are used for validation. The labeled pixel in the final stack image is converted to gray scale which is given as an input to the model. The CNN outperformed the SVM, where it clearly show the deep learning based model is better at learning the features, while achieving accuracy of 95.61 % in classification of crops. In <ns0:ref type='bibr' target='#b26'>[Sun, Abdulghani, Imran, and Abbasi, 2020]</ns0:ref>, an application for smart home is presented. The application monitors the moisture of the soil and value of nitrogen, phosphorous and potassium for an indoor plant with the help of IoT sensors. The value is classified based on various levels and provide feedback to the user with help of dashboard. The system designed is a prototype, which helps the farmers when to irrigate the crop and what ratio of the value of nutrients is suitable for the specific plant. Water content estimation in plant leaf can help in the productivity of the crops. In <ns0:ref type='bibr' target='#b30'>[Zahid, Abbas, Ren, Zoha, Heidari, Shah, Imran, Alomainy, and Abbasi, 2019]</ns0:ref>, a novel approach based on machine learning is presented to estimate the health status of the plant leaves terahertz waves by measuring transmission response for four days. Each frequency point recorded is used as a feature in the study. Feature selection was carried out to discard any irrelevant feature that could result in wrong prediction of water content in the leaves. 
The support vector machine (SVM) algorithm clearly performed better at predicting the accurate water content in the leaves for four days.</ns0:p><ns0:p>The work proposed in this paper will process the optical images acquired by UAV by data augmentation for the crop class with very few images. The processed images will be converted to gray scale down scaled to a low resolution. The textural features will be extracted from the gray scale images. Crop classification will be performed by using machine learning and deep learning algorithms for gray scale and textural based images. With the evaluation measure we will compare and evaluate the performance of how GLCM based textural features will outperform the one with gray scale images. In this work, the main focus is how textural features will be helpful to distinguish between different types of crops compared to gray scale images. The paper is organized as follows, where literature review is conducted in Section-2, study area and data set is discussed in Section-3, feature extraction in section-4, methodology in section-5, results in section-6 and conclusion and future work in section-7.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>STUDY AREA & DATA SET Study Area</ns0:head><ns0:p>To perform crop classification, an experimental area in the capital of Pakistan, Islamabad located at National Agriculture Research Center (NARC) is selected. In NARC region, various types of crops are grown throughout the year and experiments are performed. For our research, we selected four crops wheat, maize, rice and soybean as shown in Figure <ns0:ref type='figure'>1</ns0:ref>. Crop calendar for Pakistan can be viewed at <ns0:ref type='bibr'>[Calendar, 2020]</ns0:ref>, where the particular locations of the crops in the study along with their growth cycle is enlisted in the </ns0:p></ns0:div>
<ns0:div><ns0:head>Data set</ns0:head><ns0:p>The data set used in the study is gathered for five different crops at different growth cycle of crops as shown in Table-1 using DJI Phantom pro Advanced. All the selected crops including wheat, rice, soybean and maize have overlapping crop cycle, especially winter wheat crop and winter maize crop had the same planting time. It is quite challenging to separate wheat and maize crops based solely on their NDVI profile.</ns0:p><ns0:p>In order to address this problem, UAV optical imagery is collected and GLCM features are extracted from these images. Subsequently, several machine / deep learning models are applied to perform crop classification where the details of the dataset acquisition, machine learning/deep models, results are provided in the following sections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>METHODOLOGY</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> shows the architectural diagram of the system divided into modules. The first module is the data acquisition where the data is collected with the help of UAV drone. After the collection of the data, next step is the pre-processing of the data, which requires analysis to remove images outside the boundary of the crop and to apply data augmentation for the crops fields where we have limited amount of data. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the five crops fields including soysbean, rice, maize, wheat at tillering stage and wheat at maturity stage.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Data Pre-processing</ns0:head><ns0:p>The first step after collecting the data is to pre-process it to make it suitable for training. The captured images are analyzed without removing the images outside of the boundary. The collected data is organised in folders containing date of collection along with the stage of the particular crops. In order to perform supervised classification, field survey is conducted to label each image with the help of NARC experts. Initially, the collected data is not sufficient to apply any classification technique, therefore, data augmentation is used to enhance the data. For this purpose, horizontal flipping and zoom with a minor</ns0:p></ns0:div>
<ns0:div><ns0:head>6/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Feature Extraction</ns0:head><ns0:p>The pixels in an optical image contain noise which is reduced by the information from texture and is considered as a complimentary feature in classification. The analysis of textural images are categorized into four different groups that includes model-based, statistical, geometrical and signal processing <ns0:ref type='bibr' target='#b28'>[Tuceryan and Jain, 1993]</ns0:ref>. For feature extraction GLCM is employed, which is a widely used statistical technique developed by <ns0:ref type='bibr' target='#b9'>[Haralick, Shanmugam, and Dinstein, 1973]</ns0:ref> for processing of remote sensing data. In first step, the original image is converted to the gray-scale. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>• Homogeneity (HOM): This statistical measure is also called inverse difference moment. It measures the homogeneity in the image where it assumes larger values for smaller differences in grey tone within pair elements. Homogeneity is more sensitive to the presence of near diagonal elements in the GLCM. The value of homogeneity is maximum when elements in the image are same.</ns0:p><ns0:p>GLCM contrast and homogeneity are strongly but inversely correlated, which means homogeneity decreases when contrast increases while energy is kept constant.</ns0:p><ns0:p>• Dissimilarity (DIS): Dissimilarity is a linear measure of local variations in an image.</ns0:p><ns0:p>• Angular second Moment (ASM): It measures textural uniformity i.e. repetitions in pixel pair. It detects the disorders in textures of the images. The maximum value achieved by angular second moment is one. Higher values occurs when the gray level distribution has constant periodic form.</ns0:p><ns0:p>• Energy (EG): Energy is computed as square root of angular second moment. When the window is orderly, energy has higher values.</ns0:p><ns0:p>• Correlation (CORR): It is a measure of linear dependencies between gray tone of the image.</ns0:p><ns0:p>Each of the listed textural feature is computed using the Equations-1 to 6 <ns0:ref type='bibr'>[[Equations, 2011]</ns0:ref>]:</ns0:p><ns0:formula xml:id='formula_0'>CON = N−1 ∑ i=0 N−1 ∑ j=0 (i − j) 2 (1) HOM = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j) 1 + (i − j) 2</ns0:formula><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>DIS = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j)x|i − j| (3) ASM = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j) 2 (4) EG = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j) 2 (5) CORR = N−1 ∑ i=0 N−1 ∑ j=0 (i − µ i )( j − µ j ) (σ i )(σ j )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where N denotes number of gray levels, while P(i, j) is the normalized value of gray-scale at position i and j of the kernel with sum equal to 1. The textural features are generated from 100 x 100 gray-scale images.</ns0:p><ns0:p>In this study, the kernel size is set to 19, and a total of 48 features are generated for each gray-scale image with distance at 1 and 2, rotation at 0, 45 • , 90 • and 135 • for each textural feature.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Crop Classification</ns0:head><ns0:p>In order to perform crop classification on the collected dataset, several supervised techniques are applied which are discussed below:</ns0:p></ns0:div>
<ns0:div><ns0:head>Naive Bayes Classifier</ns0:head><ns0:p>The Naive Bayes classifier is a simple probabilistic classifier based on Bayes' theorem. The inducer in Naive Bayes computes the conditional probabilities of the classes given the instance and selects the class with the highest posterior probability <ns0:ref type='bibr' target='#b29'>[Witten and Frank, 2002]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Neural Network</ns0:head><ns0:p>A neural network is a widely used model designed to mimic the human brain for classification and regression tasks. It contains one input layer, one or more hidden layers, each holding several neurons or nodes, and a single output layer <ns0:ref type='bibr' target='#b7'>[Goodfellow, Bengio, Courville, and Bengio, 2016]</ns0:ref>.</ns0:p><ns0:p>Each layer computes mathematical functions that enable it to find complex relationships in the data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Support Vector Machines</ns0:head><ns0:p>The goal of the Support Vector Machine (SVM) is to find an optimal boundary that separates the classes based on the data in the training set <ns0:ref type='bibr' target='#b5'>[Ding, Qi, and Tan, 2011]</ns0:ref>. The SVM algorithm solves the optimization so as to maximize the margin around the decision boundary <ns0:ref type='bibr' target='#b8'>[Gunn, 1998]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Random Forest Classifier</ns0:head><ns0:p>The Random Forest classifier, developed by Breiman <ns0:ref type='bibr' target='#b2'>[Breiman, 2001]</ns0:ref>, performs classification by extending the decision to multiple trees instead of a single tree. The ability to diversify across multiple trees helps to achieve better classification performance. The final class of a particular instance is decided by the majority vote of all trees. Random Forest requires only a few parameters, namely the number of variables used for partitioning the nodes and the number of trees to be grown.</ns0:p></ns0:div>
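To make the comparison of the classical classifiers concrete, the sketch below trains the four models discussed above on a GLCM feature matrix using scikit-learn. The feature matrix X (one 48-dimensional row per image) and label vector y are assumed to come from the feature-extraction step; the placeholder data and all hyperparameter values shown are defaults or illustrative choices, not the settings reported in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder data with the same shape as the GLCM feature matrix (48 features per image)
X = np.random.rand(123, 48)
y = np.random.randint(0, 5, size=123)  # 5 crop classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```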
<ns0:div><ns0:head>Convolutional Neural Network (CNN)</ns0:head><ns0:p>CNN is a deep learning model commonly used on imagery data <ns0:ref type='bibr' target='#b7'>[Goodfellow, Bengio, Courville, and Bengio, 2016]</ns0:ref>. It consists of an input layer, multiple hidden layers and an output layer, where the hidden layers are composed of convolutional layers followed by pooling and dense layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Long Short Term Memory (LSTM) Network</ns0:head><ns0:p>LSTM is another deep learning model, based on the Recurrent Neural Network (RNN), which has the capability to learn from time series data with long-term dependencies <ns0:ref type='bibr' target='#b7'>[Goodfellow, Bengio, Courville, and Bengio, 2016]</ns0:ref>. Each layer in an LSTM model is a set of recurrently connected blocks, or memory cells, which perform reasonably well on several complex tasks such as crop classification.</ns0:p></ns0:div>
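The two deep models can be sketched in Keras as follows. The layer sizes, the treatment of each 100 x 100 gray-scale image as a 100-step sequence of rows for the LSTM, and the optimizer settings are illustrative assumptions; the paper does not specify the exact architectures used.

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # soybean, rice, wheat-T, wheat, maize

def build_cnn(input_shape=(100, 100, 1)):
    """Small CNN for gray-scale patches: conv/pool blocks followed by a dense classifier."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_lstm(timesteps=100, features=100):
    """LSTM that reads each image row as one timestep of a 100-step sequence."""
    return models.Sequential([
        layers.Input(shape=(timesteps, features)),
        layers.LSTM(64),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

for model in (build_cnn(), build_lstm()):
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=20, validation_split=0.2)
```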
<ns0:div><ns0:head n='4.4'>Evaluation Metrics</ns0:head><ns0:p>The evaluation metrics used to assess the performance of the machine and deep learning algorithms are described as follows:</ns0:p><ns0:p>Producer Accuracy</ns0:p><ns0:p>Producer's Accuracy (PA), defined in Equation-7, is the accuracy of the map from the point of view of the producer. The PA shows how correctly the classified map depicts the real features on the ground, i.e. the probability that a certain land cover on the ground is classified correctly. The PA is the complement of the Omission Error (OE), where PA = 100% − OE <ns0:ref type='bibr' target='#b25'>[Story and Congalton, 1986]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>PA = No. of correctly classified images of a class / Total no. of ground truth images for the class (7)</ns0:head><ns0:p>User Accuracy</ns0:p><ns0:p>The User's Accuracy (UA), defined in Equation-8, is the accuracy with respect to the user of the map. The UA shows how often a class on the classification map is actually present in the ground data. The UA is the complement of the Commission Error (CE), UA = 100% − CE. UA is calculated as the total number of correct classifications for a class divided by the total number of images assigned to that class.</ns0:p></ns0:div>
<ns0:div><ns0:head>Recall = TP / (TP + FN) (11)</ns0:head><ns0:p>F1-Score</ns0:p><ns0:p>F1-Score is computed to provide a balance between precision and recall, since computing precision and recall individually does not cover all aspects of accuracy. F1-Score is calculated using Equation-12. The range of the F1-score is between 0 and 1, where a higher number indicates better performance of the particular model. Similarly, FN is the number of Positive tuples which are wrongly classified as Negative <ns0:ref type='bibr' target='#b15'>[Kantardzic, 2011]</ns0:ref>. In the crop classification domain, the confusion matrix is another tool used to inspect the performance of the model in detail.</ns0:p></ns0:div>
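A small sketch of these metrics, assuming arrays of true and predicted labels, is given below. It derives per-class producer's and user's accuracy from the confusion matrix (rows are ground truth, columns are predictions) and uses scikit-learn for precision, recall, F1 and overall accuracy; the example labels are placeholders, not results from the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support, accuracy_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3, 4, 4])  # placeholder ground-truth labels
y_pred = np.array([0, 3, 1, 1, 2, 2, 4, 3, 4, 4])  # placeholder predictions

cm = confusion_matrix(y_true, y_pred)     # rows: ground truth, columns: predicted
pa = np.diag(cm) / cm.sum(axis=1)         # producer's accuracy = 1 - omission error
ua = np.diag(cm) / cm.sum(axis=0)         # user's accuracy = 1 - commission error
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, zero_division=0)

print("Confusion matrix:\n", cm)
print("PA per class:", pa)
print("UA per class:", ua)
print("Precision:", precision, "Recall:", recall, "F1:", f1)
print("Overall accuracy:", accuracy_score(y_true, y_pred))
```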
<ns0:div><ns0:head n='5'>RESULTS AND DISCUSSION</ns0:head><ns0:p>The machine learning and deep learning algorithms used in our study are support vector machine (SVM), random forest classifier (RFC), Naive Bayes classifier and neural networks (NN). Each algorithm is applied once on the gray scale images and once on the GLCM based textural features. For SVM on the GLCM based textural features, the algorithm classifies 85.7% of the wheat crop images correctly, whereas 7.1% of the images are classified as rice and the remaining 7.1% as soybean. The algorithm classifies 81.82% of the maize crop images correctly, whereas 9.1% of the images are classified as rice and the remaining 9.1% as wheat-T. The overall accuracy obtained for SVM classification on the generated GLCM based textural features is 84.10%, an improvement of 13.65% compared to training the classifier only on gray scale images. This improvement clearly shows the impact of using textural features extracted from gray scale images and their ability to distinguish between different crop types.</ns0:p><ns0:p>It can be concluded from the results obtained by applying the machine and deep learning algorithms that the machine learning models, with the help of textural feature extraction using GLCM, are able to outperform the deep learning algorithms because of the limited data set available. To further enhance the performance of the deep learning algorithms, more data needs to be gathered in order to achieve better results than the machine learning algorithms.</ns0:p><ns0:p>Table-16 describes the accuracy, precision, recall and F1-score for SVM on the grey scale images and the texture images. The highest accuracy (100%) is achieved for the rice crop using gray scale images. Similarly, the F1-score is the highest for the rice crop when SVM is applied. The highest value of precision is achieved for the rice and wheat-T crops using both gray scale and texture based images. The recall shows the highest value for rice using gray scale images.</ns0:p><ns0:p>Similarly, Table-17 shows the accuracy, precision, recall and F1-score for the gray scale images and the texture images when Random Forest is applied. The highest accuracy (100%) is obtained for the rice, maize and wheat-T crops for both gray scale and texture based images. The F1-score for maize, rice and wheat-T is highest for both gray scale and texture images. High precision values are observed for all crops with gray scale and texture based images, with the highest precision for the soybean crop.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>is comprised of 27000 labelled examples covering 10 distinctive classes. Each image patch is of 64 x 64 pixels, collected from 30 cities in the European Urban Atlas. For classification, the data set is divided in the ratio of 80 to 20 for training and testing respectively. Two deep learning architectures, 'GoogLeNet' and ResNet-50, are trained on the dataset, which achieved an accuracy</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. System Architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>3. The first wheat flight is conducted on 16-May-2019 with the wheat field at max maturity stage, and the second wheat crop flight is performed on 02-March-2020 at tillering stage. The flight for soybean is conducted on 03-September-2019, whereas the flight for the rice field is done on 03-September-2019 at max-tiller stage. The flight for the maize crop is done at max-maturity stage on 24-July-2019. Due to the limited images of the rice field, the rice field images are augmented to make their count equivalent to the minimum of the rest of the crop field classes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Crops Optical Images captured by using DJI Phantom</ns0:figDesc><ns0:graphic coords='8,371.16,291.02,98.04,73.53' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>The next step is to extract spatial features from the gray-scale images based on the relationship of the brightness values of the centre pixel with its neighbourhood, defined by a kernel or window size. The relationship of the brightness values is represented in the form of a matrix, made up of the frequency of occurrence of sequential pairs of pixel values along a defined direction. This relationship allows the GLCM to generate different sets of texture information based on gray-scale level, kernel size and direction. Haralick in [Haralick, Shanmugam, and Dinstein, 1973] defined fourteen textural features, which provide redundant spatial context information that is an overhead in classification. In this study, only six textural features are considered, which are listed below: • Contrast (CON): The contrast measures the spatial frequency of an image and is a difference moment of the GLCM. It is the difference between the highest and the lowest values of an adjacent set of pixels. The contrast texture measures the local variations present in the image. An image with low contrast presents a GLCM concentration around the principal diagonal and features low spatial frequencies.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4. Confusion Matrix</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>F1-Score = 2 * (Precision * Recall) / (Precision + Recall) (12)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Accuracy: Accuracy refers to the capability of the model to produce correct predictions for the instances observed. It is defined in Equation-13, where TP means true positive, TN means true negative, FP means false positive and FN means false negative. Accuracy = (TP + TN) / (TP + TN + FP + FN) (13) All the TP, TN, FP and FN values can easily be computed by drawing the confusion matrix, which is a visual representation of all these values as shown in Figure 4. Figure 4 shows the confusion matrix for two classes, i.e. Positive and Negative. TP is the number of correctly classified tuples of the Positive class, and TN is the number of tuples which are correctly classified as Negative. However, FP is the number of Negative tuples which are incorrectly classified as Positive.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>have selected five crop classes at various phenological stages of the crops. The results for the overall accuracy of all classes and the individual crop class performance are organized in separate tables to give a better overview of the performance. Table-4 shows the confusion matrix for classification performed on the gray scale images using the SVM algorithm. The SVM algorithm is able to classify the rice and wheat-T crops correctly. The algorithm with only gray scale images is not able to classify the soybean crop correctly, and all the images of soybean are classified as wheat (maturity stage) crop. The SVM algorithm correctly classifies 93% of [...] and only 7% of the images are classified as maize; similarly, 73% of the maize crop images are correctly classified and only 27% are classified as wheat-T. The overall accuracy obtained by SVM classification on gray scale images is 70.45%. [Table 4. Confusion matrix for classification performed on grayscale images using SVM; columns: Class, Soybean, Rice, Wheat-T, Wheat, Maize, PA(%)]</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>gray scale images classify soybean as wheat crop. In addition, the wheat crop images at maturity stage are classified as wheat crop correctly, but one image is mis-classified as maize crop. The overall accuracy obtained from the CNN based classifier on gray scale images is 77.27%. Table-15 shows the confusion matrix for classification performed on the generated GLCM textural features using the convolutional neural network (CNN) based classifier. The CNN based classifier is able to classify the maize crop only. The classifier with GLCM based images classifies all crop images into a single class: each of the soybean, rice, wheat-T, wheat and maize images is classified as maize crop. The overall accuracy obtained for the CNN based classifier on GLCM based images is 25%, which is lower than that on the gray scale images. The reason for this poor performance is the limited amount of available data, which is not enough to train a deep learning model. The CNN based classifier on GLCM generated images failed to learn any information. [Table 15. Confusion matrix for classification performed on generated texture feature images using CNN; columns: Class, Soybean, Rice, Wheat-T, Wheat, Maize, PA(%)]</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>List of crops selected in the study area. The climate of Islamabad is humid subtropical with an average rainfall of 790.8 millimetres.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Crop</ns0:cell><ns0:cell>Crop-Cycle</ns0:cell><ns0:cell>Location</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-I</ns0:cell><ns0:cell>Dec-18 to Jun-19</ns0:cell><ns0:cell>30° 40' 22.25' N, 73° 07' 18.28' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>Jun-19 to Oct-19</ns0:cell><ns0:cell>30° 40' 25.19' N, 73° 07' 27.93' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>Jul-19 to Dec-19</ns0:cell><ns0:cell>33° 40' 34.46' N, 73° 08' 10.20' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-II</ns0:cell><ns0:cell>Nov-19 to May-20</ns0:cell><ns0:cell>33° 40' 17.29' N, 73° 07' 48.98' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>Mar-19 to Jul-19</ns0:cell><ns0:cell>33° 40' 18.69' N, 73° 07' 37.84' E</ns0:cell></ns0:row></ns0:table><ns0:note>Figure 1. Crops marked in © Google Earth (NARC Region)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Specifications of UAV drone used in the study</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Characteristics</ns0:cell><ns0:cell>Technical Specifications</ns0:cell></ns0:row><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Four-rotor electric UAV</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight</ns0:cell><ns0:cell>1368 g</ns0:cell></ns0:row><ns0:row><ns0:cell>Manufacturer</ns0:cell><ns0:cell>DJI</ns0:cell></ns0:row><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>FC6310</ns0:cell></ns0:row><ns0:row><ns0:cell>Operating Temperature</ns0:cell><ns0:cell>0 • to 40 •</ns0:cell></ns0:row><ns0:row><ns0:cell>Camera Sensor</ns0:cell><ns0:cell>1' CMOS</ns0:cell></ns0:row><ns0:row><ns0:cell>Image Size</ns0:cell><ns0:cell>4864 x 3648</ns0:cell></ns0:row><ns0:row><ns0:cell>Flight Duration</ns0:cell><ns0:cell>30 minutes</ns0:cell></ns0:row><ns0:row><ns0:cell>Battery</ns0:cell><ns0:cell>5870 mAH LIPo 4S</ns0:cell></ns0:row></ns0:table><ns0:note>to cover five fields at different stages of crop cycle which are listed in Table-</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Crop fields images acquired at various stage of crop cycle</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Crop</ns0:cell><ns0:cell>Stage</ns0:cell><ns0:cell>Acquisition Date</ns0:cell><ns0:cell>Acquisition Time</ns0:cell><ns0:cell>Altitude</ns0:cell><ns0:cell>Images Count</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Wheat-I Max Maturity</ns0:cell><ns0:cell>16-May-2019</ns0:cell><ns0:cell>12:20 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>41</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>Max-Tiller</ns0:cell><ns0:cell>03-Sept-2019</ns0:cell><ns0:cell>12:15 PM</ns0:cell><ns0:cell>120 foot</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>V2 Stage</ns0:cell><ns0:cell>03-Sept-2019</ns0:cell><ns0:cell>12:40 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-II</ns0:cell><ns0:cell>Tiller Stage</ns0:cell><ns0:cell>02-March-2020</ns0:cell><ns0:cell>01:30 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>Max Maturity</ns0:cell><ns0:cell>24-July-2019</ns0:cell><ns0:cell>01:15 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>39</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>SOYBEAN</ns0:cell><ns0:cell>RICE</ns0:cell><ns0:cell /><ns0:cell cols='2'>MAIZE</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>WHEAT TILLERING</ns0:cell><ns0:cell cols='2'>WHEAT MATURITY</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using SVM</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>66.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>85.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>81.8</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>85.7</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>84.1%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>Table-5 shows the confusion matrix for classification performed on generated GLCM textural features</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>using SVM algorithm. The SVM algorithm on GLCM based textural features is able to classify rice</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>and wheat-T crop images correctly. The algorithm with GLCM based textural images is able to classify</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>66.67% soybean crop images correctly, whereas 33.33% images are classified as wheat (maturity stage).</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc> shows the confusion Confusion matrix for classification performed on grayscale images using Random Forest Classifier for classification performed on gray scale images using Random Forest classifier. The Random Forest classifier is able to classify rice, wheat, wheat-T and maize crop correctly. The algorithm with only gray scale images is not able to classify soybean crop correctly, except one soybean image all the remaining images of soybean are classified as wheat (maturity stage) crop. The overall accuracy obtained by the Random Forest classifier on the gray scale images is 81.82%. Table-7 shows the confusion matrix</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>12.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>63.6</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>81.82%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>11/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)Manuscript to be reviewedComputer Science matrix</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Confusion matrix for classification performed on GLCM features using Random Forest Classifier performed on generated GLCM textural features using Random Forest classifier. The Random Forest classifier based on GLCM based textural features is able to classify rice, wheat, wheat-T and maize crop correctly. The algorithm with GLCM based textural images is able to classify 55.56% soybean crop images correctly, where 44.4% images are classified to wheat (maturity stage) images. The overall accuracy obtained for Random Forest classifier on generated GLCM based textural features is 90.91% which showed an improvement of 9.09% compared to training the classifier only on gray scale images. This improvement clearly indicates the ability of the textural features extracted from gray scale images to distinguish between the crop types. Table-8 shows the confusion matrix for classification</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>55.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>77.8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>90.91%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>for the classification</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using Naive Bayes Classifier 56% soybean crop images correctly, where 11.11% images are classified as rice crop and 33.33% images are classified as wheat (maturity stage) images. The overall accuracy obtained for Naive Bayes classifier on generated GLCM based textural features is 90.91% which showed an improvement of 11.36% compared to training the classifier on gray scale images only. This improvement clearly shows the impact of textural features extracted from gray scale images for distinguishing the crop types. Table-10 shows the confusion matrix for classification performed on gray scale images using feed</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>66.7</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>79.55%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>performed on gray scale images using Naive Bayes classifier. The Naive Bayes classifier is able to classify</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>rice, wheat (maturity stage), wheat (tillering stage), and maize crop correctly. The classifier with only gray</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>scale images is not able to classify soybean crop correctly and all the images of soybean are classified as</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>wheat (maturity stage) crop. The overall accuracy obtained for Random Forest classification on gray scale</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>images is 79.55%. Table-9 shows the confusion matrix for classification performed on generated GLCM</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>textural features using Naive Bayes classifier. The Naive Bayes classifier based on GLCM based textural</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>features is able to classify rice, wheat (maturity stage), wheat (tiler stage) and maize crop correctly. The</ns0:cell></ns0:row><ns0:row><ns0:cell>algorithm classified 55.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>12/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using Naive Bayes Classifier</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>55.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>82.4</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>90.91%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using Neural Networks The feed forward neural network classifier is able to classify wheat crop correctly. The classifier with only gray scale images classify soybean, wheat-T, wheat, maize as wheat crop. In addition, the rice crop images are classified as maize crop. The overall accuracy obtained for feed forward neural network classifier on gray scale images is 31.82%. The reason for this poor performance is the limited amount of available data which is not enough to train a deep learning model and thus resulted in showing an average performance. Table-11 shows the confusion matrix for classification performed</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>31.82%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>forward neural network classifier.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using Neural Networks based Classifier</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>25%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>on generated GLCM textural features using feed forward neural network classifier. The feed forward</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>neural network classifier is able to classify maize crop only. The classifier with only glcm based images</ns0:cell></ns0:row></ns0:table><ns0:note>classify all crop images into a single class. Each of the soybean, rice, wheat-T, wheat and maize images are classified as maize crop. The overall accuracy obtained for feed forward neural network classifier on GLCM based images is 25% and is lower compared to the gray scale images. The reason for this poor performance is the limited amount of available data which is not enough to train a deep learning model and thus resulted in showing an average performance.Table-12 shows the confusion matrix for classification performed on gray scale images using long short 13/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using LSTM LSTM). The LSTM based classifier when applied on gray scale image is able to classify maize and rice crop correctly. The classifier with only gray scale images classify soybean and wheat-T as wheat crop at mature stage. In addition, the wheat crop images at maturity stage are classified as wheat crop correctly, but only one image is mis classified as maize crop. The overall accuracy obtained for LSTM based classifier on gray scale images is 65.91%. Table-13 shows the confusion matrix for</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>93</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>65.91%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>term memory (</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using LSTMThe LSTM based classifier is able to classify maize crop only. The classifier with only glcm based images classify all crop images into a single class. Each of the soybean, rice, wheat-T, wheat and maize images are classified as maize crop. The overall accuracy obtained for LSTM based classifier on GLCM based images is 25% and is lower compared to the gray scale images. The reason for this poor performance is the limited amount of available data which is not enough to train a deep learning model and thus resulted in showing an average performance. Table-14 shows the confusion matrix for classification performed on</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>25%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>classification performed on generated GLCM textural features using long short term memory classifier.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using CNN</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>93</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>77.27%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>gray scale images using convolutional neural network (CNN). The CNN based classifier when applied on gray scale image is able to classify rice, wheat-tiller and maize crop correctly. The classifier with only 14/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Accuracy, Precision, Recall & F1-Score on gray scale images and texture images using SVM</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='2'>Accuracy (%)</ns0:cell><ns0:cell cols='2'>Precision</ns0:cell><ns0:cell cols='2'>Recall</ns0:cell><ns0:cell cols='2'>F-1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Gray scale</ns0:cell><ns0:cell>GLCM</ns0:cell><ns0:cell>Gray scale</ns0:cell><ns0:cell>GLCM</ns0:cell><ns0:cell>Gray scale</ns0:cell><ns0:cell>GLCM</ns0:cell><ns0:cell>Gray scale</ns0:cell><ns0:cell>GLCM</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>79.55</ns0:cell><ns0:cell>91.11</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.56</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>93.18</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.91</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>77.27</ns0:cell><ns0:cell>86.67</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>90.91</ns0:cell><ns0:cell>93.33</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head /><ns0:label /><ns0:figDesc> shows the accuracy, precision, recall and F1-score for the gray scale images and the texture images when Naive Bayes classifier is applied. The highest accuracy obtained for gray scale images is in the case of maize, wheat-T and rice crop, whereas in case of texture based images</ns0:figDesc><ns0:table /><ns0:note>15/19PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:1:1:NEW 27 Feb 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Original Article ID: 54928
Original Article Title: “Gray level co-occurrence matrix (GLCM) texture based crop
classification using low altitude remote sensing platforms”.
To: Tariq Masood
Re: Response to editor
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the comments.
We have reduced the overall length of the manuscript as it was exceeding the limit. We have updated the
earlier rebuttal document with the reviewer and editor comments to match the updated manuscript with
the changes.
Best regards,
Uferah Shafi et al.
Editor Comments
First Comment: Charge for Extra-long Manuscript
● We would like to draw your attention to
https://peerj.com/about/author-instructions/#manuscript-text-style and warn you that, if accepted,
your manuscript would incur an additional charge to help offset the production costs of a
manuscript of that size.
● The initial submission is currently estimated to be between 40-49 typeset pages (our typical
article is approximately 17 pages in extent). If the final accepted manuscript remains within that
length, the additional fee would be $400.
● If the final accepted manuscript is outside of this range then you will be given a new estimate
after acceptance in your production queries. This is not an additional APC and discount codes
cannot be used to pay for this fee.
● ACTION NEEDED:
○ Please either: confirm that you accept this fee should it be applicable and therefore it is
OK to progress your submission by sending it out to review. Please leave a note in the
Confidential Information for PeerJ Staff,
○ or reduce the overall length of your submission by decreasing the manuscript word count
and moving primary figures and/or tables to the Supplemental Files. Note: you must
renumber any remaining primary figures/tables and update your in-text citations.
Author action: We have reduced the length of the manuscript by removing appendix figures
Second comment: Equal Authorship
● You have designated equal co-authors in the submission system, but not on the manuscript
cover page. If your manuscript is accepted, we will only use the information entered in the
system.
Author action: We have updated the manuscript cover page to reflect the designation in the submission
system.
Third Comment: References
● In the reference section, please provide the full author name lists for any references with 'et al.'
including these references:
○ Gunn, S. R. et al. (1998).
○ Tuceryan, M., Jain, A. K., et al. (1993).
● If you have used EndNote, you can change the references using the steps provided on our author
instructions.
Author response: Thanks for your valuable comments to improve the manuscript.
Author action: We have updated the .bib file to fix the issue with ‘et al.’ in references. Also the .cls file
is modified with:
\RequirePackage[longnamefirst]{natbib}
\bibliographystyle{plainnat}
Fourth Comment: Google Maps / Google Earth Figure
● Figures 1, 3, and 4 are Google Maps/Google Earth images.
● Please follow the Attribution Guidelines for Google Maps and Google Earth here
http://www.google.com/permissions/geoguidelines/attr-guide.html and
https://peerj.com/about/author-instructions/#figure-referencing (i.e., in addition to the
attribution info on the images themselves, please add the attribution info in the captions of
the images).
Author response: Thanks for the suggestions, we have updated the paper accordingly.
Author action: Figure 1 has been updated following the attribution guidelines for Google Earth imagery. However,
Figure 3 and Figure 4 contain images that were captured with the DJI Phantom UAV drone used in the study.
Fifth Comment: Figure Permissions
● Please supply the original Figure 2 (from Calendar, 2020) so that we can confirm that the
modifications are sufficient to not require permissions from the original copyright owners.
Please upload it as a confidential supplemental file here.
● If the modifications are not sufficient and the figure is taken from copyrighted material, we
will need a copy of the written permission specifically allowing us to publish under our CC
BY 4.0 license for our records uploaded as a Supplemental File here. Please ask the
copyright holder to complete our permission form found here.
Author response: We are thankful to the editor for pointing out the copyright issue for the images from
the website.
Author action: The figure has been removed from the main paper, and only a citation to that
particular figure on the source website has been added.
Sixth Comment: Figures
● Only vector PDFs are acceptable. Please replace Figures crop_calendar.pdf and
crops_narc_region.pdf (which are bitmap PDFs) with either PNG, EPS or vector PDF,
measuring minimum 900 pixels and maximum 3000 pixels on all sides and eliminating
excess white space around the images, as primary files here.
● Please use numbers to name your files, example: Figure1.eps, Figure2.png.
Author response: We are thankful to the editor for providing the information regarding the figures.
Author action: We have updated crop_narc_region.pdf and uploaded a corrected copy of the image. However, the
crop_calendar.pdf image was from an external source and has been removed.
Seventh Comment: Correct Figure Orientation
● Please rotate crop_calendar.pdf so that it is positioned correctly and the text can be read
from left to right. Please re-upload here.
● Please provide a replacement figure measuring minimum 900 pixels and maximum 3000
pixels on all sides, saved as PNG, EPS or PDF (vector images) file format without excess
white space around the images.
Author response: We are thankful to the editor for providing valuable information.
Author action: We have addressed all concerns of the editor by removing the crop_calendar.pdf image,
which was from an external source and did not meet the stated figure requirements for
publication.
Eighth Comment: Figure / Table Citation
● The submission appears to be missing a citation for Figure 4 and Table 3 in the text. Please
can you add a citation for Figure 4 and Table 3 in your manuscript and re-upload the
document.
● Please rename and cite the Appendix Figures as Figure A1, Figure A2, Figure A3, Figure
A4, and Figure A5. These files (and the Appendix as a whole) should be cited in the main
text.
● Citations must be organized, and cited for the first time, in ascending numerical order,
meaning Figure 1 must always be cited first, Figure 2 must always be cited second, and so
on. The same applies to Tables and Appendices.
● Please edit the citations in your manuscript and re-upload the document.
Author response: We are thankful to the editor for highlighting the missing information.
Author action: We have addressed all concerns of the editor by citing the missing Table / Figure
citations.
Reviewer#1
Basic Reporting: The paper needs a grammar polishing. Please have it proofread carefully again.
Author response: We are thankful to the reviewer for their valuable comments to improve the manuscript.
Author action: We have revised the whole paper to remove the grammatical issues.
Reviewer#1,
Experimental Design: The experiment needs to be more comprehensive. I would suggest to add deep
learning algorithms such as CNN and LSTM. In addition, the paper should include comparison with
state-of-the-art approaches.
Author response: We appreciate the reviewer for their suggestion for testing our research work with
other deep learning models.
Author action:
1- We have added more detail and explanation in Section 4 (Methodology) & Section 5 (Results and
Discussion). Please see section 4, page 10 (from line # 317 to line # 327). Please see section 5, page 15
and 16 (line # 433 to line # 461) and please see Table 12 to Table 15
2- We have used two deep learning models i.e. CNN and LSTM as suggested by the reviewer which are
state of the art techniques used for crop classification.
3- We have added the performance comparison among all classification techniques in the Section 5
(Results and Discussion). Please see page 18 and 19 (line # 486 - line # 498) and Table 20 & Table 21.
Reviewer#1,
Validity of the findings: The dataset collection is very interesting and it can be helpful for future
research direction.
Author response: We are thankful to the reviewer for providing encouraging comment.
Reviewer#1,
Comments for the author: How is the data annotated? Is the data labelled by someone who is expert in
agriculture?
I would recommend to use deep learning algorithms in this paper you can find information in the
following paper: Dashtipour, K., Gogate, M., Li, J., Jiang, F., Kong, B. and Hussain, A., 2020. A hybrid
Persian sentiment analysis framework: Integrating dependency grammar based rules and deep neural
networks. Neurocomputing, 380, pp.1-10.
I would recommend to use graph neural network: Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M.
and Monfardini, G., 2008. The graph neural network model. IEEE Transactions on Neural Networks,
20(1), pp.61-80.
I would suggest to compare your results with state-of-the-art approaches such as: Zahid, A., Abbas, H.T.,
Ren, A., Zoha, A., Heidari, H., Shah, S.A., Imran, M.A., Alomainy, A. and Abbasi, Q.H., 2019. Machine
learning driven non-invasive approach of water content estimation in living plant leaves using terahertz
waves. Plant Methods, 15(1), p.138. Sun, J., Abdulghani, A.M., Imran, M.A. and Abbasi, Q.H., 2020,
April. IoT Enabled Smart Fertilization and Irrigation Aid for Agricultural Purposes. In Proceedings of the
2020 International Conference on Computing, Networks and Internet of Things (pp. 71-75).
Author response: We are thankful for the suggestions, we have updated the paper accordingly.
Author action:
1- The collected dataset is labeled with the help of field experts at National Agriculture Research Center
(NARC) Islamabad, the detail is added in Section 4.1 (Data Preprocessing) on page 7 (line # 243 to line #
245) and Table 3.
2- We have applied two deep learning models on the collected dataset including Convolutional Neural
Network (CNN) and Long Short Term Memory (LSTM) Network which are addressed in the previous
comment.
3- Justification regarding the suggested papers:
The first and second papers recommended by the reviewer present frameworks for data that contain
explicit relationships (e.g., verb-adjective dependencies within a sentence, or graph-based neural networks
that rely on such relationships), which do not apply in our context. The techniques discussed
in those papers belong to the domain of sentiment analysis, where one word depends on
another, which is quite different from our research problem of crop classification using optical imagery
obtained from a drone.
The third paper suggested by the reviewer uses the SVM algorithm to predict water content in leaves; since
we have already applied the SVM algorithm in our paper, we have cited this work in Section 2, page 4
(line # 179 to line # 185).
The fourth paper suggested by the reviewer is on an IoT-based system and has also been added to Section 2,
page 4 (line # 173 to line # 179).
Reviewer#2
Basic Reporting: In this paper, the optical images were used to classify different crops using UAV. The
topic was interesting. The used calibration methods were normally used methods, the experiment design
was not quite practical in real applications.
Author response: We are thankful to the reviewer for providing the useful comments.
Author action: We have addressed all concerns of the reviewer below:
1- All the selected crops, including wheat, rice, soybean and maize, have overlapping crop cycles (listed
in Table 1); in particular, the winter wheat crop and winter maize crop have the same planting time. It is
quite challenging to separate wheat and maize crops based solely on their NDVI profile. In order to
address this problem, UAV optical imagery is collected and GLCM features are extracted from the
images. Subsequently, several machine / deep learning models are applied to perform crop classification.
2- The details of the experiments are added in the manuscript as suggested by the reviewer in Section 3
page 5 (Study Area, Table 1) and Section 4 page 6 (Table 2).
Experimental Design: From Table 1 and Figure 1, the classified crops (wheat, rice, soybean, maize) were not planted at the same time, so the results of this classification do not reflect a practical production scenario; the crops could be classified based on planting time alone. The details of the experiment should be supplied, such as planting time or image acquisition time, the UAV flight information, the calibration board of the UAV imaging, and so on.
Author response: We are thankful to the reviewer for highlighting the missing information.
Author action: The concern of the reviewer is addressed below:
As already discussed in the previous comment, the winter wheat and winter maize crops had the same planting time. The details of the experiments are added in the manuscript, as suggested by the reviewer, in Section 3, page 5 (Study Area, Table 1) and Section 4, page 6 (Table 2).
Validity of the findings: I think the methods used for crop classification are helpful, even though they are commonly used methods. The authors just used them for comparison, without any improvement to the related methods.
Author response: We are thankful to the reviewer for providing the comment
Author action: We agree with the reviewer's comment that we did not make improvements to the existing machine learning models. The primary objective of this research work was to perform crop classification by applying existing image processing and machine learning techniques.
Comments for the Author: I suggest performing crop or surface-feature discrimination at the same time in practical agricultural production, not just for the purpose of classification.
Author response: We are thankful to the reviewer for the suggestion.
Author action: The scope of our research work was limited to the classification of crops having similar NDVI profiles and the same growing cycle. The data collected for the current research work would not support the implementation of the work suggested by the reviewer. Therefore, we have included the reviewer's suggestion as future work. Please see Section 6 (Conclusion and Future Work), page 19 (line # 549 to line # 551).
Reviewer#3
Basic Reporting: The conclusion does not represent the whole of the results. What the reviewer can see is that the GLCM-based classification outperforms the grayscale-based classification in some cases, and it is hard to see how the 13.5% figure is reached.
Author response: We are thankful to the reviewer for the helpful feedback to improve the paper.
Author action: The reviewers concerns are addressed below:
1- The GLCM-based classification results are much better compared to the grayscale-image-based classification; the justification is updated in Section 6 (Conclusion and Future Work). Please see pages 19 and 20 (line # 529 to line # 541).
Experimental Design: The experiment was conducted in separate fields at different times. It would be more conclusive and usable if this classifier were used on a large orthoimage containing different types of ground cover.
Author response: We are thankful to the reviewer for their valuable comments
Author action: We agree with the reviewer, but the experimental fields selected for our study had limitations, which are described in Section 6 (Conclusion and Future Work). Please see page 20 (line # 546 to line # 549).
Validity of the findings: More detailed discussion and conclusion are needed since the result is really
ambiguous.
Author response: We are thankful to the reviewer for their suggestion.
Author action: Section 5 (Results and Discussion) is updated to make it more comprehensive. Please see page # 19 (line # 499 to line # 522).
Moreover, the conclusion is revised to add more detail in Section 6 (Conclusion and Future Work). Please see pages 19 and 20 (line # 529 to line # 542).
Comments for the Author: The topic of this paper is interesting but needs improvement. The conclusion in the abstract cannot find support in the results section.
Author response: We are thankful to the reviewer for their comments.
Author action: The required details have been added in the results and abstract sections. Please see page 1 (line # 17 to line # 22) and pages 19 and 20 (line # 529 to line # 542).
Editor Comments
First Comment: Was a proper design of experiment technique followed? Which other methodology
options were considered? Provide a justification for the selected methodology choice.
Author response: Thanks for your valuable comments to improve the manuscript.
Author action: Yes, we have tried various machine learning and deep learning techniques and have chosen the ones that performed best. Please see the Methodology section on page # 6.
Second comment: The system architecture also needs to be explained further.
Author response: Thanks for your valuable comments to improve the manuscript.
Author action: We have added more detail and explanation for system architecture in Section 4, on page
# 6 ( line # 218 to line # 225).
Third Comment: The descriptions of evaluation metrics in section 5 may be moved to section 4.
Author response: Thanks for your valuable comments to improve the manuscript.
Author action: As suggested by the editor, we have moved the evaluation metrics from Section # 5 to Section # 4. The updated page numbers are now page # 10 and page # 11.
Fourth Comment: The findings/results (Section 5) do not provide in-depth discussion and analysis. Adding a new 'discussion' section would help, where results and research objectives are discussed in light of the literature, and limitations and future work are presented in detail.
Author response: Thanks for the suggestions, we have updated the paper accordingly.
Author action: We have added a detailed and comprehensive discussion in Section 5 (Results and Discussion), page # 19 (line # 499 to line # 522).
Fifth Comment: Tables 4-15 (Section 5). It would be good to add how to read the confusion matrices.
Author response: We are thankful to the editor for pointing out the need to add a note on how to read
confusion matrices.
Author action: We have addressed all concerns of the editor by adding information on how to read confusion matrices with the help of a figure (Figure # 4) in Section 4 (Methodology), page # 12. The updated line numbers for how to read confusion matrices are line # 361 to line # 367.
" | Here is a paper. Please give your review comments after reading it. |
98 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Crop classification in early phenological stages has been a difficult task due to the spectral similarity of different crops. For this purpose, low altitude platforms such as drones have great potential to provide high resolution optical imagery to which Machine Learning (ML) can be applied to classify different types of crops. In this research work, crop classification is performed at different phenological stages using optical images obtained from a drone. For this purpose, gray level co-occurrence matrix (GLCM) based features are extracted from the underlying gray scale images collected by the drone. To classify the different types of crops, several ML algorithms including Random Forest (RF), Naive Bayes (NB), Neural Network (NN) and Support Vector Machine (SVM) are applied. The results showed that the ML algorithms performed much better on GLCM features as compared to gray scale images, with a margin of 13.65% in overall accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>intensities.</ns0:head><ns0:p>In <ns0:ref type='bibr' target='#b17'>[Kwak and Park, 2019]</ns0:ref>, the crop classification based on texture features is performed on the data collected by a drone mounted with a multi-spectral camera. The acquired images are up-scaled to 25 cm resolution and mosaiced later to obtain a complete field of view. To extract texture features, Gray Level Co-occurrence Matrix (GLCM) at different kernel sizes is used including 3 x 3, 15 x 15, and 31</ns0:p><ns0:p>x 31. The mosaiced images act as an input to classification algorithms, such as Random Forest and Support Vector Machine (SVM). It is seen that using textural features obtained from larger kernel size showed improvement in classification results by 7.72% in overall accuracy rather than only using spectral information for classification.</ns0:p><ns0:p>Similarly in <ns0:ref type='bibr' target='#b2'>[Böhler, Schaepman, and Kneubühler, 2018]</ns0:ref>, texture base classification of crops is performed at pixel and parcel-based level where the crops in the study are maize, bare soil, sugar beat, winter wheat, and grassland. The images are acquired by eBee UAV in four flights of 30 minutes each on 26 June 2015.</ns0:p><ns0:p>Textural features are extracted from the obtained UAV images. Random forest algorithm is applied after extracting the texture features which obtained an overall accuracy of 86.3%.</ns0:p><ns0:p>In this study, we performed a classification of four different types of crops including wheat, soybean, rice, maize. The main objective of this research is to investigate the texture feature-based crop classification of different crops having the same spatial texture and colors. The high-resolution optical images are acquired by the drone. The multiple texture features are extracted including contrast, homogeneity, dimensionality, angular second moment, energy, and correlation. To perform classification, Support Vector Machine, Naive Bayes, Random Forest, and Neural Network are applied on the grayscale images and the texture features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK Crop Classification Traditional Techniques</ns0:head><ns0:p>Over the past two decades, a lot of research has been done in the agriculture domain to perform different agricultural activities, particularly crop disease detection, crop health monitoring, crop yield estimation, crop classification, and others <ns0:ref type='bibr' target='#b19'>[Latif, 2018]</ns0:ref>. To perform these activities, machine learning or deep learning techniques are applied to data collected from satellites, drones or IoT sensors, which are discussed in the sections below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Crop Classification Using Satellite Data</ns0:head><ns0:p>An analysis on crop classification and land cover is presented in <ns0:ref type='bibr' target='#b16'>[Khaliq, Peroni, and Chiaberge, 2018]</ns0:ref>, in which Sentinel-2 is used to capture the multi-spectral imagery. The phonological cycle of crops is analyzed by computing the NDVI of time series spectral imagery data. The 'Random Forest' classifier is used to classify the land cover where NDVI values are used as feature vectors. The Random Forest shows 91.1 % classification accuracy i.e. predicted land cover match with the actual ground cover. In <ns0:ref type='bibr' target='#b5'>[Deng, Zhu, He, and Tang, 2019]</ns0:ref>, land cover classification is performed using Random Forest as a classifier. The images are acquired from two satellites including Landsat 8 and MODIS. These images are fused based on Enhanced Spatial-Temporal and Fusion Model to generate time series-based Landsat-8 images. The data from the GF-1 satellite and Google Earth is used as supporting data for training and validation. In this research work, object base classification is used instead of pixel-based classification. The classification results show an accuracy of 94.38 % on the fused data.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b21'>[Luciani, Laneve, Jahjah, and Collins, 2017]</ns0:ref>, an analysis on crop classification is presented in which Landsat-8 OLI is used to capture the multispectral imagery at a coarse spatial resolution of 30m. The acquired images are resampled to 15m spatial resolution using the pan-sharpening technique. The phenological profile of crops is extracted by processing NDVI of time series spectral imagery data. The phenological profile is extracted based on pixel-level and interpolation is used for the reconstruction of missing NDVI value at a particular pixel. The univariate decision tree is applied to the data where the feature vector consists of NDVI values. Results show that the univariate decision tree achieved an accuracy of 92.66%.</ns0:p><ns0:p>There are a lot of datasets that are publicly available for land classification. In <ns0:ref type='bibr'>[Helber, Bischke, Dengel, and Borth, 2018]</ns0:ref>, land classification is performed using the publicly available dataset 'EuroSAT' which is In <ns0:ref type='bibr' target='#b14'>[Hufkens, Melaas, Mann, Foster, Ceballos, Robles, and Kramer, 2019]</ns0:ref>, the health of the wheat crop is monitored using near-surface imagery captured by a smartphone. Images are collected from 50 fields by smartphone during the complete life cycle of the wheat crop. Each day, farmers captured images three times and captured images are transmitted to the cloud where the green level is assessed by green chromatic coordinates. The crop is classified as healthy or unhealthy based on the green level.</ns0:p><ns0:p>Subsequently, the classification result is compared with Landsat 8 imagery in which classification of healthy and unhealthy crops is performed based on Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) values. Results show that there is a small deviation between the classification results based on smartphone imagery and satellite imagery.</ns0:p></ns0:div>
<ns0:div><ns0:head>Crop Classification Using Drone Data</ns0:head><ns0:p>Textural features from an image help to extract useful information from the images. In <ns0:ref type='bibr' target='#b20'>[Liu, Shi, Duan, and Wu, 2018]</ns0:ref>, the experimental area is selected in Minzhu Township, Daowai District, Harbin, where the variety of crops are planted. The 12 types of cropland cover in the study include rice, unripe wheat, ripe wheat, harvested wheat, soybean, corn, trees, grassland, bare land, houses, greenhouses, and roads.</ns0:p><ns0:p>The measurement and marking of Ground Control Points (GCP) are conducted on 3 August 2017 and data is collected on 4 August 2017 using a fixed-wing UAV with a Sony Digital Camera. Digital Surface Model(DSM) and Digital Orthophoto Map (DOM) are produced with the help of POS data and GCP.</ns0:p><ns0:p>Texture features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation are extracted using ENVI software for RGB and DSM bands. SVM is used to perform the classification of crops with RBF kernel. The combination of different features is performed to see the impact of each feature. By using RGB resulted in a classification accuracy of 72.94% and a combination of RGB, DSMs, Second Moment of green band, DSMs variance (27 * 27), DSMs contrast (27 * 27) achieved an accuracy of 94.5%. The results show that the hard to differentiate classes in color space became separable by adding altitude as a spatial feature where height for each tree, crop, and grass differs.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b13'>[Hu, Zhong, Luo, and Wei, 2018]</ns0:ref>, a hyper-spectral imaging sensor is mounted on a UAV to offer images at a higher spatial and higher spectral resolution at the same interval. The study area chosen is a field in the southern farm in Honghu city, located in China. The images are taken from the altitude of 100m at a spatial resolution of 4cm with 274 spectral bands. To fully utilize the potential of the spatial and spectral resolution of the image, a combination of the CNN-CRF model is proposed, to classify crops accurately. For this to work, in preprocessing phase, the Principal Component Analysis (PCA) is performed for dimensionality reduction of the data while in meantime preserving spectral information.</ns0:p><ns0:p>Each patch on the image will be passed to CNN as input, to get the rule image from the PCA. The rule image, which is the output of CNN will be passed to the CRF model to generate a classification map of the output. The CNN-CRF model achieved an accuracy of 91.79% in classifying different crop types.</ns0:p><ns0:p>Image fusion between satellite and UAV can help in the classification of crops at a detailed level. In <ns0:ref type='bibr' target='#b32'>[Zhao, Shi, Liu, Hovis, Duan, and Shi, 2019]</ns0:ref>, a fusion between Sentinel-2A satellite and images acquired from fixed-wing Agristrong UAV drone is performed to get the image at high spatial, high spectral, and high temporal resolution. For this purpose, an experimental area covering around 750 ha is selected in Harbin city, Heilongjiang province, China. The crop types in the current study include rice, soybean, corn, buckwheat, other vegetation, bareland, greenhouses, waters, houses, and roads. The images are acquired using a UAV drone for 14 September 2017 at 0.03 m resolution and Sentinel-2A images for 16 September 2017 are downloaded. 
The high-resolution 0.03 m images are sub-sampled at lower resolution (0.10 m, 0.50 m, 1.00 m, and 3.00 m). The fusion between UAV images at different resolutions and Sentinel-2A images is performed using Gram-Schmidt transformation <ns0:ref type='bibr' target='#b18'>[Laben and Brower, 2000]</ns0:ref>.</ns0:p><ns0:p>Random forest algorithm performed better crop classification for a fused image at 0.10 m with accuracy at 88.32%, whereas without fusion the accuracy is at 76.77% and 71.93% for UAV and Sentinel-2A images respectively.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b2'>[Böhler, Schaepman, and Kneubühler, 2018]</ns0:ref>, classification of crops is done at pixel and parcel-based level. The study area covering 170 hectares is selected in the Canton of Zurich, Switzerland. The crops in the study are maize, bare soil, sugar beat, winter wheat, and grassland. The images are acquired by eBee UAV in four flights of 30 minutes each on 26 June 2015. Subsequently, the textural features are extracted from the obtained UAV images. The random forest algorithm is applied to the extracted features and crop </ns0:p></ns0:div>
<ns0:div><ns0:head>Deep Learning for Crop Classification</ns0:head><ns0:p>In <ns0:ref type='bibr' target='#b28'>[Trujillano, Flores, Saito, Balcazar, and Racoceanu, 2018]</ns0:ref>, a deep learning network is used to classify the corn crop in the region of Peru, Brazil. The images are acquired for two locations where the first location contained corn plots, trees, river, and other crops situated in a mountainous region, where flight is conducted at 100 and 180 m respectively. The second location is a coastal area where images are acquired at an altitude of 100 m, area consists of a corn crop and some nearby houses. The multi-spectral camera mounted on the UAV acquired images in 5 different bands, at a spatial resolution of 8 cm. Photoscan tool is used to generate the mosaic of the image. The image is divided into a patch size of 28 x 28, covering two rows of the cornfields. The patch is labeled as corn or no corn field. Four datasets are generated from the acquired images where dataset #1 and dataset #2 covered classes with images acquired at an altitude of 100m and 180m. The dataset #3 merged the corn classes from different altitude flight images whereas, in dataset #4, the dataset #1 is augmented which included rotation and flipping of images. Each dataset containing 28 x 28 patches of images is trained using the LeNet model, in which dataset number two achieved an accuracy of 86.8% on the test set.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b33'>[Zhou, Li, and Shao, 2018]</ns0:ref>, the various types of crop classification methods are proposed using CNN and SVM algorithms. For this purpose, Yuanyang Country, in the province of Henan, China is selected as a study area where the main crops in the region are rice, peanut, and corn. The Sentinel-2A images are acquired for two dates, where all the bands data has been resampled to 10 m resolution and the resultant stack of the 26-dimensional image is generated. A ground survey is conducted in August 2017, for the labeling of different types of crops. Around 1200 pixels are selected for training and the rests of the pixels are used for validation. The labeled pixel in the final stack image is converted to grayscale which is given as an input to the model. The CNN outperformed the SVM, where it clearly shows the deep learning-based model is better at learning the features while achieving an accuracy of 95.61 % in the classification of crops. In <ns0:ref type='bibr' target='#b27'>[Sun, Abdulghani, Imran, and Abbasi, 2020]</ns0:ref>, an application for smart home is presented. The application monitors the moisture of the soil and the value of nitrogen, phosphorous, and potassium for an indoor plant with the help of IoT sensors. The value is classified based on various levels and provides feedback to the user with help of the dashboard. The system designed is a prototype, which helps the farmers when to irrigate the crop and what ratio of the value of nutrients is suitable for the specific plant. Water content estimation in plant leaves can help in the productivity of the crops.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b31'>[Zahid, Abbas, Ren, Zoha, Heidari, Shah, Imran, Alomainy, and Abbasi, 2019]</ns0:ref>, a novel approach based on machine learning is presented to estimate the health status of the plant leaves terahertz waves by measuring transmission response for four days. Each frequency point recorded is used as a feature in the study. 
Feature selection was carried out to discard any irrelevant feature that could result in the wrong prediction of water content in the leaves. The support vector machine (SVM) algorithm clearly performed better at predicting the accurate water content in the leaves for four days.</ns0:p><ns0:p>The work proposed in this paper will process the optical images acquired by UAV by data augmentation for the crop class with very few images. The processed images will be converted to grayscale downscaled to a low resolution. The textural features will be extracted from the grayscale images. Crop classification will be performed by using machine learning and deep learning algorithms for grayscale and textural-based images. With the evaluation measure, we will compare and evaluate the performance of how GLCM based textural features will outperform the ones with grayscale images. In this work, the main focus is how textural features will be helpful to distinguish between different types of crops compared to grayscale images. The paper is organized as follows, where a literature review is conducted in Section-2, data set used in the study along with methodology is discussed in Section-3, results and discussion in section-4 and conclusion and future work in section-5. This section provide the details of study area for experiment and then discusses the methodology based on modules of our experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head></ns0:div>
<ns0:div><ns0:head>Normalized</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Study Area & Data set</ns0:head><ns0:p>To perform crop classification, an experimental area in the capital of Pakistan, Islamabad located at the National Agriculture Research Center (NARC) is selected. In the NARC region, various types of crops are grown throughout the year and experiments are performed. For our research, we selected four crops wheat, maize, rice, and soybean as shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. The crop calendar for Pakistan can be viewed at <ns0:ref type='bibr'>[Calendar, 2020]</ns0:ref>, where the particular locations of the crops in the study along with their growth cycle is enlisted in Table <ns0:ref type='table' target='#tab_14'>-</ns0:ref> Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> shows the architectural diagram of the system divided into modules. The first module is the data acquisition where the data is collected with the help of a UAV drone. After the collection of the data, the next step is the pre-processing of the data, which requires analysis to remove images outside the boundary of the crop and to apply data augmentation for the crops fields where we have a limited amount of data. The next step is feature extraction where we will extract features from the grayscale images which can help in the study. The last step is to classify the crop classification based on machine learning and deep learning algorithms for gray scale-based images and feature-based images and to evaluate the result of crop classification. Each module is discussed in detail in this section.</ns0:p><ns0:p>UAV platforms provide the ability to gather images at higher spatial resolution compared to satellitebased solutions. In this study, the DJI Phantom pro-Advanced (details mentioned in Table <ns0:ref type='table' target='#tab_14'>-</ns0:ref>2) equipped with a 20 Megapixel camera is used for data acquisition. The data was collected by carrying out multiple Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>to the minimum of the rest of the crop field classes. Figure <ns0:ref type='figure'>3</ns0:ref> shows the five crop fields including soybean, rice, maize, wheat at the tillering stage and wheat at the maturity stage.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Data Pre-processing</ns0:head><ns0:p>The first step after collecting the data is to pre-process it to make it suitable for training. The captured images were analyzed without removing the images outside of the boundary. The collected data was organized in folders containing the date of the collection along with the stage of the particular crops. In order to perform supervised classification, a field survey was conducted to label each image with the help of NARC experts. Initially, the collected data was not sufficient to apply any classification technique, therefore, data augmentation was used to enhance the data. For this purpose, horizontal flipping and zoom with a minor factor was applied using Keras pre-processing library <ns0:ref type='bibr'>[Augmentation, 2020]</ns0:ref>. The optical images captured with the drone were high-resolution images, and performing classification requires computing power. To reduce the processing requirement, the images were downscaled to a size of 100 x 100, and the features were extracted from the down-scaled images.</ns0:p></ns0:div>
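The augmentation and downscaling step described above can be reproduced with the Keras preprocessing utilities. The sketch below is illustrative only: the zoom factor, directory layout and batch size are assumptions, since the text reports only "horizontal flipping and zoom with a minor factor" and the 100 x 100 target size.

```python
# A minimal sketch of the augmentation/downscaling step, assuming images are
# organised one folder per crop class; the zoom factor and folder names are
# illustrative, not values reported in the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True,   # horizontal flipping, as described in the text
    zoom_range=0.1,         # "zoom with a minor factor" -- 0.1 is an assumed value
)

# flow_from_directory also handles the 100 x 100 downscaling via target_size
batches = augmenter.flow_from_directory(
    "data/crops",           # hypothetical path: one sub-folder per crop class
    target_size=(100, 100),
    color_mode="grayscale",
    class_mode="sparse",
    batch_size=16,
)
```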
<ns0:div><ns0:head n='3.3'>Feature Extraction</ns0:head><ns0:p>The pixels in an optical image contained noise which was reduced by the information from texture and was considered as a complementary feature in classification. The analysis of textural images was categorized into four different groups that include model-based, statistical, geometrical, and signal processing <ns0:ref type='bibr' target='#b29'>[Tuceryan and Jain, 1993]</ns0:ref>. For feature extraction, GLCM was employed, which is a widely used statistical technique developed by <ns0:ref type='bibr' target='#b10'>[Haralick, Shanmugam, and Dinstein, 1973]</ns0:ref> for processing of remote sensing data. In the first step, the original image was converted to the grayscale. GLCM contrast and homogeneity are strongly but inversely correlated, which means homogeneity decreases when contrast increases while energy is kept constant.</ns0:p><ns0:p>• Dissimilarity (DIS): Dissimilarity is a linear measure of local variations in an image.</ns0:p><ns0:p>• Angular second Moment (ASM): It measures textural uniformity i.e. repetitions in pixel pair. It detects the disorders in textures of the images. The maximum value achieved by the angular second moment is one. Higher values occur when the gray level distribution has a constant periodic form.</ns0:p><ns0:p>• Energy (EG): Energy is computed as the square root of an angular second moment. When the window is orderly, energy has higher values.</ns0:p><ns0:p>• Correlation (CORR): It is a measure of linear dependencies between the gray tone of the image.</ns0:p><ns0:p>Each of the listed textural feature is computed using the Equations-1 to 6 <ns0:ref type='bibr'>[[Equations, 2011]</ns0:ref>]:</ns0:p><ns0:formula xml:id='formula_0'>CON = N−1 ∑ i=0 N−1 ∑ j=0 (i − j) 2</ns0:formula><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_1'>HOM = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j) 1 + (i − j) 2</ns0:formula><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_2'>DIS = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j)x|i − j| (3) ASM = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j) 2 (4) EG = N−1 ∑ i=0 N−1 ∑ j=0 P(i, j) 2 (5) CORR = N−1 ∑ i=0 N−1 ∑ j=0 (i − µ i )( j − µ j ) (σ i )(σ j )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where N denotes the number of gray levels, while P(i, j) is the normalized value of the gray-scale at position i and j of the kernel with a sum equal to 1. The textural features were generated from 100 x 100 gray-scale images. In this study, the kernel size was set to 19, and a total of 48 features were generated for each gray-scale image with distance at 1 and 2, rotation at 0, 45 Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
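As a hedged illustration of how the six GLCM properties in Equations 1-6 can be computed from a 100 x 100 grayscale image, the sketch below uses scikit-image. It computes a single GLCM over the whole image rather than sliding a 19 x 19 kernel, and the symmetric/normalisation settings are assumptions not stated in the text; the two distances and four angles, however, reproduce the 6 x 2 x 4 = 48 features per image mentioned above.

```python
# Sketch of GLCM feature extraction with scikit-image (assumed settings noted above).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ["contrast", "homogeneity", "dissimilarity", "ASM", "energy", "correlation"]

def glcm_features(gray):
    """gray: 2-D uint8 array, e.g. a 100 x 100 grayscale crop image."""
    glcm = graycomatrix(
        gray,
        distances=[1, 2],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],  # 0, 45, 90, 135 degrees
        levels=256,
        symmetric=True,   # assumed
        normed=True,      # P(i, j) sums to 1, matching the equations
    )
    # 6 properties x 2 distances x 4 angles = 48 values per image
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])
```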
<ns0:div><ns0:head n='3.4'>Crop Classification</ns0:head><ns0:p>In order to perform crop classification on the collected dataset, several supervised techniques are applied which are discussed below:</ns0:p></ns0:div>
<ns0:div><ns0:head>Naive Bayes Classifier</ns0:head><ns0:p>Naive Bayes Classifier is a simple probabilistic classifier that is based on the Bayes theorem. The inducer in Naive-Bayes computes conditional probabilities of classes given the instance and selects the class with higher posterior probability <ns0:ref type='bibr' target='#b30'>[Witten and Frank, 2002]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Neural Network</ns0:head><ns0:p>A Neural Network is a well-known model which is designed to mimic the human brain to perform classification and regression tasks. It contains one input layer, one or more hidden layers, where each layer holds several neurons or nodes, and a single output layer <ns0:ref type='bibr' target='#b8'>[Goodfellow, Bengio, Courville, and Bengio, 2016]</ns0:ref>. Each layer computes some mathematical functions which enable it to find complex relationships in the data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Support Vector Machines</ns0:head><ns0:p>The goal of the Support Vector Machine (SVM) is to find an optimal boundary that separates the classes based on data in the training set <ns0:ref type='bibr' target='#b6'>[Ding, Qi, and Tan, 2011]</ns0:ref>. SVM algorithm solves the optimization in a way that it tends to maximize the margin between decision boundary <ns0:ref type='bibr' target='#b9'>[Gunn, 1998]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Random Forest Classifier</ns0:head><ns0:p>Random Forest Classifier is developed by Breiman <ns0:ref type='bibr' target='#b3'>[Breiman, 2001]</ns0:ref> which performs classification by extending decision to multiple trees instead of a single tree. The ability to diversify through multiple trees instead of a single tree helps to achieve better classification performance. The final class of the particular instance is decided by the majority votes of all trees. Random Forest requires only a few parameters including the number of variables required for partitioning of the nodes and the number of trees to be grown.</ns0:p></ns0:div>
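A minimal sketch, assuming scikit-learn implementations of the four classical models just described and a flattened GLCM feature matrix as input; the train/test split, SVM kernel, tree count and hidden-layer size are illustrative choices, not values reported in the paper.

```python
# Hedged sketch: fitting the four classical classifiers on GLCM feature vectors.
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def fit_classifiers(X, y):
    """X: (n_samples, n_features) GLCM features, y: crop labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)
    models = {
        "SVM": SVC(kernel="rbf"),
        "Naive Bayes": GaussianNB(),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "test accuracy:", model.score(X_te, y_te))
    return models
```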
<ns0:div><ns0:head>Convolutional Neural Network (CNN)</ns0:head><ns0:p>CNN is a deep learning model which is commonly used on imagery data <ns0:ref type='bibr' target='#b8'>[Goodfellow, Bengio, Courville, and Bengio, 2016]</ns0:ref>. It consists of an input layer, multiple hidden layers and an output layer where hidden layers are comprised of convolutional layers followed by the pooling layer and dense layer.</ns0:p></ns0:div>
<ns0:div><ns0:head>Long Short Term Memory (LSTM) Network</ns0:head><ns0:p>LSTM is another deep learning model based on the Recurrent Neural Network (RNN), which has the capability to learn from time series data with long dependencies <ns0:ref type='bibr' target='#b8'>[Goodfellow, Bengio, Courville, and Bengio, 2016]</ns0:ref>.</ns0:p><ns0:p>Each layer in the LSTM model is a set of recurrently connected blocks, or memory cells, that performs reasonably well on several complex tasks such as crop classification.</ns0:p></ns0:div>
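For the two deep models, a small Keras sketch such as the following could be used on the 100 x 100 grayscale inputs; the layer sizes, optimiser and the row-as-timestep treatment for the LSTM are assumptions, since the paper does not specify the exact architectures.

```python
# Illustrative Keras builders for the CNN and LSTM classifiers (assumed layouts).
from tensorflow.keras import layers, models

def build_cnn(n_classes=5):
    return models.Sequential([
        layers.Input(shape=(100, 100, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

def build_lstm(n_classes=5):
    # treats each image as a sequence of 100 rows with 100 features per row
    return models.Sequential([
        layers.Input(shape=(100, 100)),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])

for build in (build_cnn, build_lstm):
    model = build()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```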
<ns0:div><ns0:head n='3.5'>Evaluation Metrics</ns0:head><ns0:p>The evaluation metrics used to assess the performance of the machine and deep learning algorithms are described as follows:</ns0:p><ns0:p>Producer Accuracy</ns0:p><ns0:p>Producer's Accuracy (PA), defined in Equation-7, is the accuracy of the map from the point of view of the map producer. The PA shows how correctly the classified map depicts the real features on the ground, i.e. the probability that a certain land cover of an area on the ground is classified as such. The PA is the complement of the Omission Error (OE), where PA = 100% − OE <ns0:ref type='bibr' target='#b25'>[Story and Congalton, 1986]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>PA = (No. of correctly classified images of a class) / (Total no. of ground truth images for the class) (7)</ns0:head></ns0:div>
<ns0:div><ns0:head>User Accuracy</ns0:head><ns0:p>The User's Accuracy (UA), defined in Equation-8, is the accuracy with respect to the user of the map. The UA shows how often the class on the classification map will actually be present in the ground data. The UA is the complement of the Commission Error (CE), where UA = 100% − CE.</ns0:p></ns0:div>
<ns0:div><ns0:head>Precision = TP / (TP + FP) (10)</ns0:head></ns0:div>
<ns0:div><ns0:head>Recall</ns0:head><ns0:p>Recall refers to the proportion of all instances of a positive class that are classified correctly. The formula to compute recall is defined in Equation-11, where TP means true positive and FN means false negative.</ns0:p></ns0:div>
<ns0:div><ns0:head>Recall = TP / (TP + FN) (11)</ns0:head><ns0:p>F1-Score F1-Score is computed to provide a balance between precision and recall, since computing precision and recall individually does not cover all aspects of accuracy. F1-Score is calculated using Equation-12. The range of the F1-score is between 0 and 1, where a higher number indicates a higher performance of the particular model.</ns0:p><ns0:formula xml:id='formula_3'>F1-Score = (2 * Precision * Recall) / (Precision + Recall) (12)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Accuracy</ns0:head><ns0:p>Accuracy refers to the capability of the model to produce correct predictions for the instances observed. It is defined in Equation-13, where TP means true positive, TN means true negative, FP means false positive and FN means false negative.</ns0:p></ns0:div>
<ns0:div><ns0:head>Accuracy = (TP + TN) / (TP + TN + FP + FN) (13)</ns0:head><ns0:p>All the TP, TN, FP and FN values can easily be computed by drawing the confusion matrix, which is a visual representation of all these values as shown in Figure <ns0:ref type='figure'>4</ns0:ref>. Figure <ns0:ref type='figure'>4</ns0:ref> shows the confusion matrix for two classes, i.e. Positive and Negative. TP is the number of correctly classified tuples of the Positive class, and TN is the number of tuples that are correctly classified as Negative.</ns0:p><ns0:p>However, FP is the number of Negative tuples which are incorrectly classified as Positive. Similarly, FN is the number of Positive tuples that are wrongly classified as Negative <ns0:ref type='bibr' target='#b15'>[Kantardzic, 2011]</ns0:ref>. In the crop classification domain, the confusion matrix is another metric that is used to inspect the performance of the model in detail.</ns0:p></ns0:div>
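The per-class PA/UA of Equations 7-8, the overall accuracy of Equation 9 and the precision/recall/F1 scores can all be read off the confusion matrix. A minimal sketch with scikit-learn, assuming label-encoded ground truth and predictions, follows; variable names are illustrative.

```python
# Hedged sketch: deriving the paper's metrics from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

def evaluate(y_true, y_pred, labels):
    cm = confusion_matrix(y_true, y_pred, labels=labels)  # rows: truth, cols: prediction
    producer_acc = np.diag(cm) / cm.sum(axis=1)  # PA: correct / ground-truth images per class
    user_acc = np.diag(cm) / cm.sum(axis=0)      # UA: correct / images classified as the class
    overall_acc = np.trace(cm) / cm.sum()        # OAA (Equation 9)
    print(classification_report(y_true, y_pred, labels=labels))  # precision, recall, F1
    return cm, producer_acc, user_acc, overall_acc
```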
<ns0:div><ns0:head n='4'>RESULTS AND DISCUSSION</ns0:head><ns0:p>The machine learning and deep learning algorithms used in our study were support vector machine (SVM), random forest classifier (RFC), naive Bayes classifier, and neural networks (NN). Each algorithm was applied once on grayscale images and once on the images with GLCM based textural features. We have selected five crop classes at various phenological stages of the crops.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>81.82% maize crop images correctly, whereas 9.1% images were classified as rice and the remaining 9.1% images were classified as wheat-T. The overall accuracy obtained for SVM classification on generated GLCM based textural features was 84.10% which showed an improvement of 13.65% compared to training the classifier only on grayscale images. This improvement clearly shows the impact of using textural features extracted from grayscale images and their ability to distinguish between different crop types. Table-6 shows the confusion matrix for classification performed on grayscale images using Random to training the classifier only on grayscale images. This improvement clearly indicates the ability of the textural features extracted from grayscale images to distinguish between the crop types. Table <ns0:ref type='table' target='#tab_19'>-8</ns0:ref> shows the confusion matrix for classification performed on grayscale images using Naive Bayes classifier.</ns0:p><ns0:p>The Naive Bayes classifier was able to classify rice, wheat (maturity stage), wheat (tillering stage), and maize crop correctly. The classifier with only grayscale images was not able to classify soybean crop correctly and all the images of soybean were classified as wheat (maturity stage) crop. The overall accuracy obtained for Random Forest classification on grayscale images was 79.55%. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science shows the confusion matrix for classification performed on grayscale images using a feed-forward neural network classifier. The feed-forward neural network classifier was able to classify wheat crop correctly.</ns0:p><ns0:p>The classifier with only grayscale images classifies soybean, wheat-T, wheat, maize as the wheat crop.</ns0:p><ns0:p>Also, the rice crop images were classified as maize crop. The overall accuracy obtained for feed-forward neural network classifier on grayscale images was 31.82%. The reason for this poor performance was the limited amount of available data which was not enough to train a deep learning model and thus resulted in showing an average performance. Manuscript to be reviewed The CNN-based classifier on GLCM generated images failed to learn any information.</ns0:p><ns0:note type='other'>Computer Science Table 11.</ns0:note><ns0:p>It can be concluded from the results obtained by applying machine and deep learning algorithms that machine learning models with the help of textural features extraction using GLCM are able to outperform deep learning algorithms because of the limited data set available. To further enhance the performance of deep learning algorithms, there is a need to gather more data in order to achieve better results compared to machine learning algorithms. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Out of the four machine learning based algorithms applied on GLCM based images, random forest, and naive classifier achieved an overall accuracy of 90.9% whereas SVM-based classifier achieved only 84.1 which was still better compared to only using grayscale images for classification. The random forest algorithm performs better because each tree to be used, gives a single prediction for each record and it takes an average overall to give the final prediction which results in an overall better result. However, the deep learning models including CNN, LSTM don't perform well on the extracted GLCM based features. 
These models were designed to learn the features automatically using filters in their architecture, as opposed to the hand-engineered GLCM features. Moreover, the accuracy of these models does not improve on the grayscale images because of the very small dataset with little variation, which results in the misclassification of all crop images. The same was the case with the feed-forward neural network, which shows poor performance with an accuracy of 63.64% on grayscale images and 25% on the extracted GLCM features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this study, we investigated the potential of GLCM-based texture information for crop classification.</ns0:p><ns0:p>The main goal of this study is to evaluate the benefit of textural features by comparing them with the grayscale images. The experimental area with five crops at different stages of the crop cycle is selected from the NARC fields located in Islamabad, Pakistan. The grayscale images have little information which makes them difficult to distinguish between the different classes. In contrast to this, the textural features extracted from the grayscale images show great potential to classify the different crop classes. Among these GLCM features, four features are found to be more useful for the particular experiment of crop classification including contrast, energy, dissimilarity, and angular second moment which try to extract local variations in the image.</ns0:p><ns0:p>In order to perform crop classification, machine/deep learning algorithms are applied including Naive Bayes, random forest, support vector machine, feed-forward neural network, convolutional neural network, and LSTM. The overall crop classification among the five classes, Naive Bayes, and random forest classifier for textural-based images achieved an accuracy of 90.91%. However, in the gray scale-based images, Naive Bayes achieved an accuracy of 79.55% and random forest achieved an accuracy of 81.82%.</ns0:p><ns0:p>In contrast to this, the deep learning models including CNN, LSTM don't perform well on the extracted GLCM based features due to the small dataset. Similarly, the neural network achieved an accuracy of 31.82% in grayscale images and 25% in the case of textural-based images. The deep learning algorithms will show a better performance for crop classification by using more data and with extracted texture-based images.</ns0:p><ns0:p>The major limitation of data acquisition is the specific altitude of the drone flight, which constrained us to cover all fields in one view. For this particular experiment, limited plots are chosen where images are collected from these allotted experimental fields. The other limitation is related to the drone flight i.e.</ns0:p><ns0:p>drone cannot be flown at much higher altitude in the subject area due to security reasons.</ns0:p><ns0:p>To better analyze the results of crops grown in a region, the drone should be flown at an altitude of 400 ft covering at least 15 crop fields. In addition, for practical agriculture production, crop surface feature discrimination can be explored.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>comprised of 27000 labeled examples covering 10 distinctive classes. Each image patch is 64 x 64 pixels which is collected from 30 cities in European Urban Atlas. For classification, the data set is divided in the ratio of 80 to 20 which is used for training and testing respectively. Two deep learning architectures such 2/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021) Manuscript to be reviewed Computer Science as 'GoogLeNet' and ResNet-50 are trained on the dataset which achieved an accuracy of 98,18 % and 98.57 % respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021) Manuscript to be reviewed Computer Science maps are generated where the object-based classification resulted in the overall accuracy of 86.3% for the overall set of crop classes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Crops marked in © Google Earth (NARC Region)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. System Architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>crop flight was performed on 02-March-2020 at tillering stage. The flight for soybean was conducted on 03-September-2019, whereas the flight for the rice field was done on 03-September-2019 at the max-tiller stage. The flight for the maize crop was done at the max-maturity stage on 24-July-2019. Due to limited images of the rice field, the images of the rice field were augmented to make the images count equivalent 6/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>The next step was to extract spatial features from the gray-scale images based on the relationship of brightness values to the center pixel with its neighborhood defined by a kernel or window size. The relationship of the brightness 7/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021) Manuscript to be reviewed Computer Science values was represented in the form of a matrix. The matrix was made up of the frequent occurrence of the sequential pair of the pixel values along with a defined direction. The relationship helps GLCM to generate a different set of texture information based on gray-scale, kernel size, and direction. Harlick in [Haralick, Shanmugam, and Dinstein, 1973] defined fourteen textural features, which provide redundant spatial context information which was an overhead in classification. In this study, only six textural features are considered which are listed below: • Contrast (CON): The contrast measures the spatial frequency of an image and is a different moment of GLCM. It is the difference between the highest and the lowest values of the adjacent set of pixels. The contrast texture measures the local variations present in the image. An image with the low contrast presents GLCM concentration term around the principal diagonal and features low spatial frequencies.• Homogeneity (HOM): This statistical measure is also called inverse difference moment. It measures the homogeneity in the image where it assumes larger values for smaller differences in grey tone within-pair elements. Homogeneity is more sensitive to the presence of near diagonal elements in the GLCM. The value of homogeneity is maximum when elements in the image are the same.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Commission Error (CE), UA = 100% −CE. UA is calculated by taking a total number of correct classifications for a class divided by the total number of the class. UA = No. o f correctly classi f ied images o f a class Total no. o f images classi f ied f or class (8) Overall Accuracy Overall Accuracy (OAA) defined in Equation-9 essentially tells us out of all available classes from classification what proportion are classified correctly. The overall accuracy is usually expressed in percentage, where 100% depicts all classes in the classification classified correctly. OAA = Total no. correctly classi f ied images o f all classes Total no. o f ground truth images o f all classes (9) Precision Precision refers to the number of positive class instances correctly classified out of total classified instances of the class. The formula to compute precision is defined in Equation-10, where TP means true positive and FP means false positive.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4. Confusion Matrix</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using Neural Networks based Classifier Class Soybean Rice Wheat-T Wheat Maize PA(%) images into a single class. Each of the soybean, rice, wheat-T, wheat, and maize images were classified as a maize crop. The overall accuracy obtained for feed-forward neural network classifier on GLCM based images was 25% and was lower compared to the grayscale images. The reason for this poor performance was the limited amount of available data which was not enough to train a deep learning model and thus resulted in showing an average performance.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>matrix for classification performed on generated GLCM textural features using long short term memory classifier. The LSTM based classifier was able to classify maize crop only. The classifier with 14/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021) Manuscript to be reviewed Computer Science only GLCM based images classifies all crop images into a single class. Each of the soybean, rice, wheat-T, wheat, and maize images were classified as maize crop. The overall accuracy obtained for LSTM based classifier on GLCM based images was 25% and was lower compared to the grayscale images. The reason for this poor performance was the limited amount of available data which was not enough to train a deep learning model and thus resulted in showing an average performance.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>network (CNN) based classifier. The CNN-based classifier was able to classify maize crop only. The classifier with GLCM based images classifies all crop images into a single class.Each of the soybean, rice, wheat-T, wheat, and maize images were classified as a maize crop. The overall accuracy obtained for CNN-based classifier on GLCM based images was 25% and was lower compared to the grayscale images. The reason for this poor performance was the limited amount of available data which was not enough to train a deep learning model and thus resulted in showing an average performance.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,141.73,63.78,413.55,232.79' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>1. The climate of Islamabad is a humid subtropical climate with an average rainfall of 790.8 millimeters. List of crops selected in study area</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Crop</ns0:cell><ns0:cell>Crop-Cycle</ns0:cell><ns0:cell>Location</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-I</ns0:cell><ns0:cell>Dec-18 to Jun-19</ns0:cell><ns0:cell>30 • 40' 22.25' N, 73 • 07' 18.28' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>Jun-19 to Oct-19</ns0:cell><ns0:cell>30 • 40' 25.19' N, 73 • 07' 27.93' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>Jul-19 to Dec-19</ns0:cell><ns0:cell>33 • 40' 34.46' N, 73 • 08' 10.20' E</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Wheat-II Nov-19 to May-20 33 • 40' 17.29' N, 73 • 07' 48.98' E</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>Mar-19 to Jul-19</ns0:cell><ns0:cell>33 • 40' 18.69' N, 73</ns0:cell></ns0:row></ns0:table><ns0:note>• 07' 37.84' EThe data set used in the study was gathered for five different crops at the different growth cycles of crops as shown in Table-1 using DJI Phantom pro-Advanced. All the selected crops including wheat, rice, soybean, and maize have overlapping crop cycles, especially winter wheat crop and winter maize crop had the same planting time. It was quite challenging to separate wheat and maize crops based solely on their NDVI profile. In order to address this problem, UAV optical imagery was collected and GLCM5/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Optical images converted to gray scale images Downscale high resolution images to 100 x 100 x 3 Data Preprocessing CNN LSTM Support Vector Machine Naive Bayes Classifier Neural Networks Random Forest Classifier Accuracy assesment Gray scale images N x 100 x 100 x 1 Support Vector Machine Naive Bayes Classifier Neural Networks CNN LSTM Random Forest Classifier Accuracy assesment GLCM features N x 100 x 100 x 48</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Specifications of UAV drone used in the study</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Characteristics</ns0:cell><ns0:cell>Technical Specifications</ns0:cell></ns0:row><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Four-rotor electric UAV</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight</ns0:cell><ns0:cell>1368 g</ns0:cell></ns0:row><ns0:row><ns0:cell>Manufacturer</ns0:cell><ns0:cell>DJI</ns0:cell></ns0:row><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>FC6310</ns0:cell></ns0:row><ns0:row><ns0:cell>Operating Temperature</ns0:cell><ns0:cell>0 • to 40 •</ns0:cell></ns0:row><ns0:row><ns0:cell>Camera Sensor</ns0:cell><ns0:cell>1' CMOS</ns0:cell></ns0:row><ns0:row><ns0:cell>Image Size</ns0:cell><ns0:cell>4864 x 3648</ns0:cell></ns0:row><ns0:row><ns0:cell>Flight Duration</ns0:cell><ns0:cell>30 minutes</ns0:cell></ns0:row><ns0:row><ns0:cell>Battery</ns0:cell><ns0:cell>5870 mAH LIPo 4S</ns0:cell></ns0:row></ns0:table><ns0:note>flights to cover five fields at different stages of the crop cycle which are listed in Table-3. The first wheat flight was conducted on 16-May-2019 with the wheat field at max maturity stage and the second wheat</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Crop fields images acquired at various stage of crop cycle</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Crop</ns0:cell><ns0:cell>Stage</ns0:cell><ns0:cell>Acquisition Date</ns0:cell><ns0:cell>Acquisition Time</ns0:cell><ns0:cell>Altitude</ns0:cell><ns0:cell>Images Count</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Wheat-I Max Maturity</ns0:cell><ns0:cell>16-May-2019</ns0:cell><ns0:cell>12:20 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>41</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>Max-Tiller</ns0:cell><ns0:cell>03-Sept-2019</ns0:cell><ns0:cell>12:15 PM</ns0:cell><ns0:cell>120 foot</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>V2 Stage</ns0:cell><ns0:cell>03-Sept-2019</ns0:cell><ns0:cell>12:40 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-II</ns0:cell><ns0:cell>Tiller Stage</ns0:cell><ns0:cell>02-March-2020</ns0:cell><ns0:cell>01:30 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>20</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>Max Maturity</ns0:cell><ns0:cell>24-July-2019</ns0:cell><ns0:cell>01:15 PM</ns0:cell><ns0:cell>70 foot</ns0:cell><ns0:cell>39</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>SOYBEAN</ns0:cell><ns0:cell>RICE</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>WHEAT TILLERING</ns0:cell><ns0:cell cols='2'>WHEAT MATURITY</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>MAIZEFigure 3. Crops Optical Images captured by using DJI Phantom</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Confusion matrix for classification performed on grayscale images using SVM</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>92.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>72.7</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>62.5</ns0:cell><ns0:cell>59.1</ns0:cell><ns0:cell>88.9</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>70.45%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>selected five crop classes at various phenological stages of the crops. The results of the overall accuracy of all classes and individual crop class performance are organized in separate tables to give a better overview of the performance. Table-4 shows confusion matrix for classification performed on the grayscale images using SVM algorithm. The SVM algorithm was able to classify rice and wheat-T crop correctly. The algorithm with only grayscale images was not able to classify soybean crop correctly and all the images of soybean were classified as wheat (maturity stage) crop. SVM algorithm correctly classifies 93% of wheat crop images and only 7% images were classified as maize, similarly, 73% of maize crop images were correctly classified and only 27% images were classified as wheat-T. The overall accuracy obtained by SVM classification on grayscale images was 70.45%.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using SVM</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>66.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>85.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>81.8</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>85.7</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>84.1%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>Table-5 shows the confusion matrix for classification performed on generated GLCM textural features using SVM algorithm. The SVM algorithm on GLCM based textural features was able to classify rice and wheat-T crop images correctly. The algorithm with GLCM based textural images was able to classify 66.67% soybean crop images correctly, whereas 33.33% images were classified as wheat (maturity stage). Similarly, the algorithm classifies 85.7% wheat crop images correctly, whereas 7.1% images were classified as rice and the remaining 7.1% images were classified as soybean. The algorithm classifies11/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Confusion matrix for classification performed on grayscale images using Random Forest Classifier The Random Forest classifier was able to classify rice, wheat, wheat-T, and maize crop correctly. The algorithm with only grayscale images was not able to classify soybean crop correctly, except one soybean image all the remaining images of soybean were classified as wheat (maturity stage) crop. The overall accuracy obtained by the Random Forest classifier on the grayscale images was 81.82%.Table-7 shows the confusion matrix for the classification performed on generated GLCM textural features</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>12.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>63.6</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>81.82%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Forest classifier.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Confusion matrix for classification performed on GLCM features using Random Forest Classifier The Random Forest classifier based on GLCM based textural features was able to classify rice, wheat, wheat-T, and maize crop correctly. The algorithm with GLCM based textural images was able to classify 55.56% soybean crop images correctly, where 44.4% images were classified to wheat (maturity stage) images. The overall accuracy obtained for the Random Forest classifier on generated GLCM based textural features was 90.91% which showed an improvement of 9.09% compared</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>55.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>77.8</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>90.91%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>using Random Forest classifier.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using Naive Bayes Classifier</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>66.7</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>79.55%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using Naive Bayes Classifier</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>55.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>82.4</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>90.91%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>soybean crop images correctly, where 11.11% images were classified as rice crop and 33.33% images</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>were classified as wheat (maturity stage) images. The overall accuracy obtained for the Naive Bayes</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>classifier on generated GLCM based textural features was 90.91% which showed an improvement of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>11.36% compared to training the classifier on grayscale images only. This improvement clearly shows the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>impact of textural features extracted from grayscale images for distinguishing the crop types. Table-10</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using Neural Networks</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>31.82%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head /><ns0:label /><ns0:figDesc> shows the confusion matrix for classification performed on generated GLCM textural features using feed-forward neural network classifier. The feed-forward neural network classifier was able to classify maize crop only. The classifier with only GLCM based images</ns0:figDesc><ns0:table /><ns0:note>13/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table -</ns0:head><ns0:label>-</ns0:label><ns0:figDesc>12 shows the confusion matrix for classification performed on grayscale images using long short</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Confusion matrix for classification performed on gray scale images using LSTM LSTM). The LSTM based classifier when applied on the grayscale image was able to classify maize and rice crop correctly. The classifier with only grayscale images classifies soybean and wheat-T as a wheat crop at a mature stage. Also, the wheat crop images at the maturity stage were classified as wheat crop correctly, but only one image was misclassified as maize crop. The overall accuracy obtained for LSTM based classifier on grayscale images was 65.91%. Table-13 shows the</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>93</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>65.91%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>term memory (</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using LSTM</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc> shows the confusion Confusion matrix for classification performed on gray scale images using CNN classifier when applied on a gray scale image was able to classify rice, wheat-tiller, and maize crop correctly. The classifier with only grayscale images classifies soybean as a wheat crop. Also, the wheat crop images at the maturity stage were classified as wheat crop correctly, but only one image was misclassified as maize crop. The overall accuracy obtained from a CNN-based classifier on grayscale images was 77.27%.Table-15 shows the confusion matrix for classification performed on generated GLCM textural features</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='6'>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>93</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell>UA(%)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OAA(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>77.27%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>matrix for classification performed on grayscale images using convolutional neural network (CNN). The</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN-based</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Confusion matrix for classification performed on generated textures features images using CNN</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell>Soybean Rice Wheat-T Wheat Maize PA(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_19'><ns0:head>Table -</ns0:head><ns0:label>-</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>16 describes the accuracy, precision, recall and F1-score for SVM on the greyscale images and the texture images. The highest accuracy (100%) was achieved for rice crops using grayscale images.Similarly, F1-score was the highest for rice crops when SVM was applied. The highest value of precision 15/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_20'><ns0:head>Table 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Precision, Recall & F1-Score on gray scale images and texture images using SVM Table-17 shows the accuracy, precision, recall, and F1-score for the grayscale images and</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell cols='8'>Accuracy (%) Gray scale GLCM Gray scale GLCM Gray scale GLCM Gray scale GLCM Precision Recall F-1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>Soybean</ns0:cell><ns0:cell>79.55</ns0:cell><ns0:cell>91.11</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>Rice</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.56</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat-T</ns0:cell><ns0:cell>93.18</ns0:cell><ns0:cell>97.78</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.91</ns0:cell></ns0:row><ns0:row><ns0:cell>Wheat</ns0:cell><ns0:cell>77.27</ns0:cell><ns0:cell>86.67</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Maize</ns0:cell><ns0:cell>90.91</ns0:cell><ns0:cell>93.33</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>was achieved for rice and wheat-T crops using grayscale images and texture-based images. The recall</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>shows the highest value for rice using grayscale images.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Similarly,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_21'><ns0:head>Table 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Precision, Recall & F1-Score on gray scale images and texture images using Random Forest Classifier helps the machine learning algorithms to perform better as compared to the normal gray scale-based classification.</ns0:figDesc><ns0:table /></ns0:figure>
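The PA(%), UA(%), and OAA(%) figures reported in the confusion-matrix tables above can be reproduced directly from the matrix counts. The short Python sketch below is our own illustration, not code from the paper; it assumes, as in Table 4, that rows hold the reference (actual) classes and columns hold the predicted classes, and it uses the Table 4 counts as sample data.

CLASSES = ["Soybean", "Rice", "Wheat-T", "Wheat", "Maize"]

# Counts of Table 4 (SVM on grayscale images); rows = actual, columns = predicted.
MATRIX = [
    [0, 0, 0, 9, 0],   # Soybean
    [0, 5, 0, 0, 0],   # Rice
    [0, 0, 5, 0, 0],   # Wheat-T
    [0, 0, 0, 13, 1],  # Wheat
    [0, 0, 3, 0, 8],   # Maize
]

def accuracies(matrix):
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    diag = [matrix[i][i] for i in range(n)]
    row_sums = [sum(row) for row in matrix]
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    # Producer's accuracy: correctly classified share of each reference class (per row).
    pa = [100.0 * diag[i] / row_sums[i] if row_sums[i] else 0.0 for i in range(n)]
    # User's accuracy: correctly classified share of each predicted class (per column).
    ua = [100.0 * diag[j] / col_sums[j] if col_sums[j] else 0.0 for j in range(n)]
    # Overall accuracy: trace divided by the grand total.
    oaa = 100.0 * sum(diag) / total
    return pa, ua, oaa

if __name__ == "__main__":
    pa, ua, oaa = accuracies(MATRIX)
    for name, p, u in zip(CLASSES, pa, ua):
        print(f"{name:8s} PA = {p:5.1f}%  UA = {u:5.1f}%")
    print(f"OAA = {oaa:.2f}%")   # 70.45% for the Table 4 counts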
<ns0:note place='foot' n='17'>/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54928:2:0:CHECK 14 Apr 2021)</ns0:note>
</ns0:body>
" | "Original Article ID: 54928
Original Article Title: “Gray level co-occurrence matrix (GLCM) texture based crop classification using low altitude remote sensing platforms”.
To: Tariq Masood
Re: Response to reviewers
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to incorporate editor changes.
Best regards,
Uferah Shafi et al.
Editor Comments
First Comment: Just to elaborate on my previous comment related to the design of experiments (DOE), please see following sources:
- Engineering Statistics Handbook, NIST, https://www.itl.nist.gov/div898/handbook/index.htm https://www.itl.nist.gov/div898/handbook/pmd/section3/pmd31.htm https://www.itl.nist.gov/div898/handbook/pri/section1/pri11.htm
- Dan Nettleton, A Discussion of Statistical Methods for Design and Analysis of Microarray Experiments for Plant Scientists, The Plant Cell Sep 2006, 18 (9) 2112-2121; https://doi.org/10.1105/tpc.106.041616.
- iSixSigma, https://www.isixsigma.com/tools-templates/design-of-experiments-doe/design-experiments-%E2%90%93-primer/
- ASQ, https://asq.org/quality-resources/design-of-experiments It may be fine if you don't use the DOE technique for this research.
However, it is important to provide the rationale for choosing a certain methodology, and what choices were available. These are still not clearly presented in the methodology section, which directly starts by discussing the methodology steps.
Data set (section 3) should also be part of the methodology (section 4). The above questions also apply to the selection of the data set.
Author response: Thanks for your valuable comments to improve the manuscript.
Author action: We have added the rationale for choosing our methodology at the start of the methodology section in the manuscript (see Section # 3, page # 6). We have also moved the study area and data set to the methodology section. The data set was selected in this way due to security constraints and the limited plots assigned for the experimentation during a crop life cycle.
Second comment: The article needs substantial English language improvements based on a complete check. Below are only some examples where such improvements are needed:
L474 the hand engineered GLCM features. Moreover, the accuracy of these models doesn’t improved on the (doesn't improved --> don't improve; OR didn't improve)
L499 are acquired for the allotted field in the study. The second limitation the drone. In order to cover various (The second limitation the drone. --> The second limitation was related to the drone.)
L333 The machine learning and deep learning algorithms used in our study are support vector machine (SVM) (SVM has been used multiple times before L333, e.g. L50 (full form), L115, L165, L171, L183 (full form), L295 (full form), L296) - Please ensure that all abbreviations are used in full when these are first introduced, followed by use of their abbreviated forms.
Please also check the use of present vs past tenses throughout the article, e.g. is vs was, are vs were. This is very important as the wrong use would change the meaning at all, particularly when discussing the experiments. For example, the use of past tense may imply that something was done that way particularly in your experiment while the use of present tense may imply that something is usually done that way.
Author response: Thanks for your valuable comments to improve the manuscript.
Author action: We have revised the whole paper to remove the grammatical issues. Moreover, we have corrected the use of past and present tenses to make the whole paper consistent.
" | Here is a paper. Please give your review comments after reading it. |
99 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the construction of effective and scalable overlay networks, publish/subscribe (pub/sub) network designers prefer to keep the diameter and maximum node degree of the network low. However, existing algorithms are not capable of simultaneously decreasing the maximum node degree and the network diameter. To address this issue in an overlay network with various topics, we present herein a heuristic algorithm, called the constant-diameter minimum-maximum degree (CD-MAX) algorithm, which decreases the maximum node degree and keeps the diameter of the overlay network at two at most. The proposed algorithm, which builds on the greedy merge algorithm, selects the node with the minimum number of neighbors. The output of the CD-MAX algorithm is enhanced by applying a refinement stage through the CD-MAX-Ref algorithm, which further improves the maximum node degrees. The numerical results of the algorithm simulation indicate that the CD-MAX and CD-MAX-Ref algorithms improve the maximum node degree by up to 64% and run up to four times faster than similar algorithms.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In publish/subscribe (pub/sub) systems, publishers forward different types of messages to specific subscribers in a decoupled mode. Publishers broadcast information through logical channels, while subscribers receive messages based on their topic interests. Pub/sub systems are divided into two categories, namely topic-based and content-based systems.</ns0:p><ns0:p>In topic-based pub/sub systems, publishers broadcast their messages based on the topic of the message. Each topic exclusively pertains to a specific logical channel. Subsequently, as stated in <ns0:ref type='bibr' target='#b20'>Yang and Garcia-Molina (2003)</ns0:ref>, subscribers receive all messages associated with the topics to which they have subscribed. Consequently, as stated in <ns0:ref type='bibr' target='#b14'>Milo et al. (2007)</ns0:ref>, all messages about those topics will be sent to every user who has joined that particular group. Publishers take the responsibility of classifying the messages that subscribers receive.</ns0:p><ns0:p>In contrast, in content-based pub/sub systems, subscribers only receive messages whose attributes match their interests; hence, <ns0:ref type='bibr' target='#b4'>Carvalho et al. (2005)</ns0:ref> states that these attributes characterize the logical channels. In this category, the matching algorithm between the publishers and the subscribers is based on the attribute values, referred to as the content. In other words, the receivers decide which messages they will receive. In both cases, however, pub/sub systems show similarities with Information Centric Networks in terms of sharing and distributing information among users.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Pub/sub system has a variety of use cases. As explained by ORiordan (2021), a simple use case of a pub/sub system may be a chat application where a participant can subscribe to any of the chat rooms which has a designated pub/sub topic. When a user sends a message to a chat room, the message is published on that topic of the chat room. The subscribers of the topic/chat room receive the message. As stated by GoogleCloud (2021), the pub/sub systems fit best, when there is a need for durable message storage and real-time delivery for those messages with high availability for massive scale. These features are the foundation of cloud computing where pub/sub systems are heavily used. Balancing load among network clusters, implementation of asynchronous workflows, distributing event notifications, data streaming from various processes or devices are examples to pub/sub systems. Apache Kafka, Microsoft Azure Service Bus, Google Cloud Pub/sub and AWS Kinesis are examples of popular pub/sub systems.</ns0:p><ns0:p>For each topic t ∈ T in a typical fully decentralized topic-based pub/sub system based on the peerto-peer (P2P) connected overlay, a sub graph is derived using the nodes interested in t. Hence, the nodes interested in topic t do not need to rely on other nodes to send or receive their messages. Every node must maintain its connections (e.g., checking the accessibility of neighbors) and monitor data streaming through the connections; therefore, overlay networks with a low maximum number of links emitting from a node and low network diameters are desirable. If a proper correlation exists between node subscriptions, the connectivity of many topics subscribed by those two nodes will be satisfied by adding only one edge between the two nodes. Hence, the maximum number of links of a node and the total number of overlay connections will considerably diminish. The significance and the impact of the topic correlation of nodes in optimizing the overlay networks were highlighted in a relevant paper by <ns0:ref type='bibr' target='#b8'>Chockler et al. (2007a)</ns0:ref>. Constructed sub-graph acts as a selective message delivery among different publishers and subscribers belonging in the same interest group. In a sub-graph, messages are routed to the node destination (subscriber) with the same topic interest.Generally, a sub-group of pub/sub system can be modelled as a trimerous < Π, β , Σ > collections of functions. The sets involved are determined based on their functionality: Π = p 0 , ..., p i − 1 is a set of i processes in the system that act as publishers providing information to those need it. Σ = C 0 , ...,C j −1 is set of j processes referred to as subscribers that are known as consumers of information provided by publishers. The set of publishers and the set of subscribers can have non-zero intersection, which means that the process can also operate as both publisher and subscriber.</ns0:p><ns0:p>Decoupling is an ideal feature of communication systems. Systems with decoupling mechanism do not need to consider any issues such as addressing and synchronization. β = B 0 , ..., B k − 1 presents a logical centralized service that enables publishers and subscribers to connect. In other words, any publisher or subscriber in a sub group can exclusively send or receive specific information through β . 
Not only do β provide communication between publishers and subscribers, they also keep publishers and subscribers in a decoupled mode during communication process. <ns0:ref type='bibr' target='#b1'>Baldoni et al. (2007)</ns0:ref> and <ns0:ref type='bibr' target='#b12'>Lau et al. (2009)</ns0:ref> states that, reducing the maximum number of links of an overlay can play a vital role in various network fields, such as survivable and wireless network design. <ns0:ref type='bibr' target='#b8'>Chockler et al. (2007a)</ns0:ref> presented the concept of topic connectivity, in which an individual overlay network connects nodes with similar topics. They introduced the Greedy Merge (GM) algorithm to construct an overlay with the least possible number of connections. A number of other solutions for the overlay design were also recently introduced by <ns0:ref type='bibr' target='#b4'>Carvalho et al. (2005)</ns0:ref>. However, all of the existing methods <ns0:ref type='bibr' target='#b5'>(Chen et al. (2010a)</ns0:ref>, <ns0:ref type='bibr' target='#b7'>Chen et al. (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>Chen et al. (2010b)</ns0:ref>) suffer from either a high diameter or a high maximum-node-degree. In addition, all constant-diameter algorithms connect the nodes in a star topology manner, resulting in the best possible diameter while giving rise to nodes with high node degrees. <ns0:ref type='bibr' target='#b9'>Chockler et al. (2007b)</ns0:ref> states that, these nodes are responsible for managing a large number of connections to their neighbors, which results in a high traffic overhead.</ns0:p><ns0:p>In this study, we propose an algorithm for constructing a scalable topic-connected overlay (TCO), which has a low maximum node degree and a constant diameter of 2, to solve the above-mentioned problems. The proposed algorithms (i.e., CD-MAX and CD-MAX-Ref) outperform the existing algorithms in the literature in terms of constructing optimum overlays with a minimum node degree and a small diameter. In addition, the performance of the proposed CD-MAX algorithm in terms of the required running time for constructing overlays provides a suitable conformance on scalability requirements.</ns0:p><ns0:p>The remainder of this paper is presented as follows: Section 2 provides a summary of the previous studies on pub/sub networks, including the concept of the GM algorithm and other algorithms proposed for building overlays with a minimum number of edges; Section 3 presents the details of the proposed </ns0:p></ns0:div>
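As a concrete illustration of the decoupled, topic-based delivery model sketched above (publishers Π, subscribers Σ, and the broker-like service β), the following minimal Python example is our own; it is not taken from the paper and is not tied to any particular pub/sub product.

# Minimal topic-based pub/sub sketch (illustrative only; names are ours).
from collections import defaultdict

class Broker:
    """Keeps publishers and subscribers decoupled: they only share topic names."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Selective delivery: only subscribers of this topic receive the message.
        for callback in self.subscribers[topic]:
            callback(topic, message)

if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("chat/room1", lambda t, m: print(f"alice got [{t}]: {m}"))
    broker.subscribe("chat/room1", lambda t, m: print(f"bob got   [{t}]: {m}"))
    broker.subscribe("chat/room2", lambda t, m: print(f"carol got [{t}]: {m}"))
    broker.publish("chat/room1", "hello room1")   # reaches alice and bob only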
<ns0:div><ns0:head>OVERLAY DESIGN ALGORITHMS</ns0:head><ns0:p>An effective publication routing protocol is needed in designing an efficient pub/sub system, and it can play a vital role on the system performance. Therefore, <ns0:ref type='bibr' target='#b15'>Onus and Richa (2011)</ns0:ref> stated that the quality of a constructed overlay can be assessed based on the complexity of the routing scheme applied.</ns0:p><ns0:p>The complexity can be minimized if all the nodes interested in a topic t ∈ T can be organized into a dissemination tree. In this case, as <ns0:ref type='bibr' target='#b8'>Chockler et al. (2007a)</ns0:ref> and <ns0:ref type='bibr' target='#b9'>Chockler et al. (2007b)</ns0:ref> states, the topic dissemination trees should have the following issues:</ns0:p><ns0:p>• Each tree for topic m includes only the nodes interested in topic m</ns0:p><ns0:p>• The diameter of the topic trees should be low</ns0:p><ns0:p>The GM algorithm by <ns0:ref type='bibr' target='#b8'>Chockler et al. (2007a)</ns0:ref> solves the two issues by the low-diameter publish/subscribe overlay algorithms. Suppose that G is an overlay network, and the essential prerequisite to solving issue (1) is to ensure topic connectivity, where a sub-graph connects all the nodes interested in topic m. <ns0:ref type='bibr' target='#b8'>Chockler et al. (2007a)</ns0:ref> introduced the topic connectivity concept and the minimum topicconnected overlay problem. They provided an approximation solution, called the GM algorithm, for the problem with the minimum number of links.</ns0:p><ns0:p>The GM algorithm begins with the overlay network G = (N, / 0). There are m ∈ M|n :</ns0:p><ns0:formula xml:id='formula_0'>Int(n, m) = 1|</ns0:formula><ns0:p>individual topic-connected components of G for each topic m ∈ M. Int(x, m) indicates whether or not node x is interested in topic m. The algorithm continues by connecting two nodes at each repetition until the resulting overlay comprises maximally one topic-connected component for each m ∈ M. CD-ODA-II <ns0:ref type='bibr' target='#b15'>Onus and Richa (2011)</ns0:ref> initializes with G = (N, / 0) as the overlay network. A node u which has the maximum connection density is chosen in each iteration. Then, edges are added between u and its neighbors. Then, the interest topics of u are removed from the set of topics. Different than CD-ODA-II, 2D-ODA <ns0:ref type='bibr' target='#b16'>Onus and Richa (2016)</ns0:ref> starts with G = (V, E) as the overlay network and a topic set T is selected in which that topic is in the interest of node u and the selected topic has the maximum node density.</ns0:p><ns0:p>The node with maximum topic density for a topic is chosen at each iteration, then together with the star structure, node is added to the network. After all, the topic is removed from the set.</ns0:p></ns0:div>
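For reference, the Greedy Merge (GM) baseline described above can be rendered in a few lines: starting from an edgeless overlay, repeatedly add the edge that makes the largest number of topics better connected, until every topic forms a single connected component. The sketch below is our own simplified interpretation, not the original implementation, and it uses one union-find structure per topic.

# Simplified Greedy Merge (GM) sketch; our rendition of the idea, not the original code.
from itertools import combinations

def greedy_merge(interest):
    """interest: dict node -> set(topics). Returns a set of undirected overlay edges."""
    topics = set().union(*interest.values())
    # One union-find structure per topic, over that topic's subscribers.
    parent = {t: {u: u for u in interest if t in interest[u]} for t in topics}

    def find(t, x):
        while parent[t][x] != x:
            parent[t][x] = parent[t][parent[t][x]]
            x = parent[t][x]
        return x

    def gain(u, v):
        # Number of topics whose components would be merged by the edge (u, v).
        return sum(1 for t in interest[u] & interest[v] if find(t, u) != find(t, v))

    edges = set()
    while True:
        best = max(combinations(interest, 2), key=lambda pair: gain(*pair), default=None)
        if best is None or gain(*best) == 0:
            return edges                       # every topic is topic-connected
        u, v = best
        edges.add(tuple(sorted((u, v))))
        for t in interest[u] & interest[v]:    # merge the components the edge joins
            if find(t, u) != find(t, v):
                parent[t][find(t, u)] = find(t, v)

For instance, greedy_merge({'a': {'t1'}, 'b': {'t1', 't2'}, 'c': {'t2'}}) returns the two edges ('a', 'b') and ('b', 'c'), which make both topics connected.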
<ns0:div><ns0:head>Publish/Subscribe Challenges</ns0:head><ns0:p>The following three main challenges must be handled in building an effective pub/sub system: (1) expression of the interest to the topics by the subscribers, (2) organization of the notification service to deliver interests to topics, and (3) delivery of messages to the subscribers by the publishers. These states are strongly coupled, and their contribution can affect the system performance. For instance, as described in <ns0:ref type='bibr' target='#b18'>Triantafillou and Aekaterinidis (2004)</ns0:ref>, a rudimentary subscription algorithm may improve the functionality of multicasting, but it facilitates a poor expression ability for subscribers to announce the topics they are interested in. <ns0:ref type='bibr' target='#b10'>Eugster et al. (2003)</ns0:ref> states that the architecture of the pub/sub systems can generally be divided into client-server groups and P2P. In client-server architectures, the servers are the providers of information (publishers), while the clients are the subscribers. Intermediate nodes, called brokers, have been introduced to decouple clients and servers and achieve a better performance. Therefore, these architectures are referred to as broker-based architectures. Meanwhile, in P2P architectures, each node performs as either subscribers or publishers or both. In a P2P paradigm, all nodes can operate under various roles (e.g., subscriber, root, or internal node of a multicast tree) and play under a combination of these roles. The P2P architectures are also called decentralized architectures. A typical pub/sub system must hold two principle characteristics of P2P networks: i) scalability and ii) fault tolerance/reliability.</ns0:p><ns0:p>The following sub-sections briefly introduce both architectures.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preliminaries</ns0:head><ns0:p>An overlay is defined as an undirected graph G(V,E), where V is the set of nodes and E is the set of edges.</ns0:p><ns0:p>The number of nodes interested in at least one topic that node u is also interested in is called the interest group of node u, computed as</ns0:p><ns0:formula xml:id='formula_1'>n u = |{v ∈ V | ∃t ∈ T, Int(v,t) = Int(u,t) = 1}|</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>T is the set of topics, which a subscriber can be interested in. Int(u,t) = 1 indicates that subscriber u is interested in topic t, while Int(u,t) = 0 indicates otherwise. The degree of node u denoted by d u is defined as the total number of edges incident to it and given as</ns0:p><ns0:formula xml:id='formula_2'>d u = |{v ∈ V | e(u, v) = 1} |</ns0:formula><ns0:p>where e(u,v) denotes a function indicating if there is an edge between u and v.</ns0:p><ns0:p>The degree of topic t is defined as the number of subscribers interested in that topic as</ns0:p><ns0:formula xml:id='formula_3'>d t = |{v ∈ V | Int(v,t)} |</ns0:formula><ns0:p>The density of node u is given by density u = ∑ t∈T |{v∈V |Int(v,t)=Int(u,t)=1}|</ns0:p><ns0:p>|{v∈V |∃t∈T,Int(v,t)=Int(u,t)=1}|</ns0:p><ns0:p>Additionally, the diameter of a graph is the length of the shortest path between the two nodes which are farthest from each other.</ns0:p></ns0:div>
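The definitions above map directly onto code. The following Python sketch is our own illustration (the data layout and names are ours, not the paper's): Int is represented as a mapping from each node to its topic set, the overlay as a set of undirected edges, and u itself is excluded from its interest group.

# Illustrative computation of the preliminaries (layout and names are ours).
INTEREST = {                      # node -> set of topics it subscribes to
    "u": {"t1", "t2"},
    "v": {"t1"},
    "w": {"t2", "t3"},
    "x": {"t3"},
}
EDGES = {("u", "v"), ("u", "w")}  # undirected overlay edges

def interest_group(u):
    """Nodes sharing at least one topic with u (u itself excluded); its size is n_u."""
    return {v for v in INTEREST if v != u and INTEREST[u] & INTEREST[v]}

def node_degree(u):
    """d_u: number of overlay edges incident to u."""
    return sum(1 for edge in EDGES if u in edge)

def topic_degree(t):
    """d_t: number of subscribers interested in topic t."""
    return sum(1 for topics in INTEREST.values() if t in topics)

def density(u):
    """Per-topic sharings with u, summed over topics, divided by |interest group of u|."""
    group = interest_group(u)
    if not group:
        return 0.0
    shared = sum(len(INTEREST[u] & INTEREST[v]) for v in group)
    return shared / len(group)

print(len(interest_group("u")), node_degree("u"), topic_degree("t1"), density("u"))
# -> 2 2 2 1.0 for the toy data above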
<ns0:div><ns0:head>PROPOSED ALGORITHM</ns0:head><ns0:p>Most of the approaches used for designing scalable overlay networks failed to achieve an appropriate trade-off between the maximum node degree and the diameter of the overlay network. On the one hand, The algorithm continues by connecting two nodes at each repetition until the resulting overlay comprises maximally one topic-connected component for each m ∈ M. The two nodes connected during each repetition are those with the greatest number of topics in common.</ns0:p><ns0:p>On the other hand, a number of solutions presented in the previous section provide overlays with a low diameter, but the maximum node degree of these overlays is considerably high <ns0:ref type='bibr' target='#b4'>Carvalho et al. (2005)</ns0:ref> Onus and Richa ( <ns0:ref type='formula'>2011</ns0:ref>). We propose herein an algorithm, called Low-diameter Topic-based Pub/Sub</ns0:p><ns0:p>Overlay Network with Minimum-Maximum Node Degree (CD-MAX), to simultaneously address both issues. Our proposed method improves the CD-MAX algorithm based on building an overlay using a star topology to avoid a high node degree. The proposed CD-MAX algorithm (Algorithm 1) creates an initial overlay network by selecting the nodes with a minimum interest group at each iteration.</ns0:p><ns0:formula xml:id='formula_4'>Algorithm 1 Constant Diameter Maximum Algorithm (CD-MAX) 1: T ⇐ Set o f topics 2: V ⇐ Set o f nodes 3: Max n ⇐ Number o f Nodes 4: while T is NOT empty do 5: for each u ∈ V do 6: calculate interest group (n u ) 7: if n u < Max n then 8: Max n ⇐ n u 9: if Number o f Max n nodes > 1 then 10:</ns0:formula><ns0:p>u ⇐ w ∈ Max n AND w has the largest d u .</ns0:p><ns0:p>11:</ns0:p><ns0:formula xml:id='formula_5'>if Length o f u > 1 then 12: u ⇐ newest node in u 13: else 14: u ⇐ w ∈ Max n 15: for each t ∈ T do 16: if ∃v, Int(u,t) = Int(v,t) then 17:</ns0:formula><ns0:p>Connect nodes u and v through an edge Manuscript to be reviewed</ns0:p></ns0:div>
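A compact Python rendering of Algorithm 1 is given below. It is our own sketch of the listed steps (pick the node with the smallest interest group over the still-uncovered topics, break ties by the larger current degree, connect it to every node sharing one of those topics, then remove its topics), not the authors' implementation; details such as the 'newest node' tie-break are only approximated.

# Sketch of CD-MAX (Algorithm 1); our interpretation, not the authors' code.
def cd_max(interest):
    """interest: dict node -> set(topics). Returns the overlay edge set built by CD-MAX."""
    topics = set().union(*interest.values())
    edges = set()
    degree = {u: 0 for u in interest}
    remaining = {u: set(ts) for u, ts in interest.items()}   # uncovered topics per node

    def group(u):
        # Interest group of u restricted to the topics that are still uncovered.
        return {v for v in interest if v != u and remaining[u] & remaining[v]}

    while topics:
        candidates = [u for u in interest if remaining[u]]
        # Smallest interest group first; ties broken by the larger current degree.
        u = min(candidates, key=lambda x: (len(group(x)), -degree[x]))
        for v in group(u):
            edge = tuple(sorted((u, v)))
            if edge not in edges:
                edges.add(edge)
                degree[u] += 1
                degree[v] += 1
        covered = set(remaining[u])           # u becomes the centre of these topics
        topics -= covered
        for w in interest:
            remaining[w] -= covered
    return edges

On a toy input such as {'a': {'t1'}, 'b': {'t1', 't2'}, 'c': {'t2'}} the sketch returns the two edges ('a', 'b') and ('b', 'c').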
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to. These topics are removed from the list of topics, and the algorithm iterates until the list of topics becomes empty. Algorithm 1 presents this procedure. In the last step of Algorithm 1, CD-MAX algorithm finds the node with the smallest n u that is interested in the subsets of topics t. Selected node u will be connected to its neighbors and the topics subscribed by node u will be removed from the original topic set.</ns0:p><ns0:p>Unless there is a topic which is subscribed by a single node, pub/sub overlay network is connected. In order to proceed, a proof that the CD-MAX algorithm terminates in</ns0:p><ns0:formula xml:id='formula_6'>O(|V | 2 * |T | 2 ) is provided below. Lemma 1: The running time of CD-MAX algorithm is O(|V | 2 * |T | 2 ).</ns0:formula><ns0:p>Proof: Finding a node with minimum node degree takes O(|V | 2 * |T |). When node u with minimum node degree has been selected, all the topics to which node u is subscribed is removed. At each iteration, one topic is removed from the original topic set. Thus algorithm takes</ns0:p><ns0:formula xml:id='formula_7'>O(|V | 2 * |T |) * |T | = O(|V | 2 * |T | 2 )</ns0:formula><ns0:p>time steps to terminate.</ns0:p><ns0:p>Lemma 2: CD-MAX algorithm guarantees the diameter of at most 2 for each topic.</ns0:p><ns0:p>Proof: Since CD-MAX diameter provides a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. As a justification of the algorithm; at each iteration, at least one node is connected to its neighbors (one edge is required), meaning that finding a node with minimum node degree is achieved. for each q ∈ N do 6:</ns0:p><ns0:formula xml:id='formula_8'>if d q + d t < d u − d t and Int(q,t) = 1 then 7:</ns0:formula><ns0:p>Remove all edges o f topic t connected to u 8:</ns0:p><ns0:p>Connect q to every node n i f Int(n,t) = 1 9:</ns0:p><ns0:p>U pdate node degrees 10:</ns0:p></ns0:div>
<ns0:div><ns0:head>Update Max n</ns0:head><ns0:p>The CD-MAX-Ref algorithm (Algorithm 2) takes over when CD-MAX terminates. It examines all topics and finds the center node for each topic. The center node of topic t ∈ T is the node selected to connect all nodes interested in t. The algorithm then searches the overlay for an alternative node that would have a lower node degree if it became the center node for that topic.</ns0:p></ns0:div>
<ns0:div><ns0:p>the added and existing edges in v exists. In some cases, the number of added edges will be less than the number of removed edges. In the worst case, CD-MAX-Ref will add the same number of edges to v as were deleted from u. Even in this case, the node degree of v will be smaller (according to the assumption), thereby resulting in a lower maximum node degree. Considering that the node with the highest node degree is processed in the same manner, the algorithm will reduce its node degree and the maximum node degree of the overlay.</ns0:p><ns0:p>The complexity of a constructed pub/sub overlay network can be determined through the cost of broadcast. The total time required to send a chunk of information from a publisher to the specific group of nodes that subscribe to a certain type of information is a crucial factor that depends on the resulting diameter between publishers and subscribers. Meanwhile, due to bandwidth and memory constraints, it is also necessary to keep the maximum node degree of an overlay low. For example, a star topology would be the best option in terms of overlay diameter; on the other hand, in terms of memory, bandwidth and energy consumption, it becomes a serious problem when the number of nodes in the network grows. Hence, providing a trade-off between the diameter and the maximum node degree of a pub/sub overlay is the ideal approach in pub/sub overlay network design.</ns0:p></ns0:div>
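The re-centring test of the refinement stage (the condition d_q + d_t < d_u − d_t from the listing above) can be sketched as follows. This is our own approximation, not the authors' code: centers is a hypothetical record of which node currently acts as the centre of each topic, and for simplicity the sketch moves a whole topic star at once, ignoring the fact that a removed edge might still be needed by another topic.

# Sketch of the CD-MAX-Ref re-centring idea; our interpretation, not the authors' code.
def refine(interest, centers, edges, degree):
    """centers: dict topic -> current centre node; edges and degree as built by CD-MAX."""
    for t, u in list(centers.items()):
        members = [v for v in interest if t in interest[v]]
        d_t = len(members) - 1                 # edges a centre needs for topic t
        for q in members:
            if q == u:
                continue
            # Move the star of topic t to q only if q ends up strictly lighter than u.
            if degree[q] + d_t < degree[u] - d_t:
                for v in members:
                    if v in (u, q):
                        continue               # the u-q edge serves both the old and new star
                    remove_edge(edges, degree, u, v)
                    add_edge(edges, degree, q, v)
                centers[t] = u = q             # q is the new centre of topic t
    return centers, edges, degree

def add_edge(edges, degree, a, b):
    e = tuple(sorted((a, b)))
    if e not in edges:
        edges.add(e)
        degree[a] += 1
        degree[b] += 1

def remove_edge(edges, degree, a, b):
    e = tuple(sorted((a, b)))
    if e in edges:
        edges.remove(e)
        degree[a] -= 1
        degree[b] -= 1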
<ns0:div><ns0:head>Evaluation of CD-MAX with Examples:</ns0:head><ns0:p>The Example 1:</ns0:p><ns0:p>In this example, we assume that (n−1)/4 nodes are interested in each topic enumerated as {10, 20}, {20, 30}, {30, 40}, and {40, 50} (Figure <ns0:ref type='figure'>1</ns0:ref>). In addition, node u has subscribed to topics {10, 20, 30, 40, 50, 60}. According to the CD-ODA algorithms, node u is the center of all topics, and it will be connected to all the other nodes. Figure <ns0:ref type='figure'>1</ns0:ref> shows that the CD-MAX algorithm constructs an overlay with a maximum node degree of (2 * (n − 1)/4). In this example, the CD-MAX algorithm improved the maximum degree of the overlay by 50%. In this case, the CD-MAX-Ref did not improve the node degree.</ns0:p></ns0:div>
<ns0:div><ns0:head>Example 2</ns0:head><ns0:p>In this example, it is assumed that there are four different collections of nodes interested in four different topic sets, as listed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. In order to construct the overlay network, all CD algorithms (including CD-MAX) need 6n-2 edges. The overlay network can be generated by the CD-MAX algorithm with a maximum node degree of 3n-1, whereas the previous constant-diameter algorithms have a maximum node degree of 4n-1.</ns0:p><ns0:p>The red and black arrows demonstrate the implementation of CD-MAX and the other existing algorithms, respectively (Figure <ns0:ref type='figure'>2</ns0:ref>). This case is not valid for CD-MAX-Ref, which does not achieve a lower maximum node degree for this example.</ns0:p><ns0:formula xml:id='formula_9'>{x i } N (i, j) {x (i, j) } N (1,2,3,. . . ,n) {x (1,2,3,. . . ,n) }</ns0:formula></ns0:div>
<ns0:div><ns0:head>Evaluation of CD-MAX-Ref with Examples:</ns0:head><ns0:p>Example 3</ns0:p><ns0:p>Table 2 presents eight different nodes located throughout the network. The nodes with their respective topics deployed over the network and their degrees are shown. Node number 7 has the lowest node degree; hence, it is selected as the first node to be connected to its interest group. The node becomes the center of topics 1,6. Therefore, topics 1 and 6 are removed from the original topic list, and the topics list after step 1 becomes {0, 2, 3, 4, 5, 7, 8, 9}.</ns0:p><ns0:p>As previously explained, when more than two nodes have an equal lowest node degree, the node with the higher node density will be selected as a topic center. In the second step, nodes 1 to 6 have the lowest node degrees. However, node 1 has the highest node density (Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>); hence, this node is selected to be connected to its interest group (Figure <ns0:ref type='figure'>4</ns0:ref>). As a result, node 1 becomes the center of topics {0, 1, 3, 5, 7, 8, 9}. The remaining topic list will be {2, 4}.</ns0:p><ns0:p>In the following steps, as shown in Figs. <ns0:ref type='figure'>5 and 6</ns0:ref>, nodes 2 and 6 are selected to become the center of topics 4 and 2, respectively. Consequently, topics 4 and 2 are removed from the topic list.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> demonstrates the result of the CD-MAX implementation over Example 3. For this overlay, CD-MAX provides an overlay with a maximum node degree of 6. The CD-MAX-Ref algorithm is applied to the resulting overlay to further reduce the node degree. The node with the highest degree (i.e., node 1) is checked, and CD-MAX-Ref finds other nodes, which are interested in topics {0, 3, 5, 7, 8, 9}, and have lower node degrees. Therefore, all the edges connecting node 1 to its interest group are removed. Nodes 0 and 4 are then selected to be the center of topic sets {5, 7, 8} and {0, 3, 9}, respectively (Figure <ns0:ref type='figure'>7</ns0:ref>).</ns0:p><ns0:p>As visualized in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, since node 7 has the lowest node degree, it is chosen as the first node to be connected to its interest group and it becomes the center of topics 1,6.</ns0:p><ns0:p>As visualized in Figure <ns0:ref type='figure'>4</ns0:ref>, Nodes 1 to 6 have the lowest node degrees. But, node 1 is the node which has the highest node density; hence, this node is selected to be connected to its interest group and it becomes the center of topics 0,1,3,5,7,8,9.</ns0:p><ns0:p>The next node to be considered is node number 7, which maintains the maximum node degree of the overlay at 5. As visualized in Figure <ns0:ref type='figure'>5</ns0:ref>, Node 2 is the best option as it has the lowest node degree and it becomes the center of topic 4. After that, in the next step, Node 6 is the best option as it has the lowest node degree and it becomes the center of topic 2 (see Figure <ns0:ref type='figure'>6</ns0:ref>). Therefore, CD-MAX-Ref should find alternative nodes for topics 1 and 6. All the edges joined to node number 7 are removed as node numbers 1 and 5 become the center of topics 1 and 6, respectively In this example, there are (3n/2) + 1 nodes placed over the network. Each node subscribes to specific topics which is listed in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. 
In order to construct the overlay network, in all of the existing CD algorithms (CD-ODA, CD-ODA I, CD-ODA II, and 2D-ODA) the node interested in topics {x 1 , x 2 , x 3 , . . . , x n } acts as the center of the overlay and is connected to all other nodes participating in the network.</ns0:p><ns0:p>However, CD-MAX functions in a different manner. For example, the node which subscribes to x 1 is connected to the nodes which are interested in x 1,2 and {x 1 , x 2 , x 3 , . . . , x n }. This algorithm provides an overlay with a maximum node degree of n, attained by the node interested in topics {x 1 , x 2 , x 3 , . . . , x n }. After this step, CD-MAX-Ref re-constructs the overlay with a lower maximum node degree than the one provided by plain CD-MAX. To illustrate, the node that is interested in {x 1 , x 2 , x 3 , . . . , x n } has the maximum node degree. All edges which are connected to this node are removed (grey arrows) and CD-MAX-Ref then finds other nodes with lower node degrees (Figure <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Examples</ns0:head><ns0:formula xml:id='formula_10'>Maximum node degree by algorithm for Examples I-IV:
CD-ODA: n-1, 4n-1, 8n-1, 3n/2
CD-ODA I: n-1, 4n-1, 8n-1, 3n/2
CD-ODA II: n-1, 4n-1, 7n-1, 3n/2
2D-ODA: n-1, 4n-1, 7n-1, 3n/2
CD-MAX & CD-MAX-Ref: (2*(n-1)/4)+1, 3n-1, 5n-1, n/2</ns0:formula></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>This section presents a comparative evaluation of our proposed method using different overlay networks.</ns0:p><ns0:p>The algorithm comparisons were conducted based on the average and maximum node degree over the resulting overlays. Both the number of topics and the number of nodes varied throughout the simulation.</ns0:p><ns0:p>As noted earlier, each node has a specific subscription size and subscribes to 10 topics because of the memory restriction in the experiments. Note that only in the last simulation, this number was increased to values between 15 and 35. Each node n ∈ N can be interested in each topic t ∈ T with a probability of p i , in which ∑ i p i = 1. The topic distribution probabilities p i have a Zipf distribution with α = 0.5 as used in similar studies by <ns0:ref type='bibr' target='#b4'>Carvalho et al. (2005)</ns0:ref> and <ns0:ref type='bibr' target='#b13'>Liu et al. (2005)</ns0:ref>. During the experiments, we considered the impact of varying the number of nodes, topics, and topic popularity distribution on the average and maximum node degrees.</ns0:p><ns0:p>Throughout the experiments, we presented the results of the CD-ODA II algorithm as the representative of the CD-ODA algorithm series because of its better performance. The tables 5-12 has confidence values added to the tables. Since the values shown are the average of the multiple executions with smooth differences, the confidence intervals are calculated. The calculated confidence intervals are also depicted in the aforementioned tables. The confidence interval of the results are calculated as; </ns0:p><ns0:formula xml:id='formula_11'>con f idence(α, std dev, n) = norms inv(1 − α/2).</ns0:formula></ns0:div>
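The experimental setup above (each node subscribing to a fixed number of topics drawn from a Zipf-like popularity distribution with α = 0.5, and averaged results reported with confidence intervals) can be reproduced with a short script. The sketch below is our own illustration; the authors' sampling and statistics code is not given in the paper, and the half-width z(1−α/2)·std/√n is our assumption of what the norms_inv-based expression in the text abbreviates.

# Illustrative setup for the experiments (our own sketch, not the authors' code).
import math
import random

def zipf_weights(num_topics, alpha=0.5):
    weights = [1.0 / (i ** alpha) for i in range(1, num_topics + 1)]
    total = sum(weights)
    return [w / total for w in weights]        # topic probabilities p_i, summing to 1

def random_subscriptions(num_nodes, num_topics, sub_size, alpha=0.5, seed=1):
    rng = random.Random(seed)
    p = zipf_weights(num_topics, alpha)
    topics = list(range(num_topics))
    interest = {}
    for n in range(num_nodes):
        chosen = set()
        while len(chosen) < sub_size:          # sample distinct topics per node
            chosen.add(rng.choices(topics, weights=p, k=1)[0])
        interest[n] = chosen
    return interest

def confidence_interval(values, alpha=0.05):
    """Mean and normal-approximation CI half-width (assumed form of the paper's ±)."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    z = 1.959963985                            # inverse standard normal at 1 - alpha/2
    return mean, z * std / math.sqrt(n)

if __name__ == "__main__":
    interest = random_subscriptions(num_nodes=200, num_topics=100, sub_size=10)
    print(len(interest), sorted(interest[0]))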
<ns0:div><ns0:head>Average and Maximum Node Degree Values for Varying Number Of Nodes</ns0:head><ns0:p>In this experiment, the number of nodes varied between 200 to 5000, while the number of topics was kept constant (100). The subscription size was fixed at 10. Each node randomly subscribed to different topics.</ns0:p><ns0:p>The average node degrees (see Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> and Figure <ns0:ref type='figure'>10</ns0:ref>) provided by each algorithm slightly decreased as the number of nodes increased. This result indicates that because of the increased probability of having overlaid edges in sub-graphs, a smaller number of edges connected a larger number of nodes, and the average node degree of the overlay will decrease. However, the maximum node degree of the constant-diameter algorithms increased with the increasing number of nodes (see Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_12'>11</ns0:ref>).</ns0:p><ns0:p>This observation is also valid in the case of the 2D-ODA algorithm. Unlike the other constant-diameter algorithms, in which a small number of nodes covered most of the topics, to decrease the node degree of the overlay, the CD-MAX and CD-MAX-Ref algorithms chose nodes with a lower correlation to become the center of the topics. More edges would be needed to connect the nodes, thereby raising the average node degree.</ns0:p><ns0:p>Compared with the GM algorithm, CD-MAX and CD-MAX-Ref for 1000 nodes requires 2,65 and 3,66 times more edges in average, respectively (see Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>). Considering the maximum node degree with a growing number of nodes, more nodes should be connected to the overlay center nodes. Thus, the maximum node degree provided by every constant-diameter algorithm sharply increased (see Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>).</ns0:p><ns0:p>The maximum node degree of the GM algorithm will decrease relatively (does not increase while others are increasing fast enough) because more nodes with a higher correlation distributed the node degree of the overlay. Although the GM algorithm had low maximum and average node degree, it had a higher diameter </ns0:p></ns0:div>
<ns0:div><ns0:head>Average and Maximum Node Degree Values for Varying Number of Topics</ns0:head><ns0:p>In this experiment, the number of nodes and the subscription size were fixed at 100 and 10, respectively, while the number of topics varied from 200 to 400. The overlays face two opposing effects when the number of topics increases. First, the correlation between the nodes decreases, so more edges are needed to connect the nodes. Second, the number of nodes without any connection to a neighbor increases. The average node degree increases if the first effect dominates; the entry in Table <ns0:ref type='table' target='#tab_7'>7</ns0:ref> for CD-MAX-Ref with 250 topics is an indication of the first effect.</ns0:p><ns0:p>In contrast, the average node degree decreases if the second effect dominates. The second effect outweighs the first (see Table <ns0:ref type='table' target='#tab_7'>7</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_7'>12</ns0:ref>); hence, the overall average node degree of the overlay produced by every algorithm decreased as the number of topics increased.</ns0:p><ns0:p>The second effect also affected the maximum degree of all the algorithms. Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_13'>13</ns0:ref> show that the maximum node degree of all the algorithms decreased as the set of topics became more diverse.</ns0:p></ns0:div>
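<ns0:p>The quality criterion behind these observations is topic-connectivity: the subscribers of every topic must induce a single connected component in the overlay. Reusing the subs dictionary and graph g from the previous sketch, the hypothetical helper below counts, per topic, how many components its subscribers form; it is an illustration, not part of the authors' simulator.</ns0:p>
import networkx as nx

def topic_components(g, subs):
    # Number of connected components induced by the subscribers of each topic;
    # a topic-connected overlay has exactly one component per subscribed topic.
    by_topic = {}
    for node, topics in subs.items():
        for t in topics:
            by_topic.setdefault(t, set()).add(node)
    return {t: nx.number_connected_components(g.subgraph(members))
            for t, members in by_topic.items()}

components = topic_components(g, subs)
print(sum(1 for c in components.values() if c > 1))  # topics that are not topic-connected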
<ns0:div><ns0:head>Average and Maximum Node Degree Values for Varying Subscription Size</ns0:head><ns0:p>This experiment examines the effect of changing the subscription size on the selected algorithms. As the subscription size grows, the nodes can be connected to each other with a higher correlation rate; consequently, the average node degree decreases. Meanwhile, the correlation between the nodes rises as the subscription size grows.</ns0:p><ns0:p>Therefore, the GM algorithm can find many node pairs, which dramatically reduces the total number of topic-connected components; hence, its maximum node degree decreases as the subscription size increases. For all the algorithms using a star topology, the maximum node degree increases because a single node may be selected as the center of many topics. The results of the final set of experiments showed that CD-MAX required a slightly smaller number of edges to build the overlay network compared with the other algorithms. From the node-degree perspective, all constant-diameter algorithms, including the proposed ones, showed similar averages. The decrease in the average node degree in the CD-MAX family of algorithms was slightly higher than that of the other competitors, resulting in higher scalability.</ns0:p></ns0:div>
<ns0:div><ns0:p>Table <ns0:ref type='table' target='#tab_1'>10</ns0:ref>. Maximum Node Degree for Different Subscription Size (Mean Degree with confidence α = 0.05)</ns0:p><ns0:p>Subsc. | GM | CD-ODA-II | 2D-ODA | CD-MAX | CD-MAX-Ref</ns0:p><ns0:p>15 | 16,10 ±0,81 | 185,80 ±1,38 | 183,90 ±1,97 | 174,10 ±0,85 | 51,90 ±0,81</ns0:p><ns0:p>20 | 14,10 ±0,94 | 198,00 ±0,48 | 197,60 ±0,50 | 193,60 ±0,50 | 68,70 ±1,80</ns0:p><ns0:p>25 | 13,00 ±1,49 | 199,00 ±0,00 | 198,90 ±0,19 | 197,90 ±0,19 | 82,20 ±0,91</ns0:p><ns0:p>30 | 10,60 ±0,97 | 199,00 ±0,00 | 199,00 ±0,00 | 198,80 ±0,25 | 198,80 ±0,25</ns0:p><ns0:p>35 | 9,40 ±0,84 | 199,00 ±0,00 | 199,00 ±0,00 | 199,00 ±0,00 | 199,00 ±0,00</ns0:p></ns0:div>
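<ns0:p>The star-per-topic behaviour discussed above can be illustrated with the greedy sketch below, which repeatedly picks the node with the smallest interest group as a hub for its remaining topics; this is a heavily simplified approximation of the CD-MAX idea (it omits the density-based tie-breaking and the CD-MAX-Ref refinement) and reuses the subs dictionary from the earlier sketch.</ns0:p>
import networkx as nx

def greedy_min_interest_overlay(subs):
    # Simplified, illustrative construction: pick the node whose interest group
    # (nodes sharing at least one of its still-uncovered topics) is smallest,
    # connect it to that group, and drop the topics it covers.
    g = nx.Graph()
    g.add_nodes_from(subs)
    remaining = set().union(*subs.values())
    while remaining:
        def interest_group(u):
            uncovered = subs[u] & remaining
            return {v for v, tv in subs.items() if v != u and tv & uncovered}
        candidates = [u for u in subs if subs[u] & remaining]
        u = min(candidates, key=lambda x: len(interest_group(x)))
        for v in interest_group(u):
            g.add_edge(u, v)
        remaining -= subs[u]
    return g

overlay = greedy_min_interest_overlay(subs)
print(max(d for _, d in overlay.degree()))  # maximum node degree of the sketch overlay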
<ns0:div><ns0:head>Comparison of the Running Time Cost of the Algorithms</ns0:head><ns0:p>An optimized overlay network not only forwards packets with shorter latencies, but also improves the maintenance of connections and provides resiliency against link failures <ns0:ref type='bibr' target='#b2'>Besta and Hoefler (2014)</ns0:ref>.</ns0:p><ns0:p>This optimization depends closely on the total degree of the nodes. Table <ns0:ref type='table' target='#tab_11'>11</ns0:ref> shows that the growth rates of the running time for the ODA algorithms are similar as the number of nodes increases from 200 to 5000. The growth rates for CD-MAX-Ref and GM are similar, while that of the CD-MAX algorithm is lower. These promising results illustrate the suitability of CD-MAX when the number of nodes grows beyond a threshold: CD-MAX is the fastest algorithm for building and maintaining the requested topology. Meanwhile, Table <ns0:ref type='table' target='#tab_2'>12</ns0:ref> shows that the running times for CD-MAX and CD-ODA decreased as the number of subscriptions increased, and that beyond a subscription size of 25 CD-MAX is the fastest among all algorithms, including the GM algorithm. The relatively higher speed of CD-MAX makes it a more suitable option for dynamic environments, where arrivals and departures are more frequent.</ns0:p><ns0:p>The experiments were carried out under the Windows operating system on a computer with an Intel i7-7700HQ processor. As noted in Tables <ns0:ref type='table' target='#tab_11'>11 and 12</ns0:ref>, the confidence intervals may fluctuate across the numbers of nodes and subscriptions; because of operating-system load in some runs, the running times show higher confidence intervals.</ns0:p></ns0:div>
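<ns0:p>The running-time figures are averages over repeated runs; a measurement harness in the spirit of the reported setup could look like the sketch below, where build_overlay is a placeholder for any construction routine, the run count of 10 mirrors the experiments, and confidence and greedy_min_interest_overlay are the helpers sketched earlier.</ns0:p>
import statistics
import time

def timed_runs(build_overlay, runs=10, alpha=0.05):
    # Time the construction repeatedly and report the mean running time in
    # seconds together with the half-width of its confidence interval.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        build_overlay()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    half_width = confidence(alpha, statistics.stdev(samples), runs)
    return mean, half_width

mean, conf = timed_runs(lambda: greedy_min_interest_overlay(subs))
print(f"{mean:.2f} s ±{conf:.2f}")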
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>This study presented a novel algorithm (CD-MAX) that provides overlay networks with a minimum node degree and a low diameter of 2. The algorithm is implemented in a decentralized manner and is fast and scalable. It considerably decreases the maximum node degree, resulting in an overlay network that is more scalable than those of the other algorithms studied. The study results indicated that the proposed algorithm outperforms the ODA-based algorithms in terms of the diameter and average node degree of the overlay networks and approaches the performance of the GM algorithm.</ns0:p><ns0:p>Our analytic and experimental results revealed that the proposed CD-MAX algorithm is similar to CD-ODA-II and 2D-ODA in terms of average node degree, while outperforming all ODA algorithms except the GM algorithm in terms of maximum node degree (see Tables <ns0:ref type='table' target='#tab_11'>5-6 and Figures 10-11</ns0:ref>). In addition, CD-MAX builds the network in a similar time to the ODA algorithms while achieving better maximum node degrees, whereas GM fails to scale out at higher node counts (see Table <ns0:ref type='table' target='#tab_11'>11</ns0:ref> and Figure <ns0:ref type='figure'>16</ns0:ref>). As the number of topics increases, the average node degrees of CD-MAX and the other algorithms remain similar; however, CD-MAX and CD-MAX-Ref outperform them on maximum node degree (see Tables 7-8 and Figures <ns0:ref type='figure' target='#fig_13'>12-13</ns0:ref>). For different subscription sizes, the CD-MAX algorithm outperforms the other ODA algorithms except GM on average node degree (see Table <ns0:ref type='table' target='#tab_10'>9</ns0:ref>).</ns0:p><ns0:p>Combining the results of the running time experiments with the measurements of the maximum node degree, we can assert that the CD-MAX algorithm is more suitable for networks requiring high scalability because it simultaneously reduces the communication costs and the running time. In contrast, the CD-MAX-Ref algorithm best suits environments with slow and gradual changes (i.e., a low churn rate) and those with large numbers of topics and subscription sizes, which are characteristics of typical internet-based systems.</ns0:p><ns0:p>As future work, studies may concentrate on the simulation of millions of nodes, topics, and subscriptions with a typical churn rate of internet users by employing big data and high-performance computing environments. Further research should consider achieving the best average and lowest maximum node degrees while optimizing the running time. The proposed algorithms can further be extended to include power consumption optimization <ns0:ref type='bibr' target='#b0'>Alsultan et al. (2016)</ns0:ref>, and location awareness to build clustered overlay structures that reduce delay, increase bandwidth, improve scalability <ns0:ref type='bibr' target='#b3'>Bozda and Oztoprak (2008)</ns0:ref>, provide resilience to node failures, and support load balancing.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Several solutions for decreasing the number of connections exist. Chockler et al. presented the problem of constructing overlay networks with the fewest possible links, considered this issue as an NP-Complete problem, and proposed the GM algorithm to solve it Chockler et al. (2007a). The GM algorithm begins with the overlay network G = (N, ∅). There are ∑_{m∈M} |{n : Int(n, m) = 1}| individual topic-connected components of G over all topics m ∈ M, where Int(x, m) indicates whether or not node x is interested in topic m.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>18 :</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Remove all topics t from T where Int(u,t) = 1. This initial overlay network is refined to further improve the maximum node degree by the CD-MAX-Ref algorithm (Algorithm 2). In constructing a network, the proposed CD-MAX algorithm selects the node with the smallest interest group (n_u) and connects it to every node in its interest group. If more than one node with an equally small interest group is found, the algorithm selects the node with the highest connection density. Each selected node takes responsibility for the topics to which it has subscribed.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Lemma 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The space complexity of the CD-MAX algorithm is O(|T| · |V|). Proof: CD-MAX uses a star topology for each topic, so each node requires at most 2 edges to receive and send any topic it is interested in. Hence, the space used is at most 2 · |T| · |V|, which gives a space complexity of O(|T| · |V|).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>If CD-MAX-Ref manages to locate such a node, the edges corresponding to that topic are removed from the center node of the topic, and the newly discovered nodes are connected to the subscribers of that topic. CD-MAX-Ref can decrease the maximum node degree obtained from the CD-MAX algorithm. To prove the improvement made by the CD-MAX-Ref algorithm, let us assume that u has a node degree of d_u and is the center for k topics. If a node v is found satisfying d_v + d_t < d_u − d_t, the algorithm eliminates all the edges connecting u to the other nodes, except v, and adds edges from v to those nodes. If m edges are eliminated in this stage, the same number of new edges should be added to connect the neighbors of u to v. However, v already has some edges reflected in d_v; hence, the probability of having an overlap between</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>degree of the network. Additionally, the CD-MAX-Ref algorithm is not affected by how many topics are associated with a removed edge: in any case, CD-MAX-Ref finds an alternative node for each topic that is related to the removed edge. Before moving forward, we prove that the CD-MAX-Ref algorithm terminates in O(|T|^2 · |N|^3). Lemma 4: The running time of the CD-MAX-Ref algorithm is O(|T|^2 · |N|^3). Proof: CD-MAX-Ref takes over where CD-MAX terminates, which runs in O(|T|^2 · |N|^2). It then examines all topics and finds the center node for each topic in O(|N|) in the worst case. Thus, the algorithm takes O(|T|^2 · |N|^3) time steps to terminate. Lemma 5: The space complexity of the CD-MAX-Ref algorithm is O(|T| · |N|). Proof: Since CD-MAX-Ref builds on CD-MAX, it uses a similar data structure with more iterations over the same space, which does not increase the space used. Hence, the space use is on the order of |T| · |N|, giving a space complexity of O(|T| · |N|).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>examples 1 and 2 are presented to clarify the steps used by the proposed CD-MAX and CD-MAX-Ref algorithms. Following examples 3-5 additionally compare CD-MAX and CD-MAX-Ref algorithms over other constant diameter (CD) overlay design algorithms (CD-ODA, CD-ODA I, CD-ODA II, and 2D-ODA).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 1 .Figure 2 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 1. Implementation of the CD-MAX Algorithm over Example 1</ns0:figDesc><ns0:graphic coords='7,156.19,516.30,170.07,120.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .Figure 4 .Figure 5 .Figure 6 .</ns0:head><ns0:label>3456</ns0:label><ns0:figDesc>Figure 3. Implementation of the CD-MAX Algorithm over Example 3 Part 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Implementation of CD-MAX-Ref Algorithm over Example 3 Part 5</ns0:figDesc><ns0:graphic coords='9,156.19,421.58,170.08,106.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Implementation of CD-MAX Algorithm over Example 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>where: norms_inv = inverse of the standard normal cumulative distribution with the given probability; std_dev = standard deviation of the given values; n = number of values (in this study, the number of runs, i.e. 10); α = significance level, which is a probability between 0 and 1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 10. Average Node Degree for Different Number of Nodes</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 12. Average Node Degree for Different Number of Topics</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 14. Average Node Degree for Different Number of Subscriptions</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 16. Running Time for Different Number of Nodes in Seconds</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Example</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>2 -</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Topic Assignments</ns0:cell></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Topics</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection A 10,20</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection B 10,30</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection C 20,30</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection D 10,40</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Example 3 -Topic Assignments</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Topics</ns0:cell><ns0:cell>Degree</ns0:cell></ns0:row><ns0:row><ns0:cell>0 1 2 3 4 5 6 7</ns0:cell><ns0:cell>{1,2,3,5,7,8} {0,1,3,5,7,8,9} {1,4,5} {2,4,6} {0,2,3,4,9} {2,3,6} {2,5} {1,6}</ns0:cell><ns0:cell>7 6 6 6 6 6 6 5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Example 4 -Topic Assignments</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Topics</ns0:cell></ns0:row><ns0:row><ns0:cell>N i</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Maximum Node Degree of Overlay Networks Built by CD Algorithms</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Average Node Degree for Different Number of Nodes (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Nodes Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>6,62</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>13,56</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>14,41</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>13,92</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>18,26</ns0:cell><ns0:cell>±0,09</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>6,25</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>13,79</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>14,73</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>14,07</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>18,97</ns0:cell><ns0:cell>±0,07</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>5,96</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>13,83</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>14,50</ns0:cell><ns0:cell>±0,14</ns0:cell><ns0:cell>14,17</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>19,26</ns0:cell><ns0:cell>±0,02</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>5,80</ns0:cell><ns0:cell>±0,02</ns0:cell><ns0:cell>13,91</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>14,67</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>14,19</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>19,43</ns0:cell><ns0:cell>±0,01</ns0:cell></ns0:row><ns0:row><ns0:cell>750</ns0:cell><ns0:cell>5,52</ns0:cell><ns0:cell>±0,01</ns0:cell><ns0:cell>13,98</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>14,60</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>14,23</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>19,62</ns0:cell><ns0:cell>±0,01</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell>5,38</ns0:cell><ns0:cell>±0,01</ns0:cell><ns0:cell>13,97</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>14,50</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>14,27</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>19,72</ns0:cell><ns0:cell>±0,01</ns0:cell></ns0:row><ns0:row><ns0:cell>2500</ns0:cell><ns0:cell>4,94</ns0:cell><ns0:cell>±0,01</ns0:cell><ns0:cell>14,02</ns0:cell><ns0:cell>±0,02</ns0:cell><ns0:cell>14,37</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>14,31</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>19,89</ns0:cell><ns0:cell>±0,00</ns0:cell></ns0:row><ns0:row><ns0:cell>5000</ns0:cell><ns0:cell>4,93</ns0:cell><ns0:cell>±0,02</ns0:cell><ns0:cell>14,01</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>14,40</ns0:cell><ns0:cell>±0,10</ns0:cell><ns0:cell>14,37</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>19,89</ns0:cell><ns0:cell>±0,02</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Nodes Degree Conf. Degree</ns0:cell><ns0:cell cols='2'>Conf. Degree</ns0:cell><ns0:cell cols='2'>Conf. Degree</ns0:cell><ns0:cell cols='3'>Conf. 
Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>19,80</ns0:cell><ns0:cell cols='7'>±1,30 137,80 ±2,77 131,40 ±1,94 113,70 ±1,98</ns0:cell><ns0:cell>38,10</ns0:cell><ns0:cell>±0,90</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>21,50</ns0:cell><ns0:cell cols='7'>±0,00 203,80 ±3,68 199,10 ±3,26 176,80 ±1,62</ns0:cell><ns0:cell>51,30</ns0:cell><ns0:cell>±1,34</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>22,80</ns0:cell><ns0:cell cols='7'>±0,00 272,60 ±2,45 261,40 ±4,93 237,50 ±1,99</ns0:cell><ns0:cell>62,60</ns0:cell><ns0:cell>±1,44</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>22,20</ns0:cell><ns0:cell cols='7'>±0,00 337,00 ±1,98 334,70 ±6,27 303,40 ±2,13</ns0:cell><ns0:cell>75,20</ns0:cell><ns0:cell>±1,84</ns0:cell></ns0:row><ns0:row><ns0:cell>750</ns0:cell><ns0:cell>22,60</ns0:cell><ns0:cell cols='9'>±0,00 510,90 ±3,72 498,30 ±5,00 458,70 ±3,71 102,90 ±2,87</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell>23,90</ns0:cell><ns0:cell cols='9'>±0,00 678,60 ±4,79 677,30 ±6,91 619,10 ±3,41 132,60 ±3,11</ns0:cell></ns0:row><ns0:row><ns0:cell>2500</ns0:cell><ns0:cell>32,60</ns0:cell><ns0:cell cols='9'>±0,00 1686,70 ±7,61 1683,60 ±7,10 1584,20 ±6,78 296,67 ±2,36</ns0:cell></ns0:row><ns0:row><ns0:cell>5000</ns0:cell><ns0:cell>36,00</ns0:cell><ns0:cell cols='9'>±0,00 1692,00 ±7,12 1685,00 ±9,08 1580,00 ±7,65 296,00 ±3,08</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Maximum Node Degree for Different Number of Nodes (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Average Node Degree for Different Number of Topics (Mean Degree with confidence</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Topics Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>9,99</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>13,16</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>12,80</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>13,53</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>14,52</ns0:cell><ns0:cell>±0,11</ns0:cell></ns0:row><ns0:row><ns0:cell>250</ns0:cell><ns0:cell>10,38</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>12,67</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>12,21</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>13,06</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>13,74</ns0:cell><ns0:cell>±0,12</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>10,67</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>12,27</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>11,78</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>12,63</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>12,99</ns0:cell><ns0:cell>±0,12</ns0:cell></ns0:row><ns0:row><ns0:cell>350</ns0:cell><ns0:cell>10,65</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>11,77</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>11,36</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>12,14</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>12,34</ns0:cell><ns0:cell>±0,07</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>10,37</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>11,25</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>10,88</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>11,59</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>11,81</ns0:cell><ns0:cell>±0,09</ns0:cell></ns0:row><ns0:row><ns0:cell>α = 0.05)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Topics Degree Conf. Degree Conf. Degree Conf. Degree Conf. 
Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>22,10</ns0:cell><ns0:cell>±1,29</ns0:cell><ns0:cell>42,90</ns0:cell><ns0:cell>±1,22</ns0:cell><ns0:cell>30,30</ns0:cell><ns0:cell>±3,43</ns0:cell><ns0:cell>29,90</ns0:cell><ns0:cell>±0,85</ns0:cell><ns0:cell>18,70</ns0:cell><ns0:cell>±0,51</ns0:cell></ns0:row><ns0:row><ns0:cell>250</ns0:cell><ns0:cell>21,90</ns0:cell><ns0:cell>±1,22</ns0:cell><ns0:cell>37,00</ns0:cell><ns0:cell>±1,78</ns0:cell><ns0:cell>26,90</ns0:cell><ns0:cell>±1,95</ns0:cell><ns0:cell>23,80</ns0:cell><ns0:cell>±0,49</ns0:cell><ns0:cell>16,90</ns0:cell><ns0:cell>±0,46</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>22,70</ns0:cell><ns0:cell>±1,21</ns0:cell><ns0:cell>31,70</ns0:cell><ns0:cell>±1,13</ns0:cell><ns0:cell>23,80</ns0:cell><ns0:cell>±1,54</ns0:cell><ns0:cell>20,10</ns0:cell><ns0:cell>±0,46</ns0:cell><ns0:cell>15,70</ns0:cell><ns0:cell>±0,51</ns0:cell></ns0:row><ns0:row><ns0:cell>350</ns0:cell><ns0:cell>23,00</ns0:cell><ns0:cell>±0,83</ns0:cell><ns0:cell>28,00</ns0:cell><ns0:cell>±1,49</ns0:cell><ns0:cell>22,60</ns0:cell><ns0:cell>±0,98</ns0:cell><ns0:cell>17,50</ns0:cell><ns0:cell>±0,33</ns0:cell><ns0:cell>15,40</ns0:cell><ns0:cell>±0,43</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>20,40</ns0:cell><ns0:cell>±1,25</ns0:cell><ns0:cell>26,10</ns0:cell><ns0:cell>±1,66</ns0:cell><ns0:cell>21,60</ns0:cell><ns0:cell>±2,03</ns0:cell><ns0:cell>15,70</ns0:cell><ns0:cell>±0,30</ns0:cell><ns0:cell>14,30</ns0:cell><ns0:cell>±0,30</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Maximum Node Degree for Different Number of Topics (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Average and Maximum Node Degree Values for Varying Subscription Size For</ns0:head><ns0:label /><ns0:figDesc>the final experiment, the numbers of nodes and topics were kept at 200 and 100, respectively. However, the subscription size was varied between 15 and 35. As noted earlier, each node randomly subscribed</ns0:figDesc><ns0:table /><ns0:note>11/15PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53839:1:1:NEW 28 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Average Node Degree for Different Subscription Size (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Subsc. Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>12,86</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>29,92</ns0:cell><ns0:cell>±0,16</ns0:cell><ns0:cell>32,28</ns0:cell><ns0:cell>±0,43</ns0:cell><ns0:cell>15,09</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>27,93</ns0:cell><ns0:cell>±0,07</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>12,20</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>28,21</ns0:cell><ns0:cell>±0,19</ns0:cell><ns0:cell>29,42</ns0:cell><ns0:cell>±0,79</ns0:cell><ns0:cell>14,19</ns0:cell><ns0:cell>±0,14</ns0:cell><ns0:cell>37,08</ns0:cell><ns0:cell>±0,10</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>11,51</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>25,34</ns0:cell><ns0:cell>±0,41</ns0:cell><ns0:cell>26,31</ns0:cell><ns0:cell>±0,44</ns0:cell><ns0:cell>12,62</ns0:cell><ns0:cell>±0,23</ns0:cell><ns0:cell>46,04</ns0:cell><ns0:cell>±0,10</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>10,83</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>22,31</ns0:cell><ns0:cell>±0,38</ns0:cell><ns0:cell>23,08</ns0:cell><ns0:cell>±0,43</ns0:cell><ns0:cell>11,20</ns0:cell><ns0:cell>±0,23</ns0:cell><ns0:cell>11,20</ns0:cell><ns0:cell>±0,23</ns0:cell></ns0:row><ns0:row><ns0:cell>35</ns0:cell><ns0:cell>10,21</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>20,93</ns0:cell><ns0:cell>±0,32</ns0:cell><ns0:cell>20,90</ns0:cell><ns0:cell>±0,48</ns0:cell><ns0:cell>10,46</ns0:cell><ns0:cell>±0,16</ns0:cell><ns0:cell>10,46</ns0:cell><ns0:cell>±0,16</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Subsc. Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>16,10</ns0:cell><ns0:cell cols='7'>±0,81 185,80 ±1,38 183,90 ±1,97 174,10 ±0,85</ns0:cell><ns0:cell>51,90</ns0:cell><ns0:cell>±0,81</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>14,10</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Running Time for Different Number of Nodes in Seconds (Mean Values with Confidence α = 0.05)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>CD-ODA-II</ns0:cell><ns0:cell>2D-ODA</ns0:cell><ns0:cell>CD-MAX</ns0:cell><ns0:cell>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Avg.</ns0:cell><ns0:cell cols='3'>Conf. Avg. Conf. Avg. Conf. Avg. Conf.</ns0:cell><ns0:cell>Avg.</ns0:cell><ns0:cell>Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>0,12</ns0:cell><ns0:cell cols='3'>±0,01 0,29 ±0,03 0,11 ±0,01 0,29 ±0,02</ns0:cell><ns0:cell>9,72</ns0:cell><ns0:cell>±0,38</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>0,38</ns0:cell><ns0:cell cols='3'>±0,02 0,61 ±0,04 0,26 ±0,02 0,64 ±0,03</ns0:cell><ns0:cell>31,13</ns0:cell><ns0:cell>±0,95</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>0,88</ns0:cell><ns0:cell cols='3'>±0,06 1,06 ±0,08 0,42 ±0,02 1,12 ±0,05</ns0:cell><ns0:cell>68,12</ns0:cell><ns0:cell>±2,08</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>2,09</ns0:cell><ns0:cell cols='3'>±0,83 1,69 ±0,27 0,63 ±0,04 1,62 ±0,05</ns0:cell><ns0:cell>122,49</ns0:cell><ns0:cell>±1,55</ns0:cell></ns0:row><ns0:row><ns0:cell>750</ns0:cell><ns0:cell>5,52</ns0:cell><ns0:cell cols='3'>±0,17 3,44 ±0,08 1,47 ±0,07 3,50 ±0,10</ns0:cell><ns0:cell>387,86</ns0:cell><ns0:cell>±4,48</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell cols='4'>12,08 ±0,04 5,67 ±0,12 2,56 ±0,05 5,84 ±0,15</ns0:cell><ns0:cell>922,76</ns0:cell><ns0:cell>±25,67</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>2500 211,91 ±5,28 37,95 ±1,42 19,69 ±1,61 38,90 ±2,26 4856,00 ±150,14</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>5000 262,86 ±2,23 46,10 ±1,25 23,56 ±0,95 50,20 ±2,45 15049,50 ±402,14</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Respective Editor,
Thank you for giving us the opportunity to submit a revised draft of the manuscript “Low-diameter topic-based pub/sub overlay network construction with minimum–maximum node” for publication in the PeerJ Computer Science Journal. We appreciate the time and effort that you and the reviewers dedicated to providing feedback on our manuscript and are grateful for the insightful comments on and valuable improvements to our paper.
We have incorporated most of the suggestions made by you and the reviewers. Those changes are highlighted within the manuscript. Changes in the initial version of the manuscript are highlighted by using the latexdiff tool. We also added a clean copy of the final version of the manuscript. Please see below, we also provide a point-by-point response explaining how we have addressed each of your or reviewers’ comments in blue.
Thank you for your consideration in advance.
Sincerely,
Editors’ comments to the Authors:
It will be necessary to increase the node-count in the experiments to numbers that compare to related studies, the statistical significance has to be reported and we need high confidence in the results.
Author Response: The authors would like to thank the editor and reviewers for pointing out this critical issue.
Following your suggestion, we have increased the node count in our experiments to cover the values tested in other studies. As we had missed indicating the number of runs for each value, we now report it together with the calculated significance values for the results. The results indicate that we have high confidence in the results across all values. Most of the results fall within confidence intervals of less than 1%, while the experiments with high node counts may reach up to 2% because of the relatively long running times.
The algorithm should be better described and a complexity analysis could be useful.
Author Response: The authors agree with the editor and reviewers and would like to thank about constructive comments.
Hence the editor and some of the reviewers pointed some improvements about the algorithms, we worked on them and clarified them to be more understandable.
As you pointed out the complexity analysis, we value your feedback and included extensive complexity analysis in both algorithms to cover the time and space complexity.
Please also address all issues that have been mentioned with regard to the reporting.
Author Response: We have modified the paper accordingly to address all the feedback from the editor and the reviewers. The responses to the reviewers are listed below.
Reviewers’ comments to the Authors:
Reviewer 1
Basic reporting
Overall, your paper is written in a clear and understandable fashion.
However, there are multiple instances of broken math notation (e.g., lines 157, 164, 260, 261, and table 3).
Author Response: The authors would like to thank the reviewer for these syntactic corrections. Lines 157, 164, 260, 261, and table 3 are fixed.
The introduction could benefit from example usecases of publish/subscribe networks.
In lines 45-54, you should add a concrete explanation on how messages actually travel from publisher to subscribers using the sub graph.
Author Response: The authors would like to thank the reviewer for the comment.
Concrete explanations are added to the manuscript as below:
'Constructed sub-graph acts as a selective message delivery among different publishers and subscribers belonging in the same interest group. In a sub-graph, messages are routed to the node destination (subscriber) with the same topic interest. Generally, a sub-group of pub/sub system can be modeled as a trimerous $<\Pi, \beta, \Sigma >$ collections of functions.
The sets involved are determined based on their functionality: $\Pi = p_0, ..., p_{i-1}$ is a set of i processes in the system that act as publishers providing information to those who need it. $\Sigma = C_0, ..., C_{j-1}$ is a set of j processes referred to as subscribers, known as the consumers of the information provided by publishers. The set of publishers and the set of subscribers can have a
non-zero intersection, which means that a process can also operate as both publisher and subscriber. Decoupling is an ideal feature of communication systems. Systems with a decoupling mechanism do not need to consider issues such as addressing and synchronization. $\beta = B_0, ..., B_{k-1}$ presents a logical centralized service that enables publishers and
subscribers to connect. In other words, any publisher or subscriber in a sub group can exclusively send or receive specific information through $\beta$ . Not only do $\beta$ provide communication between publishers and subscribers, they also keep publishers and subscribers in a decoupled mode during communication process.'
It is not clear to me why these particular systems form lines 79 to 99 where chosen as background on publish/subscribe systems.
Author Response: As recommended by the reviewer, examples of use cases of pub-sub systems in IoT, healthcare, etc. are removed in order to simplify the background and targeting the actual use cases.
Lines 113 to 120 use notation that is only introduced later.
Author Response: The authors are grateful for the comment, which helps make the article more readable. Lines 113 to 120 were moved under the Proposed Solution section so that the GM algorithm is described in detail after the preliminaries (notation).
Lines 11
Your pseudocode and algorithm descriptions are clear and understandable.
Author Response: The authors are grateful about the comments.
Your figures would benefit from more information, both as part of the figure and in the caption.
As is, the steps explained in figures 2-7 are only understandable with parallel examination of the corresponding sections of your text.
To improve this, add a short description of the executed step to each caption and augment the figures themselves with information such as the current list of topics T.
Author Response: In order to improve the understandability of the figures, additional explanations were added, and the figures were arranged accordingly.
Further, Figure 1 contains an error:
Node u's topic list is missing topic 60, which u has subscribed to according to line 194.
Author Response: The authors would like to thank the reviewer about this minor fix. Figure 1 is replaced with a new corrected figure.
While your examples are helpful in understanding how your algorithms work, the wording of the example's titles makes them sound like contradicting statements.
Author Response: The authors agree on the comment. In order to improve the understandability of the examples, the order and the text of the titles were improved as below. Based on Reviewer 3 comments, Example 3 was removed since it is not required to show the advantage of the proposed algorithms.
Changed Titles:
Example 1: The Maximum Node Degree of the CD-MAX Algorithm is Lower
Evaluation of CD-MAX with Examples:
Example 1:
Example 4: CD-MAX Improved the Maximum Node Degree of the Overlay
Example 2:
Evaluation of CD-MAXref with Examples:
Example 2: The Maximum Node Degree of the Overlay is Improved by CDMAXref
Example 3:
Example 5: CD-MAX Refinement Improves the Maximum Node Degree of the Overlay
Example 4:
Example 3: Maximum Node Degree of all the Algorithms are the Same
As required, your paper is self-contained:
You state your research question, present your proposed algorithms and evaluate them to show that they meet their expectations.
Author Response: The authors would like to thank the reviewer about the comments.
The preliminary section mostly defines the required terms in an understandable way.
It might not hurt to state that |x| denotes the number of elements in x.
Further, there is no definition of the diameter of an overlay network, despite it being a central term of your paper.
Author Response: The authors added an explanation about the definition of the diameter as:
“The diameter of a graph is the length of the shortest path between the two nodes which are farthest from each other.”
Experimental design
Lines 55 to 71 clearly define the relevance of your research question and how your proposed algorithms will improve the current state of the art.
Author Response: The authors would like to thank the reviewer about the comments.
Related work (Chockler 2007a, Section 8), seems to use a much greater number of nodes (up to 10,000 compared to your 100) in their evaluation.
Could you please explain why you you have not evaluated your results using a greater number of nodes.
Author Response: The authors would like to thank the reviewer about the valuable comment. In order to provide additional results, the experiments were re-run up to 10.000 nodes.
You might be able to improve your evaluation by proving bounds of your algorithms, similar to lemmas 6.5 and 6.6. in Chockler 2007a.
Author Response: The authors would like to thank the reviewer about the valuable comment. The authors provided the following Lemmas in order to prove bounds of the algorithms.
“In order to proceed, a proof that the CD-MAX algorithm terminates in $O(|V|^2 *|T|^2)$ is provided below.
Lemma 1: The running time of CD-MAX algorithm is $O(|V|^2 *|T|^2)$.
Proof: Finding a node with minimum node degree takes $O(|V|^2*|T|)$. When node u with minimum node degree has been selected, all the topics to which node u is subscribed is removed. At each iteration, one topic is removed from the original topic set. Thus algorithm takes $O(|V|^2*|T| )*|T|= O(|V|^2 *|T|^2)$ time steps to terminate.
Lemma 2: CD-MAX algorithm guarantees the diameter of at most 2 for each topic.
Proof: Since CD-MAX diameter provides a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in.
Lemma 3: The space complexity of CD-MAX algorithm is $O(|T|*|V|)$.
Proof: CD-MAX uses a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. Hence, it gives us a space use of $2*|T * V|$ which gives a space complexity of $O(|T|*|V|)$.”
Validity of the findings
As noted previously, your pseudocode and description of the algorithm is clear and understandable.
However, your included source code lacks any documentation.
The output of the program does not indicate which of your experiments is run.
Author Response: The authors would like to thank the reviewer about the comment.
The authors slightly modified the program to generate more explanatory outputs and re-organized the code to be more parametric and easily understandable. In addition, the authors added explanatory comments on critical points and descriptions on how to run the experiments.
Line 350 could have clearer formulation, since Tables 11 and 12 do not reveal on first glance that CD-MAX is faster.
You should provide more context at which point it becomes infeasible to use CD-MAX-Ref over CD-MAX, seeing that is has a much greater running time.
Author Response: The authors would like to thank the reviewer about the comment.
The authors agree with the reviewer on the comment. On the other hand, as we indicated in the conclusion section of the manuscript, the CD-MAX-Ref algorithm best suits environments with slow and gradual changes (i.e., having a low churn rate).
Additionally, the authors added more precise explanations in the relevant part as below:
'CD-MAX algorithm is similar to CD-ODA-II and 2D-ODA on average node degree, while outperforming all ODA algorithms excluding GM algorithm on maximum node degree (see Tables 5-6 and Figures 11-12). In addition, CD-MAX builds the network in similar time with ODA algorithms with better maximum node degrees, whereas GM fails to scale out with higher node counts (see Table 11 and Figure 17). As the number of topics increases, the average node degree for CD-MAX and others are similar; however, the CD-MAX and CD-MAX-Ref outperforms on maximum node degree. (See Tables 7-8 and Figures 13-14).In terms of different subscription sizes, the CD-MAX algorithm outperforms other ODA algorithms excluding GM on average node degree (see Table 9).'
Overall, I can see the links between your conclusions and the results of your evaluation.
Author Response: The authors would like to thank the reviewer about the comments.
Comments for the author
It's a bit of a stretch to call a paper form 2007 'recent' (line 54).
Author Response: The authors would like to thank the reviewer for these syntactic corrections. The word 'recent' was wrong and has been replaced by 'relevant'.
Reviewer: Stefanie Roos
Basic reporting
The paper is well written and does a good job of motivating the problem. The authors furthermore present a clear comparison to the related work and put their contributions into context. Raw data is provided, figures are appropriate and easy to understand.
The paper is mostly well structured, though I propose to spend less time on examples and put more emphasis on proofing that (steps of) the algorithms achieve the desired properties.
The main drawback of the paper is the lack of clarity in both the algorithm description and the proof. A more minor issues are some aspects in the notation.
Author Response: The authors would like to thank the reviewer about the comments. Below you may find the changes related to the detailed comments.
=====Proof for Algorithm 2======
There are two issues in the proof of Algorithm 2:
i) The algorithm eliminates all edges of a node associated with a topic, however, what happens if the edge is associated with more than one topic? it seems to me like there might be removals that lead to nodes not receiving all topic information. If I misunderstand, you have to explain in the proof why that is not the case.
Author Response: The authors would like to thank the reviewer for the clarification request, and the authors agree on the need for clarification. The authors added the following explanation: “The CD-MAX-Ref algorithm is not affected by how many topics are associated with a removed edge. In any case, CD-MAX-Ref finds an alternative node for each topic that is related to the removed edge.”
ii) The pseudocode searches for a node q with d_q + d_t <d_u − d_t whereas the text (line 179, page 5) states that the algorithm finds a node with d_q + d_t <d_u...Please either clarify or fix this seeming inconsistency
Author Response: The authors would like to thank the reviewer about the correction of the mis-written equation in the text. The text is corrected in the paragraph as follows:
d_q + d_t <d_u - d_t is correct
=====Proof for Algorithm 1======
There is no proof for Algorithm 1 but I think it is non-obvious that the algorithm does indeed include edges for all topics and users and does not accidentally remove some in Line 18.
Author Response: The authors would like to thank the reviewer about the clarification request on the algorithm. An additional explanation is added to the article at the end of the paragraph describing Algorithm 1 as follows;
“In the last step of Algorithm 1, CD-MAX algorithm finds the node with the smallest $n_u$ that is interested in the subsets of topics t. Selected node u will be connected to its neighbors and the topics subscribed by node u will be removed from the original topic set. Unless there is a topic which is subscribed by a single node, pub/sub overlay network is connected. “
“Since CD-MAX diameter provides a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. As a justification of the algorithm; at each iteration, at least one node is connected to its neighbors (one edge is required), meaning that finding a node with minimum node degree is achieved. “
=====Pseudocode=====
The pseudocode is unclear on multiple occasions, all of which can probably be clarified. For Algorithm 1:
-Line 4-7: I assume you'd calculated the maximum after calculating each individual value. Maybe you mean that you compare the old maximum with the value for node u but the code seems like you compare all the values again.
Author Response: The authors would like to thank the reviewer about the recommendation. In order to clarify the issue, the authors first initialized Maxn and set the value to the number of nodes. Then, the content of the corresponding for loop was modified as follows:
\ForEach {$ u \in V $}
\State $ calculate\ interest\ group\ (n_u) $
\If{\textcolor{red}{$ n_u < Max_n $ } }
\State \textcolor{red}{$Max_n \Leftarrow n_u$}
\EndIf
\EndFor
-Line 8: The term 'max_n is not a single' is unclear, I assume you mean |Max_n|>1, however, it isn't even clear that Max_n is a set up to that point, so it's a bit sloppy
Author Response: The authors would like to thank the reviewer about the recommendation. As pointed out by the reviewer, that line means that “if there is more than two node with minimum node degree”. The authors decided to clarify by changing the corresponding line as follows:
“ Number of Max_n nodes > 1”
-Line 9: What happens if there is more than one element in Max_n with maximal degree?
Author Response: The authors would like to thank the reviewer for the question. The implemented algorithm works as follows: if there are two nodes with the same node degree and density, the newer node is selected; then, in the next iteration, the node that was passed over is chosen. In order to conform to this case in the pseudocode, the authors added an IF statement that checks the number of nodes in u. If there is more than one node in u, the newest one is selected.
if Length of u > 1 then
    u ⇐ newest node in u
end if
For Algorithm 2:
- My interpretation of all the —s is minus but that does not make sense as the algorithm would never enter the while loop (because -\infty < -FindMaxNodeDegree()), so maybe it means something else or they should just not be there.
Author Response: The authors would like to thank the reviewer about the warning of our mis-interpretation in the algorithm. In order to simplify the pseudocode, the authors removed the corresponding variables and modified the ending criteria as “while Max_n improves”. Below is the simplified Algorithm 2:
\While {$Max_n\ Improves$ }
\ForEach {$ topic\ t \in T $}
\State $ d_t \Leftarrow degree\ of\ topic\ t $
\State $ u \Leftarrow Center\ node\ for\ topic\ t\ $
\ForEach{\textcolor{red}{ $ q \in N $}}
\If{\textcolor{red}{$d_q\ +\ d_t\ < d_u\ -\ d_t\ and\ Int(q,t)=1$}}
\State $ Remove\ all\ edges\ of\ topic\ t\ connected\ to\ u $
\State $ Connect\ q\ to\ every\ node\ n\ if\ Int(n,t)=1$
\State $Update\ node\ degrees\ $
\State $ Update \ Max_n $
\EndIf
\EndFor
\EndFor
\EndWhile
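For illustration only (this snippet is not part of the revised manuscript), the moving condition checked inside the loop above can be read as the following Python-style test, where d_q, d_u, and d_t are assumed to be the tracked degrees of candidate q, the current center u, and topic t:
def should_move_center(d_q, d_t, d_u):
    # Move the center of topic t from u to q only if q's degree plus the topic
    # degree stays below u's degree minus the topic degree.
    return d_q + d_t < d_u - d_t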
- what happens if there are several such nodes q (Line 10)
Author Response: The authors would like to thank the reviewer about the question. In order to clarify the implemented algorithm in pseudocode, the authors modified the pseudocode as follows:
for each q ∈ N do
    if d_q + d_t < d_u − d_t and Int(q,t) = 1 then
        ….
- I think you never really defined centre node
Author Response: The authors would like to thank the reviewer emphasizing on the undefined variable center node. A definition of the centre node is inserted to the related paragraph as follows:
“The center node is a node interested in topic $t \in T$ which is selected to connect all nodes interested in topic $t \in T$.”
======Notation=====
There are some small unclarities in the mathematical notation that make reading harder and require some guessing as to what is meant:
- p.3: In Line 114-115, you define Int(x,m) as a binary function and then use it as a set in Line 119
Author Response: The authors would like to thank the reviewer about the comment. The definition of Int(x,m) is correctly defined as it indicates whether or not node x is interested in topic m. The use of it as a set is an erroneous explanation, which is removed from the paragraph.
- p.3: Line 113-114: “There are m ∈ M|n : Int(n,m) = 1| individual” This sounds like m ∈ M|n : Int(n,m) = 1| is a number but it’s not (do you mean the sum over all m?)
Author Response: The authors would like to thank the reviewer about the definition. The authors mistakenly deleted the SUM over all m, it was corrected.
- p.4: Eq. 2, the function e(u,v) is not defined, it could be e=(u,v)?
Author Response: The authors would like to thank the reviewer about the comment. The authors added an explanation for e(u,v) after the equation.
“where e(u,v) denotes a function indicating if there is an edge between u and v.”
- p.4: Eq. 4, can you give an intuition of what the equation conveys? You also state it’s normalized but it seems like the denominator can easily be bigger than the nominator
Author Response: The authors would like to thank the reviewer about the comment. The equation is correct. In order to clarify, the misleading description is removed.
- p.5, Line 164, du -> d_u
Author Response: The authors would like to thank the reviewer about the syntactic correction.
- p. 11, Line 260: there seem to be some \in missing
Author Response: The authors would like to thank the reviewer about the syntactic correction.
Experimental design
The experimental design considers all relevant parameters and explores meaningful scenarios. A meaningful comparison to other algorithms is conducted. It is mainly well explained but for minor two aspects:
i) the number of runs is not given
Author Response: The authors added an explanation as follows at the Experimental results section:
“In order to increase the precision of the results, all experiments were run 10 times and the average results for each experiment are reported on the tables.”
ii) the part on the probability pi is not clear: first it is set to 1, then it is said that it follows a zipf distribution…please clarify
Author Response: The authors would like to thank the reviewer about the comment. The authors corrected the mistake by replacing p_i = 1 with $\sum_{i} p_i = 1 $
Validity of the findings
The conclusions presented by the authors are well justified and provide an in-depth comparison of multiple algorithms. Overall, the validation is one of the strengths of the paper. The only aspect that needs to be improved is that the authors do not present statistical tests, standard deviations, or confidence intervals to show that the results are indeed statistical significant. Such an analysis has to be added to the revised version.
Author Response: The authors would like to thank the reviewer for the comment. The authors performed 10 runs for each experiment and computed confidence intervals for α = 0.05. The confidence intervals were added to all tables.
Comments for the author
I liked the idea of the algorithms but I was wondering if there are not other criteria that play a similar important role as diameter and degree, such as i) resilience to node failures, ii) load balancing. Your algorithm might result in edges that distribute content for many topics. The limited bandwidth of these edges might delay dissemination.
Author Response: The authors would like to thank the reviewer about these valuable comments. The resilience to node failures and load balancing are not in the scope of this study; however, the authors added these valuable comments in the future work section.
Furthermore, in the relevant studies covering overlay design algorithms, diameter and degree are used as the main metric to analyze the overlay. Our algorithm tries to decrease maximum node degree of a pub/sub system. Hence, for a set of topics more nodes are selected to be a center of nodes for those topics. Unlike the star topologies where a single node takes the responsibility of connecting node for many topics, in CD-MAX algorithms this responsibility have been distributed among different nodes that makes our pub/sub system more scalable. Additionally, in case of node failure, CD-MAX can find alternative nodes for those topics subscribed by the failure node.
Reviewer 3
Basic reporting
The english is ok, literature references are ok as well. The article structure is mainly ok, but the authors also explain in length five different examples on topologies generated by their algorithm, which is too much. As a result the paper looks a bit unbalanced.
The experimental results are displayed in tables only. Graphs would probably be simpler to read.
Author Response: The authors would like to thank to the reviewer about the valuable comment.
All graphs related to the corresponding tables are additionally included into the article for better understanding.
Experimental design
See general comments.
Validity of the findings
See general comments.
Comments for the author
The paper proposes a novel centralized algorithm to construct low-diameter topologies intended for publish-subscribe content distribution. This is definitely an interesting topic.
However, I am not really confident about this paper. First of all, the proposed algorithm is not presented in sufficient detail. The authors put the actual algorithm, but the algorithm itself is not explained in the text. As a result, the algorithm as main contribution is introduced on a bit more than one page. Instead the authors give in total 5 examples on topologies produced by their algorithm when some nodes and their subscriptions are given as input. That is for sure too much. When the algorithm is explained properly, one or two examples should actually be enough and the authors would also save a lot of space.
Author Response: The authors would like to thank the reviewer for the valuable comments.
Examples 1, 2, 4, and 5 are grouped and explained more effectively with the new organizational enhancements. Example 3, which was relatively unrelated to the new example structure, has been removed from the manuscript. The algorithms are better defined and clarified.
Furthermore, the paper could benefit from a bit more theoretical foundation. A complexity analysis would be interesting here. Moreover, the whole problem could be formalized as optimization problem, which probably someone in related work already did. However, this optimization problem could also serve as a benchmark on how close the proposed algorithm actually can get to the optimum (for small node numbers at least).
Author Response: The authors would like to thank the reviewer for the valuable comments.
In order to clarify the algorithms, some modifications to the pseudocode were made and the descriptions were enhanced. The authors additionally agree with the comment that a complexity analysis is a suitable method to describe the algorithm better. The following paragraphs were added to the paper in order to clarify that:
“The CD-MAX algorithm terminates in $O(|V|^2 *|T|^2)$ and guarantees a diameter of at most 2 for each topic. Finding a node with minimum node degree takes $O(|V|^2*|T|)$. When node u with minimum node degree has been selected, all the topics to which node u is subscribed are removed. At each iteration, one topic is removed from the original topic set. Thus the algorithm takes $O(|V|^2*|T|)*|T| = O(|V|^2*|T|^2)$ time steps to terminate.
Since CD-MAX provides a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. As a justification of the algorithm: at each iteration, at least one node is connected to its neighbors (one edge is required), meaning that finding a node with minimum node degree is achieved.”
On the positive side, it has to be mentioned that the authors compare their approach to several approaches from related work. Nevertheless, none of these approaches seems to be contained in the related work section of the paper. I would recommend to introduce and explain them shortly.
Author Response: The authors would like to thank the reviewer for the comment. The authors wrote short explanations about the related algorithms at the end of the section.
Regarding the experimental results I am not sure if the metrics used by the authors (average and maximum node degree) give the complete picture here. What about the number of edges? What about additional metrics that quantify the load on single nodes, e.g., centrality?
Author Response: The authors would like to thank the reviewer for the comments. By definition, average node degree = (number of edges * 2) / number of nodes. Thus, the experiments provide a clear picture of the number of edges. Additionally, the maximum node degree of the constructed overlay also indicates the maximum load on any single node in the network.
Minor comments:
---------------
* page 4: you can refer to lines in the algorithm from the text to explain what it actually does
Author Response: The authors would like to thank the reviewer for the comment. The authors provided more explanations and corrections for the algorithms.
* page 5: 'with a lower node degree du' -> du? is that notation or do you mean 'do'?
Author Response: The authors would like to thank the reviewer for the correction. The notation is fixed as d_u.
* page 10: 'all three existing CD algorithms (CD-ODA, CD-ODA, CD-ODA, and 2d-ODA)' -> that is three times the same algorithm and in total 4 instead of 3.
Author Response: The authors would like to thank the reviewer for the correction. The corresponding phrase in parentheses was replaced with “(CD-ODA, CD-ODA I, CD-ODA II, and 2D-ODA)”
* Page 10,¸ Figure 10: 'To illustrate, node u that is ...' -> where is node u in Figure 10?
Author Response: The authors would like to thank the reviewer for the comment. In order to correct the sentence, u is deleted from the sentence as follows: “To illustrate, the node that is interested…”
* Page 11: what does a probability pi of 1 mean? 100% probability??? probably not. Please explain.
Author Response: The authors would like to thank the reviewer for pointing out the mistake. The authors corrected the mistake by replacing p_i = 1 with $\sum_{i} p_i = 1$
* Page 11: 'Each node n N' -> do you mean 'n \in N' (n element of N)?, same for 't T'
Author Response: The authors would like to thank the reviewer for pointing out the notation error. It is corrected as n \in N and t \in T
* Page 11: All subsection headings in the result section start with the misleading term 'Fixed Average and Maximum Node Degree'. I understood that your approach generates low-diameter topologies and tries to minimize the maximum node degree, but how can the maximum degree then be static? In your results you again refer to results as CD-MAX and these values differ depending on the node size.
Author Response: The authors would like to thank the reviewer for the clarification request. In order to clarify the subsection, the section title was changed to: “Average and Maximum Node Degree Values for Varying Number Of Nodes”
* Page 11: What is the diameter that your algorithm tries to reach here? 2? Not mentioned.
Author Response: The authors would like to thank the reviewer for pointing out the need for clarification. The authors added the following information to clarify that the diameter the algorithm tries to reach is 2: “Since CD-MAX provides a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. As a justification of the algorithm: at each iteration, at least one node is connected to its neighbors (one edge is required), meaning that finding a node with minimum node degree is achieved.”
" | Here is a paper. Please give your review comments after reading it. |
100 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the construction of effective and scalable overlay networks, publish/subscribe (pub/sub) network designers prefer to keep the diameter and maximum node degree of the network low. However, existing algorithms are not capable of simultaneously decreasing the maximum node degree and the network diameter. To address this issue in an overlay network with various topics, we present herein a heuristic algorithm, called the constant-diameter minimum-maximum degree (CD-MAX) algorithm, which decreases the maximum node degree and maintains the diameter of the overlay network at two at most. The proposed algorithm, which builds on the greedy merge algorithm, selects the node with the minimum number of neighbors. The output of the CD-MAX algorithm is enhanced by applying a refinement stage through the CD-MAX-Ref algorithm, which further improves the maximum node degree. The numerical results of the algorithm simulation indicate that the CD-MAX and CD-MAX-Ref algorithms improve the maximum node degree by up to 64% and run up to four times faster than similar algorithms.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In publish/subscribe (pub/sub) systems, publishers forward different types of messages to specific subscribers in a decoupled mode. Publishers broadcast information through logical channels, while subscribers receive them based on their topic interests. Pub/sub systems are divided into two different categories, namely topic- and content-based categories.</ns0:p><ns0:p>In topic-based pub/sub systems, publishers broadcast their messages based on the topic of the message. Each topic exclusively pertains to a specific logical channel. Subsequently, as stated in <ns0:ref type='bibr' target='#b24'>Yang and Garcia-Molina (2003)</ns0:ref>, subscribers receive all messages associated with the topics to which they have subscribed. Consequently, as stated in <ns0:ref type='bibr' target='#b17'>Milo et al. (2007)</ns0:ref>, all messages about those topics will be sent to every user who has joined that particular group. Publishers take the responsibility of classifying the messages that subscribers receive.</ns0:p><ns0:p>In contrast, in content-based pub/sub systems, subscribers only receive messages whose attributes match their interests; hence, <ns0:ref type='bibr' target='#b6'>Carvalho et al. (2005)</ns0:ref> states that these attributes characterize the logical channels. In this category, the matching algorithm between the publishers and the subscribers is based on the attribute values, referred to as the content. In other words, the receivers decide which messages they would receive. In both cases, however, publish/subscribe systems show similarities with Information Centric Networks in terms of sharing/distributing information among users.</ns0:p></ns0:div>
<ns0:div><ns0:p>Pub/sub systems have a variety of use cases. As explained by ORiordan (2021), a simple use case of a pub/sub system may be a chat application where a participant can subscribe to any of the chat rooms, each of which has a designated pub/sub topic. When a user sends a message to a chat room, the message is published on the topic of that chat room. The subscribers of the topic/chat room receive the message. As stated by GoogleCloud (2021), pub/sub systems fit best when there is a need for durable message storage and real-time delivery of those messages with high availability at massive scale. These features are the foundation of cloud computing, where pub/sub systems are heavily used. Balancing load among network clusters, implementing asynchronous workflows, distributing event notifications, and streaming data from various processes or devices are examples of pub/sub use cases. Apache <ns0:ref type='bibr' target='#b14'>Kafka (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b2'>Azure-SB (2021)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>GoogleCloud (2021)</ns0:ref> and <ns0:ref type='bibr' target='#b1'>AWS-SNS (2021)</ns0:ref> are examples of popular pub/sub systems.</ns0:p><ns0:p>For each topic t ∈ T in a typical fully decentralized topic-based pub/sub system based on a peer-to-peer (P2P) connected overlay, a subgraph is derived using the nodes interested in t. Hence, the nodes interested in topic t do not need to rely on other nodes to send or receive their messages. Every node must maintain its connections (e.g., checking the accessibility of neighbors) and monitor data streaming through the connections; therefore, overlay networks with a low maximum number of links emitting from a node and low network diameters are desirable. If a proper correlation exists between node subscriptions, the connectivity of many topics subscribed by two such nodes will be satisfied by adding only one edge between them. Hence, the maximum number of links of a node and the total number of overlay connections will considerably diminish. The significance and the impact of the topic correlation of nodes in optimizing the overlay networks were highlighted in a relevant paper by <ns0:ref type='bibr' target='#b10'>Chockler et al. (2007a)</ns0:ref>. A constructed subgraph acts as a selective message delivery mechanism among different publishers and subscribers belonging to the same interest group. In a subgraph, messages are routed to the destination node (subscriber) with the same topic interest. Generally, a sub-group of a pub/sub system can be modelled as a trimerous <Π, β, Σ> collection of sets. The sets involved are determined based on their functionality: Π = {p_0, ..., p_{i-1}} is a set of i processes in the system that act as publishers providing information to those needing it. Σ = {C_0, ..., C_{j-1}} is a set of j processes referred to as subscribers, which are known as consumers of the information provided by publishers. The set of publishers and the set of subscribers can have a non-zero intersection, which means that a process can also operate as a publisher and a subscriber at the same time (decoupling). Pub/sub systems are decoupled; therefore, a process may change anything about a publisher as long as it does not change the way it produces messages. Hence, there is no need to change anything about the downstream subscribers. Similarly, the opposite process is true as well.</ns0:p>
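To make the topic-based interaction above concrete, the following minimal Python sketch illustrates publishers and subscribers exchanging messages through a broker keyed by topic, in the spirit of the chat-room use case; it is an illustration only, and all class, function, and topic names are hypothetical rather than taken from the cited systems.

from collections import defaultdict

class Broker:
    """Minimal topic-based broker: keeps publishers and subscribers decoupled."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # deliver the message only to the subscribers of this topic
        for callback in self.subscribers[topic]:
            callback(topic, message)

broker = Broker()
broker.subscribe("chat/room-1", lambda t, m: print(f"[{t}] {m}"))
broker.publish("chat/room-1", "hello")    # delivered to the room's subscribers
broker.publish("chat/room-2", "ignored")  # no subscribers for this topic

The publisher never addresses subscribers directly, which is exactly the decoupling property discussed in this section.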
<ns0:p>Systems with decoupling mechanisms do not need to consider issues such as addressing and synchronization. β = {B_0, ..., B_{k-1}} represents a logical centralized service that enables publishers and subscribers to connect. In other words, any publisher or subscriber in a sub-group may exclusively send or receive specific information through β. Not only does β provide communication between publishers and subscribers, it also keeps publishers and subscribers in a decoupled mode during the communication process. <ns0:ref type='bibr' target='#b3'>Baldoni et al. (2007)</ns0:ref> and <ns0:ref type='bibr' target='#b15'>Lau et al. (2009)</ns0:ref> state that reducing the maximum number of links of an overlay can play a vital role in various network fields, such as survivable and wireless network design. <ns0:ref type='bibr' target='#b10'>Chockler et al. (2007a)</ns0:ref> presented the concept of topic connectivity, in which an individual overlay network connects nodes with similar topics. They introduced the Greedy Merge (GM) algorithm to construct an overlay with the least possible number of connections. A number of other solutions for the overlay design were also recently introduced by <ns0:ref type='bibr' target='#b6'>Carvalho et al. (2005)</ns0:ref>. However, all of the existing methods <ns0:ref type='bibr' target='#b7'>(Chen et al. (2010a)</ns0:ref>, <ns0:ref type='bibr' target='#b9'>Chen et al. (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b8'>Chen et al. (2010b)</ns0:ref>) suffer from either a high diameter or a high maximum node degree. In addition, all constant-diameter algorithms connect the nodes in a star topology manner, resulting in the best possible diameter while giving rise to nodes with high node degrees. <ns0:ref type='bibr' target='#b11'>Chockler et al. (2007b)</ns0:ref> states that these nodes are responsible for managing a large number of connections to their neighbors, which results in a high traffic overhead.</ns0:p><ns0:p>In this study, we propose an algorithm for constructing a scalable topic-connected overlay (TCO), which has a low maximum node degree and a constant diameter of 2, to solve the above-mentioned problems. The proposed algorithms (i.e., CD-MAX and CD-MAX-Ref) outperform the existing algorithms in the literature in terms of constructing optimum overlays with a minimum node degree and a small diameter. In addition, the performance of the proposed CD-MAX algorithm in terms of the running time required for constructing overlays provides suitable conformance with scalability requirements.</ns0:p><ns0:p>The remainder of this paper is presented as follows: Section 2 provides a summary of the previous studies on pub/sub networks, including the concept of the GM algorithm and other algorithms proposed for building overlays with a minimum number of edges; Section 3 presents the details of the proposed CD-MAX and CD-MAX-Ref algorithms; Section 4 provides the comparative results of the CD-MAX and CD-MAX-Ref algorithms against the most recent methods from the literature; and Section 5 presents our conclusions and possible future directions.</ns0:p></ns0:div>
<ns0:div><ns0:head>OVERLAY DESIGN ALGORITHMS</ns0:head><ns0:p>An effective publication routing protocol is needed in designing an efficient pub/sub system, and it can play a vital role in system performance. Therefore, <ns0:ref type='bibr' target='#b19'>Onus and Richa (2011)</ns0:ref> stated that the quality of a constructed overlay can be assessed based on the complexity of the routing scheme applied. The complexity can be minimized if all the nodes interested in a topic t ∈ T can be organized into a dissemination tree. In this case, as <ns0:ref type='bibr' target='#b10'>Chockler et al. (2007a)</ns0:ref> and <ns0:ref type='bibr' target='#b11'>Chockler et al. (2007b)</ns0:ref> state, the topic dissemination trees should satisfy the following requirements:</ns0:p><ns0:p>• Each tree for topic m includes only the nodes interested in topic m</ns0:p><ns0:p>• The diameter of the topic trees should be low</ns0:p><ns0:p>The GM algorithm by <ns0:ref type='bibr' target='#b10'>Chockler et al. (2007a)</ns0:ref> addresses these two requirements among the low-diameter publish/subscribe overlay algorithms. Suppose that G is an overlay network; the essential prerequisite to satisfying requirement (1) is to ensure topic connectivity, where a sub-graph connects all the nodes interested in topic m. <ns0:ref type='bibr' target='#b10'>Chockler et al. (2007a)</ns0:ref> introduced the topic connectivity concept and the minimum topic-connected overlay problem, and provided an approximation solution, called the GM algorithm, for the problem with the minimum number of links. The GM algorithm begins with the overlay network G = (N, ∅); there are ∑_{m ∈ M} |{n : Int(n, m) = 1}| individual topic-connected components of G, where Int(x, m) indicates whether or not node x is interested in topic m, and the algorithm continues by connecting two nodes at each repetition until the resulting overlay comprises at most one topic-connected component for each m ∈ M. The CD-ODA-II algorithm by <ns0:ref type='bibr' target='#b19'>Onus and Richa (2011)</ns0:ref> initializes with G = (N, ∅) as the overlay network; a node u which has the maximum connection density is chosen in each iteration. Afterwards, edges are added between u and its neighbors. Thereafter, the interest topics of u are removed from the set of topics.</ns0:p><ns0:p>Unlike these, the 2D-ODA algorithm by <ns0:ref type='bibr' target='#b20'>Onus and Richa (2016)</ns0:ref> starts with G = (V, E) as the overlay network, and a topic from the topic set T is selected such that the topic is in the interest of node u and has the maximum node density. The node with the maximum topic density for a topic is chosen at each iteration and is then added to the network together with the star structure. Afterwards, the topic is removed from the set.</ns0:p></ns0:div>
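As a rough illustration of the greedy strategy described above, the following Python sketch (a simplification based only on the description in this section, not the authors' implementation; the dict-of-topic-sets input format is an assumption) repeatedly connects the pair of not-yet-adjacent nodes sharing the most topics until every topic's subscribers induce a connected subgraph.

import itertools
from collections import defaultdict, deque

def greedy_merge(interests):
    """interests: dict node -> set of topics. Returns the overlay edge set."""
    nodes = list(interests)
    topics = set().union(*interests.values())
    adj = defaultdict(set)

    def topic_connected(t):
        members = {n for n in nodes if t in interests[n]}
        if len(members) <= 1:
            return True
        start = next(iter(members))
        seen, queue = {start}, deque([start])
        while queue:
            for w in adj[queue.popleft()] & members:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return seen == members

    while not all(topic_connected(t) for t in topics):
        # connect the non-adjacent pair with the largest topic overlap
        u, v = max(
            ((a, b) for a, b in itertools.combinations(nodes, 2) if b not in adj[a]),
            key=lambda pair: len(interests[pair[0]] & interests[pair[1]]),
        )
        adj[u].add(v)
        adj[v].add(u)
    return {tuple(sorted((u, v))) for u in adj for v in adj[u]}

The actual GM algorithm of Chockler et al. (2007a) works on topic-connected components directly; this sketch only mirrors the high-level description given here.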
<ns0:div><ns0:head>Publish/Subscribe Challenges</ns0:head><ns0:p>The following three main challenges must be handled in building an effective pub/sub system: (1) expression of the interest to the topics by the subscribers, (2) organization of the notification service to deliver interests to topics, and (3) delivery of messages to the subscribers by the publishers. These states are strongly coupled, and their contribution can affect the system performance. For instance, as described in <ns0:ref type='bibr' target='#b22'>Triantafillou and Aekaterinidis (2004)</ns0:ref>, a rudimentary subscription algorithm may improve the functionality of multicasting, but it facilitates a poor expression ability for subscribers to announce the topics they are interested in. <ns0:ref type='bibr' target='#b12'>Eugster et al. (2003)</ns0:ref> states that the architecture of the pub/sub systems can generally be divided into client-server groups and P2P. In client-server architectures, the servers are the providers of information (publishers), while the clients are the subscribers. Intermediate nodes, called brokers, have been introduced to decouple clients and servers and achieve a better performance. Therefore, these architectures are referred to as broker-based architectures. Meanwhile, in P2P architectures, each node performs as either subscribers or publishers or both. In a P2P paradigm, all nodes can operate under various roles (e.g., subscriber, root, or internal node of a multicast tree) and play under a combination of these roles. The P2P architectures are also called decentralized architectures. A typical pub/sub system must hold two principle characteristics of P2P networks: i) scalability and ii) fault tolerance/reliability.</ns0:p><ns0:p>The following sub-sections briefly introduce both architectures.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preliminaries</ns0:head><ns0:p>An overlay is defined as an undirected graph G(V, E), where V is the set of nodes and E is the set of edges.</ns0:p><ns0:p>The number of nodes interested in at least one topic that node u is interested in is called the interest group of node u, which is computed as</ns0:p><ns0:formula xml:id='formula_0'>n_u = |{v ∈ V | ∃t ∈ T, Int(v,t) = Int(u,t) = 1}|</ns0:formula></ns0:div>
<ns0:div><ns0:p>T is the set of topics in which a subscriber can be interested. Int(u,t) = 1 indicates that subscriber u is interested in topic t, while Int(u,t) = 0 indicates otherwise. The degree of node u, denoted by d_u, is defined as the total number of edges incident to it and given as</ns0:p><ns0:formula xml:id='formula_1'>d_u = |{v ∈ V | (u, v) ∈ E}|</ns0:formula><ns0:p>The degree of topic t is defined as the number of subscribers interested in that topic:</ns0:p><ns0:formula xml:id='formula_2'>d_t = |{v ∈ V | Int(v,t) = 1}|</ns0:formula><ns0:p>The density of node u is given by</ns0:p><ns0:p>density_u = ( ∑_{t ∈ T} |{v ∈ V | Int(v,t) = Int(u,t) = 1}| ) / |{v ∈ V | ∃t ∈ T, Int(v,t) = Int(u,t) = 1}|</ns0:p><ns0:p>Additionally, the diameter of a graph is the length of the longest shortest path, i.e., the distance between the two nodes that are farthest from each other.</ns0:p></ns0:div>
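These definitions translate directly into code. The following Python helpers are a sketch only: the representation of Int(·,·) as a dict mapping each node to its set of topics is an assumption, and the literal set definitions above are followed.

def interest_group(u, interests):
    """n_u: nodes sharing at least one topic with node u (literal reading of the set above)."""
    return {v for v in interests if interests[v] & interests[u]}

def node_degree(u, edges):
    """d_u: number of edges incident to node u."""
    return sum(1 for e in edges if u in e)

def topic_degree(t, interests):
    """d_t: number of subscribers interested in topic t."""
    return sum(1 for v in interests if t in interests[v])

def density(u, interests):
    """density_u: summed shared-subscription counts divided by |interest group of u|."""
    group = interest_group(u, interests)
    if not group:
        return 0.0
    shared = sum(topic_degree(t, interests) for t in interests[u])
    return shared / len(group)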
<ns0:div><ns0:head>PROPOSED ALGORITHM</ns0:head><ns0:p>Most of the approaches used for designing scalable overlay networks fail to achieve an appropriate trade-off between the maximum node degree and the diameter of the overlay network. On the one hand, solutions for decreasing the number of connections exist, such as the GM algorithm, which connects two nodes at each repetition until the resulting overlay comprises at most one topic-connected component for each m ∈ M; the two nodes connected during each repetition are those with the greatest number of topics in common.</ns0:p><ns0:p>On the other hand, a number of solutions presented in the previous section provide overlays with a low diameter, but the maximum node degree of these overlays is considerably high <ns0:ref type='bibr' target='#b6'>Carvalho et al. (2005)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Onus and Richa (2011)</ns0:ref>. We propose herein an algorithm, called Low-diameter Topic-based Pub/Sub Overlay Network with Minimum-Maximum Node Degree (CD-MAX), to simultaneously address both issues. Our proposed method builds the overlay using a star topology per topic while avoiding a high node degree, and the CD-MAX-Ref refinement further improves the result of CD-MAX. The proposed CD-MAX algorithm (Algorithm 1) creates an initial overlay network by selecting the nodes with a minimum interest group at each iteration.</ns0:p><ns0:formula xml:id='formula_3'>21: for each t ∈ T do 22: if ∃v, Int(u,t) = Int(v,t) = 1 then 23: Connect nodes u and v through an edge 24: Remove all topics t from T where Int(u,t) = 1</ns0:formula></ns0:div>
<ns0:div><ns0:p>This initial overlay network is refined to further improve the maximum node degree by the CD-MAX-Ref algorithm (Algorithm 2). A topic-connected overlay is defined as follows: for each topic t, all nodes interested in that topic are connected. If a node is interested in a topic t that no other node in the network subscribes to, the overlay for that topic is trivially connected. In constructing a network, the proposed CD-MAX algorithm selects the node with the smallest interest group (n_u) and connects it to every node in its interest group. If two or more nodes have an equally small interest group, the algorithm selects the node with the highest connection density. Each selected node takes responsibility for the topics to which it has subscribed. These topics are removed from the list of topics, and the algorithm iterates until the list of topics becomes empty. Algorithm 1 presents this procedure. In the last step of Algorithm 1, the CD-MAX algorithm finds the node with the smallest n_u that is interested in the remaining topics. The selected node u is connected to its neighbors, and the topics subscribed by node u are removed from the original topic set. Before proceeding, a proof that the CD-MAX</ns0:p><ns0:formula xml:id='formula_4'>algorithm terminates in O(|V|^2 * |T|^2) is provided below. Lemma 1: The running time of the CD-MAX algorithm is O(|V|^2 * |T|^2).</ns0:formula><ns0:p>Proof: In Algorithm 1, between lines 5 and 24, the outer loop (the while loop on line 5) iterates T times. In addition, the inner loop (the for loop on line 6) iterates V times. In the worst case, the calculation of the interest group takes V steps. Thus, finding a node with minimum node degree takes</ns0:p><ns0:formula xml:id='formula_5'>O(|V|^2 * |T|).</ns0:formula><ns0:p>When node u with minimum node degree has been selected, all the topics to which node u is subscribed are removed. At each iteration, one topic is removed from the original topic set. Thus, the algorithm takes</ns0:p><ns0:formula xml:id='formula_6'>O(|V|^2 * |T|) * |T| = O(|V|^2 * |T|^2) time steps to terminate.</ns0:formula><ns0:p>Lemma 2: The CD-MAX algorithm guarantees a diameter of at most 2 for each topic.</ns0:p><ns0:p>Proof: Since CD-MAX provides a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. As a justification of the algorithm: at each iteration, at least one node is connected to its neighbors (one edge is required), meaning that finding a node with minimum node degree is achieved.</ns0:p><ns0:p>After the CD-MAX implementation, a refinement process, called CD-MAX-Ref, is applied to the resulting overlay network to further improve the maximum node degree. For each topic, CD-MAX-Ref checks whether a node with a lower node degree d_u exists. If CD-MAX-Ref finds such an alternative node, it becomes the new center of the topic overlay. The edges connecting the current center node of the topic with the subscribers of that topic are removed, and the alternative node is connected to all the subscribers of that topic. Consequently, the new node becomes the center of the topic(s) previously handled by the current node, and those topics may end up centered at more than one node. CD-MAX-Ref can also be used independently of CD-MAX with any overlay network.</ns0:p>
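Before turning to the formal listing of CD-MAX-Ref, a compact Python sketch of the CD-MAX construction loop described above is given for illustration. It follows the prose (smallest interest group first, ties broken by density, a star per topic) under the same dict-of-topic-sets assumption and reuses the density helper from the Preliminaries sketch; it is not the authors' reference implementation.

def cd_max(interests):
    """Algorithm 1 sketch: returns (edges, centers), where centers maps topic -> star centre."""
    remaining = set().union(*interests.values())
    edges, centers = set(), {}
    while remaining:
        def group(u):
            # interest group of u restricted to the still-uncovered topics
            live = interests[u] & remaining
            return {v for v in interests if v != u and interests[v] & live}

        candidates = [u for u in interests if interests[u] & remaining]
        # smallest interest group first; break ties by the highest density
        u = min(candidates, key=lambda n: (len(group(n)), -density(n, interests)))
        for v in group(u):
            edges.add(tuple(sorted((u, v))))   # u becomes the centre (star topology)
        for t in interests[u] & remaining:
            centers[t] = u                      # u takes responsibility for its topics
        remaining -= interests[u]               # those topics are removed from T
    return edges, centers

Each covered topic ends up with a designated centre adjacent to all of its subscribers, which is what bounds the per-topic diameter at two.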
<ns0:p>Algorithm 2 formally describes the CD-MAX-Ref algorithm.</ns0:p><ns0:formula xml:id='formula_7'>Algorithm 2 Constant Diameter Maximum Refinement Algorithm (CD-MAX-Ref) 1: T ⇐ Set of topics, N ⇐ Set of nodes 2: while Max_n improves do 3:</ns0:formula><ns0:p>for each topic t ∈ T do 4:</ns0:p><ns0:formula xml:id='formula_8'>d_t ⇐ degree of topic t, u ⇐ center node for topic t 5:</ns0:formula><ns0:p>for each q ∈ N do 6:</ns0:p><ns0:formula xml:id='formula_9'>if d_q + d_t < d_u − d_t and Int(q,t) = 1 then 7:</ns0:formula><ns0:p>Remove all edges of topic t connected to u 8:</ns0:p><ns0:p>Connect q to every node n if Int(n,t) = 1 9:</ns0:p><ns0:p>Update node degrees 10:</ns0:p>
<ns0:div><ns0:head>Update Max_n</ns0:head><ns0:p>The CD-MAX-Ref algorithm (Algorithm 2) takes over when CD-MAX terminates. It examines all topics and finds the center node for each topic. The center node is a node interested in topic t ∈ T which is selected to connect all nodes interested in topic t ∈ T. It then searches the overlay for an alternative node which will have a lower node degree if it becomes the center node for that topic.</ns0:p></ns0:div>
<ns0:div><ns0:p>However, v already has some edges reflected in d_v; hence, there is a probability of overlap between the added edges and the existing edges of v. In some cases, the number of added edges will be less than the number of removed edges. In the worst case, CD-MAX-Ref will add the same number of edges to v as were deleted from u. Even in this case, the node degree of v will be smaller (according to the assumption), thereby resulting in a lower maximum node degree. Considering that the node with the highest node degree is processed in the same manner, the algorithm will reduce its node degree and the maximum node degree of the network.</ns0:p><ns0:p>The complexity of a constructed pub/sub overlay network can be determined by considering the cost of broadcast. The total time required to send a chunk of information from a publisher to the specific group of nodes that subscribe to a certain type of information is a crucial factor that depends directly on the resulting diameter between publishers and subscribers. Meanwhile, due to bandwidth and memory constraints, it is also necessary to keep the maximum node degree of an overlay low. For example, a star topology would be the best option in terms of overlay diameter; on the other hand, in terms of memory, bandwidth and energy consumption, it becomes a big problem when the number of nodes in the network grows. Hence, providing a trade-off between the diameter and the maximum node degree of a pub/sub overlay is the ideal approach in pub/sub overlay network design.</ns0:p></ns0:div>
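Before the examples, the refinement loop of Algorithm 2 can be sketched in the same style. This is again an illustration under the assumptions above, consuming the (edges, centers) output of the CD-MAX sketch; the outer loop mirrors the "while Max_n improves" condition.

def overlay_edges(interests, centers):
    """Derive the overlay edge set from the per-topic star centres."""
    edges = set()
    for t, c in centers.items():
        for v in (n for n in interests if t in interests[n] and n != c):
            edges.add(tuple(sorted((c, v))))
    return edges

def cd_max_ref(interests, centers):
    """Algorithm 2 sketch: re-centre topics onto lighter nodes while Max_n improves."""
    def deg(n, edges):
        return sum(1 for e in edges if n in e)

    def max_degree(edges):
        return max((deg(n, edges) for n in interests), default=0)

    best = max_degree(overlay_edges(interests, centers))
    while True:
        edges = overlay_edges(interests, centers)
        for t, u in list(centers.items()):
            members = {v for v in interests if t in interests[v]}
            d_t = len(members)
            for q in members - {u}:
                if deg(q, edges) + d_t < deg(u, edges) - d_t:   # condition of line 6
                    centers[t] = q                               # q becomes the centre of t
                    edges = overlay_edges(interests, centers)
                    break
        current = max_degree(edges)
        if current >= best:   # stop when the maximum node degree no longer improves
            return centers, edges
        best = current

The sketch illustrates the kind of re-centring move shown in the example figures: an overloaded centre hands a topic over to a subscriber with spare degree.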
<ns0:div><ns0:head>Evaluation of CD-MAX with Examples:</ns0:head><ns0:p>The Example 1:</ns0:p><ns0:p>In this example, we assume that (n−1)/4 nodes are interested in each topic enumerated as {10, 20}, {20, 30}, {30, 40}, and {40, 50} (Figure <ns0:ref type='figure'>1</ns0:ref>). In addition, node u has subscribed to topics {10, 20, 30, 40, 50, 60}.</ns0:p><ns0:p>According to the CD-ODA algorithms, node u is the center of all topics, and it will be connected to all the other nodes. Figure <ns0:ref type='figure'>1</ns0:ref> shows that the CD-MAX algorithm constructs an overlay with a maximum node degree of (2 * (n − 1)/4). In this example, the CD-MAX algorithm improved the maximum degree of the overlay by 50%. In this case, the CD-MAX-Ref did not improve the node degree.</ns0:p></ns0:div>
<ns0:div><ns0:head>Example 2</ns0:head><ns0:p>In this example, it is assumed that there are 4 different collections of nodes interested in 4 different topic sets, as listed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. In order to construct the overlay network, all CD algorithms (including CD-MAX) need 6n-2 edges. The overlay network generated by the CD-MAX algorithm has a maximum node degree of 3n-1, whereas the previous constant-diameter algorithms have a maximum node degree of 4n-1.</ns0:p></ns0:div>
<ns0:div><ns0:p>The red and black arrows demonstrate the implementation of CD-MAX and of the other existing algorithms, respectively (Figure <ns0:ref type='figure'>2</ns0:ref>). This case is not valid for CD-MAX-Ref, which does not achieve a lower maximum node degree for this example.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation of CD-MAX-Ref with Examples:</ns0:head><ns0:p>Example 3</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> presents eight different nodes located throughout the network. The nodes with their respective topics deployed over the network and their degrees are shown. Node number 7 has the lowest node degree; hence, it is selected as the first node to be connected to its interest group. The node becomes the center of topics 1,6. Therefore, topics 1 and 6 are removed from the original topic list, and the topics list after step 1 becomes {0, 2, 3, 4, 5, 7, 8, 9}.</ns0:p><ns0:p>As previously explained, when more than two nodes have an equal lowest node degree, the node with the higher node density will be selected as a topic center. In the second step, nodes 1 to 6 have the lowest node degrees. However, node 1 has the highest node density (Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>); hence, this node is selected to be connected to its interest group (Figure <ns0:ref type='figure'>4</ns0:ref>). As a result, node 1 becomes the center of topics {0, 1, 3, 5, 7, 8, 9}. The remaining topic list will be {2, 4}.</ns0:p><ns0:p>In the following steps, as shown in Figs. <ns0:ref type='figure'>5 and 6</ns0:ref>, nodes 2 and 6 are selected to become the center of topics 4 and 2, respectively. Consequently, topics 4 and 2 are removed from the topic list.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> demonstrates the result of the CD-MAX implementation over Example 3. For this overlay, CD-MAX provides an overlay with a maximum node degree of 6. The CD-MAX-Ref algorithm is applied to the resulting overlay to further reduce the node degree. The node with the highest degree (i.e., node 1) is checked, and CD-MAX-Ref finds other nodes, which are interested in topics {0, 3, 5, 7, 8, 9}, and have lower node degrees. Therefore, all the edges connecting node 1 to its interest group are removed. Nodes 0 and 4 are then selected to be the center of topic sets {5, 7, 8} and {0, 3, 9}, respectively (Figure <ns0:ref type='figure'>7</ns0:ref>).</ns0:p><ns0:p>As visualized in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, since node 7 has the lowest node degree, it is chosen as the first node to be connected to its interest group and it becomes the center of topics 1,6. As visualized in Figure <ns0:ref type='figure'>4</ns0:ref>, Nodes 1 to 6 have the lowest node degrees. But, node 1 is the node which has the highest node density; hence, this node is selected to be connected to its interest group and it becomes the center of topics 0,1,3,5,7,8,9.</ns0:p><ns0:p>The next node to be considered is node number 7, which maintains the maximum node degree of the overlay at 5. As visualized in Figure <ns0:ref type='figure'>5</ns0:ref>, Node 2 is the best option as it has the lowest node degree and it becomes the center of topic 4. After that, in the next step, Node 6 is the best option as it has the lowest node degree and it becomes the center of topic 2 (see Figure <ns0:ref type='figure'>6</ns0:ref>). Therefore, CD-MAX-Ref should find alternative nodes for topics 1 and 6. All the edges joined to node number 7 are removed as node numbers 1 and 5 become the center of topics 1 and 6, respectively (Figure <ns0:ref type='figure'>8</ns0:ref>). Consequently, the maximum node degree of the overlay decreases by 2. In this example, there are (3n/2) + 1 nodes placed over the network. 
Each node subscribes to specific topics, which are listed in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. In order to construct the overlay network with all four existing CD algorithms (CD-ODA, CD-ODA I, CD-ODA II, and 2D-ODA), the node interested in topics {x_1, x_2, x_3, . . . , x_n} acts as the center of the overlay and is connected to all other nodes participating in the network.</ns0:p><ns0:p>However, CD-MAX functions in a different manner. For example, the node which subscribes to x_1 is connected to the nodes which are interested in x_{1,2} and {x_1, x_2, x_3, . . . , x_n}. This algorithm provides an overlay with a maximum node degree of n, attained by the node interested in topics {x_1, x_2, x_3, . . . , x_n}. After this step, CD-MAX-Ref re-constructs the overlay with a maximum node degree lower than that provided by the plain CD-MAX. To illustrate, the node that is interested in {x_1, x_2, x_3, . . . , x_n} has the maximum node degree. All edges which are connected to this node are removed (Grey Arrows) and CD-MAX-Ref then finds other nodes with lower node degree (Figure <ns0:ref type='figure' target='#fig_12'>9</ns0:ref>). These new nodes (N_(i,j)) take over the responsibility of the topics already subscribed by node u.</ns0:p></ns0:div>
<ns0:div><ns0:head>Maximum node degree per algorithm for Examples I-IV</ns0:head><ns0:p>CD-ODA: n-1, 4n-1, 8n-1, 3n/2</ns0:p><ns0:p>CD-ODA I: n-1, 4n-1, 8n-1, 3n/2</ns0:p><ns0:p>CD-ODA II: n-1, 4n-1, 7n-1, 3n/2</ns0:p><ns0:p>2D-ODA: n-1, 4n-1, 7n-1, 3n/2</ns0:p><ns0:p>CD-MAX & CD-MAX-Ref: (2*(n-1)/4)+1, 3n-1, 5n-1, n/2</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>This section presents a comparative evaluation of our proposed method using different overlay networks. The algorithm comparisons were conducted based on the average and maximum node degree over the resulting overlays. Both the number of topics and the number of nodes varied throughout the simulation. As noted earlier, each node has a specific subscription size and subscribes to 10 topics because of the memory restriction in the experiments. Note that only in the last simulation was this number increased to values between 15 and 35. Each node n ∈ N can be interested in each topic t ∈ T with a probability p_i, in which ∑_i p_i = 1. The topic distribution probabilities p_i follow a Zipf distribution with α = 0.5, as used in similar studies by <ns0:ref type='bibr' target='#b6'>Carvalho et al. (2005)</ns0:ref> and <ns0:ref type='bibr' target='#b16'>Liu et al. (2005)</ns0:ref>. During the experiments, we considered the impact of varying the number of nodes, topics, and topic popularity distribution on the average and maximum node degrees.</ns0:p><ns0:p>Throughout the experiments, we present the results of the CD-ODA II algorithm as the representative of the CD-ODA algorithm series because of its better performance. Tables 5-12 include confidence values. Since the values shown are averages over multiple executions with small differences between runs, confidence intervals are calculated and depicted in the aforementioned tables. The confidence interval of the results is calculated as shown below:</ns0:p></ns0:div>
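confidence(α, std_dev, n) = norm_inv(1 − α/2) · std_dev / √n, with std_dev the standard deviation over the runs and n the number of runs (10 here). A minimal Python sketch of this experimental setup follows; NumPy and SciPy are assumed to be available, and the subscription-drawing procedure is an illustrative guess rather than the authors' simulator.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def zipf_topic_probabilities(num_topics, alpha=0.5):
    """Zipf-like topic popularities p_i with sum_i p_i = 1."""
    ranks = np.arange(1, num_topics + 1)
    weights = ranks ** (-alpha)
    return weights / weights.sum()

def draw_subscriptions(num_nodes, num_topics, sub_size, alpha=0.5):
    """Each node subscribes to `sub_size` distinct topics drawn by popularity."""
    p = zipf_topic_probabilities(num_topics, alpha)
    return {n: set(rng.choice(num_topics, size=sub_size, replace=False, p=p))
            for n in range(num_nodes)}

def confidence_half_width(values, alpha=0.05):
    """norm_inv(1 - alpha/2) * std_dev / sqrt(n), as reported in Tables 5-12."""
    values = np.asarray(values, dtype=float)
    return norm.ppf(1 - alpha / 2) * values.std(ddof=1) / np.sqrt(len(values))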
<ns0:div><ns0:head>Average and Maximum Node Degree Values for Varying Number Of Nodes</ns0:head><ns0:p>In this experiment, the number of nodes varied between 200 to 5000, while the number of topics was kept constant (100). The subscription size was fixed at 10. Each node randomly subscribed to different topics.</ns0:p><ns0:p>The average node degrees (see Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> and Figure <ns0:ref type='figure'>10</ns0:ref>) provided by each algorithm slightly decreased as the number of nodes increased. This result indicates that because of the increased probability of having overlaid edges in sub-graphs, a smaller number of edges connected a larger number of nodes, and the average node degree of the overlay will decrease. However, the maximum node degree of the constant-diameter algorithms increased with the increasing number of nodes (see Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_15'>11</ns0:ref>).</ns0:p><ns0:p>This observation is also valid in the case of the 2D-ODA algorithm. Unlike the other constant-diameter algorithms, in which a small number of nodes covered most of the topics, to decrease the node degree of the overlay, the CD-MAX and CD-MAX-Ref algorithms chose nodes with a lower correlation to become the center of the topics. More edges would be needed to connect the nodes, thereby raising the average node degree.</ns0:p><ns0:p>Compared with the GM algorithm, CD-MAX and CD-MAX-Ref for 1000 nodes requires 2,65 and 3,66 times more edges in average, respectively (see Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>). Considering the maximum node degree with a growing number of nodes, more nodes should be connected to the overlay center nodes. Thus, the maximum node degree provided by every constant-diameter algorithm sharply increased (see Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:p>The maximum node degree of the GM algorithm decreases in relative terms (it does not increase while the others increase quickly) because more nodes with a higher correlation distribute the node degree across the overlay. Although the GM algorithm had low maximum and average node degrees, it had a higher diameter.</ns0:p></ns0:div>
<ns0:div><ns0:head>Average and Maximum Node Degree Values for Varying Number of Topics</ns0:head><ns0:p>In this experiment, the number of nodes and the subscription size were fixed at 100 and 10, respectively.</ns0:p><ns0:p>Meanwhile, the number of topics varied from 200 to 400. The overlays face two different conditions when the number of topics was increased. First, the correlation between the nodes decreased; thus, more edges were used to connect the nodes. Second, the number of nodes without any connection to a neighbor increased. The average node degree will increase if the first condition dominates. The entry in Table <ns0:ref type='table' target='#tab_7'>7</ns0:ref>, which corresponds to CD-MAX-Ref with a number of topics of 250, is an indication of the first condition.</ns0:p><ns0:p>In contrast, the average node degree will decrease if the second condition dominates. The second condition has a greater effect than the first condition (see Table <ns0:ref type='table' target='#tab_7'>7</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_8'>12</ns0:ref>). Hence, the overall average node degree of the overlay for every algorithm reduced when the quantity of topics increased.</ns0:p><ns0:p>The second condition affected the maximum degree of all the algorithms. Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_16'>13</ns0:ref> shows that the maximum node degree of all the algorithms decreased as the set of topics indicated more diversity. </ns0:p></ns0:div>
<ns0:div><ns0:head>Average and Maximum Node Degree Values for Varying Subscription Size</ns0:head><ns0:p>For the final experiment, the numbers of nodes and topics were kept at 200 and 100, respectively. However, the subscription size was varied between 15 and 35. As noted earlier, each node randomly subscribed to different topics using the interest function. Tables 9-10 and Figures 14-15 illustrate the effects of changing the subscription size on the selected algorithms. As the subscription size grows, the nodes can get connected with each other with a higher correlation rate. Subsequently, the rate of the average node degree decreases. Meanwhile, the contribution between the nodes rises when the subscription size grows.</ns0:p><ns0:p>Therefore, the GM algorithm can find many node pairs, which dramatically reduces the total number of topic-connected components. Hence, the maximum node degree will decrease as the subscription size increases. For all the algorithms with a star topology, the maximum node degree will increase as a single node may be selected as the center of many topics. The results of the final set of experiments showed that CD-MAX required slightly lesser number of edges to build the overlay network when compared with the other algorithms. From the node degree perspective, all constant-diameter algorithms, including the proposed algorithms, showed similar averages. The decrease in the average node degree in the CD-MAX family of algorithms was slightly higher than that in the other competitors, resulting in a higher scalability. In this experiment, CD-MAX-Ref algorithm tries to find a node with the minimum number of neighbors. In other words, a node with the least connection with others is always an ideal one to be selected. As it is assumed that this algorithm tries to keep the maximum node degree low. Hence, this trend makes the average node degree higher and decrease the maximum node degree of the overlay as the subscription size goes up (see Figure <ns0:ref type='figure'>14</ns0:ref>). However, as the subscription size increases, the overlay reaches to a threshold at which it is almost impossible to find a node with a lower number of neighbors.</ns0:p></ns0:div>
<ns0:div><ns0:p>As the subscription size increases, all nodes have a higher correlation with each other. Inevitably, the average node degree decreases sharply and the maximum node degree rises more than expected (see Figure <ns0:ref type='figure' target='#fig_17'>15</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:p>[Rows spilled from the maximum node degree table for varying subscription size (likely Table 10): 30 — ±0,97, 199,00 ±0,00, 199,00 ±0,00, 198,80 ±0,25, 198,80 ±0,25; 35 — 9,40 ±0,84, 199,00 ±0,00, 199,00 ±0,00, 199,00 ±0,00, 199,00 ±0,00]</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison of the Running Time Cost of the Algorithms</ns0:head><ns0:p>An optimized overlay network not only can forward packets with shorter latencies, but also improve the maintenance of connections and provide resiliency against link failures <ns0:ref type='bibr' target='#b4'>Besta and Hoefler (2014)</ns0:ref>.</ns0:p><ns0:p>This optimization is closely dependent on the total degree of nodes. Although designing a topology for optimum resource consumption is not possible, minimizing the total number of links and network diameter and optimizing the algorithm in terms of the time complexity can be considered as distinguishing metrics <ns0:ref type='bibr' target='#b23'>Voulgaris et al. (2005</ns0:ref><ns0:ref type='bibr' target='#b11'>) Chockler et al. (2007b)</ns0:ref>. Therefore, we included an analytic and experimental run-time analysis of the algorithms for comparison. Tables <ns0:ref type='table' target='#tab_13'>11 and 12</ns0:ref> <ns0:ref type='table' target='#tab_12'>11</ns0:ref> shows that the growth rate of the running time for the ODA algorithms are similar while the nodes were increasing from 200 to 5000. The growth rate for CD-MAX-Ref and GM are similar, while that for the CD-MAX algorithm is lower. These promising results of CD-MAX illustrate its suitability for the number of nodes that increases beyond a threshold. CD-MAX is the fastest algorithm for building and maintaining the requested topology. Meanwhile, Table <ns0:ref type='table' target='#tab_13'>12</ns0:ref> shows that the running time for CD-MAX and Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>CD-ODA decreased as the number of subscriptions increased. Considering this, CD-MAX is the fastest algorithm beyond a subscription size of 25. Table <ns0:ref type='table' target='#tab_13'>12</ns0:ref> indicates that CD-MAX is the fastest algorithm among all algorithms, including the GM algorithm. The relatively higher speed of CD-MAX makes it a more suitable option for dynamic environments, where arrivals and departures are more frequent.</ns0:p><ns0:p>The experiment was carried out under the Windows operating system using a computer with an i7-7700HQ processor. As noted in Tables <ns0:ref type='table' target='#tab_13'>11 and 12</ns0:ref>, the confidence intervals may fluctuate across the number of nodes and subscriptions. Due to operating-system processor load in some runs, the running times resulted in higher confidence intervals.</ns0:p><ns0:p>[Rows spilled from Table 12 (running times in seconds for subscription sizes 15-35): 15 — 0,16 ±0,00, 0,20 ±0,01, 0,15 ±0,00, 0,19 ±0,01, 9,32 ±0,41; 20 — 0,22 ±0,01, 0,15 ±0,01, 0,22 ±0,00, 0,15 ±0,01, 9,18 ±0,31; 25 — 0,28 ±0,01, 0,14 ±0,01, 0,31 ±0,01, 0,13 ±0,01, 9,68 ±2,45; 30 — 0,35 ±0,01, 0,13 ±0,01, 0,40 ±0,01, 0,12 ±0,01, 14,61 ±2,12; 35 — 0,43 ±0,01, 0,12 ±0,00, 0,52 ±0,01, 0,11 ±0,01, 15,39 ±1,77]</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>This study presented a novel algorithm (i.e., CD-MAX) that provides overlay networks with a minimum node degree and a low diameter of 2. The algorithm was implemented in a decentralized manner, and was fast and scalable. The proposed algorithm considerably decreased the maximum node degree, thereby resulting in an overlay network that was more scalable compared to the other algorithms studied. The minimization of the maximum degree plays a key role in a number of networks of very large domains, such as survivable and wireless networks.</ns0:p><ns0:p>The study results indicated that the proposed algorithm outperforms the ODA-based algorithms in terms of the decreased diameter and average node degree of the overlay networks and approaches the performance of the GM algorithm.</ns0:p><ns0:p>Our analytic and experimental results revealed that the running time of the proposed CD-MAX algorithm is similar to CD-ODA-II and 2D-ODA on average node degree, while outperforming all ODA algorithms excluding GM algorithm on maximum node degree (see . In addition, CD-MAX builds the network in similar time with ODA algorithms with better maximum node degrees, whereas GM fails to scale out with higher node counts (see Table <ns0:ref type='table' target='#tab_12'>11</ns0:ref> and Figure <ns0:ref type='figure'>16</ns0:ref>). As the number of topics increases, the average node degree for CD-MAX and others are similar; however, the CD-MAX and CD-MAX-Ref outperforms on maximum node degree. (See Tables 7-8 and Figures <ns0:ref type='figure' target='#fig_16'>12-13</ns0:ref>).In terms of different subscription sizes, the CD-MAX algorithm outperforms other ODA algorithms excluding GM on average node degree (see Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>).</ns0:p><ns0:p>By combining the results obtained from the running time experiments and the measurements of the maximum node degree, we can assert that the CD-MAX algorithm is more suitable for networks requiring </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:53839:2:0:NEW 4 Apr 2021) Manuscript to be reviewed Computer Science for building overlays with a minimum number of edges; Section 3 presents the details of the proposed CD-MAX and CD-MAX-Ref algorithms; Section 4 provides the comparative results of the CD-MAX and CD-MAX-Ref algorithms against the most recent methods from the literature; and Section 5 presents our drawn conclusion and the possible future directions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>introduced the topic connectivity concept and the minimum topicconnected overlay problem. They provided an approximation solution, called the GM algorithm, for the problem with the minimum number of links. The GM algorithm begins with the overlay network G = (N, / 0). There are m ∈ M|n : Int(n, m) = 1| individual topic-connected components of G for each topic m ∈ M. Int(x, m) indicates whether or not node x is interested in topic m. The algorithm continues by connecting two nodes at each repetition until the resulting overlay comprises maximally one topic-connected component for each m ∈ M. The CD-ODA-II algorithm by Onus and Richa (2011) initializes with G = (N, / 0) as the overlay network. A</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>solutions for decreasing the number of connections exist. Chockler et al. presented the problem of constructing overlay networks with the least possible links. They considered this issue as an NP-Complete problem and proposed the GM algorithm to solve it Chockler et al. (2007a). The GM algorithm begins with the overlay network G = (N, / 0). There are ∑ m ∈ M|n : Int(n, m) = 1| individual topic-connected components of G for each topic m ∈ M. Int(x, m) indicates whether or not node x is interested in topic m.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>u</ns0:head><ns0:label /><ns0:figDesc>⇐ w ∈ Max n list 21:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Lemma 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The space complexity of CD-MAX algorithm is O(|T | * |V |). Proof: CD-MAX uses a star topology for each topic, each node requires at most 2 edges to receive and send any topic it is interested in. Hence, it gives us a space use of 2 * |T * V | which gives a space complexity of O(|T | * |V |).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>which will have a lower node degree if it becomes the center node for that topic. If CD-MAX-Ref manages to locate such a node, the edges corresponding to that topic are removed from the center node of the topic, and the newly discovered nodes are connected to the subscribers of that topic. CD-MAX-Ref can decrease the maximum node degree obtained from the CD-MAX algorithm. To prove the improvement made by the CD-MAX-Ref algorithm, let us assume that u has a node degree of d u , and is the center for the k topics. If a node, such as v, is found satisfying d v + d t < d u − d t , the algorithm eliminates all the edges connecting u to the other nodes, except v, and adds edges from v to those nodes. If the m edges are eliminated in this stage, the same number of new edges should be added to connect the neighbors of u to</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>degree of the network. Additionally, the CD-MAX-Ref algorithm is not affected on how many topics are associated with a removed edge. In any case, CD-MAX-Ref finds an alternative node for each topic that is related to the removed edge. Before moving forward we prove that the CD-MAX-Ref algorithmterminates in O(|T | 2 * |N| 3 ).Lemma 4: The running time of CD-MAX-Ref algorithm is O(|T | 2 * |N| 3 ). Proof: CD-MAX-Ref takes over where CD-MAX terminates which runs in O(|T | 2 * |N| 2 ). It then examines all topics and finds the center nodes for each topic in O(|N|) in worst case. Thus algorithm takes O(|T | 2 * |N| 3 ) time steps to be terminated. Lemma 5: The space complexity of CD-MAX-Ref algorithm is O(|T | * |N|). Proof: Since CD-MAX-Ref builds over CD-MAX, it uses similar data structure with more iteration on the same space which does not extend the use of space. Hence, it gives us a space use of in the order of |T * N| giving a space complexity of O(|T | * |N|).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>examples 1 and 2 are presented to clarify the steps used by the proposed CD-MAX and CD-MAX-Ref algorithms. Following examples 3-5 additionally compare CD-MAX and CD-MAX-Ref algorithms over other constant diameter (CD) overlay design algorithms (CD-ODA, CD-ODA I, CD-ODA II, and 2D-ODA).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1 .Figure 2 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 1. Implementation of the CD-MAX Algorithm over Example 1</ns0:figDesc><ns0:graphic coords='8,156.19,63.78,170.07,120.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Implementation of the CD-MAX Algorithm over Example 3 Part 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Implementation of the CD-MAX Algorithm over Example 3 Part 3</ns0:figDesc><ns0:graphic coords='9,156.19,278.22,170.06,116.13' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Implementation of CD-MAX-Ref Algorithm over Example 3 Part 5</ns0:figDesc><ns0:graphic coords='9,156.19,511.09,170.08,106.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Implementation of CD-MAX Algorithm over Example 4 of the topics, which are already subscribed by node u. Note that nodes N (i, j) have priority over nodes N i and N j through the CD-MAX-Ref implementation. The red arrows imply the new edges that are added by the CD-MAX-Ref algorithm. In this example, CD-MAX-Ref plays a key role in decreasing the maximum node degree down to (n)/2. As a summary of all examples, in Table 4, maximum node degrees for overlay networks are listed for the existing CD algorithms in comparison with the CD-MAX algorithms.</ns0:figDesc><ns0:graphic coords='10,263.48,161.09,170.08,148.22' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>confidence(α, std_dev, n) = norm_inv(1 − α/2) · std_dev / √n, where: norm_inv = inverse of the standard normal cumulative distribution for the given probability; std_dev = standard deviation of the given values; n = number of values (in this study, the number of runs, i.e. 10); α = significance level, which is a probability between 0 and 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b10'>Chockler et al. (2007a)</ns0:ref> Onus and Richa (2011) Carvalho et al. (2005). While CD-MAX required fewer edges to build the overlay network compared to the other ODA algorithms, CD-MAX-Ref outperformed all competitors by at least four times. In addition, it became the only algorithm able to approach the performance of the GM algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 10. Average Node Degree for Different Number of Nodes</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 12. Average Node Degree for Different Number of Topics</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 14. Average Node Degree for Different Number of Subscriptions</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 16. Running Time for Different Number of Nodes in Seconds</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>high scalability because it simultaneously reduces the communication costs and the running time. In contrast, the CD-MAX-Ref algorithm best suits environments with slow and gradual changes (i.e., having a low churn rate) and those with a large number of topics and subscription sizes, which are characteristics of typical internet-based systems. As future work, studies may concentrate on the simulation of millions of nodes, topics, and subscriptions with a typical churn rate of internet users by employing big data and high-performance computing environments. Further research should consider achieving the best average and low maximum node degrees while optimizing the running time. The proposed algorithms can further be extended to include power consumption optimization <ns0:ref type='bibr' target='#b0'>Alsultan et al. (2016)</ns0:ref>, and location awareness to build clustered overlay structures to reduce delay, increase bandwidth, improve scalability <ns0:ref type='bibr' target='#b5'>Bozda and Oztoprak (2008)</ns0:ref>, improve resilience to node failures, and provide load balancing.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Max n list AND w has the largest density u .</ns0:figDesc><ns0:table><ns0:row><ns0:cell>9: 10:</ns0:cell><ns0:cell>Max n ⇐ n u Clean Max n list</ns0:cell></ns0:row><ns0:row><ns0:cell>11:</ns0:cell><ns0:cell>Append node u to Max n list</ns0:cell></ns0:row><ns0:row><ns0:cell>12:</ns0:cell><ns0:cell>if n u == Max n then</ns0:cell></ns0:row><ns0:row><ns0:cell>13: 14:</ns0:cell><ns0:cell>Max n ⇐ n u Append node u to Max n list</ns0:cell></ns0:row><ns0:row><ns0:cell>15:</ns0:cell><ns0:cell>if Length o f Max n list > 1 then</ns0:cell></ns0:row><ns0:row><ns0:cell>16:</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>17:</ns0:cell><ns0:cell>if Length o f u > 1 then</ns0:cell></ns0:row><ns0:row><ns0:cell>18: 19:</ns0:cell><ns0:cell>u ⇐ newest node in u</ns0:cell></ns0:row></ns0:table><ns0:note>Algorithm 1 Constant Diameter Maximum Algorithm (CD-MAX) 1: T ⇐ Set o f topics 2: V ⇐ Set o f nodes 3: Max n ⇐ Number o f Nodes − 1 ⊲ Max n : maximum number o f neighbors f or a node 4: Max n list ⇐ Empty List ⊲ Max n list : collection o f nodes with the same Max n value 5: while T is NOT empty do 6: for each u ∈ V do 7: Calculate interest group (n u ) 8: if n u < Max n then u ⇐ w ∈</ns0:note></ns0:figure>
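To make the selection step above easier to follow, here is a rough Python sketch of a single CD-MAX iteration as we read Algorithm 1; it is an illustrative approximation rather than the authors' implementation, and the density-based tie-breaking and the topic bookkeeping are deliberately simplified.

def cd_max_step(subscriptions, remaining_topics):
    # subscriptions: dict node -> set of subscribed topics, e.g. {0: {1, 2}, 1: {2, 3}}
    # remaining_topics: set of topics not yet covered by a star (assumed non-empty).
    best_nodes, best_size = [], None
    for u, topics_u in subscriptions.items():
        covered = topics_u & remaining_topics
        if not covered:
            continue
        # Interest group of u: the neighbours u would gain if chosen as a centre.
        group = {v for v, topics_v in subscriptions.items()
                 if v != u and topics_v & covered}
        if best_size is None or len(group) < best_size:
            best_nodes, best_size = [u], len(group)   # new minimum, reset Max_n_list
        elif len(group) == best_size:
            best_nodes.append(u)                      # tie, kept in Max_n_list
    # Simplified tie-break: prefer the node that closes the most remaining topics.
    centre = max(best_nodes, key=lambda w: len(subscriptions[w] & remaining_topics))
    # Star the centre with its interest group and mark its topics as covered.
    edges = {(centre, v) for v, topics_v in subscriptions.items()
             if v != centre and topics_v & subscriptions[centre] & remaining_topics}
    return centre, edges, remaining_topics - subscriptions[centre]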
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Example 2 -Topic Assignments</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Topics</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection A 10,20</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection B 10,30</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection C 20,30</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Collection D 10,40</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Example 3 -Topic Assignments</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Topics</ns0:cell><ns0:cell>Degree</ns0:cell></ns0:row><ns0:row><ns0:cell>0 1 2 3 4 5 6 7</ns0:cell><ns0:cell>{1,2,3,5,7,8} {0,1,3,5,7,8,9} {1,4,5} {2,4,6} {0,2,3,4,9} {2,3,6} {2,5} {1,6}</ns0:cell><ns0:cell>7 6 6 6 6 6 6 5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Example 4 -Topic Assignments</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Topics</ns0:cell></ns0:row><ns0:row><ns0:cell>N i</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Maximum Node Degree of Overlay Networks Built by CD Algorithms</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Average Node Degree for Different Number of Nodes (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Nodes Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>6,62</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>13,56</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>14,41</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>13,92</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>18,26</ns0:cell><ns0:cell>±0,09</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>6,25</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>13,79</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>14,73</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>14,07</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>18,97</ns0:cell><ns0:cell>±0,07</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>5,96</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>13,83</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>14,50</ns0:cell><ns0:cell>±0,14</ns0:cell><ns0:cell>14,17</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>19,26</ns0:cell><ns0:cell>±0,02</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>5,80</ns0:cell><ns0:cell>±0,02</ns0:cell><ns0:cell>13,91</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>14,67</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>14,19</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>19,43</ns0:cell><ns0:cell>±0,01</ns0:cell></ns0:row><ns0:row><ns0:cell>750</ns0:cell><ns0:cell>5,52</ns0:cell><ns0:cell>±0,01</ns0:cell><ns0:cell>13,98</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>14,60</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>14,23</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>19,62</ns0:cell><ns0:cell>±0,01</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell>5,38</ns0:cell><ns0:cell>±0,01</ns0:cell><ns0:cell>13,97</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>14,50</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>14,27</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>19,72</ns0:cell><ns0:cell>±0,01</ns0:cell></ns0:row><ns0:row><ns0:cell>2500</ns0:cell><ns0:cell>4,94</ns0:cell><ns0:cell>±0,01</ns0:cell><ns0:cell>14,02</ns0:cell><ns0:cell>±0,02</ns0:cell><ns0:cell>14,37</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>14,31</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>19,89</ns0:cell><ns0:cell>±0,00</ns0:cell></ns0:row><ns0:row><ns0:cell>5000</ns0:cell><ns0:cell>4,93</ns0:cell><ns0:cell>±0,02</ns0:cell><ns0:cell>14,01</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>14,40</ns0:cell><ns0:cell>±0,10</ns0:cell><ns0:cell>14,37</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>19,89</ns0:cell><ns0:cell>±0,02</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Nodes Degree Conf. Degree</ns0:cell><ns0:cell cols='2'>Conf. Degree</ns0:cell><ns0:cell cols='2'>Conf. Degree</ns0:cell><ns0:cell cols='3'>Conf. 
Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>19,80</ns0:cell><ns0:cell cols='7'>±1,30 137,80 ±2,77 131,40 ±1,94 113,70 ±1,98</ns0:cell><ns0:cell>38,10</ns0:cell><ns0:cell>±0,90</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>21,50</ns0:cell><ns0:cell cols='7'>±0,00 203,80 ±3,68 199,10 ±3,26 176,80 ±1,62</ns0:cell><ns0:cell>51,30</ns0:cell><ns0:cell>±1,34</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>22,80</ns0:cell><ns0:cell cols='7'>±0,00 272,60 ±2,45 261,40 ±4,93 237,50 ±1,99</ns0:cell><ns0:cell>62,60</ns0:cell><ns0:cell>±1,44</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>22,20</ns0:cell><ns0:cell cols='7'>±0,00 337,00 ±1,98 334,70 ±6,27 303,40 ±2,13</ns0:cell><ns0:cell>75,20</ns0:cell><ns0:cell>±1,84</ns0:cell></ns0:row><ns0:row><ns0:cell>750</ns0:cell><ns0:cell>22,60</ns0:cell><ns0:cell cols='9'>±0,00 510,90 ±3,72 498,30 ±5,00 458,70 ±3,71 102,90 ±2,87</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell>23,90</ns0:cell><ns0:cell cols='9'>±0,00 678,60 ±4,79 677,30 ±6,91 619,10 ±3,41 132,60 ±3,11</ns0:cell></ns0:row><ns0:row><ns0:cell>2500</ns0:cell><ns0:cell>32,60</ns0:cell><ns0:cell cols='9'>±0,00 1686,70 ±7,61 1683,60 ±7,10 1584,20 ±6,78 296,67 ±2,36</ns0:cell></ns0:row><ns0:row><ns0:cell>5000</ns0:cell><ns0:cell>36,00</ns0:cell><ns0:cell cols='9'>±0,00 1692,00 ±7,12 1685,00 ±9,08 1580,00 ±7,65 296,00 ±3,08</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Max. Node Degree for Different Number of Nodes (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Avg. Node Degree for Different Number of Topics (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Topics Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>9,99</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>13,16</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>12,80</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>13,53</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>14,52</ns0:cell><ns0:cell>±0,11</ns0:cell></ns0:row><ns0:row><ns0:cell>250</ns0:cell><ns0:cell>10,38</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>12,67</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>12,21</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>13,06</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>13,74</ns0:cell><ns0:cell>±0,12</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>10,67</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>12,27</ns0:cell><ns0:cell>±0,11</ns0:cell><ns0:cell>11,78</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>12,63</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>12,99</ns0:cell><ns0:cell>±0,12</ns0:cell></ns0:row><ns0:row><ns0:cell>350</ns0:cell><ns0:cell>10,65</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>11,77</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>11,36</ns0:cell><ns0:cell>±0,05</ns0:cell><ns0:cell>12,14</ns0:cell><ns0:cell>±0,08</ns0:cell><ns0:cell>12,34</ns0:cell><ns0:cell>±0,07</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>10,37</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>11,25</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>10,88</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>11,59</ns0:cell><ns0:cell>±0,09</ns0:cell><ns0:cell>11,81</ns0:cell><ns0:cell>±0,09</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Max. Node Degree for Different Number of Topics (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Average Node Degree for Different Subscription Size (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>CD-ODA-II</ns0:cell><ns0:cell cols='2'>2D-ODA</ns0:cell><ns0:cell cols='2'>CD-MAX</ns0:cell><ns0:cell cols='2'>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>Subsc. Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>12,86</ns0:cell><ns0:cell>±0,07</ns0:cell><ns0:cell>29,92</ns0:cell><ns0:cell>±0,16</ns0:cell><ns0:cell>32,28</ns0:cell><ns0:cell>±0,43</ns0:cell><ns0:cell>15,09</ns0:cell><ns0:cell>±0,12</ns0:cell><ns0:cell>27,93</ns0:cell><ns0:cell>±0,07</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>12,20</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>28,21</ns0:cell><ns0:cell>±0,19</ns0:cell><ns0:cell>29,42</ns0:cell><ns0:cell>±0,79</ns0:cell><ns0:cell>14,19</ns0:cell><ns0:cell>±0,14</ns0:cell><ns0:cell>37,08</ns0:cell><ns0:cell>±0,10</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>11,51</ns0:cell><ns0:cell>±0,06</ns0:cell><ns0:cell>25,34</ns0:cell><ns0:cell>±0,41</ns0:cell><ns0:cell>26,31</ns0:cell><ns0:cell>±0,44</ns0:cell><ns0:cell>12,62</ns0:cell><ns0:cell>±0,23</ns0:cell><ns0:cell>46,04</ns0:cell><ns0:cell>±0,10</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>10,83</ns0:cell><ns0:cell>±0,03</ns0:cell><ns0:cell>22,31</ns0:cell><ns0:cell>±0,38</ns0:cell><ns0:cell>23,08</ns0:cell><ns0:cell>±0,43</ns0:cell><ns0:cell>11,20</ns0:cell><ns0:cell>±0,23</ns0:cell><ns0:cell>11,20</ns0:cell><ns0:cell>±0,23</ns0:cell></ns0:row><ns0:row><ns0:cell>35</ns0:cell><ns0:cell>10,21</ns0:cell><ns0:cell>±0,04</ns0:cell><ns0:cell>20,93</ns0:cell><ns0:cell>±0,32</ns0:cell><ns0:cell>20,90</ns0:cell><ns0:cell>±0,48</ns0:cell><ns0:cell>10,46</ns0:cell><ns0:cell>±0,16</ns0:cell><ns0:cell>10,46</ns0:cell><ns0:cell>±0,16</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>Ref Subsc. Degree Conf. Degree Conf. Degree Conf. Degree Conf. Degree Conf.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='5'>GM CD-MAX-15 CD-ODA-II 2D-ODA CD-MAX 16,10 ±0,81 185,80 ±1,38 183,90 ±1,97 174,10 ±0,85 51,90 ±0,81</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>14,10</ns0:cell><ns0:cell>±0,94 198,00 ±0,48 197,60 ±0,50 193,60 ±0,50</ns0:cell><ns0:cell>68,70</ns0:cell><ns0:cell>±1,80</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>13,00</ns0:cell><ns0:cell>±1,49 199,00 ±0,00 198,90 ±0,19 197,90 ±0,19</ns0:cell><ns0:cell>82,20</ns0:cell><ns0:cell>±0,91</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>10,60</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Max. Node Degree for Different Subscription Size (Mean Degree with confidence α = 0.05)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Running Time for Different Number of Nodes in Seconds (Mean Values with Conf. α = 0.05) Ref Subsc. Avg. Conf. Avg. Conf. Avg. Conf. Avg. Conf. Avg. Conf.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>CD-ODA-II</ns0:cell><ns0:cell>2D-ODA</ns0:cell><ns0:cell>CD-MAX</ns0:cell><ns0:cell>CD-MAX-Ref</ns0:cell></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>Avg.</ns0:cell><ns0:cell cols='4'>Conf. Avg. Conf. Avg. Conf. Avg. Conf.</ns0:cell><ns0:cell>Avg.</ns0:cell><ns0:cell>Conf.</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>0,12</ns0:cell><ns0:cell cols='4'>±0,01 0,29 ±0,03 0,11 ±0,01 0,29 ±0,02</ns0:cell><ns0:cell>9,72</ns0:cell><ns0:cell>±0,38</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>0,38</ns0:cell><ns0:cell cols='4'>±0,02 0,61 ±0,04 0,26 ±0,02 0,64 ±0,03</ns0:cell><ns0:cell>31,13</ns0:cell><ns0:cell>±0,95</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>0,88</ns0:cell><ns0:cell cols='4'>±0,06 1,06 ±0,08 0,42 ±0,02 1,12 ±0,05</ns0:cell><ns0:cell>68,12</ns0:cell><ns0:cell>±2,08</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>2,09</ns0:cell><ns0:cell cols='4'>±0,83 1,69 ±0,27 0,63 ±0,04 1,62 ±0,05</ns0:cell><ns0:cell>122,49</ns0:cell><ns0:cell>±1,55</ns0:cell></ns0:row><ns0:row><ns0:cell>750</ns0:cell><ns0:cell>5,52</ns0:cell><ns0:cell cols='4'>±0,17 3,44 ±0,08 1,47 ±0,07 3,50 ±0,10</ns0:cell><ns0:cell>387,86</ns0:cell><ns0:cell>±4,48</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell cols='5'>12,08 ±0,04 5,67 ±0,12 2,56 ±0,05 5,84 ±0,15</ns0:cell><ns0:cell>922,76</ns0:cell><ns0:cell>±25,67</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>2500 211,91 ±5,28 37,95 ±1,42 19,69 ±1,61 38,90 ±2,26 4856,00 ±150,14</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>5000 262,86 ±2,23 46,10 ±1,25 23,56 ±0,95 50,20 ±2,45 15049,50 ±402,14</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>GM</ns0:cell><ns0:cell>CD-ODA-II</ns0:cell><ns0:cell>2D-ODA</ns0:cell><ns0:cell>CD-MAX</ns0:cell><ns0:cell>CD-MAX-</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Running Time for Different Subs. Size in Seconds (Mean Values with Confidence α = 0.05)</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Respective Editor,
Thank you for giving us the opportunity to submit a revised draft of the manuscript “Low-diameter
topic-based pub/sub overlay network construction with minimum–maximum node” for publication in
the PeerJ Computer Science Journal. We appreciate the time and effort that you and the reviewers
dedicated to providing feedback on our manuscript and are grateful for the insightful comments on and
valuable improvements to our paper.
We have incorporated the suggestions made by the reviewers. Changes relative to the initial version of the
manuscript are highlighted using the latexdiff tool. We also added a clean copy of the final version of
the manuscript. Please see below; we provide a point-by-point response explaining how we have
addressed each of your and the reviewers’ comments, with our responses in blue.
Thank you in advance for your consideration.
Sincerely,
Reviewers’ comments to the Authors:
Reviewer 1
Basic reporting
The updated sections require more proof-reading, e.g.:
- L45: Pub/sub system *have* a variety of use cases.
- L65: *A* constructed..
Author Response: The authors would like to thank the reviewer for the corrections. The updated
sections were carefully proof-read one more time and all the recommended fixes were applied.
- L73:
1. What exactly do you mean by 'decoupling'
2. I'm sure one can find examples where decoupling is not advantageous
3. Either 'systems with decoupling mechanism*s*' or 'systems with *a* decoupling mechanism'
Author Response: The authors would like to thank the reviewer for pointing out the need for
explaining the decoupling. The authors modified the relevant section in order to clarify the issue as
follows:
“... The set of publishers and the set of subscribers can have non-zero intersection, which means that
the process can also operate as a publisher and a subscriber at the same time (decoupling). Pub-sub
systems are decoupled; therefore, a process may change anything about a publisher if it does not
change the way it produces messages. Hence, there is no need to change something about the
downstream subscribers. Similarly, the opposite process is true as well. Systems with decoupling
mechanisms do not need to consider any issues such as addressing and synchronization. ...'
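As a purely illustrative aside (not taken from the manuscript), the decoupling described in the quoted passage can be pictured with a minimal topic-based pub/sub sketch in Python, where publishers and subscribers only ever refer to topic names and never to each other:

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)      # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The publisher addresses a topic, not a particular subscriber.
        for deliver in self.subscribers[topic]:
            deliver(message)

broker = Broker()
broker.subscribe(10, lambda m: print('node A received', m))
broker.subscribe(10, lambda m: print('node B received', m))
broker.publish(10, 'update for topic 10')   # neither side knows about the other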
- L152: Why not use $(u,v)\in E$ instead of introducing $e(u,v)$?
Author Response: The authors would like to thank the reviewer for the simplification. The e(u,v) function
is removed and $(u,v)\in E$ is used.
- L182: *The* selected node...
- L184: ..., *the* pub/sub overlay network
- L303: ...which *are* listed in...
- L329:
- Tables 5-12 *have* confidence values...
- ...results are calculated as**:**
Author Response: The authors would like to thank the reviewer for the corrections. The updated
sections were carefully proof-read one more time and all the recommended fixes were applied.
Experimental design
Please give some more intuition on why CD-MAX-Ref behaves so differently from anything else in Figures
14 and 15
Author Response: The authors would like to thank the reviewer for pointing out the need for a
clarification of Figures 14 and 15. The authors added the following paragraph in order to explain
Figures 14-15 in more detail.
“In this experiment, the CD-MAX-Ref algorithm tries to find a node with the minimum number of neighbors.
In other words, a node with the fewest connections to others is always the ideal one to select, since the
algorithm tries to keep the maximum node degree low. Hence, this trend makes the average node degree
higher and decreases the maximum node degree of the overlay as the subscription size goes up (see
Figure 14). However, as the subscription size increases, the overlay reaches a threshold at which it is
almost impossible to find a node with a lower number of neighbors: with larger subscriptions, all nodes
become more strongly correlated with each other. Inevitably, the average node degree sharply decreases
and the maximum node degree rises more than expected (see Figure 15).”
Validity of the findings
no comment
Comments for the author
Thank you for your updated manuscript and taking the time to implement my previous comments.
Reviewer: Stefanie Roos
Basic reporting
The new version is greatly improved.
However, I still believe that the correctness (in the sense of all nodes receiving messages for their topics) of Algorithm 1
is not addressed properly. The main reason stated for correctness is 'Unless there is a topic which is subscribed by a
single node, pub/sub overlay network is connected.' but you don't i) prove that it is connected, and ii) explain exactly
why connected guarantees that nodes receive all messages. I don't even understand why the statement is correct, as the
PubSub overlay should not be connected if e.g., there are two topics each with three distinct parties interested.
I think your algorithm achieves what you want, but I think you are not using the proper argument here.
Similarly, you start the complexity proof in Line 216 with 'Finding a node with minimum node degree takes O(|V |2 ∗ |T
|)' without giving an explanation on why that is the case.
Author Response: The authors would like to thank the reviewer for pointing out the need for a
clarification in the proof. The proof is clarified based on the explanation of the starting assumption as
follows:
“In Algorithm 1, between lines 5 and 24, the outer loop (the \texttt{while} loop on line 5) iterates T
times. In addition, the inner loop (the \texttt{for} loop on line 6) iterates V times. In the worst case,
calculation of interest group takes V times. Thus, finding a node with minimum node degree takes
$O(|V|^2*|T|)$. ….”
Only remaining minor issues are:
- p.2 can you add a reference that PubSub is heavily used in cloud computing systems/that the named systems use
PubSub
Author Response: The authors would like to thank the reviewer for pointing out the need for a
reference. Relevant references for the systems are provided for the cloud computing systems using
pub/sub.
- Line 77: Not only do β -> does
Author Response: The authors would like to thank the reviewer for the grammar fix.
- Algorithm 1: you are still using Max_n as both a set and a number...it can only be one, you probably need two variables
here
- Line 10 of Algorithm 1: u⇐w∈Maxn AND w has the largest du. -> it's not clear from which set of nodes w has the largest
d_u (and shouldn't it be d_w?), is it all nodes or only those in Max_n]
Author Response: The authors would like to thank the reviewer for pointing out these confusing
statements. The authors added a new list variable to hold the nodes which have the same Max_n values.
In this way, it is clarified how a node is chosen if it has the same Max_n value as another node.
Additional comments were added to describe those variables. The changes were applied to Algorithm 1.
Experimental design
Concerns appropriately addressed.
Validity of the findings
Concerns appropriately addressed.
" | Here is a paper. Please give your review comments after reading it. |
101 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In this work DAE Tools modelling, simulation and optimisation software, its programming paradigms and main features are presented. The current approaches to mathematical modelling such as use of modelling languages and general-purpose programming languages are analysed. The common set of capabilities required by the typical simulation software are discussed and the shortcomings of the current approaches recognised. A new hybrid approach is introduced and the modelling languages and the hybrid approach are compared in terms of the grammar, compiler, parser and interpreter requirements, maintainability and portability. The most important characteristics of the new approach are discussed, such as: (1) support for the runtime model generation, (2) support for the runtime simulation set-up, (3) support for complex runtime operating procedures, (4) interoperability with the third party software packages (i.e. NumPy/SciPy), ( <ns0:ref type='formula'>5</ns0:ref>) suitability for embedding and use as a web application or software as a service, and ( <ns0:ref type='formula'>6</ns0:ref>) code-generation, model exchange and co-simulation capabilities. The benefits of an equation-based approach to modelling, implemented in a fourth generation object-oriented general purpose programming language such as Python are discussed. The architecture and the software implementation details as well as the type of problems that can be solved using DAE Tools software are described. Finally, some applications of the software at different levels of abstraction are presented and its embedding capabilities and suitability for use as a software as a service demonstrated.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In general, two main approaches to mathematical modelling currently exist: (a) use of modelling languages, either domain specific or multi-domain such as Modelica <ns0:ref type='bibr' target='#b11'>(Fritzson and Engelson, 1998)</ns0:ref>, Ascend <ns0:ref type='bibr' target='#b20'>(Piela et al., 1991)</ns0:ref>, gPROMS <ns0:ref type='bibr' target='#b4'>(Barton and Pantelides, 1994)</ns0:ref>, GAMS <ns0:ref type='bibr' target='#b6'>(Brook et al., 1988)</ns0:ref>, Dymola <ns0:ref type='bibr' target='#b8'>(Elmqvist, 1978)</ns0:ref>, APMonitor <ns0:ref type='bibr' target='#b12'>(Hedengren et al., 2014)</ns0:ref>, and (b) use of general-purpose programming languages, either lower level third-generation languages such as C, C++ and Fortran (i.e. PETSc -a suite of data structures and routines for the scalable solution of scientific applications, <ns0:ref type='bibr' target='#b2'>Balay et al., 2015</ns0:ref>, and SUNDIALS -suite of nonlinear and differential/algebraic equation solvers, <ns0:ref type='bibr' target='#b13'>Hindmarsh et al., 2005)</ns0:ref>, or higher level fourth-generation languages such as Python (i.e. Assimuloa high-level interface for a wide variety of ODE/DAE solvers written in C and Fortran, <ns0:ref type='bibr' target='#b1'>Andersson et al., 2015)</ns0:ref> and multi-paradigm numerical languages: Matlab <ns0:ref type='bibr'>(The MathWorks, Inc., 2015)</ns0:ref>, Mathematica <ns0:ref type='bibr'>(Wolfram Research, Inc., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b26'>Maple (Waterloo Maple, Inc., 2015)</ns0:ref>, Scilab (Scilab <ns0:ref type='bibr' target='#b23'>Enterprises, 2015)</ns0:ref>, and GNU Octave <ns0:ref type='bibr' target='#b7'>(Eaton et al., 2015)</ns0:ref>. The lower-level general purpose languages PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5664:1:0:NEW 4 Mar 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science are also often used for the development of the efficient, tailor-made software (i.e. large-scale finite difference and finite element solvers) targeting one of the available high-performance computing architectures such as general purpose graphics processing units (GPGPU), field-programmable gate arrays (FPGA), vector processors and data flow engines (DFE). In addition, some modelling tools provide the Python scripting interface to the simulator engine: APMonitor, JModelica <ns0:ref type='bibr' target='#b0'>(Akesson et al., 2010)</ns0:ref>, and OpenModelica <ns0:ref type='bibr' target='#b9'>(Fritzson et al., 2005)</ns0:ref>; however, their API is limited to the loading of developed models, execution of simulations and processing of the results only. Domain Specific Languages (DSL) are a special-purpose programming or specification languages dedicated to a particular problem domain and directly support the key concepts necessary to describe the underlying problems. They are created specifically to solve problems in a particular domain and usually not intended to be able to solve problems outside it (although that may be technically possible in some cases). More versatile, multi-domain modelling languages (such as Modelica or gPROMS) are capable of solving problems in different application domains. Despite their versatility, modelling languages commonly lack or have a limited access to the operating system, third-party numerical libraries and other capabilities that characterise full-featured programming languages, scripting or otherwise. 
In contrast, general-purpose languages are created to solve problems in a wide variety of application domains, do not support concepts from any domain, and have a direct access to the operating system, low-level functions and third-party libraries.</ns0:p><ns0:p>The most important tasks required to solve a typical simulation or optimisation problem include: the model specification, the simulation setup, the simulation execution, the numerical solution of the system of algebraic/differential equations, and the processing of the results. Each task may require a call or a chained sequence of calls to other software libraries, the methods in those libraries must be available to be called with no significant additional pre-processing and must be able to operate on shared/common data structures. All of these require a two-way interoperability between the software and third-party libraries. Also, the model structure is often not fully defined beforehand and a runtime generation of models ('on-the-fly') using the results from other software is required.</ns0:p><ns0:p>Frequently, simulations can not be limited to a straightforward, step-wise integration in time but the custom user-defined operating procedures are required, which can be performed only using the fully-featured programming languages. In addition, it is often desired to compare/benchmark the simulation results between different simulators. This requires the code-generation and the model-exchange capabilities to automatically generate the source code for the target language or export the model definition to a specified (often simulator-independent) model specification language. Exposing the functionality of the developed models to another simulator through a predefined standard interface such as the CAPE-OPEN (http://www.colan.org) and Functional Mockup Interface (FMI, http://www.fmi-standard.org) is another common functionality. Finally, the current trends in IT industry show that there is a high demand for cloud solutions, such as software as a service (SaaS), platform as a service (PaaS) and web applications.</ns0:p><ns0:p>A modelling language implemented as a single monolithic software package can rarely deliver all capabilities required. For instance, the Modelica modelling language allows calls to 'C' functions from external shared libraries but with some additional pre-processing. Simple operating procedures are supported directly by the language but they must be embedded into a model, rather than separated into an independent section or function. gPROMS also allows very simple operating procedures to be defined as tasks (only in simulation mode), and user-defined output channels for custom processing of the results. The runtime model generation and complex operating procedures are not supported. Invocation from other software is either not possible or requires an additional application layer. On the other hand, Python, Matlab and the software suites such as PETSc have an access to an immense number of scientific software libraries, support runtime model generation, completely flexible operating procedures and processing of the results. 
However, the procedural nature and lack of object-oriented features in Matlab and absence of fundamental modelling concepts in all three types of environments make development of complex models or model hierarchies difficult.</ns0:p><ns0:p>In this work, a new approach has been proposed and implemented in DAE Tools software which offers some of the key advantages of the modelling languages coupled with the power and flexibility of the general-purpose languages. It is a type of hybrid approach -it is implemented using the general-purpose programming languages such as C++ and Python, but provides the Application Programming Interface (API) that resembles a syntax of modelling languages as much as possible and takes advantage of the higher level general purpose languages to offer an access to the operating system, low-level functions and large number of numerical libraries to solve various numerical problems. To illustrate the new concept, the comparison between Modelica and gPROMS grammar and DAE Tools API for a very simple dynamical model is given in the source code listings 1 to 3, respectively. The model represents a cylindrical tank containing a liquid inside with an inlet and an outlet flow where the outlet flowrate depends on the liquid level in the tank. It can be observed that the DAE Tools API mimics the expressiveness of the grammar of modelling languages to provide the key modelling concepts while retaining the full power of general purpose programming languages.</ns0:p><ns0:p>More details about the API is given in the section Architecture.</ns0:p><ns0:p>The article is organised in the following way. First, the DAE Tools programming paradigms and the main features are introduced and discussed. Next, its architecture and the software implementation details are analysed. After that, the algorithm for the solution of DAE systems is presented and some basic information on how to develop models in DAE Tools given. Then, two applications of the software are demonstrated: (a) multi-scale modelling of phase-separating electrodes, and (b) a reference implementation simulator for a new domain specific language. Finally, a summary of the most important characteristics of the software is given in the last section. ¦ ¥</ns0:p></ns0:div>
<ns0:div><ns0:head>MAIN FEATURES AND PROGRAMMING PARADIGMS</ns0:head><ns0:p>DAE Tools is free software released under the GNU General Public Licence. The source code, the installation packages and more information about the software can be found on the http://www. daetools.com website. Models can be developed in Python or C++, compiled into an independent executable and deployed with no additional run time libraries. Problems that can be solved are initial value problems of implicit form described by a system of linear, non-linear, and partial-differential equations (only index-1 DAE systems, at the moment). Systems modelled can be with lumped or distributed parameters, steady-state or dynamic, and continuous with some elements of event-driven systems such as discontinuous equations, state transition networks and discrete events. Automatic differentiation is supported through the operator overloading technique using the modified ADOL-C library <ns0:ref type='bibr' target='#b25'>(Walther and Griewank, 2012)</ns0:ref> Currently, Sundials IDAS <ns0:ref type='bibr' target='#b13'>(Hindmarsh et al., 2005)</ns0:ref> variable-order, variable-coefficient BDF solver is used to solve DAE systems and calculate sensitivities. IPOPT <ns0:ref type='bibr' target='#b24'>(Wächter and Biegler, 2006)</ns0:ref>, BONMIN <ns0:ref type='bibr' target='#b5'>(Bonami et al., 2008)</ns0:ref>, and NLopt <ns0:ref type='bibr' target='#b14'>(Johnson, 2015)</ns0:ref> solvers are employed to solve (mixed integer) non-linear programming problems, and a range of direct/iterative and sequential/multithreaded sparse matrix linear solvers is interfaced such as SuperLU/SuperLU_MT <ns0:ref type='bibr' target='#b16'>(Li, 2005)</ns0:ref>, PARDISO <ns0:ref type='bibr' target='#b22'>(Schenk et al., 2007)</ns0:ref>, Intel PARDISO, and Trilinos Amesos/AztecOO <ns0:ref type='bibr' target='#b21'>(Sala et al., 2006)</ns0:ref>.</ns0:p><ns0:p>Broadly speaking, DAE Tools is not a modelling language (such as Modelica and gPROMS)</ns0:p><ns0:p>nor an integrated software suite of data structures and routines for scientific applications (such as Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>PETSc), but rather a higher level structure -an architectural design of interdependent software components providing an API for: (a) model development/specification, (b) activities on developed models such as simulation, optimisation and parameter estimation, (c) processing of the results, (d) report generation, and (e) code generation and model exchange. However, it can easily be integrated into a software suite with the graphical user interface (GUI), embedded into another software or even run as a web service on the server (as it was demonstrated in the section NineML domain specific language). The hybrid approach provides a combination of strengths of both modelling and general purpose programming languages. The most important feature of domain-specific/modelling languages is that they allow solutions to be expressed in the idiom and at the level of abstraction of the problem domain. They directly support all modelling concepts by the language syntax and provide a clean, concise and an elegant way of building model descriptions. Also, modelling languages could be and often are simulator independent making a model exchange easier. However, all of this comes with a price. For instance, the costs of designing, implementing, and maintaining a domain-specific language as well as the tools required to develop with it are high. In all cases, either a compiler or an interpreter with a lexical parser and an abstract syntax tree (AST) must be developed with all burden that comes with it such as processing of the AST, error handling, grammar ambiguities and hidden bugs. In addition, there is a cost of learning a new language versus its limited applicability: users are required to master a new language with yet another language grammar. Integration of modelling languages with other components is difficult and limited by the existence of wrappers around a simulator engine. Models usually cannot be generated in the runtime or at least not easily and cannot be modified in the runtime. Setting up a simulation is specified in the language grammar and it is difficult to do it programmatically. Simulation operating procedures are not fully flexible and manipulation of models is limited to only those operations provided by the language. Finally, the results typically cannot be processed in a user-defined fashion without investing an effort to master the protocol used by the simulator. In contrast, in DAE Tools a compiler/lexical parser/interpreter are an integral part of the programming language (C++ and Python) with a robust error handling, universal grammar and massively tested. No learning of a new language is required, calling external functions/libraries is a built-in feature and models can be created and modified in the runtime. Setting up a simulation is done programmatically and the initial values can be easily obtained from the other software. Operating procedures are completely flexible (within the limits of a programming language itself) and models can be manipulated in any user-defined way. Processing of the results is also completely flexible. However, the modelling concepts in DAE Tools cannot be expressed directly in the programming language and must be emulated in its API. Also, it is programming language dependent. 
To certain extent, this can be overcome by the fact that Python shines as a glue language, used to combine components written in different programming languages and a large number of scientific software libraries expose its functionality to Python via their extension modules.</ns0:p><ns0:p>Regarding the available modelling techniques, three approaches currently exist <ns0:ref type='bibr' target='#b19'>(Morton, 2003)</ns0:ref>:</ns0:p><ns0:p>(a) sequential modular, (b) simultaneous modular, and (c) equation-based (acausal). The equationbased approach is adopted and implemented in this work. A brief history of the equation-based solvers and comparison of the sequential-modular and equation-based approaches can be found in <ns0:ref type='bibr' target='#b19'>Morton (2003)</ns0:ref> and a good overview of the equation-oriented approach and its application in gPROMS is given by <ns0:ref type='bibr' target='#b3'>Barton and Pantelides (1993)</ns0:ref>. According to this approach, all equations and variables which constitute the model representing the process are generated and gathered together.</ns0:p><ns0:p>Then, equations are solved simultaneously using a suitable mathematical algorithm <ns0:ref type='bibr' target='#b19'>(Morton, 2003)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In the equation-based approach equations are given in an implicit form as functions of state variables and their derivatives, degrees of freedom (the system variables that may vary independently), and parameters:</ns0:p><ns0:formula xml:id='formula_0'>F ( ẋ, x, y, p) = 0</ns0:formula><ns0:p>where x represents state variables, ẋ their derivatives, y degrees of freedom and p parameters. Inputoutput causality is not fixed providing a support for different simulation scenarios (based on a single model) by fixing different degrees of freedom.</ns0:p><ns0:p>The hybrid approach allows an easy interaction with other software packages/libraries. First, other numerical libraries can be accessed directly from the code and since the Python's design allows an easy development of extension modules from different languages, a vast number of numerical libraries is readily available. Second, DAE Tools are developed with a built-in support for NumPy (http://numpy.scipy.org) and SciPy (http://scipy.org) numerical packages; therefore, DAE Tools objects can be used as native NumPy data types and numerical functions from other extension modules can directly operate on them. This way, a large pool of advanced and massively tested numerical algorithms is made directly available to DAE Tools.</ns0:p><ns0:p>The automatic differentiation is always utilised to analytically generate the Jacobian matrix if the direct sparse linear solvers are used, or to generate a preconditioner matrix for the iterative linear solvers using the software suites such as Trilinos AztecOO, IFPACK, and ML. The automatic differentiation is also applied to sensitivity analysis where it is used to calculate derivatives of model equations per parameters with respect to which sensitivities are requested. Only the continuous-time systems are supported and the forward sensitivity method provided by the Sundials IDAS solver is available at the moment. The forward sensitivity equations are integrated together with the original DAE system leading to the DAE system of size N(N s + 1), where N is the size of the original DAE system and N s is the number of model parameters. More information about the sensitivity analysis using the forward sensitivity method can be found in the Sundials documentation.</ns0:p><ns0:p>DAE Tools also provide code generators and co-simulation/model exchange standards/interfaces for other simulators. This way, the developed models can be simulated in other simulators either by generating the source code, exporting a model specification file or through some of the standard co-simulation interfaces. To date, the source code generators for c99, Modelica and gPROMS languages have been developed. In addition, DAE Tools functionality can be exposed to Matlab, Scilab and GNU Octave via MEX-functions, to Simulink via user-defined S-functions and to the simulators that support FMI co-simulation capabilities. The future work will concentrate on support for the additional interfaces (i.e. CAPE-OPEN) and development of additional code generators.</ns0:p><ns0:p>Parallel computation is supported using only the shared-memory parallel programming model at the moment. Since a repeated solution of the system of linear equations typically requires around 90-95% of the total simulation time, the linear equations solver represents the major bottleneck in the simulation. 
Therefore, the main focus was put on performance improvement of the solution of linear equations using one of the available multi-threaded solvers such as SuperLU_MT, Pardiso and Intel Pardiso. given in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. Interface realisations are given in Fig. <ns0:ref type='figure'>S2</ns0:ref>. Models in DAE Tools are represented by the daeModel class and contain the following elements: domains, parameters, variables, equations, state transition networks, ports, event ports, actions to be performed when a given condition is satisfied, actions to be performed when an event is triggered on a given event port, and components (instances of other models, used to form a hierarchy of models). The daeModel UML class diagram is presented in Fig. <ns0:ref type='figure'>S3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>ARCHITECTURE</ns0:head><ns0:p>Package 'activity' This package contains interfaces that define an API for activities that can be performed on developed models. To date, only two interfaces are defined and implemented:</ns0:p><ns0:p>daeSimulation_t (defines a functionality used to perfom simulations) and daeOptimization_t (defines a functionality used to perform optimisations).</ns0:p><ns0:p>Package 'solvers' This package contains interfaces that define an API for numerical solution of Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>daeTCPIPLogServer; used when a simulation is running on a remote computer).</ns0:p><ns0:p>Package 'units' Parameters and variables in DAE Tools have a numerical value in terms of a unit of measurement (quantity) and units-consistency of equations and logical conditions is strictly enforced (although it can be switched off, if required). The package contains only two classes: unit and quantity. Both classes have overloaded operators +, -, *, / and ** to support creation of derived units and operations on quantities that contain a numerical value and units. In addition, the package defines the basic mathematical functions that operate on quantity objects (such as sin, cos, tan, sqrt, pow, log, log10, exp, min, max, floor, ceil, abs etc.).</ns0:p></ns0:div>
<ns0:div><ns0:head>SOLUTION OF A DAE SYSTEM</ns0:head><ns0:p>The solution of a DAE system requires the functionality provided by the following objects: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>that the equation is distributed on). Every residual expression is evaluated to form an evaluation tree. The concept of representing equations as evaluation trees is employed for evaluation of residual equations and their gradients (which represent a single row in the Jacobian matrix). This is achieved by using the operator overloading technique for automatic differentiation adopted from the ADOL-C library <ns0:ref type='bibr' target='#b25'>(Walther and Griewank, 2012)</ns0:ref>. Evaluation trees consist of unary and binary nodes, each node representing a parameter/variable value, basic mathematical operation (+, -, *, /, **) or a mathematical function <ns0:ref type='bibr'>(sin, cos, tan, arcsin, arccos, arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh, arctan2, erf, sqrt, pow, log, log10, exp, min, max, floor, ceil, abs, sum, product, integral, etc.)</ns0:ref>. The mathematical functions are overloaded to operate on a heavily modified ADOL-C class adouble, which has been extended to contain information about domains, parameters and Manuscript to be reviewed Manuscript to be reviewed and Tutorials, respectively. The model describes a block of copper at one side exposed to the source of heat and at the other to the surroundings with the constant temperature and the constant heat transfer coefficient. The process starts at the temperature of the metal of 283K. The integral heat balance can be described by the following ordinary differential equation:</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:formula xml:id='formula_1'>mc p dT dt = Q in − αA(T − T surr )</ns0:formula><ns0:p>where m is a mass of the block, c p is the specific heat capacity, T is the temperature, Q in is the input power of the heater, α is the heat transfer coefficient, A is the surface area of the block and T surr is the temperature of the surroundings. The copper block model is simulated for 500 seconds. At certain point in time, the heat produced by the heater becomes equal to the heat removed by natural convection and the system reaches the steady-state. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science language simulator) running on a server and providing its functionality through a web service or web application, and (b) defining the modelling concepts from a new application domain using the DAE Tools fundamental modelling concepts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multi-scale model of phase-separating battery electrodes</ns0:head><ns0:p>In the work of <ns0:ref type='bibr' target='#b17'>Li et al. (2014)</ns0:ref>, DAE Tools has been applied to modelling of lithium-ion batteries.</ns0:p><ns0:p>Lithium-ion batteries operate by shuttling lithium ions from one electrode to the other. In a charged state, the lithium ions are stored in the negative electrode (anode), and the positive electrode (cathode)</ns0:p><ns0:p>has almost no lithium. During a discharge, lithium exits the anode, sending an electron through the outer circuit. Lithium ions then move through an electrolyte phase to the cathode, where they recombine with an electron as they enter the cathode. To correctly describe the physics of this process, transport of lithium within the electrodes and within the electrolyte has to be modelled as well as the electrochemical reactions in which lithium ions separate/combine with an electron to exit/enter the electrode materials. Complicating the modelling process, battery electrodes are typically made out of a porous material composed of large numbers of small, solid active particles with a percolating electrolyte. This provides a large surface area for electrochemical reactions to drive electrons through the outer circuit but also creates a strong separation of length scales. The electrode may have a typical thickness of hundreds of microns, whereas single electrode particles range from tens of nanometers to tens of microns. In addition, the system inherently has highly separated time scales. Particles may have transport time scales less than one second, but the imposed time scale for battery discharge is typically on the order of hours. One approach to simulating this system is referred to as porous electrode theory. Porous electrode theory for battery simulations is a method of systematically coupling the different length scales and physical phenomena involved in battery operation. The basic approach involves writing conservation equations both for lithium transport within the particles (small length scale) and for lithium ion transport through the electrolyte (large length scale). Directly simulating the full micro-structure of the electrode particles and electrolyte pores within the porous electrode would require enormous computational effort. Instead, the two phases are coupled via a volume-averaged approach in which simulated particles act as volumetric source/sink terms as they interact with the electrolyte via reactions. More details about the governing equations of such a model applied to a battery electrode made of LiFePO 4 can be found in <ns0:ref type='bibr' target='#b17'>Li et al. (2014)</ns0:ref> and its supplemental information. Spatial discretisation of the governing equations is carried out using the finite volume method, as solid particles are described as residing within individual electrode volumes, as depicted in Fig. <ns0:ref type='figure' target='#fig_9'>10</ns0:ref>.</ns0:p><ns0:p>The resulting discretised set of equations is a large system of DAE's. Differential equations come from the discretised transport equations, and algebraic constraints arise from electrostatic equations and constraints on the total integrated reaction rate (current). In <ns0:ref type='bibr' target='#b17'>Li et al. (2014)</ns0:ref>, the discretised system of DAE's was integrated in time using Matlab's ode15s solver and subsequently reimplemented using DAE Tools, allowing a direct comparison between the two integrators. 
Using default solver tolerances for both (10 −3 in Matlab and 10 −5 in DAE Tools), a number of simulations were carried out using both the Matlab implementation and the DAE Tools implementation, and in each case the simulation outputs were indistinguishable. Despite obtaining equivalent outputs, the implementation using DAE Tools consistently ran more quickly (Fig. <ns0:ref type='figure' target='#fig_10'>11</ns0:ref>), up to ten times faster (4.22 times, in average). This speedup is a result of its built-in support for automatic differentiation facilitating rapid and accurate derivative evaluation for solution of the highly non-linear system of equations involved in time stepping. In contrast, the ode15s solver creates a numerical approximation Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The simulator utilises the fundamental modelling concepts in DAE Tools: parameters, variables, equations, ports, models, state transition networks and discrete events as a basis for implementation of the higher-level concepts from the NineML language such as neurons, synapses, connectivity patterns, populations of neurons and projections. Again, the role of DAE Tools software is to process NineML and SED-ML xml input files, generate the model structure, execute the simulation, and produce the results based on inputs from SED-ML file. The simulator implements the synchronous (clock-driven) simulation algorithm and the system of equations is integrated continuously using the variable-step variable-order backward differentiation formula using Sundials IDA DAE solver. The exact event times (spike occurrences) are calculated by detecting discontinuities in model equations using root functions. An overview of the simulator is presented in Fig. <ns0:ref type='figure' target='#fig_11'>12</ns0:ref>.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>// Relation between liquid level and holdup
HoldUp = CrossSectionalArea * Height * Density;
// Outlet flowrate as a function of the liquid level
FlowOut = Alpha * sqrt(Height);

Buffer Tank model (DAE Tools):
class BufferTank(daeModel):
    def __init__(self, Name, Parent = None, Description = ''):
        daeModel.__init__(self, Name, Parent, Description)
        self.Density = daeParameter('Density', kg/m ** 3, self)

    eq.Residual = self.HoldUp.dt() − self.FlowIn() + self.FlowOut()
    # Relation between liquid level and holdup
    eq = self.CreateEquation('LiquidLevelHoldup')
    eq.Residual = self.HoldUp() − self.Area() * self.Height() * self.Density()
    # Outlet flowrate as a function of the liquid level
    eq = self.CreateEquation('OutletFlowrate')
    eq.Residual = self.FlowOut() − self.Alpha() * Sqrt(self.Height())</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>DAE</ns0:head><ns0:label /><ns0:figDesc>Tools consists of six packages: core, activity, solvers, datareporting, logging, and units. All packages provide a set of interfaces (abstract classes) that define the required functionality. Interfaces are realised by the implementation classes. The implementation classes share the same name with 7/26 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5664:1:0:NEW 4 Mar 2016) Manuscript to be reviewed Computer Science the interface they realise with the suffix _t dropped (i.e. the class daeVariable implements interface daeVariable_t). Package 'core' This package contains the key modelling concepts. The class diagram with interfaces (abstract classes) is presented in Fig. S1. The most important modelling concepts are</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>systems of differential algebraic equations (DAE), systems of linear equations (LA), and (mixedinteger) nonlinear programming problems (NLP or MINLP), and auxiliary classes. The class diagram with the defined interfaces is presented in Fig. S4: daeDAESolver_t (defines a functionality for the solution of DAE systems), daeNLPSolver_t (defines a functionality for the solution of (MI)NLP problems), daeLASolver_t (defines functionality for the solution of systems of linear equations) and daeIDALASolver_t (derived from daeLASolver_t, used by Sundials IDAS linear solvers). Interface realizations are given in Fig. S5. Current implementations include Sundials IDAS DAE solver, IPOPT, BONMIN and NLOPT (MI)NLP solvers and SuperLU, SuperLU_MT, PARDISO, Intel PARDISO and Trilinos (Amesos and AztecOO) sparse matrix linear solvers. Since all these linear equation solvers use different sparse matrix representations, a generic interface (template daeMatrix<typename FLOAT>) has been developed for the basic operations performed by DAE Tools software such as setting/getting the values and obtaining the matrix properties. This way, DAE Tools objects can access the matrix data in a generic fashion while hiding the internal implementation details. To date, three matrix types have been implemented: daeDenseMatrix, daeLapackMatrix (basically wrappers around C/C++ and Fortran two-dimensional arrays), a template class daeSparseMatrix<typename FLOAT, typename INT> (sparse matrix) and its realization daeCSRMatrix<typename FLOAT, typename INT> implementing the compressed row storage (CSR) sparse matrix representation. Package 'datareporting' This package contains interfaces that define an API for processing of simulation results by the daeSimulation_t and daeDAESolver_t classes, and the data structures available to access those data by the users. Two interfaces are defined: daeDataReporter_t (defines a functionality used by a simulation object to report the simulation results) and daeDataReceiver_t (defines a functionality/data structures for accessing the simulation results). A number of data reporters have been developed for: (a) sending the results via TCP/IP protocol to the DAE Tools Plotter application (daeTCPIPDataReporter), (b) plotting the results using the Matplotlib Python library (daePlotDataReporter), and (c) exporting the results to various file formats (such as Matlab MAT, Microsoft Excel, html, xml, json and HDF5). An overview of the implemented classes is given in Fig. S6. 8/26 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5664:1:0:NEW 4 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Concept Description daeVariableType_tDefines a variable type that has the units, lower and upper bounds, a default value and an absolute tolerance daeDomain_t Defines ordinary arrays or spatial distributions such as structured and unstructured grids; parameters, variables, equations and even models and ports can be distributed on domains daeParameter_tDefines time invariant quantities that do not change during a simulation, such as a physical constant, number of discretisation points in a domain etc.daeVariable_t Defines time varying quantities that change during a simulation daePort_t Defines connection points between model instances for exchange of continuous quantities; similar to the models, ports can contain domains, parameters and variables daeEventPort_t Defines connection points between model instances for exchange of discrete messages/events; events can be triggered manually or when a specified condition is satisfied; the main difference between event and ordinary ports is that the former allow a discrete communication between models while latter allow a continuous exchange of information daePortConnection_t Defines connections between two ports daeEventPortConnection_t Defines connections between two event ports daeEquation_t Defines model equations given in an implicit/acausal form daeSTN_t Defines state transition networks used to model discontinuous equations, that is equations that take different forms subject to certain conditions; symmetrical/non-symmetrical and reversible/irreversible state transitions are supported daeOnConditionActions_t Defines actions to be performed when a specified condition is satisfied daeOnEventActions_t Defines actions to be performed when an event is triggered on the specified event port daeState_t Defines a state in a state transition network; contains equations and on_event/condition action handlers daeModel_t Represents a model Package 'logging' This package contains only one interface daeLog_t that define an API for sending messages from the simulation to the user. Interface realizations are given in Fig. S7. Three implementations exist: daeStdOutLog (prints messages to the standard output), daFileLog (stores messages to the specified text file), and daeTCPIPLog (sends messages via TCP/IP protocol to the 9/26 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5664:1:0:NEW 4 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>(a) simulation object implementing the daeSimulation_t interface (simulation), (b) DAE solver object implementing the daeDAESolver_t interface (dae_solver), (c) linear equations solver object implementing the daeLASolver_t interface (la_solver), (d) data reporter object implementing the daeDataReporter_t interface(data_reporter), and (e) log object implementing the daeLog_t interface (log). A diagram illustrating the participating objects and their associations are given in Fig. 1. Solution of an optimisation problem includes an identical set of objects with the addition of optimization object implementing daeOptimization_t interface and nlp_solver object implementing the daeNLPSolver_t interface. Solution of a DAE system is performed in five phases: (I) creation and initialisation of objects in the main program, (II) initialisation of the simulation and runtime checks in daeSimulation::Initialize() function, (III) calculation of initial conditions in daeSimulation::SolveInitial() function, (IV) integration of the DAE system in time in daeSimulation::Run() function, and (V) clean-up in daeSimulation::Finalize() function followed by destruction of objects in the main program. A typical sequence of calls during the DAE Tools simulation are given in Fig. 2. Phase I: Creation of objects simulation, dae_solver, la_solver, data_reporter and log objects are instantiated in the main program. All distribution domains, parameters, variables and ports are now instantiated. Phase II: Initialisation and runtime checks A sequence of calls during the initialisation in daeSimulation:Initialize() function is given in Fig. 3. During the stage 1, the daeDataProxy_t instance is created. In DAE Tools approach, variables do not hold the values -the values are stored in a proxy object as a compact block of memory to improve the memory copy performance. A separate storage is created for state variables and their derivatives and for degrees of freedom. The variables access the data using their global index. The user-defined function SetUpParametersAndDomains() from the daeSimulation-derived class is now called where the parameters values are assigned and the distribution domains initialised. In the stage 2 of initialisation, the port and model arrays are created and every variable obtains assigned the global index. Distributed variables obtain a separate index for every point in domains they are distributed on. In the stage 3, based on the number of variables and their types, the memory storage for variables values and derivatives is allocated in the data proxy object, the user-defined function DeclareEquations() from the daeModel-derived classes called to create equations, state transition networks, port connections and OnCondition/OnEvent handlers, and the initial variables values and absolute tolerances are set. In the stage 4, the equations get initialised and expanded into an array of residual expressions (one for every point in domains 10/26 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5664:1:0:NEW 4 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>variables. In adition, a new adouble_array class has been introduced to support the above-mentioned operations on arrays of parameters and variables. Once built, the evaluation trees can be used for several purposes: (a) to calculate equation residuals, (b) to calculate equation gradients, (c) to export equation expressions into the MathML or LaTeX format, (d) to generate the source code for different languages, and (e) to perform various types of runtime checks. A typical evaluation tree is presented in Fig.4. In the stage 5, the daeBlock instance is created which is used by a DAE solver during the integration of the DAE system. It represents a block of equations and holds the currently active set of equations (including those from state transition networks) and root functions. Finally, the whole system is checked for errors/inconsistencies and the DAE solver initialised.Phase III: Calculation of initial conditions A sequence of calls during the calculation of initial conditions in daeSimulation:SolveInitial() function is given in Fig.5. The consistent set of initial conditions is obtained using the IDACalcIC() function which repeatedly calls the functions to evaluate equations residuals, Jacobian matrix and root functions, solves the resulting system of linear equations and checks for possible occurrences of discontinuities until the specified tolerance is achieved.Phase IV: Integration in time A sequence of calls during the integration of the system in daeSimulation::Run() function is given in Fig.6. The default implementation calls daeSimulation::IntegrateUntilTime() and daeSimulation::ReportData() functions in a loop until the specified time horizon is reached. The IntegrateUntilTime() function uses the IDASolve() function that repeatedly calls the functions to evaluate equations residuals, Jacobian matrix and root functions, solves the resulting system of linear equations and checks for possible occurrences of discontinuities until the specified tolerance is achieved.Phase V: Clean up This phase includes a call to daeSimulation::Finalize() function which performs internal clean-up and memory release, followed by destruction of objects instantiated during the phase I. 11/26 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5664:1:0:NEW 4 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. UML object diagram: DAE Tools simulation/optimisation 12/26</ns0:figDesc><ns0:graphic coords='13,93.40,188.06,425.21,391.47' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .Figure 3 .Figure 4 .Figure 5 .</ns0:head><ns0:label>2345</ns0:label><ns0:figDesc>Figure 2. UML sequence diagram: DAE Tools simulation</ns0:figDesc><ns0:graphic coords='14,114.66,141.57,382.68,484.46' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>#Figure 7 .Figure 8 .Figure 9 .</ns0:head><ns0:label>789</ns0:label><ns0:figDesc>Figure 7. Temperature profile from the CopperBlock simulation</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Schematic of the multi-scale porous electrode model</ns0:figDesc><ns0:graphic coords='23,213.87,70.87,184.25,140.89' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Parity plot for simulation runs with different inputs (Matlab vs. DAE Tools)</ns0:figDesc><ns0:graphic coords='23,192.61,412.09,226.76,175.03' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. An overview of the NineML reference implementation simulator</ns0:figDesc><ns0:graphic coords='25,135.92,70.87,340.17,216.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,114.65,70.86,382.69,358.88' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,107.58,96.15,396.85,243.67' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,107.58,424.76,396.86,246.68' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The key modelling concepts in DAE Tools software.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "DAE Tools: Equation-based object-oriented modelling, simulation and optimisation software
Manuscript ID: #5664
Dear Professor Petzold,
The author would like to thank the reviewers for their valuable comments, which will certainly improve the quality of this article.
The detailed responses to all comments raised by the journal staff and reviewers are given in the next section. I look forward to hearing from you regarding my submission and would be glad to respond to any further questions and comments that you may have.
Yours sincerely,
Dragan Nikolic
Response to the staff note
One reviewer noted that this article was previously published as a poster at PSE2015 / ESCAPE 25.
This fact should be acknowledged / noted in this article.
A very short version of this article was submitted, reviewed and accepted as a poster at PSE2015 /
ESCAPE 25. However, the submission was cancelled before the conference and the work was not
presented.
Changes to the files and figures
New versions of figures 7 – 11 with a higher resolution were uploaded (to meet the size/resolution
requirements).
The figure 12 was replaced with a new one with the identical content.
The spaces in the filenames of supplemental listings 1 to 4 were replaced with underscores (as per
the reviewer 1 request).
Changes to the manuscript independent of the reviewers' comments
Changes made to the manuscript independent of the reviewers' comments are given in blue colour.
The part of the sentence (line 623):
“, and (b) running on a server and providing its functionality through a web service or web
application”
has been replaced with the following text:
“ running on a server and providing its functionality through a web service or web application, and
(b) defining the modelling concepts from a new application domain using the DAE Tools
fundamental modelling concepts”.
Response to Reviewers' Comments
Reviewer 1 (John Abel)
All changes made to the manuscript in response to this reviewer's comments are given in red colour
and marked with a superscript R1.
Basic reporting
In this work, the author presents DAE Tools, a software for building, simulating, and optimizing
models comprised of DAEs. This software takes an approach in which the author combines the
versatility of general-purpose programming with an interface resembling that of a modeling
language. The software appears mature, and includes significant online documentation and a
tutorial. These are all welcome, and contribute to my overall impression that this is a well-developed project suitable for publication in PeerJ Computer Science. However, minor revisions
must first be made.
The Introduction provides a detailed description of the state of the field, and shows the relevance of
DAE Tools. However, it is also relatively lengthy and verbose, and could be condensed without
significant loss of descriptiveness. Lines 192-230 in Features read similarly to lines 20-47 and 59-71 of the Introduction. To address this, I propose condensing lines 31-71 to outline the limitations
of existing modeling frameworks and the specific goals of DAE Tools: combining the qualities of
domain-specific and general-purpose languages to handle model specification, simulation setup,
execution of a simulation and/or optimization, and processing the results. The Features section
should (and does) expound on how DAE Tools addresses these shortcomings.
This section of the manuscript (lines 31-71) has been condensed according to the suggestions.
Line 46: the sentence “All of these require a two-way interoperability between the software and
third-party libraries: (a) its methods can be invoked from other software, and (b) it can invoke
methods from other software.” has been shortened to “All of these require a two-way
interoperability between the software and third-party libraries.”
Line 51: the sentence “Such operating procedures need not only an access to the operating system
and calls to other libraries but a complex processing as well that can be accomplished only using
the fully-featured programming languages.” has been removed and the following text added to the
previous sentence “, which can be performed only using the fully-featured programming
languages.”
Line 58: the sentence “Several machine readable model specification languages (mostly xml-based)
have been developed in virtually all scientific fields and listing all of them is outside of the scope of
this work.” has been removed.
Line 61: the text “such as the CAPE-OPEN (http://www.colan.org) and Functional Mock-up
Interface (FMI, http://www.fmi-standard.org)” has been added to the sentence.
Line 62: the sentence “The most widely accepted standard co-simulation interfaces at the moment
are CAPE-OPEN interface (http://www.colan.org) and Functional Mock-up Interface (FMI,
http://www.fmi-standard.org).” has been removed.
Line 66: the sentence “SaaS is often referred to as “on-demand software” accessible through the
cloud platform/infrastructure, while the PaaS model provides a computing platform including
operating system, programming language and the execution environment so that the software
solutions can be provided without the cost and complexity of buying and managing the underlying
hardware and software layers.” has been removed.
The Architecture and Solution of a DAE System sections are well-constructed. While useful for
inclusion in the Documentation (or a supplement), I do not think the class diagrams Figs. 1-7 are
necessary for the main text of the manuscript. I did find Figs. 8-11 (UML sequence diagrams) to be
useful.
The figures 1-7 are moved from the manuscript to the supplementary figures S1-S7 and the
corresponding references in the manuscript updated.
The tone is generally refined, however there are several cases of informal phrases (e.g. 'sort of').
Line 65: “so-called” has been deleted.
Line 72: “needed” has been replaced with “required”.
Line 87: “a sort of a hybrid” has been replaced with “a type of hybrid”.
Line 185: “without a need for” has been replaced with “with no additional”.
Line 397: “get” has been replaced twice with “obtain”.
Line 517: “needs to be” has been replaced with “must be”.
Lines 552, 579: “Let us consider” has been replaced with “consider”.
The future aims of the project are briefly mentioned in the Main Features section, or alluded to
('only Order-1 DAEs, at the moment'). Are there significant plans for further expansion of this
software? If so, it may be worth describing these plans.
The future work will include several improvements and new features. Some of the most important
ones have been listed at the end of the Conclusions section, line 754:
“The current work concentrates on a further support for systems with distributed parameters (i.e.
high-resolution finite volume schemes with flux limiters), the additional optimisation algorithms
and the parallel computation using the general purpose graphics processing units and systems with
the distributed memory. The parallel computation will rely on the code generation capabilities to
produce the C source code for the DAE/ODE solvers that support the MPI interface such as PETSc
and Sundials IDAS/PVODE, including the data partitioning and the routines for the inter-process
communication of data.”
Typographical changes:
- 'log' is listed twice, line 337 and 379
- The definite article 'an' should be used before 'API.' ('an API')
- C++ and Python should be capitalized throughout the manuscript
All suggested changes have been made in the revised manuscript.
Experimental design
In this portion of the review, I have described minor bugs in the software or documentation. None of
these changes drastically affect the conclusions of the work.
- SourceForge currently suggests 1.3.0 as the most recent release, yet 1.4.0 is described on the
website. Since installation is significantly different for the versions, it would be useful to update
SourceForge on the most recent version.
The information on the SourceForge web site has been updated.
- Documentation describing installation of DAE Tools lists dependencies, but should also list
versions (e.g. NumPy >= 1.6.1).
The minimal versions have been added to the Documentation section of the DAE Tools website.
- The Docs (5.2.1) do not include import statements at the top. While it can be inferred, please
include ``from daetools.pyDAE import *''
The section 5.2.1 has been updated with the suggested comment.
- The Docs (5.2.1 & 5.3.1) do not work in Python as-is. Specifically, self.m = myModel('myModel',
'Description') in 5.3.1 is input to the myModel object as myModel(name='myModel',
parent='Description'), thus the Parent is rejected.
The section 5.2.1 and 5.3.1 have been updated with the suggested comments.
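For readers following along, a minimal sketch of the corrected call, assuming only the daeModel constructor signature shown in the manuscript's BufferTank listing (Name, Parent=None, Description=''); the class name myModel is the reviewer's example, so the snippet is illustrative rather than taken from the tutorials:

    from daetools.pyDAE import *

    class myModel(daeModel):
        def __init__(self, Name, Parent=None, Description=''):
            daeModel.__init__(self, Name, Parent, Description)

    # A second positional argument is bound to Parent, not Description:
    # m = myModel('myModel', 'Description')              # 'Description' ends up as Parent and is rejected
    m = myModel('myModel', Description='Description')    # the keyword argument reaches the intended slot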
- Please rename the Listings files (if allowed by the Journal) to replace spaces with underscores to
allow simple execution in Python.
The supplemental listings files have been renamed according to the reviewer's comments.
- When running Listings 1 & 2 from the command line, they work well. However, executing these
files line by line in a Jupyter console throws an error due to the long printout--is there a way to
silence it?
The author had no problems executing scripts in Jupyter (Debian Jessie x86_64, Python 2.7, Jupyter 4.1.0). The long printout of the GPL licence at the beginning of a simulation can be suppressed in the daetools.cfg config file by setting the option daetools.activity.printHeader to false. The config file is located in the /etc/daetools (GNU/Linux, Mac OS X) or c:\daetools (Windows) directory.
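As a small illustration only (a sketch, not daetools documentation), the config file can be located from Python using nothing but the paths stated above; the exact syntax of the printHeader entry inside the file is deliberately not reproduced here:

    import os, platform

    # /etc/daetools on GNU/Linux and Mac OS X, c:\daetools on Windows (see above)
    cfg_dir = 'c:\\daetools' if platform.system() == 'Windows' else '/etc/daetools'
    cfg_path = os.path.join(cfg_dir, 'daetools.cfg')
    with open(cfg_path) as f:
        print(cfg_path, 'mentions printHeader:', 'printHeader' in f.read())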
- The listings of code in the manuscript should match the listings of code in the supplementary
material, for consistency.
The supplemental source code listings have been updated to match those in the manuscript.
Validity of the findings
As the underlying solvers used in this toolkit are well-established, there is no question of their
validity.
The examples used in the manuscript have also been published independently of the software.
Comments for the author
Although not essential for publication, I recommend the author to consider using PEP8
(https://www.python.org/dev/peps/pep-0008/) conventions (especially naming styles) in the future
for consistency with other Python packages.
The reviewer is correct that the naming style does not follow the conventions outlined in PEP8.
This is mostly due to a different naming convention used in C++ (for instance class and function
names). Since the Python names must (in an ideal case) match those in C++ (for the API
consistency), the conventions given in PEP8 cannot be fully followed (e.g. class names with the prefix dae). However, the rest of the code can and will be updated in the next version of the software to satisfy the PEP8 recommendations as far as possible.
Similarly, I would find it useful for this software to be available on GitHub so that users can easily
compare versions, submit bugs, etc.
Switching to GitHub has already been suggested by several users. The relocation process requires
some time/effort, and daetools will most likely be moved to GitHub with the next version of the software.
Reviewer 2 (Anonymous)
All changes made to the manuscript in response to this reviewer's comments are given in green
colour and marked with a superscript R2.
Basic reporting
The manuscript describes DAE Tools, a software tool for equation-based object-oriented modelling,
simulation, and optimization of systems in differential-algebraic form. DAE Tools combines the
strength of DAE modelling languages such as, e.g., Modelica, with the expressiveness and
generality of ordinary programming languages such as Python and C++ by providing a modelling
language API that can be used from Python or C++. After the introduction the paper focuses on the
DAE Tools architecture and the operations that are performed and the objects involved when a simulation is performed. Then a small example (CopperBlock) is used to illustrate the steps involved in modelling and simulation with DAE Tools. Finally, some other DAE Tools applications are
briefly described.
DAE modeling, simulation and optimization tools are an area that is currently attracting a lot of interest. Strong points of DAE Tools are 1) the tight integration with Python/C++ and 2) the fact that it is open source. However, there are other such tools available. JModelica.org is an open source
Modelica tool that is very well integrated with Python. In spite of this I cannot find a single
reference to, or comparison with, this. Another example is 'AP Monitor' which also supports
modeling, simulation, and optimization of DAE systems. Still no references or comparisons.
The level of integration with Python is different in DAE Tools from the mentioned software that
provide a Python scripting interface. Their API is limited to the loading of already developed models,
execution of simulations and processing of the results while in DAE Tools the complete
functionality (including the modelling concepts) is available through the Python API.
A reference to the AP Monitor has been added to the manuscript in the section about the modelling
languages, line 13. The references to the tools that provide Python scripting interface to the
simulator (APMonitor, JModelica, OpenModelica) and the following sentence have been added to
the manuscript, line 25:
“In addition, some modelling tools provide the Python scripting interface to the simulator engine:
APMonitor, JModelica (Akesson et al., 2010), and OpenModelica (Fritzson et al., 2005); however,
their API is limited to the loading of developed models, execution of simulations and processing of
the results only.”.
DAE Tools also appears to lack vital functionality for DAEs. For example, it currently only supports index-1 DAE systems. Competing systems such as JModelica.org support high-index DAEs through index reduction using the Pantelides algorithm and dummy derivatives. Also, it does not seem as if DAE Tools currently supports discrete-time sampled systems. This is problematic since a major use case for DAE tools is simulation of closed control loops where the controllers are implemented
in discrete-time.
The reviewer is correct that higher index DAE systems are very important. However, those systems
are not supported in DAE Tools at the moment and the users have to deal with the numerical issues
such as index reduction and consistent initial conditions.
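As an illustration of the kind of system meant here (the example is added purely for exposition and is not taken from the manuscript), the planar pendulum written in Cartesian coordinates is the classic index-3 DAE that must be index-reduced before an index-1 code can integrate it; in LaTeX notation:

    \dot{x} = u, \quad \dot{y} = v, \quad m\dot{u} = -\lambda x, \quad m\dot{v} = -\lambda y - m g, \quad 0 = x^2 + y^2 - L^2

Differentiating the position constraint twice yields an index-1 formulation; performing such manipulations and supplying consistent initial conditions is exactly what the user currently has to do by hand.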
DAE Tools does not support discrete-time sampled systems. The main applications of DAE Tools are
continuous-time engineering problems which may contain some discrete elements such as
discontinuous equations and discrete events where the state of the system can be changed in discrete
time.
The manuscript is dominated by large figures containing UML class diagrams and UML sequence
diagrams. I doubt that this is really useful for the readers. The same holds for the code examples.
They are nice, but why duplicate them in both Python and C++? For a scientific paper like this it would be enough to show the examples in one language, e.g., Python, and then simply say that one alternatively could use C++ to express the same thing. It would have been better if this page space had
been spent on better describing the functionality provided by the modeling language API.
The first seven UML diagrams have been moved to the supplemental material. The remaining
diagrams are kept as per reviewer #1 request. References to the figures 1 to 7 have been updated
and refer to the supplemental figures S1 to S7 now.
All C++ source code listings have been moved to the supplemental material. The end of the
sentence, line 445:
“and the model developed in c++ in the source code listing 5.”
has been replaced with into:
“and the same model developed in C++ in the supplemental source code listing S2”.
Regarding the modelling API, the most important basic capabilities have been demonstrated in this
article. Including more functionality is beyond the limits of a single article and the readers are
directed to the DAE Tools website where the user guide, the API reference and the tutorials are
described in detail in the Documentation section. The following sentence has been added to the
manuscript, line 446:
“More information about the API, the user guide and tutorials can be found in the Documentation
section of the DAE Tools website (http://www.daetools.com/docs/index.html), subsections pyDAE
User Guide, pyDAE API Reference, and Tutorials, respectively.”
The final 'Applications' section is too short to really explain the applications.
Both examples in the Applications section are described in detail in the corresponding Nature Materials publication and the NineML website. The author believes that general information about these two projects is sufficient and that there is no need to duplicate the implementation details in
this work.
Experimental design
No Comments
Validity of the findings
No Comments
Comments for the author
Please focus your paper on 1) what is unique with DAE Tools from a DAE modeling point of view
and 2) compare with alternative tools such as JModelica.org and AP Monitor. Focus less on the
architecture.
1) The section “Main Features and Programming Paradigms” contains the description of the unique
features of DAE Tools: the hybrid approach and the programming paradigms (equation-based
approach implemented in a fourth-generation object-oriented general purpose language such as
Python) have been presented. In addition, the type of problems that can be solved and the most
important capabilities have been described in detail.
2) References to JModelica and APMonitor, together with brief information about them, have been added to the manuscript, lines 13 and 26. The Architecture section has been substantially shortened based on the suggestions from reviewers #1 and #2.
Reviewer 3 (Anonymous)
All changes made to the manuscript in response to this reviewer's comments are given in magenta
colour and marked with a superscript R3.
Basic reporting
This manuscript describes the architecture, design philosophy, and structure of DAE Tools, a
package for modeling and solving simulation and optimization problems. DAE Tools combines
strengths from modeling languages and general-purpose programming languages to allow runtime
model generation and simulation set-up, complex runtime operating procedures (for hybrid
discontinuous systems), model exchange and co-simulation, etc.
The project itself seems quite interesting and the manuscript is mostly well written and organized.
As such I recommend the submission to be accepted, pending some minor revisions, as listed below.
See 'General Comments for the Author'
Experimental design
See 'General Comments for the Author'
Validity of the findings
See 'General Comments for the Author'
Comments for the author
Main comments/questions
1. My main concerns relate to the possible loss of flexibility in modeling and simulation when
compared, for example, with user-defined models simulated with libraries such as PETSc or
Sundials. The tools provided by such libraries provide extensive user control which does not seem
to have been completely exposed in DAE Tools. Using the IDAS solver in Sundials as an example
(since this appears to be the workhorse in DAE Tools), I hope that the author can provide
clarifications on the following comments:
In a typical simulation using for instance the Sundials IDA software, the routines for evaluation of
equation residuals, root functions, sensitivity residuals, and the Jacobian matrix (if using a direct
linear equation solver) or the preconditioner matrix (if using an iterative linear equation solver)
need to be provided to the DAE solver. In addition, the functions that set the initial conditions and
control the main simulation loop (the time-stepping, that is the “operating procedure”) have to be
developed. The user has full control over the above-mentioned functions (where the model structure is defined and the system advanced in time), while the rest of the code is (more or less) common for all simulations. These routines need to be developed from scratch, the derivatives for the Jacobian matrix or for the sensitivity residuals must be evaluated manually, and discontinuities must also be handled manually, making the whole process very difficult and error-prone.
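To make the contrast concrete, here is a toy sketch (not daetools or Sundials code) of the kind of hand-written residual routine described above, using the buffer-tank equations from the manuscript as the example system; the parameter values are invented for illustration:

    def residuals(t, y, yp, FlowIn=1.0, Area=1.0, Density=1000.0, Alpha=0.5):
        # y = [HoldUp, Height, FlowOut], yp = dy/dt; returns F(t, y, y') for the DAE F = 0
        HoldUp, Height, FlowOut = y
        return [yp[0] - FlowIn + FlowOut,           # mass balance
                HoldUp - Area * Height * Density,   # holdup/liquid-level relation
                FlowOut - Alpha * Height ** 0.5]    # outlet flowrate

Every such routine (residuals, Jacobian, root functions) has to be written and kept consistent by hand, which is the point being made here.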
On the other hand, similarly to the modelling languages, DAE Tools software offers the key
modelling concepts through its API (i.e. continuous and discontinuous equations, state transition
networks, discrete events, units consistency etc.). In addition, the general purpose object-oriented
programming languages (C++ and Python) facilitate the model development process, creation of
complex models and model hierarchies, simulation set-up and execution of simulations. The
developed model hierarchy is internally transformed into a system of differential-algebraic
equations and the routines for evaluation of equation residuals, an analytical Jacobian matrix (using
AD), root functions and sensitivity residuals are provided to the Sundials IDAS DAE solver. Initial
conditions and the simulation loop are provided in the simulation class. Basically, all functions that
are controlled by users in Sundials IDA are fully user-controlled in DAE Tools too (at a slightly
higher level of abstraction, though). The only exception is the preconditioner function (used with
the iterative linear solvers) where the user can only select the type of the preconditioner provided by
one of the following packages: Trilinos AztecOO, IFPACK, or ML.
Therefore, the loss of flexibility is minimal.
1.1 From the UML diagram in Fig. 10, it seems that the only user-defined functions relate to
the definition of problem parameters, variables, and equations. However, IDAS itself
provides more flexibility, e.g. specifying Jacobian-related information. Does DAE Tools
expose this capability? Or does it always rely on AD-generated Jacobians? There are
situations when an exact Jacobian may be overkill and a user-supplied Jacobian
approximation (especially for large, multi-scale problems) leads to much more efficient
solutions.
DAE Tools always uses the analytically generated Jacobian matrix when the direct sparse
linear solvers are used. On the other hand, the iterative linear solvers from Sundials or
Trilinos AztecOO packages can be optionally used which do not require the full Jacobian
matrix but utilise a preconditioner that uses some information from the Jacobian matrix.
However, the support for iterative linear solvers is not fully complete at the moment and the
user-defined Jacobian approximations are not supported. Theoretically, the implementation
of this feature is rather straightforward: it is just an additional call-back function; however,
the modeller must construct it manually which is difficult and error-prone for large, complex
models.
Since the idea of the user-supplied Jacobian is interesting and easy to implement, this feature
can be a part of the future work on the software.
To clarify the details about the way the Jacobian matrix is evaluated, a new paragraph has
been added to the manuscript, line 267:
“The automatic differentiation is always utilised to analytically generate the Jacobian
matrix if the direct sparse linear solvers are used, or to generate a preconditioner matrix for
the iterative linear solvers using the software suites such as Trilinos AztecOO, IFPACK, and
ML.”
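For reference, the matrix being generated here is the standard iteration matrix of the DAE system F(t, y, y') = 0 used by Sundials IDA (this is general solver background, not a quotation from the manuscript); in LaTeX notation:

    J = \frac{\partial F}{\partial y} + c_j \frac{\partial F}{\partial \dot{y}}

where c_j is a scalar supplied by the BDF integrator, and automatic differentiation provides both partial-derivative terms analytically.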
1.2 The statement on line 186 seems to suggest that DAE Tools also exposes the sensitivity
analysis capabilities of IDAS. However, no further mention is made of this. If calculating
sensitivities is supported, how are the sensitivity equations generated? Is it also AD-based?
Also, what are the limitations in terms of problems that can be addressed (given that IDAS
cannot currently perform sensitivity analysis of hybrid discontinuous systems)?
Yes, the software is capable of calculating the sensitivities with respect to given parameters, and it is
AD-based. It uses the forward sensitivity method available in IDAS to integrate the forward
sensitivity equations. This leads to the DAE system of size N·(Ns+1), where Ns is the number
of model parameters with respect to which sensitivities are requested. Regarding the
limitations, the type of problems that can be solved are continuous systems with
discontinuous equations/state transitions and discrete events. The discrete-time sampled
systems are not supported.
To clarify the details about the sensitivity calculations, the following sentence has been
added to the manuscript, line 269:
“The automatic differentiation is also applied to sensitivity analysis where it is used to
calculate derivatives of model equations per parameters with respect to which sensitivities
are requested. Only the continuous-time systems are supported and the forward sensitivity
method provided by the Sundials IDAS solver is available at the moment. The forward
sensitivity equations are integrated together with the original DAE system leading to the
DAE system of size N(Ns+1), where N is the size of the original DAE system and Ns is the
number of model parameters. More information about the sensitivity analysis using the
forward sensitivity method can be found in the Sundials documentation.”
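For completeness (this is the standard forward sensitivity formulation used by IDAS, not text from the manuscript), the extra equations being integrated are, for each parameter p_i and with s_i = \partial y / \partial p_i, in LaTeX notation:

    \frac{\partial F}{\partial y}\, s_i + \frac{\partial F}{\partial \dot{y}}\, \dot{s}_i + \frac{\partial F}{\partial p_i} = 0

so each of the Ns parameters adds N equations to the original N, which is where the system size N(Ns+1) quoted above comes from.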
2. There is no mention of support for parallel computation (either distributed or shared-memory), a
main area of applications for both PETSc and Sundials, which were designed with large-scale
problems in mind. Are there any plans on developing DAE Tools along those lines?
DAE Tools currently supports only shared-memory parallel computation. The only parallel code
section in DAE Tools at the moment is the linear equations solver, since its repeated solution
typically requires around 90-95% of the total simulation time. This is achieved by using one of the
available multi-threaded sparse linear solvers (SuperLU_MT, Pardiso and Intel Pardiso).
Additionally, the parallel calculation of residuals and derivatives for the Jacobian matrix can be
added in the future (although only a very modest increase in performance can be achieved this
way).
Since the support for parallel computation is only briefly mentioned in line 188: “... a range of
direct/iterative and sequential/multi-threaded sparse matrix linear solvers is interfaced ...”, a new
paragraph has been added at the end of the section “Main features and programming paradigms”,
line 285:
“Parallel computation is supported using only the shared-memory parallel programming model at
the moment. Since a repeated solution of the system of linear equations typically requires around
90-95% of the total simulation time, the linear equations solver represents the major bottleneck in
the simulation. Therefore, the main focus was put on performance improvement of the solution of
linear equations using one of the available multi-threaded solvers such as SuperLU_MT, Pardiso
and Intel Pardiso.”
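As a back-of-the-envelope illustration (the thread count and the assumption of ideal scaling are assumed here, not measured), Amdahl's law shows why parallelising only the linear solver already captures most of the attainable speedup:

    # a parallel fraction of 0.90-0.95 corresponds to the 90-95% figure quoted above
    for parallel_fraction in (0.90, 0.95):
        threads = 4  # assumed
        speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / threads)
        print(parallel_fraction, threads, round(speedup, 2))  # prints ~3.08 and ~3.48

This also explains why parallelising the residual and Jacobian evaluations on top of this can give only a modest additional gain.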
3. The accuracy and efficiency comparisons between DAE Tools and Matlab's ode15s (lines 728-736) are questionable. First, it is mentioned that the two integrators were using 'default solver
tolerances for both' and that the 'simulation outputs were indistinguishable'. Since both ode15s
and IDAS allow user specified tolerances, why not set them equal to each other and then perform a
more quantitative comparison of the achieved accuracy? Furthermore, there are many other
possible reasons for efficiency differences between the two beyond those listed in the manuscript
(algorithmic differences and language differences being two of them)
A fair comparison between the two software packages is always a difficult task. However, the purpose of this
comparison is to point out the benefits of the object-oriented DAE Tools API and the automatic
differentiation capabilities it provides (not the efficiency per se). The main two points are: (a) the
manual calculation of analytical derivatives is very difficult and prone to errors, while the
modelling languages (and DAE Tools) take care of them automatically, and (b) it is possible to
obtain a solution without supplying the derivatives to the solver – in that case the solver creates
their numerical approximation; however, this leads to the slower convergence rate and the
significant loss of performance.
In this case, the large performance differences are definitely due to the way the Jacobian matrix is
constructed which is pointed out in the manuscript, line 733: “This speedup is likely a result of its
built-in support for automatic differentiation facilitating rapid and accurate derivative evaluation
for solution of the highly non-linear system of equations involved in time stepping”. Since the
ode15s solver in Matlab constructs a numerical approximation of the Jacobian matrix if it is not
provided as an input, the convergence rate is much slower. The comparison would be fairer if the Jacobian function were provided to the ode15s solver. However, calculating derivatives by hand for
all functions in the system is very difficult and error-prone for a very large and complex model
described in this case study. Thus, for practical reasons, it has not been done. The default tolerances
are 10−5 in DAE Tools and 10−3 in Matlab. Setting the tolerances equal would not produce a fair comparison, again due to the absence of the analytical Jacobian matrix in the Matlab version (this is also the reason for the much higher default tolerance in Matlab: 10−3; the ode15s solver would most likely fail to converge for very tight tolerances and the highly non-linear model equations, or the convergence rate would be unacceptably low, at least).
To further clarify the performance loss in Matlab due to the missing Jacobian function, the text
“(10−3 in Matlab and 10−5 in DAE Tools)” has been added after “Using default solver tolerances for
both” (line 661), the word “likely” has been removed (line 665), and the following sentences have
been added to the manuscript (line 667):
“In contrast, the ode15s solver creates a numerical approximation of the Jacobian matrix if the
Jacobian calculation function is not provided as an input; therefore, the convergence rate is much
slower. The significant loss in performance illustrates the benefits of the object-oriented DAE Tools
API and the automatic differentiation capabilities it provides, since the calculation of derivatives by
hand for all functions in the Matlab model is very difficult and error-prone for a system of this size
and complexity.”
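As a tiny, self-contained illustration of the accuracy half of this argument (a single scalar function stands in for a full Jacobian, purely for exposition):

    import math
    x, h = 2.0, 1e-6
    analytic = math.cos(x)                          # exact derivative of sin(x)
    approx = (math.sin(x + h) - math.sin(x)) / h    # one-sided finite difference
    print(analytic, approx, abs(analytic - approx)) # error on the order of h

A numerically approximated Jacobian carries this kind of truncation error in every entry, in addition to the extra residual evaluations needed to build it.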
Minor comments
1. The very first classification of mathematical modeling (lines 10-19) differentiates between
modeling languages (Modelica, gPROMS, etc.) on one hand and what I view more as solver
libraries (PETSc and Sundials) on the other hand. Indeed, neither PETSc nor Sundials really offer
proper modeling support (a view that seems to also be acknowledged by the author later on, see
lines 70-71). In this case, is this classification really appropriate?
In the author's opinion, the following two approaches exist (as given in the manuscript): (1) use of
modelling or domain-specific languages, and (2) use of general-purpose programming languages. In
the first approach, both the modelling concepts and the simulation environment are provided. In the
second one, the users can: (a) use one of the available libraries that provide interface to various
solvers (such as PETSc, Sundials, Assimulo etc.), (b) use the multi-paradigm numerical languages
such as Matlab, Mathematica, etc., (c) develop everything from scratch without any solver library.
Although the development of the whole simulation from scratch is the most difficult route, it has
some advantages: an efficient, tailor-made code can be developed that exploits the specific model
structure and runs on a specific high-performance computing architecture such as GPGPU, FPGA,
vector processors and data flow engines. For instance, very efficient 3D finite difference solvers can
be developed to utilise the data flow engines and the OpenSPL language (MaxGenFD,
https://www.maxeler.com/technology/computation). Therefore, in the author's opinion, the use of
solver libraries is just one of the options when using the general-purpose programming languages.
To further clarify the available options with the general purpose programming languages, the
following sentences have been added to the manuscript (line 21):
“The lower-level general purpose languages are also often used for the development of the efficient,
tailor-made software (i.e. large-scale finite difference and finite element solvers) targeting one of
the available high-performance computing architectures such as general purpose graphics
processing units (GPGPU), field-programmable gate arrays (FPGA), vector processors and data
flow engines (DFE).”
2. Is the author aware of the Assimulo project? If so, what (if anything) is common between these
two efforts of providing simpler interfaces to solvers such as those in Sundials?
Both software packages provide a high-level interface to various ODE/DAE solvers. However, the main difference between DAE Tools and Assimulo is that DAE Tools also provides the fundamental
modelling concepts (similar to modelling languages) that can be used to build models or model
hierarchies. The information that a modeller provides to Assimulo is low-level and procedural, for
instance a function that calculates residuals. On the other hand, the high-level object-oriented
modelling concepts are utilised in DAE Tools to construct models.
A reference and a short description of the Assimulo have been added to the manuscript, line 17:
“(i.e. Assimulo - a high-level interface for a wide variety of ODE/DAE solvers written in C and
Fortran, Andersson et al., 2015)”.
3. The term 'degrees of freedom' (lines 242-244) is used here in a rather unconventional sense.
While one can better infer what their meaning is herein (from the description of the CopperBlock
simulation), it may be useful to clarify that early on, when the term is first introduced (or else
maybe come up with a better name).
The term “degree of freedom” of some system, as used in this work, is also used in several scientific
fields and represents “a parameter of that system that can vary independently”.
To clarify what is meant by the term “degree of freedom” in this work, the short definition has been
added to the manuscript when the term is first introduced, line 253:
“(the system variables that may vary independently)”.
4. Since DAE Tools relies on a 'heavily modified ADOL-C', what (if any) are the plans on staying
up-to-date and incorporate possible new releases of ADOL-C?
DAE Tools utilises just a very small subset of the ADOL-C functionality: only the operator overloading
part. Since the derivative expressions for the basic mathematical operations and functions are
precisely defined and therefore fixed, there is no need for incorporation of new releases. The
mentioned modifications are related to the results of mathematical operations on ADOL-C adouble
objects: apart from evaluation of derivatives, the modified code creates the equation evaluation trees
which can be used for code generation, generation of LaTeX and MathML expressions, etc.
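A toy sketch of the operator-overloading idea described above (illustrative Python, not the actual daetools/ADOL-C code): arithmetic on wrapped objects records an expression tree instead of a number, and that tree can later be evaluated, differentiated or exported.

    class Node:
        def __init__(self, op, a=None, b=None):
            self.op, self.a, self.b = op, a, b
        def __add__(self, other): return Node('+', self, other)
        def __mul__(self, other): return Node('*', self, other)
        def __repr__(self):
            return self.op if self.a is None else '(%r %s %r)' % (self.a, self.op, self.b)

    x, y = Node('x'), Node('y')
    print(x * y + x)   # ((x * y) + x): the recorded tree, analogous to an equation evaluation tree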
Language, grammar, manuscript structure and organization
1. While the manuscript reads pretty well, there are places that sound as if they were written by
someone else! The Abstract and Conclusions sections, as well as the last paragraph in the
Introduction, stand out in this respect. In particular: using past tense in the Abstract and at the end
of the Introduction section (before anything has really been presented); very repetitive statements
made in the Conclusions section.
The past tense in the Abstract (line 7) and Introduction (lines 99-106) sections have been replaced
with the present tense.
The repetitive statements have been removed from the Conclusions, and the whole section
rewritten:
“DAE Tools modelling, simulation and optimisation software, its programming paradigms, the main
features and capabilities have been presented in this work. Some shortcomings of the current
approaches to mathematical modelling have been recognised and analysed, and a new hybrid
approach proposed. The hybrid approach offers some of the key advantages of modelling languages
paired with the flexibility of the general purpose languages. Its benefits have been discussed such
as the support for the runtime model generation, runtime simulation set-up and complex runtime
operating procedures, interoperability with the third party software packages, and embedding and
code-generation capabilities. The software architecture and the procedure for transformation of the
model hierarchy into a DAE system as well as the algorithm for the solution of the DAE system
have been presented. The most important modelling concepts available in the DAE Tools API
required for model development and simulation execution have been outlined.
The software has successfully been applied to two different scientific problems. In the first example,
the authors took advantage of the object-oriented characteristics of the software and the
interoperability with the NumPy library for the development of a model hierarchy to
mathematically describe operation of lithium-ion batteries at different physical scales. In the
second example, the DAE Tools software has been used as a reference implementation simulator for
the new XML-based domain specific language (NineML). DAE Tools embedding capabilities have
been utilised to provide a simulator available in three versions: (a) desktop application, (b) web
application and, (c) web service.”
2. There are numerous missing or incorrectly used definite and indefinite articles throughout the
manuscript.
Some of the missing/incorrectly used articles have been added/corrected (lines 41, 42, 50, 55, 72,
86, 87, 100, 197, 236, 238, 260, 261, 445).
3. The description of the various interfaces in the six DAE Tools packages (in the Architecture
section) is very difficult to parse (the one for the 'core' package is effectively a 20-line long
sentence). These may be better presented in tables or else bulleted lists.
The description of the key modelling interfaces in the package “core” are now presented in a table
(Table 1).
4. 'linear' is misspelled on line 304
The spelling has been corrected.
" | Here is a paper. Please give your review comments after reading it. |
102 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>This paper proposes a computational framework for automated, landmark-free hypothesis testing of 2D contour shapes (i.e., shape outlines), and implements one realization of that framework. The proposed framework consists of point set registration, point correspondence determination, and parametric full-shape hypothesis testing. The results are calculated quickly (<2 s), yield morphologically rich detail in an easy-to-understand visualization, and are complemented by parametrically (or nonparametrically) calculated probability values. These probability values represent the likelihood that, in the absence of a true shape effect, smooth, random Gaussian shape changes would yield an effect as large as the observed one. This proposed framework nevertheless possesses a number of limitations, including sensitivity to algorithm parameters. As a number of algorithms and algorithm parameters could be substituted at each stage in the proposed data processing chain, sensitivity analysis would be necessary for robust statistical conclusions. In this paper, the proposed technique is applied to nine public datasets using a two-sample design, and an ANCOVA design is then applied to a synthetic dataset to demonstrate how the proposed method generalizes to the family of classical hypothesis tests. Extension to the analysis of 3D shapes is discussed.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The statistical analysis of shape variation is relevant to a wide variety of academic fields including: evolutionary biology <ns0:ref type='bibr' target='#b44'>(Mitteroecker and Gunz, 2009)</ns0:ref>, biomechanics <ns0:ref type='bibr'>(Pedoia et al., 2017), computer vision (Murphy-Chutorian and</ns0:ref><ns0:ref type='bibr' target='#b48'>Trivedi, 2008)</ns0:ref>, and many others <ns0:ref type='bibr' target='#b22'>(Da Costa and Cesar, 2000;</ns0:ref><ns0:ref type='bibr' target='#b66'>Rohlf and Marcus, 1993;</ns0:ref><ns0:ref type='bibr' target='#b2'>Adams et al., 2004</ns0:ref><ns0:ref type='bibr'>Adams et al., , 2013))</ns0:ref>. A key methodological framework for the statistical analysis of shape to have emerged in the literature is Geometric Morphometrics <ns0:ref type='bibr' target='#b20'>(Corti, 1993;</ns0:ref><ns0:ref type='bibr' target='#b11'>Bookstein, 1996;</ns0:ref><ns0:ref type='bibr' target='#b70'>Slice, 2007;</ns0:ref><ns0:ref type='bibr' target='#b79'>Zelditch et al., 2012)</ns0:ref>. Geometric Morphometrics consists of a variety of statistical techniques, ranging from classical hypothesis testing (e.g. <ns0:ref type='bibr' target='#b30'>Goodall, 1991)</ns0:ref> and classical dimensionality reduction techniques like principal component analysis <ns0:ref type='bibr' target='#b2'>(Adams et al., 2004)</ns0:ref> to machine learning techniques like unsupervised clustering <ns0:ref type='bibr' target='#b60'>(Renaud et al., 2005)</ns0:ref>. This paper is concerned primarily with classical hypothesis testing as it pertains to shape analysis.</ns0:p><ns0:p>A common geometric morphometric approach to classical hypothesis testing regarding group differences (depicted in Fig. <ns0:ref type='figure'>1a</ns0:ref>), consists of: (1) landmark definition, (2) spatial registration, and (3) Procrustes ANOVA <ns0:ref type='bibr' target='#b30'>(Goodall, 1991)</ns0:ref>. Landmark definition refers to the manual identification and digitizing (i.e., XYZ coordinate specification) of homologous points on multiple objects, for example the corners on polyhedra. Spatial registration refers to the optimal, non-shearing affine alignment of a set of landmarks; that is, the optimal translation, rotation and scaling of each set of landmarks is calculated so that the PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:1:1:NEW 18 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science the landmarks are optimally aligned in space. Procrustes ANOVA is effectively equivalent to classical ANOVA, where Procrustes distance is the dependent variable <ns0:ref type='bibr' target='#b79'>(Zelditch et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Landmarks with evolutionary, developmental or functional homology are essential for accurate interpretation of results <ns0:ref type='bibr' target='#b32'>(Hallgrimsson et al., 2015)</ns0:ref>, especially for biological studies which seek to understand morphological variation in the context of evolution (e.g. 
<ns0:ref type='bibr' target='#b71'>Stayton, 2005;</ns0:ref><ns0:ref type='bibr' target='#b46'>Morgan, 2009;</ns0:ref><ns0:ref type='bibr' target='#b15'>Casanovas-Vilar and Van Dam, 2013;</ns0:ref><ns0:ref type='bibr' target='#b25'>Dumont et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b52'>Page and Cooper, 2017)</ns0:ref>, ontogeny (e.g. <ns0:ref type='bibr' target='#b41'>Klingenberg and McIntyre, 1998;</ns0:ref><ns0:ref type='bibr' target='#b45'>Mitteroecker et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b69'>Singleton, 2015)</ns0:ref> or function (e.g. <ns0:ref type='bibr' target='#b74'>Terhune et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b75'>Toro-Ibacache et al., 2016)</ns0:ref>. A key practical advantage of landmark approaches is that they impose problem tractability; they convert abstract, usually high-dimensional shape representations including images, scans and line contours, to a relatively small set of numeric coordinates which can be assembled into readily processable data formats like text files and spreadsheets. This practical advantage is reinforced by well-established statistical theory (e.g. <ns0:ref type='bibr' target='#b31'>Gower, 1975;</ns0:ref><ns0:ref type='bibr' target='#b36'>Kendall, 1977,</ns0:ref><ns0:ref type='bibr' target='#b37'>1984,</ns0:ref><ns0:ref type='bibr' target='#b38'>1985;</ns0:ref><ns0:ref type='bibr' target='#b40'>Kent, 1994;</ns0:ref><ns0:ref type='bibr' target='#b62'>Rohlf, 1999)</ns0:ref> which describes a comprehensive solution for dealing with shape data's inherent dimensionality problem <ns0:ref type='bibr'>(Rohlf, 2000b,a;</ns0:ref><ns0:ref type='bibr' target='#b19'>Collyer et al., 2015)</ns0:ref>.</ns0:p><ns0:p>A common approach to landmark-based hypothesis testing is Procrustes ANOVA. While landmark data themselves are multivariate (i.e., multiple landmarks, each with multiple coordinates, are used to describe a single shape), Procrustes ANOVA uses a univariate metric (Procrustes distance) to test shape-relevant hypotheses. One problem with this approach is that a single value is likely inadequate to fully characterize shape effects. Many other shape descriptors exist <ns0:ref type='bibr' target='#b42'>(Kurnianggoro et al., 2018)</ns0:ref>, including both univariate metrics like eccentricity and multivariate metrics like geometric moments <ns0:ref type='bibr' target='#b80'>(Zhang and Lu, 2004)</ns0:ref>.</ns0:p><ns0:p>It has been argued that focus on relatively low-dimensional shape metrics like these is necessary in order to achieve suitable statistical power, with the assumption that too many variables relative to the number of phenotypes can preclude hypothesis testing via parametric methods, especially for small samples <ns0:ref type='bibr' target='#b19'>(Collyer et al., 2015)</ns0:ref>; one aim of this paper is to challenge that assertion, and to show that hypothesis testing is indeed possible for even high-dimensional representations of shape, and with suitably high statistical power for even relatively small sample sizes.</ns0:p><ns0:p>A related sample size-relevant theoretical limitation of Procrustes ANOVA is that there is no known parametric solution to the underlying Procrustes distance probability distributions. Consequently, statistical inference is conducted nonparametrically, often using bootstrapping or permutation techniques <ns0:ref type='bibr'>(Zelditch et al., 2012, pp.248-259)</ns0:ref>. 
These nonparametric procedures are inherently poor for small sample sizes <ns0:ref type='bibr' target='#b6'>(Anderson and Braak, 2003;</ns0:ref><ns0:ref type='bibr' target='#b13'>Brombin and Salmaso, 2009)</ns0:ref> because the probability distributions are constructed empirically and numerically, using the actual data, and both the precision and accuracy of these nonparametrically constructed distributions can decrease substantially with small sample sizes.</ns0:p><ns0:p>A variety of landmark-free or landmark-minimal methods also exist, including for example techniques that fit mathematical curves to shape outlines <ns0:ref type='bibr' target='#b61'>(Rohlf, 1990)</ns0:ref>. One technique that has been particularly widely used is elliptical Fourier analysis <ns0:ref type='bibr' target='#b18'>(Claude, 2013;</ns0:ref><ns0:ref type='bibr' target='#b9'>Bonhomme et al., 2014)</ns0:ref>, which considers the spatial relations amongst neighboring points, and characterizes the spatial frequencies along the contour perimeter as a change-relevant representation of shape. Elliptical Fourier analysis has been frequently employed to analyse structures on which few homologous landmarks can be identified, such as fins, jaws and teeth (e.g. <ns0:ref type='bibr' target='#b28'>Fu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Hill et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Cullen and Marshall, 2019)</ns0:ref>. These methods are highly relevant to the methods described in this paper, in that they deal with original, high-dimensional shape data like 2D contours and 3D surface scans.</ns0:p><ns0:p>While landmark-free or landmark-minimal methods initially operate on original high-dimensional shape data, they tend to use much lower-dimensional representations of shape when conducting classical hypothesis testing. For example, elliptical Fourier analysis tends to conduct hypothesis testing using a relatively small number (fewer than ten) of harmonic coefficients <ns0:ref type='bibr' target='#b9'>(Bonhomme et al., 2014)</ns0:ref>. Common landmark and landmark-free methods are thus similar from a hypothesis testing perspective in that the hypothesis tests ultimately pertain to relatively low-dimensional shape metrics.</ns0:p><ns0:p>The main aim of this paper was to show that classical hypothesis testing is possible on original, high-dimensional shape data, and in particular on continuous surfaces, without the need for low-dimensional shape representations, and with suitably high power even for analyses of relatively small samples. The methodology, which we refer to as 'continuous, mass-multivariate analysis', consists of a number of previously described techniques including: (1) point set registration, (2) correspondence, and (3) mass-multivariate hypothesis testing. This combination of techniques allows one to conduct landmark-free hypothesis testing on original surface shapes. For interpretive convenience we limit our focus to 2D contours.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>Analyses were conducted in Python 3.6.10 (van Rossum, 2019) using <ns0:ref type='bibr'>Anaconda 3.6.10 (Anaconda, 2020)</ns0:ref> and in R 3.6.2 (R Core Team, 2019). Data processing scripts are available along with all original and processed data in this project's public repository at: https://github.com/0todd0000/lmfree2d.</ns0:p></ns0:div>
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>Nine datasets were analyzed (Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>). All datasets were taken from the open-source 2D Shape Structure database <ns0:ref type='bibr' target='#b14'>(Carlier et al., 2016)</ns0:ref> (http://2dshapesstructure.github.io). The database consists of 70 different shape classes. Inclusion criteria for shape class were: (i) qualitatively similar geometry in at least 10 shapes (Fig. <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>), and (ii) at least four readily identifiable landmarks for all contour shapes.</ns0:p><ns0:p>Each dataset consisted of 20 contour shapes, where a 'dataset' represents a shape class (e.g., 'Bell' or 'Face') and individual shapes represent morphological variation within that shape class. We selected just 10 shapes to analyze from each dataset, primarily because it has been suggested that parametric procedures are unsuitable for the morphological analyses of small samples <ns0:ref type='bibr' target='#b19'>(Collyer et al., 2015)</ns0:ref>, and we wished to demonstrate that the proposed parametric technique is indeed sufficiently powerful for small-sample analyses. Secondary reasons for considering just 10 shapes included: (1) qualitatively different within-class geometry, implying that statistical comparisons would be dubious if all 20 shapes were used, (2) inconsistent curvature characteristics (e.g., some with sharp corners, others with no discernible corners), implying landmarking difficulties, and (3) invalid contour data (e.g., internal loops and thus non-convex polygons), implying that contour parameterization was not possible for all shapes. We assigned five shapes to each of two groups in a pseudo-random manner, to span a range of effect sizes and p values (see Results). For replicability, the final set of 10 shapes selected for analysis from each class is redistributed in this project's repository at: https://github.com/0todd0000/lmfree2d.</ns0:p><ns0:p>We manually selected ten shapes from each dataset in order to span a range of effect sizes; in the Results, note that p values span a wide range (p < 0.001 to p > 0.9). Note also that the ten selected contours had a variable number of contour points within each dataset (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>).</ns0:p><ns0:p>Each dataset was separated into two groups, where the first and second groups consisted of the first five and last five shapes, respectively. Two-sample tests were conducted on each dataset using the four approaches as described below. </ns0:p></ns0:div>
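The following is a minimal, illustrative sketch of how the two-group setup described above could be expressed in Python. It is not code from the lmfree2d repository; the file names and CSV layout are assumptions made purely for illustration.

```python
# Hypothetical loading of ten contours for one shape class (e.g. 'Bell') and the
# first-five / last-five group split described above. File names and the CSV
# layout (one x,y pair per row) are assumptions, not the repository's actual format.
import numpy as np

def load_contours(paths):
    """Read each contour as an (n_points, 2) array of XY coordinates."""
    return [np.loadtxt(p, delimiter=',') for p in paths]

paths    = [f'Bell_{i:02d}.csv' for i in range(10)]    # hypothetical file names
contours = load_contours(paths)
group_a, group_b = contours[:5], contours[5:]          # first five vs. last five shapes
```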
<ns0:div><ns0:head>Data processing</ns0:head><ns0:p>The 2D contour shape data were analyzed using four related approaches, consisting of the four combinations of (i) landmarks vs. contours, and (ii) univariate (UV) vs. mass-multivariate (mass-MV). These four approaches are summarized in Fig. <ns0:ref type='figure'>1</ns0:ref>. The Landmarks-UV approach (Fig. <ns0:ref type='figure'>1a</ns0:ref>) is common in the literature; none of the other approaches is common. The primary purpose of this study was to compare and contrast the Landmarks-UV and Contours-MassMV approaches (Fig. <ns0:ref type='figure'>1a,d</ns0:ref>). We also employed intermediary approaches (Fig. <ns0:ref type='figure'>1b,c</ns0:ref>) to more clearly highlight the differences between the two main approaches.</ns0:p></ns0:div>
<ns0:div><ns0:head>Landmarks univariate (UV) analysis</ns0:head><ns0:p>Landmarks were defined for each dataset as depicted in Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. Both the number of landmarks (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>) and their locations were selected in an ad hoc manner, with the qualitative requirement of readily identifiable, homologous locations. The ultimately selected landmarks arguably span a representative range of landmarking possibilities. One operator used a mouse to manually digitize the landmarks for each of the 90 shapes (10 shapes for each of 9 datasets). The operator was ignorant of the final shape groupings for the ultimate two-sample tests (see below), implying that the landmarking was performed without grouping bias.</ns0:p><ns0:p>The landmarks were spatially registered using Generalized Procrustes Analysis (GPA) <ns0:ref type='bibr' target='#b31'>(Gower, 1975)</ns0:ref>, and the resulting registered landmarks were analyzed in a univariate manner, using Procrustes ANOVA <ns0:ref type='bibr' target='#b30'>(Goodall, 1991)</ns0:ref>, a method which considers the variance in the Procrustes distance across a dataset. Note that the Procrustes distance is a scalar quantity that summarizes shape difference, and thus that this method is univariate. GPA and Procrustes ANOVA were both conducted using the geomorph package for R <ns0:ref type='bibr' target='#b1'>(Adams and Otárola-Castillo, 2013)</ns0:ref>.</ns0:p></ns0:div>
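For readers who prefer a concrete view of these two steps, the following NumPy sketch outlines GPA and a Procrustes-distance summary. It is an illustrative re-implementation under simplifying assumptions (no reflection handling, distances measured to the mean shape), not the geomorph code used in the paper.

```python
# Minimal NumPy sketch of Generalized Procrustes Analysis (GPA) for corresponding
# landmark sets, plus a Procrustes-distance summary. Illustrative only; the paper
# used the geomorph package for R, and this sketch ignores possible reflections.
import numpy as np

def gpa(X, n_iter=10):
    """X : (n_shapes, n_landmarks, 2) array of corresponding landmarks."""
    Y = np.asarray(X, dtype=float)
    Y -= Y.mean(axis=1, keepdims=True)                    # remove translation
    Y /= np.linalg.norm(Y, axis=(1, 2), keepdims=True)    # scale to unit centroid size
    mean = Y[0].copy()
    for _ in range(n_iter):
        for i, y in enumerate(Y):                         # optimal rotation to current mean
            U, _, Vt = np.linalg.svd(y.T @ mean)
            Y[i] = y @ U @ Vt
        mean = Y.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return Y, mean

def procrustes_distances(Y, mean):
    """Procrustes distance of each registered shape from the mean shape."""
    return np.sqrt(((Y - mean) ** 2).sum(axis=(1, 2)))
```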
<ns0:div><ns0:head>Landmarks mass-multivariate (mass-MV) analysis</ns0:head><ns0:p>This approach was identical to the Landmarks-UV approach described above, except for statistical analysis.</ns0:p><ns0:p>The two-sample Hotelling's T 2 statistic was calculated for each landmark according to its definition:</ns0:p><ns0:formula xml:id='formula_0'>T^2_i = \frac{n_1 n_2}{n_1 + n_2} \left( \bar{r}_{1i} - \bar{r}_{2i} \right)^\top W_i^{-1} \left( \bar{r}_{1i} - \bar{r}_{2i} \right)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where i indexes landmarks, the subscripts '1' and '2' index the two groups, n is sample size, \bar{r}_i is the mean position vector of landmark i, and W_i is the pooled covariance matrix for landmark i:</ns0:p><ns0:formula xml:id='formula_1'>W_i = \frac{1}{n_1 + n_2 - 2} \left[ \sum_{j=1}^{n_1} (r_{1ij} - \bar{r}_{1i})(r_{1ij} - \bar{r}_{1i})^\top + \sum_{j=1}^{n_2} (r_{2ij} - \bar{r}_{2i})(r_{2ij} - \bar{r}_{2i})^\top \right]<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where the i index is dropped for convenience in Eqn. 2.</ns0:p><ns0:p>Statistical inference was conducted in a mass-multivariate manner, using Statistical Parametric Mapping (SPM) <ns0:ref type='bibr' target='#b26'>(Friston et al., 2007)</ns0:ref>. SPM bases statistical inferences on the distribution of the maximum T 2 value T 2 max , which can be roughly interpreted as the largest landmark effect, and which is defined as:</ns0:p><ns0:formula xml:id='formula_2'>T^2_{\max} \equiv \max_{i \in \{1, \ldots, L\}} T^2_i<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where L is the number of landmarks.</ns0:p><ns0:p>SPM provides a parametric solution to the distribution of T 2 max under the null hypothesis, so significance can be assessed by determining where in this distribution the observed T 2 max lies. Classical hypothesis testing involves the calculation of a critical threshold (T 2 ) critical , defined as the (1 − α)th percentile of this distribution, and all landmarks whose T 2 values exceed (T 2 ) critical are deemed significant at a Type I error rate of α. This is a correction for multiple comparisons (i.e., across multiple landmarks) that is 'mass-multivariate' in the following sense: 'mass' refers to a family of tests, in this case a family of landmarks, and 'multivariate' refers to a multivariate dependent variable, in this case a two-component position vector. This is similar to traditional corrections for multiple comparisons like Bonferroni corrections, with one key exception: rather than using the total number of landmarks L as the basis for the multiple comparisons correction, as the Bonferroni correction does, SPM instead solves the mass-MV problem by assessing the correlation amongst neighboring landmarks or semilandmarks, and using the estimated correlation to provide a less severe correction than the Bonferroni correction, unless there is no correlation, in which case the SPM and Bonferroni corrections are equivalent.</ns0:p></ns0:div>
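The landmark-level statistic in Eqns. 1-3 can be computed directly; the sketch below is a literal NumPy transcription of those equations, intended only to make the definitions concrete. Statistical inference on T2_max (the SPM step) is not included here.

```python
# Direct NumPy transcription of Eqns. 1-3 (two-sample Hotelling's T2 per landmark,
# and the maximum statistic T2_max). Illustrative; the paper's inference used SPM.
import numpy as np

def hotellings_T2_per_landmark(A, B):
    """A, B : (n1, L, 2) and (n2, L, 2) registered landmark arrays for two groups.
    Returns an (L,) array of landmark-specific T2 values (Eqn. 1)."""
    n1, n2 = A.shape[0], B.shape[0]
    L  = A.shape[1]
    T2 = np.empty(L)
    for i in range(L):
        d  = A[:, i].mean(axis=0) - B[:, i].mean(axis=0)   # difference of mean positions
        ra = A[:, i] - A[:, i].mean(axis=0)
        rb = B[:, i] - B[:, i].mean(axis=0)
        W  = (ra.T @ ra + rb.T @ rb) / (n1 + n2 - 2)       # pooled covariance (Eqn. 2)
        T2[i] = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(W, d)
    return T2

# Example with random data (5 shapes per group, 8 landmarks):
A  = np.random.randn(5, 8, 2)
B  = np.random.randn(5, 8, 2)
T2 = hotellings_T2_per_landmark(A, B)
T2_max = T2.max()                                          # Eqn. 3
```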
<ns0:div><ns0:head>Contours univariate (UV) analysis</ns0:head><ns0:p>Similar to the Landmarks UV approach, this approach ultimately conducted Procrustes ANOVA, but did so on contour data rather than landmark data. This was achieved through two main processing steps: coherent point drift (CPD) point set registration (Fig. <ns0:ref type='figure'>4</ns0:ref>) and optimum roll correspondence (Fig. <ns0:ref type='figure'>5</ns0:ref>).</ns0:p><ns0:p>Coherent point drift (CPD) <ns0:ref type='bibr' target='#b49'>(Myronenko and Song, 2010)</ns0:ref> is a point set registration algorithm that spatially aligns two sets of points that belong to the same or a similar object. Neither an equal number of points nor homologous points are required (Fig. <ns0:ref type='figure'>4</ns0:ref>), making this approach useful for contours that have an arbitrary number of points.</ns0:p><ns0:p>Since contour points from arbitrary datasets may generally be unordered (Fig. <ns0:ref type='figure'>5a</ns0:ref>), we started our analyses by randomly ordering all contour points, then applying CPD to the unordered points. We acknowledge that many 2D contour datasets consist of ordered points -including those in the database used for this study <ns0:ref type='bibr' target='#b14'>(Carlier et al., 2016)</ns0:ref> -but since 3D surface points are much more likely to be unordered, we regard unordered point support as necessary for showing that the proposed method is generalizable to 3D analyses. Following CPD, we re-ordered the points using parametric surface modeling <ns0:ref type='bibr' target='#b8'>(Bingol and Krishnamurthy, 2019)</ns0:ref>, which fits a curved line to the contour, and parameterizes the contour using position u, where u ranges from zero to one (Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>). This contour parameterization results in a continuous representation of the contour, from which an arbitrary number of ordered points (Fig. <ns0:ref type='figure'>5b</ns0:ref>) can be used to discretize the contour of each shape for subsequent analysis. We used NURBS parameterization with B-spline interpolation <ns0:ref type='bibr' target='#b8'>(Bingol and Krishnamurthy, 2019)</ns0:ref> to calculate specific contour point locations.</ns0:p><ns0:p>We then applied an optimum roll transformation, which found the value of u for one contour that minimized the deformation energy across the two contours (Fig. <ns0:ref type='figure'>5c,d</ns0:ref>).</ns0:p><ns0:p>We repeated contour parameterization, ordering, and optimum roll correspondence across all contour shapes, using the shape with the maximum number of contour points in each dataset as the template shape to which the nine other shapes were registered. Last, we conducted Procrustes ANOVA on the registered contour shapes for each dataset.</ns0:p></ns0:div>
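A rough sketch of the ordering and correspondence steps is given below. It substitutes simple linear arc-length interpolation for the NURBS/B-spline parameterization used in the paper, and a sum-of-squared-distances criterion for the deformation-energy criterion of the optimum roll step, so it should be read as a conceptual illustration rather than the actual pipeline.

```python
# Conceptual sketch only: resample a closed contour to a fixed number of ordered
# points (a stand-in for the paper's NURBS parameterization), then cyclically
# "roll" one contour to best correspond with a template (a stand-in for the
# deformation-energy-based optimum roll step).
import numpy as np

def resample_contour(xy, n=101):
    """Resample a closed (m, 2) contour to n ordered points, equally spaced by arc length."""
    xy  = np.vstack([xy, xy[:1]])                       # close the loop
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # segment lengths
    u   = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    ui  = np.linspace(0, 1, n, endpoint=False)          # parametric positions 0 <= u < 1
    return np.column_stack([np.interp(ui, u, xy[:, k]) for k in range(2)])

def optimum_roll(template, other):
    """Cyclic shift of `other` minimizing its summed squared distance to `template`."""
    costs = [np.sum((template - np.roll(other, s, axis=0)) ** 2)
             for s in range(other.shape[0])]
    return np.roll(other, int(np.argmin(costs)), axis=0)
```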
<ns0:div><ns0:head>Contours mass-multivariate (mass-MV) analysis</ns0:head><ns0:p>This approach was identical to the Contours-UV approach, with the exception of statistical analysis, which we conducted using SPM as outlined above. Unlike the landmark data above, which are generally spatially disparate, contour points are spatially proximal, and neighboring points tend to displace in a correlated manner. For example, if one contour point in a specific shape lies above the mean point location, its immediate neighbors also tend to lie above the mean location. SPM leverages this correlation to reduce the severity of the multiple comparisons correction, and SPM solutions converge to a common (T 2 ) critical regardless of the number of contour points, provided the number of contour points is sufficiently large to embody the spatial frequencies of empirical interest, as outlined in classical signal processing theory <ns0:ref type='bibr' target='#b51'>(Nyquist, 1928)</ns0:ref>.</ns0:p><ns0:p>As SPM uses parametric inference to calculate the critical T 2 threshold, and Procrustes ANOVA uses nonparametric inference, we also conducted Contours Mass-MV analysis using statistical non-parametric mapping <ns0:ref type='bibr' target='#b50'>(Nichols and Holmes, 2002)</ns0:ref>, which uses permutation to numerically build the T 2 max distribution under the null hypothesis. This permutation approach converges to the parametric solution when the residuals are normally distributed (i.e., point location variance follows an approximately bivariate Gaussian distribution). All SPM analyses were conducted in spm1d <ns0:ref type='bibr' target='#b53'>(Pataky, 2012)</ns0:ref>; note that one-dimensional SPM is sufficient because the contour domain (U) is one-dimensional (Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>).</ns0:p></ns0:div>
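The permutation ("SnPM") procedure described above can be sketched as follows. This reuses the hotellings_T2_per_landmark function from the earlier sketch and is illustrative only; the paper itself used spm1d for both parametric and nonparametric inference.

```python
# Sketch of nonparametric (permutation) inference on T2_max: group labels are
# repeatedly permuted to build the null distribution of the maximum statistic.
# Reuses hotellings_T2_per_landmark from the earlier sketch; illustrative only.
import numpy as np

def snpm_pvalue(A, B, n_perm=1000, seed=0):
    rng    = np.random.default_rng(seed)
    Y      = np.concatenate([A, B], axis=0)             # (n1 + n2, npoints, 2)
    n1     = A.shape[0]
    t2_obs = hotellings_T2_per_landmark(A, B).max()     # observed T2_max
    null   = np.empty(n_perm)
    for k in range(n_perm):
        idx     = rng.permutation(Y.shape[0])           # random group relabelling
        null[k] = hotellings_T2_per_landmark(Y[idx[:n1]], Y[idx[n1:]]).max()
    return (null >= t2_obs).mean()                      # P(T2_max >= observed | H0)
```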
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The four analysis approaches produced a range of p values from very low (p < 0.001) to very high (p > 0.9), and even yielded a large range of p values for single datasets (e.g. Heart: 0.016 < p < 0.940) (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>). Of the nine datasets, only two yielded consistent hypothesis testing conclusions (at α = 0.05) across the four analysis approaches: for the Comma dataset all approaches failed to reject the null hypothesis, and for the Flatfish dataset all approaches rejected the null hypothesis. The seven other datasets showed a range of disagreement among the methods. For example, for the Key dataset neither Landmarks approach reached significance, but both Contours approaches did reach significance. For the Hammer dataset, three approaches failed to reach significance, but the Contours Mass-MV approach produced a very low p value (p < 0.001). The Landmarks approaches executed comparatively rapidly (approximately 50 ms) compared to the Contours approaches (approximately 2 s) (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>).</ns0:p><ns0:p>Since Procrustes ANOVA results are commonly used in the literature, and are summarized for the current study in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>, the remainder of the results considers the Mass-MV approaches' results.</ns0:p><ns0:p>First, the Landmarks Mass-MV approach indicates a wide range of T 2 statistic values at each landmark (Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>). For example, Landmark 5 in the Horseshoe dataset (Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>) had a very high T 2 value, and all other landmarks had comparatively low p values (Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>). This suggests that (a) shape differences can be highly localized, and that (b) univariate methods that employ an overall shape change metric, like Procrustes ANOVA, may not be able to detect these changes, even when the landmarks are identical (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>).</ns0:p><ns0:p>The Contour Mass-MV results showed little qualitative difference between parametric and nonparametric inference (Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>), with minor exceptions regarding specific locations and spatial extent of supra-threshold contour points (e.g. Key, Horseshoe). Since this Contour Mass-MV approach is sensitive to point-specific variation, it was generally more sensitive at detecting changes, as shown in the relatively high rate of null hypothesis rejection relative to the other approaches (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>); that is, even though the Contours-UV and Contours Mass-MV approaches consider the same data, the latter reached significance more often than the former, implying that it is more sensitive to location-specific effects. Whether this sensitivity is a benefit or not is considered in the Discussion.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>This study's main result is the demonstration that it is possible to conduct fully automated, landmark-free, parametric hypothesis testing regarding whole 2D contour shapes, irrespective of the number of points and point ordering in the original contour data. These analyses can be executed relatively quickly; the current non-optimized implementation required less than 2 s for all analysis steps (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). The proposed analysis framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) consists of families of previous techniques including: point set registration (e.g. <ns0:ref type='bibr' target='#b49'>Myronenko and Song, 2010)</ns0:ref>, point correspondence algorithms (e.g. <ns0:ref type='bibr' target='#b43'>Loy et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b49'>Myronenko and Song, 2010)</ns0:ref>, and mass-multivariate testing <ns0:ref type='bibr' target='#b26'>(Friston et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b73'>Taylor and Worsley, 2008;</ns0:ref><ns0:ref type='bibr' target='#b16'>Chung et al., 2010)</ns0:ref>, and some of these techniques have been used for classical hypothesis testing regarding shapes in the past <ns0:ref type='bibr' target='#b73'>(Taylor and Worsley, 2008;</ns0:ref><ns0:ref type='bibr' target='#b16'>Chung et al., 2010)</ns0:ref>. A variety of landmark-free techniques have also been previously proposed (e.g. <ns0:ref type='bibr' target='#b77'>Wuhrer et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b73'>Taylor and Worsley, 2008;</ns0:ref><ns0:ref type='bibr' target='#b16'>Chung et al., 2010)</ns0:ref>. Nevertheless, these techniques have not, to our knowledge, been previously combined into a general hypothesis testing framework -from raw data to statistical results -as depicted in Fig. <ns0:ref type='figure'>1d</ns0:ref>. The main novelty of this paper is thus the demonstration that it is possible to fully automate data processing from raw 2D contour data to final hypothesis testing results.</ns0:p><ns0:p>The second main novelty of this paper is the demonstration that parametric hypothesis testing is possible when conducted at the whole-contour level. We stress that 'possible' implies neither 'valid' nor 'appropriate'; demonstrating the validity and appropriateness of the proposed method would require substantial empirical efforts over a range of datasets, data modalities, experimental designs, and applications, likely in addition to simulation studies, and as such assessing validity and appropriateness are beyond the scope of this paper. We also stress that 'possible' does not imply that one should use the proposed technique in isolation. We believe that the proposed technique offers unique information that is complementary to other techniques, and that ideally the results of multiple analysis techniques should be corroborated to build interpretive robustness.</ns0:p><ns0:p>SPM, like most parametric tests, assumes normality, so in this case SPM assumes that the spatial variability of all contour points is distributed in a bivariate Gaussian manner. This distributional assumption could be directly tested using distributional tests in a point-by-point manner. In this paper, rather than directly testing for distributional adherence, we tested the assumption indirectly, by conducting nonparametric tests (Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>), which do not assume bivariate normality. 
In this case there were minor quantitative differences between the parametric and nonparametric results, but overall the qualitative interpretations were largely unaffected by the use of parametric vs. nonparametric analysis.</ns0:p><ns0:p>This represents relatively strong (albeit indirect) evidence that the parametric approach's distributional assumptions are appropriate at best, or largely inconsequential at worst, for these particular datasets.</ns0:p><ns0:p>This however does not imply that parametric inference is appropriate for all datasets, so distributional assumptions should generally be tested for all datasets, possibly indirectly through nonparametric tests like those conducted in this paper.</ns0:p><ns0:p>The proposed analysis framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) offers various improvements over landmark analysis (Fig. <ns0:ref type='figure'>1a</ns0:ref>) including: (1) the modeling flexibility of classical hypothesis testing, (2) increased objectivity due to avoidance of subjective landmark definition and selection, (3) increased speed due to avoidance of manual work, and (4) unique, implicit morphological meaning in hypothesis testing results. We acknowledge that each of these improvements also involves limitations, and we address these limitations below. We stress that 'objectivity' implies none of 'accurate', 'useful' or 'interpretable'. We use 'objective' instead primarily to mean 'algorithmic'.</ns0:p><ns0:p>Although this paper considered only two-sample tests, SPM supports all classical hypothesis testing procedures, ranging from simple linear regression to MANCOVA <ns0:ref type='bibr' target='#b26'>(Friston et al., 2007)</ns0:ref>, thereby making the proposed framework highly flexible to arbitrary experimental designs. To emphasize this point, and how it may be valuable for general shape analysis, we conducted a set of supplementary analyses using synthetic data involving simple, circular shapes with controlled morphological effects (Fig. <ns0:ref type='figure' target='#fig_8'>9a,b</ns0:ref>). The controlled effects included a size-dependent signal, which was modeled using a Gaussian contour pulse that increased in amplitude with increasing shape size (as defined by the shape's average radius) (Fig. <ns0:ref type='figure' target='#fig_8'>9a</ns0:ref>), and a group-dependent signal, which was modeled similarly, but which was applied to just one of two hypothetical groups (Fig. <ns0:ref type='figure' target='#fig_8'>9b</ns0:ref>). To isolate and emphasize design flexibility, and to eliminate registration and correspondence as potential sources of error, we controlled both by sampling at 101 evenly distributed angular displacements with respect to the horizontal axis. We considered two MANCOVA possibilities:</ns0:p><ns0:p>analysis of the original, unscaled dataset (Fig. <ns0:ref type='figure' target='#fig_8'>9a</ns0:ref>), and analysis of the scaled/registered dataset (Fig. <ns0:ref type='figure' target='#fig_8'>9b</ns0:ref>).</ns0:p></ns0:div>
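To make the synthetic design concrete, the sketch below generates noisy circular contours with a size-dependent pulse and a group-dependent pulse, sampled at 101 equally spaced angles as described above. Pulse amplitudes, widths, locations and noise levels are illustrative assumptions; they are not the values used to produce Fig. 9.

```python
# Illustrative generator for Fig. 9-style synthetic contours: noisy circles with a
# size-dependent Gaussian pulse and, for one group only, an additional
# group-dependent pulse. All numeric constants here are assumptions.
import numpy as np

def synthetic_circle(radius, group, q=101, noise=0.02, seed=None):
    rng   = np.random.default_rng(seed)
    theta = np.linspace(0, 2 * np.pi, q, endpoint=False)   # 101 equally spaced angles
    u     = np.linspace(0, 1, q, endpoint=False)           # parametric position
    r     = radius * (1 + noise * rng.standard_normal(q))  # noisy circle
    r    += 0.10 * radius * np.exp(-0.5 * ((u - 0.25) / 0.05) ** 2)  # size-dependent signal
    if group == 1:                                          # group-dependent signal
        r += 0.05 * np.exp(-0.5 * ((u - 0.75) / 0.05) ** 2)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Two hypothetical groups of five shapes with systematically different mean radii:
shapes = [synthetic_circle(1.0 + 0.05 * i, group=int(i >= 5), seed=i) for i in range(10)]
```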
<ns0:div><ns0:p>We applied a single MANCOVA model, which modeled both shape size (i.e., mean shape radius) and group, and which thereby afforded consideration of both (1) size effects, with group effects linearly removed, and (2) group effects, with size effects linearly removed. Size effects for the original, unscaled data naturally showed very large test statistic values at all contour points (Fig. <ns0:ref type='figure' target='#fig_8'>9c</ns0:ref>). In contrast, size effects for the registered data correctly isolated the modeled size-dependent signal (Fig. <ns0:ref type='figure' target='#fig_8'>9d</ns0:ref>). Group effects were practically identical for both the original, unscaled data and the registered data (Fig. <ns0:ref type='figure' target='#fig_8'>9e,f</ns0:ref>), emphasizing the point that MANCOVA can be used to remove size-related effects in lieu of registration.</ns0:p><ns0:p>More generally, this analysis shows that the proposed framework is highly flexible, and can be used with arbitrary continuous and categorical independent variables, provided these variables adhere to the requirements of classical linear design modeling. We nevertheless caution readers that the Fig. <ns0:ref type='figure' target='#fig_8'>9</ns0:ref> analyses consider close-to-ideal data, for which registration and correspondence are near-perfectly controlled. For real dataset analysis, both registration and correspondence generally introduce errors that may or may not affect the ultimate hypothesis testing results. Results' sensitivity to data processing algorithms and their parameters must be considered in general analyses.</ns0:p><ns0:p>The proposed methodology partially overcomes limitations of landmark selection, and the corresponding susceptibility to bias <ns0:ref type='bibr' target='#b7'>(Arnqvist and Martensson, 1998;</ns0:ref><ns0:ref type='bibr' target='#b65'>Rohlf, 2003;</ns0:ref><ns0:ref type='bibr' target='#b27'>Fruciano, 2016)</ns0:ref>; shape-to-shape landmark identification is often manual and therefore subjective. Algorithmic landmark identification is nevertheless possible <ns0:ref type='bibr' target='#b17'>(Claes et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b72'>Strait and Kurtek, 2016)</ns0:ref>, and indeed modern machine learning techniques have been shown to substantially improve landmark detection, with the promise of eliminating landmark-associated subjectivity <ns0:ref type='bibr' target='#b47'>(Morris, 2003;</ns0:ref><ns0:ref type='bibr' target='#b78'>Young and Maga, 2015;</ns0:ref><ns0:ref type='bibr' target='#b72'>Strait and Kurtek, 2016;</ns0:ref><ns0:ref type='bibr' target='#b24'>Devine et al., 2020)</ns0:ref>. Like automated landmarking, the proposed method can be used with little-to-no subjective intervention, implying generally more repeatable results. Here 'objective' does not necessarily mean 'accurate' or 'appropriate'; it simply means that results are expected to be more reproducible than the results from more subjective methods. Determining the accuracy and appropriateness of all methods, including the proposed one, requires substantial empirical effort across a range of data modalities and applications.</ns0:p><ns0:p>We also note that the proposed landmark-free approach is just one end of the spectrum, where manual landmark definition is the other, and that a variety of alternative techniques occupy positions between these two extremes. 
For example, semilandmarks <ns0:ref type='bibr' target='#b44'>(Mitteroecker and Gunz, 2009)</ns0:ref> provide an objective way to fill spatial gaps between landmarks, thereby creating a continuous surface. From the perspective of the proposed method, semilandmarks represent the results of piecewise registration over the domain u (Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>), or equivalently a hybrid registration method consisting of both algorithmic and manual components <ns0:ref type='bibr' target='#b59'>(Ramsay and Li, 1998)</ns0:ref>. As there are a plethora of automated techniques for geometrical matching <ns0:ref type='bibr' target='#b35'>(Holden, 2008)</ns0:ref>, the proposed framework regards these techniques as objective, substitutable, yet imperfect components, whose assumptions and parameters could ultimately affect the final results.</ns0:p><ns0:p>From this perspective, a second layer of objectivity could be added to the proposed framework, whereby different techniques and/or parameters are iteratively substituted in a sensitivity framework, to objectively discern the numerical stability of the final results, as well as the boundaries of that stability <ns0:ref type='bibr' target='#b55'>(Pataky et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Landmarks and other low-dimensionality representations of shape -including harmonic coefficients from elliptic Fourier analysis <ns0:ref type='bibr' target='#b9'>(Bonhomme et al., 2014)</ns0:ref> -embody a second important limitation: a potentially over-simplified representation of shape. In the case of landmarks, a danger of over-simplification arises from the Nyquist theorem: under-sampling a continuous process (including the continuous spatial surface of an object) can lead to aliasing, whereby the under-sampled measurement can misrepresent the true characteristics of the underlying object <ns0:ref type='bibr' target='#b51'>(Nyquist, 1928)</ns0:ref>, and can even reverse statistical interpretations through mechanisms such as regional conflation <ns0:ref type='bibr' target='#b54'>(Pataky et al., 2008)</ns0:ref>. This latter problem of shape simplification can nevertheless be solved by the use of semi-landmarks <ns0:ref type='bibr' target='#b12'>(Bookstein, 1997;</ns0:ref><ns0:ref type='bibr' target='#b2'>Adams et al., 2004)</ns0:ref> which, as argued above, can be regarded as a specific approach to shape registration, implying that semi-landmark approaches could interface easily with the proposed technique.</ns0:p><ns0:p>A third advantage of the proposed method is processing speed. The current, non-optimized analyses executed in under 2 s, with statistical inference itself requiring well under 100 ms (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). We acknowledge that other data processing steps, including image segmentation and registration for example, can require substantial effort, so we caution readers that the reported execution speeds do not necessarily translate to reduced laboratory hours. The primary advantage in our view is instead the promotion of sensitivity analysis: since the entire data processing chain can be executed relatively rapidly, it would be possible to systematically adjust algorithm parameters, and even swap algorithms, in a sensitivity loop, to probe the robustness of particular results.</ns0:p></ns0:div>
<ns0:div><ns0:p>A final advantage is implicit morphological information. The proposed method yields results that are rich in morphological detail (Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>) which, much like a highlighted photograph or x-ray image, can be readily interpreted at a glance. Since SPM operates directly on (registered) contours, without reducing the object-of-hypothesis-testing to a single abstract metric (like Procrustes ANOVA), or to a small handful of abstract metrics (like elliptical Fourier analysis), SPM results embody morphological meaning insofar as contours themselves embody morphological meaning. While individual contour points do not necessarily embody meaning, one could argue that the set of all contour points collectively embodies substantial morphological meaning. This perspective is analogous to a pixel-and-image argument. The color of a single pixel is largely irrelevant to the overall interpretation and meaning of an image. Similarly, the test statistic value at a single contour point is itself largely irrelevant to the overall morphological interpretation of SPM results; morphological meaning is instead encapsulated implicitly in the overall excursion set, where 'excursion set' means the set of supra-threshold contour points, like those in Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>. Regardless of the quality of morphological meaning, SPM results must be viewed as just one set of results, which may or may not embody useful morphological information, and which should be considered along with other, more explicit morphological methods like Procrustes ANOVA and elliptical Fourier analysis.</ns0:p><ns0:p>Considering next specific results from this paper, a particularly unintuitive set of results was observed for the Device8 dataset, for which UV analysis yielded the smallest p value (0.022), and for which no other method yielded significance (p > 0.2) (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>). This result was likely caused by widespread but relatively small-magnitude mean-shape differences (Fig. <ns0:ref type='figure' target='#fig_7'>8c</ns0:ref>); since the deformation is widespread it would be detected by a general deformation metric like Procrustes distance, but since the deformation magnitude is relatively small it would not be detected by local contour-point methods like SPM. This interpretation is emphasized by the Flatfish dataset, where general deformations were similarly broadly distributed across the contour, but maximal local deformations were greater (Fig. <ns0:ref type='figure' target='#fig_7'>8e</ns0:ref>), which yielded significance in all methods (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>). Nevertheless, this interpretation appears to be inconsistent with the Horseshoe dataset, which exhibited both large and widely distributed deformation (Fig. <ns0:ref type='figure' target='#fig_7'>8h</ns0:ref>), but which also failed to yield significant UV results (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>). This apparent inconsistency may be resolved by considering the large variability in the Horseshoe dataset, particularly at the selected landmarks (Fig. <ns0:ref type='figure' target='#fig_0'>2h</ns0:ref>). To more completely resolve such apparent inconsistencies, and more generally to understand the nature of landmark- vs. contour-based methods, it would be necessary to consider individual contour points, their deformations, and their covariances.</ns0:p><ns0:p>While this paper was limited to 2D analysis, it should be noted that the proposed analysis framework (Fig. 
<ns0:ref type='figure'>1d</ns0:ref>) can be readily extended to the morphological analysis of 3D surfaces. Similar to the unwrapping of 2D contours onto a 1D domain u (Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>), 3D surfaces can be unwrapped onto a 2D domain uv (Fig. <ns0:ref type='figure' target='#fig_10'>10</ns0:ref>), and methods like SPM <ns0:ref type='bibr' target='#b26'>(Friston et al., 2007)</ns0:ref> can be used to conduct domain-level hypothesis testing regarding these unwrapped data. This domain-wide testing is possible due to the underlying model of domain-level variance, which SPM models as smooth, Gaussian random fields, and which can be extended to arbitrarily high-dimensional domains with arbitrary geometry <ns0:ref type='bibr' target='#b4'>(Adler and Taylor, 2007)</ns0:ref>. For the current paper involving 2D shapes, the (flattened) domain is one-dimensional, and the dependent variable is a two-component position vector; that is, a two-component position is defined at all locations u along the contour. Similarly, for 3D surfaces, the (flattened) domain is two-dimensional and the dependent variable is a three-component position vector, where position is defined at all locations uv across the surface. A variety of computational tools exist for 3D geometry flattening (e.g. <ns0:ref type='bibr' target='#b23'>Dale et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b68'>Sawhney and Crane, 2017)</ns0:ref>, so 3D implementations of the proposed method could presumably proceed in a fully automated manner.</ns0:p><ns0:p>The proposed mass-multivariate framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) has a number of limitations. The most severe of these is sensitivity to algorithmic specifics. For example, simply by randomly changing the order of the points, it is possible to yield qualitatively different results (Fig. <ns0:ref type='figure'>11</ns0:ref>). Systematic, random variations of point ordering would be necessary for assessment of the results' sensitivity, but in our view this would be insufficient because ultimate results may also be sensitive to other particulars including, for example, specific parameter values used in contour parameterization, registration, and correspondence algorithms.</ns0:p><ns0:p>In other words, one should regard the results as potentially sensitive to all data processing steps, and not just to point ordering. The current paragraph considers just one example (point ordering) as a potential source of sensitivity concern. In Fig. <ns0:ref type='figure'>11</ns0:ref>, the qualitative change in results can be attributed to a minor shift in point correspondence (Fig. <ns0:ref type='figure'>11a-b</ns0:ref>), which created a small shift in pointwise covariance, but a shift that was large enough to alter the hypothesis rejection decision at α = 0.05. That is, point-specific covariance is direction dependent, so small changes in point-deformation direction can yield qualitative changes in test statistics <ns0:ref type='bibr' target='#b55'>(Pataky et al., 2014)</ns0:ref>. Nevertheless, we observed this type of sensitivity to random point ordering only occasionally, with most randomizations resulting in qualitatively similar results. Also, in most cases we noticed that probability results, while variable, were generally stable. The problem only emerged qualitatively when that variability spanned α=0.05, as depicted in Fig. <ns0:ref type='figure'>11</ns0:ref>. 
This problem of probability value variability <ns0:ref type='bibr' target='#b33'>(Halsey et al., 2015)</ns0:ref> partially reflects a weakness of classical hypothesis testing, which has a binary interpretation of continuous probability. We acknowledge that we did not systematically conduct sensitivity testing, and also that each stage of processing involves a variety of components or parameters that could be subjected to sensitivity analysis. Comprehensive consideration of this sensitivity would require a large research effort, so we leave this for future work.</ns0:p><ns0:p>The datasets and analyses presented in this paper also have limitations. We analyzed shapes from just one database <ns0:ref type='bibr' target='#b14'>(Carlier et al., 2016)</ns0:ref> and, for each dataset, we selected only ten shapes for analysis, and only conducted two-sample tests. While we do not expect analysis of datasets from other databases to appreciably affect this paper's messages, we acknowledge that analyses of relatively small samples, and just one simple experimental design, fully exposes neither the advantages nor disadvantages of the proposed analysis framework. We selected just ten shapes for each dataset primarily to emphasize that the proposed parametric procedure is sufficiently sensitive to detect morphological effects for small sample sizes. The specific ten shapes were selected in an ad hoc manner to emphasize particular concepts including, for example: interpretation agreement between the proposed and landmark methods' results, and the opposite: interpretation disagreement. Since these datasets were selected in an add hoc manner, from a single database, and with only two-sample analyses, the reader is left to judge the relevance of these results to other datasets and experimental designs.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>This paper demonstrates that parametric hypothesis testing can be conducted at the whole-contour level with suitably high statistical power for the analysis of even relatively small samples of 2D shapes (N = 10).</ns0:p><ns0:p>We describe a general framework for automated, landmark-free hypothesis testing of 2D contour shapes, but this paper implements just one realization of that framework. The main advantages of the proposed framework are that results are calculated quickly (<2 s in this paper), and yield morphologically rich </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Overview of analyzed datasets. All contour data are available in the 2D Shape Structure Dataset<ns0:ref type='bibr' target='#b14'>(Carlier et al., 2016)</ns0:ref>. For each dataset in this figure, one representative shape is highlighted, along with its numbered landmarks. Note that shape variance ranges from relatively small (e.g. Bell, Face) to relatively large (e.g. Device8, Heart).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Shape class exclusion examples. Shape classes were excluded if they contained shapes with qualitatively different contour geometry. For example: (a) the 'cup' class was excluded because some shapes had unattached handles with holes and others had attached handles without holes. (b) The 'octopus' class was excluded because the eight appendages appeared in non-homologous locations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Example point set registration using the coherent point drift (CPD) algorithm <ns0:ref type='bibr' target='#b49'>(Myronenko and Song, 2010)</ns0:ref>. Note that CPD requires neither corresponding points, nor an equal number of points.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Example parametric representations of 2D contour shape. Dots represent manually defined landmarks, and are shown as visual references. Left panel (XY plane): the spatial plane in which shape data are conventionally presented. The three colors represent different shapes. Bottom panel (UX plane) and right panel (UY plane): abstract planes in which U represents the parametric position (from 0 to 1) along the contour; positions U=0 and U=1 are equivalent.</ns0:figDesc><ns0:graphic coords='9,141.73,385.78,413.57,232.63' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Landmark results from mass-multivariate testing. Landmark-specific T 2 values are presented along with the critical threshold at α=0.05, and probability values for the overall mass-multivariate test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure8. Contours mass-multivariate results using Statistical Parametric Mapping (SPM). Results for both parametric and nonparametric inference are shown. P values represent the probability that random variation in the Mean A contour would produce a deformation as large as in the observed Mean B, given the estimated contour variance. Dots on the Mean B contour represent contour points whose T 2 values exceeded the threshold for significance at α=0.05; if the maximum T 2 value did not reach this threshold, the p value is greater than α, and no dots are shown.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9. Example MANCOVA using synthetic data; for simplicity, data were generated to have (i) a relatively large signal:noise ratio, and (ii) close-to-perfect correspondence, by sampling at 101 equally spaced angular distances around the contour. (a) The original contour dataset, consisting of five noisy circles for each of two groups, with systematically different mean radii, and also with both group-and size-dependent signal, where 'size' was considered to be the mean radius, and where 'signal' implies true morphological difference. Note that the size-dependent signal is more easily perceived in panel (a), and that the group-dependent signal is more easily perceived in the next panel. (b) Registered contours. (c,d) Size effects from MANCOVA for the original and registered data; the test statistic is presented as √ T 2 because a linear T 2 scale would result in imperceivable color differences (i.e., the panel (c) points would be all white, and the points in the other panels would all be close-to-black). (e,f) Group effects from MANCOVA for the original and registered data; note that the (e) and (f) results are similar because MANCOVA accounts for size-related effects in the 'Original' data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Example 3D surface unwrapping. (a) Original 3D geometry. (b) Unwrapped geometry; this is a 2D parametric (UV) representation of the original geometry. Colors represent changes in surface normal direction. The thick black line in panel (a) represents a seam along which the 3D geometry is cut so that it can be flattened into a 2D shape. Unwrapping was performed here using boundary first flattening (Sawhney and Crane, 2017).</ns0:figDesc><ns0:graphic coords='17,224.45,63.78,248.15,259.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>results in an easy-to-interpret manner. Since innumerable realizations of the proposed framework are16/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:1:1:NEW 18 Feb 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 . Dataset count summary. Point counts refer to the original data from Carlier et al. (2016).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>Shapes</ns0:cell><ns0:cell /><ns0:cell>Points</ns0:cell><ns0:cell /><ns0:cell>Landmarks</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Min Median Max</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bell</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>185</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>Comma</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Device8</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>Face</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>103</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Flatfish</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Hammer</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Horseshoe</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>Key</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>103</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>115</ns0:cell><ns0:cell>5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistical results summary, probability values. As nonparametric inference yielded similar p values (see Results), only parametric p values are reported in this table for brevity.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell cols='2'>Landmarks</ns0:cell><ns0:cell cols='2'>Contours</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>UV</ns0:cell><ns0:cell>Mass-MV</ns0:cell><ns0:cell>UV</ns0:cell><ns0:cell>Mass-MV</ns0:cell></ns0:row><ns0:row><ns0:cell>Bell</ns0:cell><ns0:cell>0.130</ns0:cell><ns0:cell>0.302</ns0:cell><ns0:cell>0.084</ns0:cell><ns0:cell>0.041</ns0:cell></ns0:row><ns0:row><ns0:cell>Comma</ns0:cell><ns0:cell>0.155</ns0:cell><ns0:cell>0.294</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.327</ns0:cell></ns0:row><ns0:row><ns0:cell>Device8</ns0:cell><ns0:cell>0.022</ns0:cell><ns0:cell>0.214</ns0:cell><ns0:cell>0.433</ns0:cell><ns0:cell>0.681</ns0:cell></ns0:row><ns0:row><ns0:cell>Face</ns0:cell><ns0:cell>0.025</ns0:cell><ns0:cell>0.103</ns0:cell><ns0:cell>0.052</ns0:cell><ns0:cell>0.013</ns0:cell></ns0:row><ns0:row><ns0:cell>Flatfish</ns0:cell><ns0:cell>0.023</ns0:cell><ns0:cell>0.016</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Hammer</ns0:cell><ns0:cell>0.708</ns0:cell><ns0:cell>0.206</ns0:cell><ns0:cell>0.417</ns0:cell><ns0:cell>< 0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart</ns0:cell><ns0:cell>0.940</ns0:cell><ns0:cell>0.976</ns0:cell><ns0:cell>0.544</ns0:cell><ns0:cell>0.016</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Horseshoe 0.084</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Key</ns0:cell><ns0:cell>0.532</ns0:cell><ns0:cell>0.270</ns0:cell><ns0:cell>0.013</ns0:cell><ns0:cell>0.022</ns0:cell></ns0:row></ns0:table><ns0:note>9/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:1:1:NEW 18 Feb 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Execution</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Procedure</ns0:cell><ns0:cell cols='2'>Landmarks</ns0:cell><ns0:cell cols='2'>Contours</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>UV Mass-MV</ns0:cell><ns0:cell>UV</ns0:cell><ns0:cell>Mass-MV</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CPD</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>414.1</ns0:cell><ns0:cell>414.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Point Ordering</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>327.9</ns0:cell><ns0:cell>327.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Registration</ns0:cell><ns0:cell>Interpolation</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>835.1</ns0:cell><ns0:cell>835.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Correspondence</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>40.9</ns0:cell><ns0:cell>40.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GPA</ns0:cell><ns0:cell>6.7</ns0:cell><ns0:cell>6.7</ns0:cell><ns0:cell>8.5</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Hypothesis test</ns0:cell><ns0:cell>Proc-ANOVA SPM</ns0:cell><ns0:cell>60.0 -</ns0:cell><ns0:cell>-39.3</ns0:cell><ns0:cell>99.0 -</ns0:cell><ns0:cell>-66.8</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell /><ns0:cell>66.7</ns0:cell><ns0:cell>46.0</ns0:cell><ns0:cell>1725.5</ns0:cell><ns0:cell>1684.8</ns0:cell></ns0:row></ns0:table><ns0:note>durations (unit: ms). Averages across the nine datasets. Procrustes ANOVA (Proc-ANOVA) involved 1000 iterations for each dataset. Average SnPM durations (not shown in this table) were 344.0 and 6336.0 ms for Landmarks Mass-MV and Contours Mass-MV, respectively.10/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:1:1:NEW 18 Feb 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "PeerJ Computer Science - Responses to Reviews
Article 53545
Title: “Automated, landmark-free, parametric hypothesis tests regarding two-dimensional
contour shapes using coherent point drift registration and statistical parametric mapping”
We thank the Editors and Referees very much for your time. Please find that we have reproduced your
comments below in blue text, numbered each comment for cross-referencing purposes, and responded
immediately below each comment. Please also find that orange text below specifies line numbers in
the revised manuscript, and that all main changes to the main manuscript are colored red.
Thank you for re-considering our work,
Todd Pataky, Masahide Yagi, Noriaki Ichihashi and Phil Cox
_______________________________________________________________________
REVIEWER #1
Basic reporting
{1.1} The manuscript is written in clear, unambiguous, and professional English. The overall structure
of the manuscript meets PeerJ standards, although some minor deviations might have been missed by
this reviewer. The figures chosen are relevant, high quality, and well labelled. However, there are a
couple additions that could be made in order to increase the readers ability to interpret the results and
various claims of methodological value made by the authors. While the introductory and background
sections do provide important context for this research, some references are missing, and a variety of
similar methods have been totally missed by the authors.
Response: Thank you for this general comment. We agree that the context we laid out in the
Introduction was incomplete. We have attempted to improve the context as suggested, and have
responded to your specific points below.
{1.2} First, the authors should add the missing references to lines 70 and 93.
Response: We apologize for our carelessness. We have corrected these issues.
{1.3} Second, the authors explicitly focus on the weaknesses of Procrustes based GM analyses and
the strengths of their contour-based MV methods (with some statements of contour method weakness
in the discussion that are largely brushed aside). This is nowhere near a fair and balanced comparison
for readers who may be unfamiliar with both methods. Of course, as stated by the authors, it is not
appropriate to review all the detailed literature of GM here in this research article. But, the authors
should take at least a few sentences to describe the scenarios in which GM methods are valuable,
including those where biological homology (either morphological or developmental) is critical for
interpretation of results. These are situations for which randomly identified points on a contour cannot
provide the information necessary to interpret results effectively. Some relevant examples are likely
found in the already cited Zelditch, et al., 2012 textbook. Other arguments for the value of explicit
homology for interpretations of biological shape space can be found in recent reviews of GM within
Biology such as Hallgrimsson, et al, 2015 (https://doi.org/10.1016/bs.ctdb.2015.09.003). Explicit
recent reports on the strength of Procrustes ANOVA and related analyses within the study of
multivariate shape data have been written by Adams and Coyler, among others (e.g., https://doi.org/
10.1038/hdy.2014.75). The authors should (at minimum) acknowledge and cite previous papers that
describe the value of these methods and the contexts within which they are most useful.
Response: We agree that the comparison was neither fair nor balanced. While our original intention
was to limit the scope of this article to only classical hypothesis testing methods, we realize from your
comments --- and we agree --- that the original scope was over-limited, especially for readers who are
new to GM analysis. Please find that we have made relevant changes throughout the manuscript,
especially in the Introduction and Discussion. In particular, in the Introduction we have attempted to
remove most discussions regarding the strengths and/or weaknesses of particular techniques, and to
instead provide a more neutral overview of the literature. We hope you will agree that the revised
Introduction more accurately represents the breadth of the literature, and also more clearly exposes the
gap that we feel the proposed method fills.
{1.4} Third, there is no introduction of other non-Procrustes Distance based contour morphometric
comparative methods (e.g., elliptic fourier analysis described in Bonhomme, et al., 2014 (https://
hal.inrae.fr/hal-02635896) and Claude, 2013, Hystrix 24(1):94-102) or automated landmark
correspondence quantification methods that have been previously published (e.g., Peter Claes’s 3D
facial surface automated methods (https://www.esat.kuleuven.be/psi/members/00041773). While the
MV method described in this manuscript is interesting and potentially useful as an alternative method
to sparse landmark GM, it is not the first method that has been suggested as an alternative. Related
and relevant previously published methods should be acknowledged. A literature dive to identify the
range of these previously published methods and research is strongly suggested so that the authors can
provide a broader theoretical and methodological context for the currently proposed approach. This
will also allow the authors to compare their 2D contour method (and proposed 3D extension) with
other contour and surface-based alternatives to GM (such as the Claes et al., methods) within their
discussion.
Response: We agree that the proposed method is not the first alternative to sparse landmark GM, but
we beg to disagree that our original paper argued that it is the first. Instead we have attempted to argue
that it is the first fully-parametric hypothesis testing technique that operates directly on contours.
The papers cited in your comment do indeed consider contours, but hypothesis testing within
frameworks like elliptical Fourier analysis is, similar to Procrustes ANOVA, limited to a relatively
small number of abstract shape metrics, and is not conducted directly on contours. In attempts to
clarify, and in response to your comment, please find that we have made the following revisions:
•
We have added the cited references along with a variety of other references to the Introduction.
•
We have attempted to clarify in the last two paragraphs of the Introduction that the key
distinction between the proposed technique and other techniques is really only the nature of
the dependent variable(s) upon which hypothesis testing is conducted.
We hope that these revisions provide both a more appropriate, and broader overview of existing
techniques, and also clarify our claims regarding novelty, but please advise if you feel that this
remains unsatisfactory.
{1.5} Fourth, the fact that homologous landmarks have been collected manually (as stated on line 50)
does not mean that automated methods are impossible. The issue until recently has been that
automated methods are simply bad at successfully identifying biological relevant homologous points
across a broad range of morphology. However, there are recent pushes to use modern nonlinear
registration and machine learning to improve the quality of automated homologous landmark
identification. This recent work should be acknowledged (e.g., https://doi.org/10.1007/
s11692-020-09508-8 and https://doi.org/10.1186/s12983-015-0127-8).
Response: We agree, and we apologize for our poor original wording: we meant to imply only that (a)
automation is not possible when (b) landmarks are defined manually. Due partially to the
reorganization of the Introduction, and partially to our impression that this issue is tangential to the
main points of the manuscript, please find that we have moved this passage to the Discussion (Lines
318-327) and have added context regarding algorithmic landmark identification.
{1.6} Finally, a raw dataset of contours and landmark points is not provided. It is true that the authors
used contours from a previously published dataset which is (presumably) publicly available. However,
in order to replicate their results, a complete list of the 10 (out of 20) chosen shapes for each shape
category is required. It would also be necessary to report which of those 10 shapes were placed in
group A and placed in group B for each category. The best way to allow for replication would be to
include a raw dataset that contains only the contours they analyzed, the group identities of each
contour, and the manually identified landmarks of each contour.
Response: We beg to disagree. All data are available in the project’s repository (https://github.com/
0todd0000/lmfree2d) as described in the first paragraph of the Methods in the original manuscript
(Lines 103-106 in the revised manuscript). Please see the Data folder in the repository, where you will
find “contours.csv” and “landmarks.csv” files for all datasets. The Python and R folders in the
repository contain scripts that replicate all reported results, including all figures. The Notebooks
folder additionally contain step-by-step instructions for users unfamiliar with R and/or Python. We
apologize if this was unclear in the original manuscript, so please advise if you believe there is a
clearer way to report this in the manuscript.
Experimental design
{1.7} The research within this manuscript appears to fit within the scope of the journal, although it
largely represents a combination of previously published methods into a novel analysis pipeline. The
authors do a good job identifying the purpose of their new pipeline and their comparison of relevant
statistical methods. It appears that the authors have completed the work at a high technical and ethical
standard. In most cases, the authors have done a great job describing the methodological details.
However, there are a few places where further detail will be helpful to readers.
Response: We agree. We hope that our responses to your other comments have provided more
complete details for readers.
{1.8} Line 117 – Since there are 20 shapes within each category that were presumably produced using
explicit randomization parameters, why not have two random groups of 10 shapes. In other words, for
each category, why were two groups of 5 chosen and explicitly how were they chosen? Were they
chosen randomly or were they subjectively chosen in an effort to produce two groups that would
probably display a significant difference in shape?
Response: We apologize for not making this clearer in our original manuscript. Our short response is:
(1) we feel that it is most important to demonstrate that the proposed method is sufficiently powerful
to detect effects in small samples, and (2) it was not feasible to use all 20 shapes.
Regarding (1): a key reason for using just 10 shapes is that the literature suggests that parametric
procedures are inappropriate for hypothesis testing purposes. For example, from Collyer et al. (2019,
p.358):
“...increasing the number of variables can decrease statistical power or preclude hypothesis testing
about shape differences, altogether, using parametric multivariate tests (as parametric tests use
probability distributions based on error degrees of freedom).”
Since SPM refutes this statement (Friston et al. 2007), we feel that it is more important to both (i)
implicitly address this criticism of parametric techniques --- by considering small samples, and (ii)
show some relatively serious limitations of the proposed method (Fig.11), than to consider slightly
larger samples. While we could indeed add larger-sample analyses, we do not believe that these
results would add appreciably to our paper’s conclusions, so we have opted to retain our original
datasets.
Were they chosen randomly or were they subjectively chosen in an effort to produce two groups that
would probably display a significant difference in shape?
They were chosen pseudo-randomly, to span a range of effect sizes and p values. For example: Fig.8h
depicts relatively large differences, and Fig.8c depicts relatively small differences.
Please find additional details regarding (2) in the revised manuscript (Lines 114-125).
{1.9} Line 148 – It would be useful to indicate what data is being used as the input for the T2 statistic.
Two dimensional coordinate data is implied later in the section, but the input data should be made
explicit. Also, it is not immediately clear what is meant by T2max. Is this the maximum T2 value
between group A and B across all landmarks (which the results imply) or is this the maximum
difference between specimens as calculated with a summary value of all landmarks within a
specimen?
Response: We agree. Please find that we have added equations detailing the T2 statistic and its
relation to T2max and to inference in general (Lines 155-176). Please find these changes throughout
both the Landmarks MV and Contours mass-MV sections of the Methods.
{1.10} Line 165 – Certainly, a dataset of unordered points is possible within an automated
phenotyping framework. However, it seems unlikely that 2D contour points generated within a real
world scientific framework would be unordered, because there is clearly an order in position along the
contour (i.e., it is a fairly simple ~1.5 dimension object). Are there examples of common real-world
research datasets where the contour points are usually (i.e., generally) unordered at the beginning of
the statistical analysis?
Response: We agree for 2D contours, but we believe that it is necessary to demonstrate unordered
point support in order to (indirectly) demonstrate generalizability to 3D analysis. If the proposed
method cannot handle unordered points, it will likely not be able to handle 3D surface analysis. Please
find that we have attempted to clarify this in the “Contours UV analysis” section (Lines 189-192).
{1.11} Line 175 – In order to complete a valid Procrustes ANOVA of automatically identified contour
points, it would be necessary for all contours to be represented by the same number of points. At what
stage in the Contours UV analysis is the number of points within all shapes equalized? Is there an
interpolation to increase the number of points in all shapes to match the contour with the largest
number of points (as implied in this passage)? If not, then which points are chosen to be included in
the Procrustes analysis and how?
Response: The same number of points is achieved at the contour parameterization stage (Lines
192-194). Once the surface is parameterized over the 1D domain u for 2D contours (Fig.6) or over the
2D domain (u,v) for 3D surfaces (Fig.10), an arbitrary number of points can be used to discretize the
domain. We apologize, we thought this was implicit in the phrase “contour parameterization”. Please
find that we have edited this paragraph to emphasize that, following parameterization, an arbitrary
number of points can be selected, and that these points are indeed calculated using interpolation
(Lines 195-197).
As an aside, please note that we tested a variety of different parameterizations and subsequent
discretizations, including for example varying the number of points between 50 and 1000. We found
negligible qualitative effects on the results, as expected, because low-frequency content dominated the
spatial power spectrum; note in Fig.6, for example, that the UX and UY planes show very-low
frequency content, about 1 Hz over the domain [0,1]. For simplicity and reporting convenience, we
decided to use the maximum number of points across the shapes in a given dataset, as stated in the
original and current manuscripts. We interpret your comment to imply that use of the maximum
number of points itself is not problematic, but please advise if we have misunderstood.
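For concreteness, the parameterize-then-resample step described above can be sketched as follows. This is only an illustrative sketch using SciPy's periodic B-spline routines rather than the geomdl/NURBS implementation in the project repository; the function name and point counts are our own illustrative assumptions.

```python
import numpy as np
from scipy import interpolate

def resample_contour(points, n=500):
    """Fit a periodic B-spline to an ordered (m, 2) contour and return n
    interpolated points; once parameterized over u in [0, 1], n is arbitrary."""
    x, y = points[:, 0], points[:, 1]
    tck, _ = interpolate.splprep([x, y], s=0, per=True)   # interpolating, periodic spline
    u = np.linspace(0, 1, n, endpoint=False)              # arbitrary discretization of u
    xi, yi = interpolate.splev(u, tck)
    return np.column_stack([xi, yi])

# e.g., the same contour discretized with 50 and with 1000 points
theta   = np.linspace(0, 2 * np.pi, 73, endpoint=False)
contour = np.column_stack([np.cos(theta), np.sin(theta)])
coarse, fine = resample_contour(contour, 50), resample_contour(contour, 1000)
```

Because low-frequency content dominates these contours, the choice of n within this range has little qualitative effect on downstream results, as noted above.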
Validity of the findings
{1.12} The underlying data and results appear robust, statistically sound, and controlled. The results
are presented plainly and clearly, which is appreciated. There are a couple minor issue in the results
that can be quickly fixed.
Response: Thank you for this comment. We have responded to your individual points below.
{1.13} Table 2 – Face UV Contours p-value of 0.052 is not actually lower than the 0.050 cutoff. This
should be corrected in the table and associated text
Response: We agree that 0.052 is not below the 0.05 cutoff, but please note that the original table's
caption stated that blue values are below 0.1 but not below 0.05. Please find that we have attempted to
clarify by using bold font to highlight only those values below 0.05.
{1.14} Line 215 – Table number is missing
Response: We apologize, this was a source code error. We have fixed the problem.
{1.15} Moving into the Discussion, most statements are reasonable suppositions from the presented
results. The author’s attempt to present the data at face value for the judgement of readers is
somewhat successful. In addition to highlighting some strengths of their proposed pipeline, the
authors do mention a few weaknesses. This is also appreciated. However, in some parts of the
discussion, the strength of the proposed pipeline and the weakness of Procrustes ANOVA analysis are
overexaggerated to the point of being misleading. These statements should be amended to more
accurately reflect the general value of these methods.
Response: We agree. Also in response to comment {1.3}, please find that we have attempted to
provide a fairer interpretation throughout the Introduction and Discussion. Please advise if there are
any remaining points you feel we have exaggerated or otherwise unfairly portrayed.
{1.16} Line 226-229 – It is shown that the parametric T2 method and the nonparametric method
identify some of the same significant differences. This is true. However, this does not show that
parametric hypothesis testing is valid for these types of datasets. The author’s statement that their use
is “possible” implies that it is statistically valid and appropriate to use for these datasets. If the authors
want to say that parametric approaches are statistically valid and appropriate, they need to show that
the input data for the T2 test actually meets the assumptions of a T2 distribution. If not, then they need
to be clear that this may not be a statistically valid approach even if it does identify some of the same
significant differences in shape as Procrustes ANOVA.
Response: We agree that this does not show validity, but we beg to disagree that this passage makes
that claim. Our only point was that the convergence of parametric and nonparametric results suggests
that the parametric approach’s assumption of Gaussian variance is a reasonable one. We agree that
this has nothing to do with validity, but again, please note that our text does not claim that this implies
validity. Perhaps this is our misunderstanding of the word “possible”, but in our opinion “possible”
means only the opposite of “impossible” and does not directly relate to validity. Please find that we
have attempted to clarify by explicitly stating that this implies neither validity nor appropriateness
(Lines 263-270)
We wish to disagree with the premise of the following statement:
If the authors want to say that parametric approaches are statistically valid and appropriate, they need
to show that the input data for the T2 test actually meets the assumptions of a T2 distribution.
We believe that showing adherence to a T2 distribution is unnecessary, as this is not an assumption of
the test. The test assumes only residual (bivariate) normality, so it would be sufficient to demonstrate
the adherence of point residuals’ variance to (bivariate) Gaussian distributions. Our more important
objection to the statement above is that it is impossible to prove distributional adherence; it is only
possible to test for deviations from distributional assumptions. Furthermore, failing to reject
hypotheses regarding distributional assumptions does not imply that the data do in fact adhere to the
assumed distribution, because distributional tests rely on a number of factors including sample size,
for example. An additional problem is that rejecting distributional hypotheses tests also does not
imply that the data fail to adhere to distributional assumptions; it could instead indicate the presence
of an outlier, for example. In other words, while we could indeed test SPM’s distributional
assumptions in a point-by-point manner, failing to reject the null hypothesis of distributional
adherence would, by itself, be unconvincing. Instead of directly testing for distributional deviations,
we felt that it would be more compact and also ultimately more convincing to conduct nonparametric
tests (Fig.8), which do not assume normality. Insofar as parametric and nonparametric results
qualitatively agree, it can be concluded that the parametric approach’s assumption of normality is a
reasonable one, or even less strongly: that parametric assumptions were largely inconsequential for
the presented results. Fig.8 does indeed show a few quantitative differences between the parametric
and nonparametric results, but we would argue that these differences are generally minor, and that by
and large the qualitative interpretations were unaffected by the use of parametric vs. nonparametric
techniques. In our opinion, this represents relatively strong (albeit indirect) evidence that the
parametric approach’s distributional assumptions are appropriate, or at least inconsequential, for these
particular datasets. Please find that we have added a paragraph to the Discussion to emphasize these
points (Lines 271-282).
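To make this parametric-vs-nonparametric comparison concrete at a single contour point, the following is a minimal sketch with synthetic data and our own variable names; it is not the manuscript's spm1d analyses. The parametric p value uses the standard T2-to-F transformation, and the nonparametric p value uses exhaustive relabeling of the two groups.

```python
import itertools
import numpy as np
from scipy import stats

def hotellings_T2(A, B):
    """Two-sample Hotelling's T2 for one bivariate contour point; A, B are (n, 2)."""
    nA, nB = A.shape[0], B.shape[0]
    d = A.mean(axis=0) - B.mean(axis=0)
    W = (np.cov(A.T) * (nA - 1) + np.cov(B.T) * (nB - 1)) / (nA + nB - 2)
    return (nA * nB) / (nA + nB) * d @ np.linalg.solve(W, d)

rng = np.random.default_rng(0)
A = rng.normal(0.00, 0.1, (5, 2))   # synthetic group A coordinates at one point
B = rng.normal(0.15, 0.1, (5, 2))   # synthetic group B coordinates at one point
nA, nB, p = 5, 5, 2
T2 = hotellings_T2(A, B)

# parametric p value via the standard T2-to-F transformation
F = (nA + nB - p - 1) / (p * (nA + nB - 2)) * T2
p_parametric = stats.f.sf(F, p, nA + nB - p - 1)

# nonparametric p value via exhaustive relabeling (10-choose-5 = 252 labelings)
pooled, T2_perm = np.vstack([A, B]), []
for ia in itertools.combinations(range(nA + nB), nA):
    ib = np.setdiff1d(np.arange(nA + nB), ia)
    T2_perm.append(hotellings_T2(pooled[list(ia)], pooled[ib]))
p_nonparametric = np.mean(np.array(T2_perm) >= T2)
```

Qualitative agreement between the two p values at each point is the kind of convergence referred to above.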
{1.17} Line 232 (point 1) – It is true that manual landmark placement requires subjective judgement
on the placement of landmarks on a surface or contour. In this way, it is more subjective than the
placement of landmarks using the automated method proposed here. However, the choice of
parameters and procedures for the automated landmark placement algorithm are also subjective. In
addition, the fact that a given version of the algorithm produces 100% consistent landmark placement
(i.e., high repeatability) does not necessarily mean that the automatically identified landmarks provide
accurate, useful, or interpretable results. The authors should acknowledge that this method (as
described here) might be more useful for some datasets and research questions than others. In other
cases, care should be taken by researchers to validate that this “objective” method quantifies shape in
a useful and interpretable way.
Response: We agree with your comment, but we believe that the original text neither states nor
implies that objective, algorithmic procedures are “accurate”, “useful” or “interpretable”; in our view
the text uses “objective” only to imply “algorithmic”. Regardless, we agree that it would be useful to
clarify this point, so please find that we have attempted to clarify the meaning of “objective” on Lines
288-289 and Lines 323-327.
{1.18} Line 232 (point 2) – The results do not show that this method generally provides "direct
morphological meaning in hypothesis testing results.” Yes, the T2 method does produce a p-value for
all points along a contour, because a test is completed for each contour point. This is true. However,
because those points have been randomly placed (rather than being morphologically defined
homologous points of biological meaning), the points have no inherent meaning. It is absolutely still
necessary to produce a visualization of significant landmark location along the contour (as done in
Figure 8) in order to begin interpreting the meaning of those significant landmark coordinate
differences. Both the Procrustes ANOVA and the proposed method require additional steps of
visualization and follow up tests to generate valuable interpretations of results. Furthermore,
randomly identified points along a contour have no inherent biological, morphological,
developmental, or other meaning while well-defined homologous (and “subjective” manually placed)
landmarks or semi-landmarks often do.
Response: We agree with this comment, as we fear that our choice of “direct” was a poor one. As a
brief response, please note that we have changed the phrase “direct morphological meaning” to
“implicit morphological meaning”. We expect that “implicit” is more appropriate in this case. Please
find relevant revisions on Lines 359-373.
In supplement to those revisions, please note that we wish to object to the following statement:
“...randomly identified points along a contour have no inherent biological, morphological,
developmental, or other meaning”
We object to this statement because our original text neither states nor implies that individual contour
points have inherent meaning. We fully agree that a single contour point itself has no morphological
meaning, but we would argue that the full set of all contour points embodies substantial
morphological meaning. By extension, we would argue that a method that operates on the full set of
all contour points, and which reports statistical results in the context of the full set of all contour
points, also embodies substantial morphological meaning. Please find that we have attempted to
clarify our perspective on Lines 359 - 373.
{1.19} Line 255 – The authors correctly state that manual landmark placement can be very time
consuming. However, they also imply that this contour based method requires much less researcher
time to complete. This is not true in this reviewer’s experience with landmarking, segmentation, and
contour definition. The authors have completely ignored the amount of time that it takes to produce
the contours upon which automated landmarks are placed in a real world research scenario. On a 2D
image, manually identifying landmarks might take many minutes to identify carefully. However,
identifying a sparse set of landmarks on an image is much less time consuming than drawing an
accurate contour on a 2D image. This is a major reason why landmark based methods, rather than
contour-based methods have been preferred in the past by researchers. Even if a semi-automated
method is used to identify a contour (such as with edge detection), a researcher usually needs to open
up every image in order to verify that the automated contours are accurate and often needs to make at
least a few corrections to each contour. So, this reviewer agrees that the automated placement of
points along an already defined contour using the described method is faster than manually
identifying landmarks on an already defined contour. But, this will not translate into real world time
savings for researchers in the way that the authors imply. The time and effort required to produce
accurate contours in real world research situations needs to be acknowledged.
Response: We agree, please find that we have revised the text accordingly (Lines 351-358).
{1.20} Line 277 & Figure 8 – The authors correctly point out that the proposed multivariate method
fails to identify differences in group A and group B shape across many of the shape categories,
including around the horseshoe, the belly of the fish, differences in the head of key, and the top of the
bell. As mentioned, this is likely because the drawn mean shapes in Figure 8 do not represent the full
scope of the shape variance within group A and group B of each category. It is highly likely that high
contour shape variance explains 1) the failure of the multivariate method to identify these horseshoe
differences AND 2) the failure of the UV method to identify significant shape differences for several
of the shape categories. However, in order for the reader to easily judge this possibility, they need to
have a visualization of the shape variance. To clarify this issue, the authors should include the outlines
of group A mean, group B mean (perhaps as thick colored lines) and all individual shape contours
(perhaps as thin dotted colored lines) within Figure 8’s panels. This will improve reader understanding
of the dataset and improve the strength of the discussion about why each method failed to identify
significant shape differences in some cases.
Response: We agree. Please find that we have added all contours to Fig.8 as suggested. We agree with
you that this helps to portray the variance and how variance magnitude relates to the results.
However, you will notice that the suggested plot does not completely resolve interpretation issues. For
example, in panel (e) the apparent mean difference at the belly of the fish looks relatively large with
respect to the variance, and in panel (h) the mean difference at the top of the horseshoe does not look
appreciably smaller than the top-left and top-right edges where significance was observed. We believe
that there are two additional factors, not visible in these plots, that are important to consider:
(Factor 1) Covariance
The effect at any given point depends not only on the magnitude of the variance, but also on the
structure of the covariance. For example, Fig.R1 (below) depicts hypothetical data for a single contour
point; the test statistic (T2) value at this point depends not only on the magnitudes of the mean
difference and variance --- as correctly indicated in your comment --- it also depends on the direction
of the covariance ellipse’s principal axes with respect to the mean difference vector. In the Fig.R1
example, the mean difference vector (ΔF) is approximately aligned with the direction of minimum
variance, implying that the T2 is relatively large. If the data were instead distributed with the
maximum variance direction in the same direction as ΔF, the T2 value would be much smaller.
The first problem here is that both (i) the mean difference vector and (ii) the covariance ellipses are
generally required to understand why a particular T2 value is large or small. The second problem is
that it is not possible to visualize (i) and (ii) on contour plots like those in Fig.8, because there are too
many contour points, and the resulting covariance plots would become unintelligible.
(Factor 2) Non-perpendicular correspondence lines
Consider Fig.5d (main manuscript), and in particular the bottom right corner, where the
correspondence lines are not perpendicular to the contour. If the closest points on each contour were
in correspondence, these lines would appear much more perpendicular. However, a lack of
perpendicular correspondence lines does not imply incorrect correspondence. Consider Fig.R2
below, which depicts correspondence lines for a simple line segment. From this simple depiction it is
clear that deformation generally yields non-perpendicular correspondence lines. Since the goal of
registration and correspondence algorithms is generally to achieve minimum net deformation, non-perpendicular correspondence lines naturally emerge.
This is highly relevant to the Fig.8 results, because one is tempted to regard the closest points on two
contours to be in correspondence. For example, in Fig.8e one is tempted to regard the closest points in
the fish bellies to be in correspondence. However, Fig.R2 shows that following this temptation could
potentially lead to illusions regarding true correspondence.
Thus, while we agree with Reviewer #1 that variance is essential to consider, and that the suggested
variance visualization (reflected in the new Fig.8) provides a useful qualitative impression of
variance, we fear that it is impossible to resolve all Factor 1 and Factor 2 visualization issues in a
single plot. While we could create additional figures to explain effects at specific contour points, we
have opted not to do so for a simple reason: the hypothesis tests already explicitly embody both
Factor 1 and Factor 2.
Figure R1. Depiction of variance at a single contour point. F is a position vector. Small circles depict
individual observations. Thick colored arrows depict mean positions. The thick black arrow depicts the (vector)
difference between the two groups, and thin black lines indicate its x and y components. The ellipses depict
within-group (co)variance, scaled to two standard deviations (to encompass all responses).
(a) Before deformation
(b) After deformation
Figure R2. Depiction of deformation’s qualitative effects on correspondence lines. (a) Initial configuration in
which two line segments are in correspondence. Here vertical lines represent the correspondence between three
arbitrary points. (b) Following deformation of one line segment, the correspondence lines needn’t be
perpendicular to either line. Non-perpendicular correspondence lines therefore (i) generally emerge when
geometry is allowed to deform, and (ii) do not necessarily reflect incorrect correspondence. These concepts are
important for considering correspondence results like those depicted in Fig.5d (main manuscript), in which
non-perpendicular correspondence lines do not necessarily represent an incorrect correspondence result.
{1.21} Line 295 – As mentioned in my comments about the introduction, previous methods of 3D
automated landmarking and analysis like the Claes methods should be cited here.
Response: We agree. Please find that we have added a variety of references to both the Introduction
and Discussion, and have also substantially revised both.
{1.22} Line 297-305 – Given the described sensitivity to algorithmic specifics, would the authors
recommend that multiple separate MV tests be run with random point location seeds? This might
allow researchers to identify whether or not their shape is at ~0.05 p-value and to determine the
typical level of significance for the shape. Computationally, this should be possible given the small
amount of time required for each run.
Response: We agree that random point locations would indeed be useful to consider. However, we
don't think that this would be sufficient, because as Reviewer #1 has pointed out above, algorithmic
specifics including parameters and algorithm choices are also important to consider. Please find that
we have added a clarification in the text (Lines 405-411).
{1.23} Comments for the Author
Generally speaking, this reviewer was glad to read this paper, found the quality of the results
presentation to be high, and will genuinely consider the automated contour point identification
method for future use.
Response: Thank you very much for this comment, we have no response.
_______________________________________________________________________
REVIEWER #2
Basic reporting
{2.1} Line 79: It is absolutely incorrect to claim that geometric morphometric analyses are generally
univariate, relying on analysis of PCs. There are innumerable examples that show this is not correct
and as such this statement suggests that the authors do not know the GM literature. It is also not true
that a univariate Procrustes ANOVA is generally used to test for shape differences. The criticism of
Procrustes ANOVA and RRPP is also unnecessary for this paper. Validating a robust landmark-free
method would be tremendously useful and entering into what appears to be an insufficiently well
informed (or at least insufficiently supported) critique of the statistical hypothesis testing in GM is an
irrelevant distraction for this paper.
Response: We agree, and we apologize for this statement and our inappropriate literature review.
Please find that we have made substantial changes to the Introduction and Discussion to provide a
clearer literature context, especially in response to Comments {1.3} and {1.4} above.
{2.2} A major issue is that it is not clear how one might use this form of morphometric qualification
beyond the simple task of statistical comparison of mean shape and the visualization of differences in
shape between groups. This is only the most basic of morphometric tasks. A key requirement for
morphometric methods is the ability to extract measures of shape variation that can be quantitatively
compared to covariates. For example, the regression of shape on size to quantify allometry is a basic
task in most morphometric analyses. Similarly, regression on covariates that one wishes to control for
or remove from an analysis is a basic requirement. Finally, much of GM focuses on analysis of
covariance structure with methods such as PLS used to quantify covariation in shape between parts of
an organism. There are existing landmark-free methods that enable comparison of means and
visualization of mean differences. Most fail to be adopted by the morphometric community because
they fail to allow for analyses of integration, covariance structure or analyses involving multiple
covariates. The claim is made in the paper that the method is amenable to standard (including
multivariate statistical) techniques such as regression. This is a critical point and needs to be
developed further. How might one do a regression of shape on size, visualize the resulting variation
and then remove the covariance of shape on size prior to further analyses? For this method to be
viewed as promising by the morphometric community, the pathway to such analyses must be clear.
Response: We agree that generalization beyond mean shape comparisons is important, and we also
agree that our paper failed to address this issue in detail. In response to your comment, please find that
we have conducted a new supplementary analysis using synthetic data, and have considered the
results (Fig.9) in the Discussion (Lines 290-315).
How might one do a regression of shape on size, visualize the resulting variation and then remove the
covariance of shape on size prior to further analyses?
The short answer is: ANCOVA or MANCOVA, but with two important caveats:
1. A clearly defined and generally accepted metric for "size" exists; and
2. Registration is acceptable.
If either (1) or (2) is imperfect, then the argument spirals back to numerical and algorithmic
sensitivities (see Comment {1.22} above.)
In order to isolate the issues raised in this comment, Fig.9 uses a synthetic dataset where (1) and (2)
are well controlled. In this case, one can use standard MANCOVA techniques to conduct both
regression analysis (Fig.9c,d) and group analysis (Fig.9e,f).
Please note:
•
We are uncertain what you mean by “visualize the resulting variation”; if Fig.9 does not
satisfy this concern, we respectfully request clarification of “resulting variation”.
•
We appreciate that perfect (1) and (2) would never be the case in the analysis of real datasets,
and we have considered this point in the Discussion.
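As a rough sketch of the MANCOVA-style workflow referred to above (illustrative array names and synthetic data only; this is not the Fig.9 analysis itself), the size covariate can be regressed out of the corresponded contour coordinates point-by-point before group testing:

```python
import numpy as np

# Y: (n_shapes, n_points, 2) registered, corresponded contour coordinates
# size: (n_shapes,) size covariate (e.g., centroid size); group: 0/1 labels
rng   = np.random.default_rng(1)
n, m  = 20, 101
Y     = rng.normal(0, 0.05, (n, m, 2))
size  = rng.normal(10, 1, n)
group = np.repeat([0, 1], n // 2)

# design matrix: intercept + centered size
X = np.column_stack([np.ones(n), size - size.mean()])

# point-by-point least-squares fit of shape on size, then remove the size effect
Yf   = Y.reshape(n, -1)                       # flatten to (n, 2m)
beta = np.linalg.lstsq(X, Yf, rcond=None)[0]  # (2, 2m) coefficients
Yres = Yf - X[:, 1:] @ beta[1:]               # keep intercept, subtract fitted size effect

# size-adjusted coordinates, ready to pass to the group-level hypothesis test
Y_adjusted = Yres.reshape(n, m, 2)
```

Visualizing the fitted size effect (e.g., contours reconstructed at small vs. large values of the covariate) follows directly from the same point-wise coefficients.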
{2.3} Experimental design
I have no concerns in this area.
(No response)
{2.4} Validity of the findings
The validation is robust as far as it goes, although see more general concerns as outlined above.
Response: Thank you very much for this confirmation. Please find our responses to your general
concerns above.
" | Here is a paper. Please give your review comments after reading it. |
103 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>This paper proposes a computational framework for automated, landmark-free hypothesis testing of 2D contour shapes (i.e., shape outlines), and implements one realization of that framework. The proposed framework consists of point set registration, point correspondence determination, and parametric full-shape hypothesis testing. The results are calculated quickly (<2 s), yield morphologically rich detail in an easy-to-understand visualization, and are complemented by parametrically (or nonparametrically) calculated probability values. These probability values represent the likelihood that, in the absence of a true shape effect, smooth, random Gaussian shape changes would yield an effect as large as the observed one. This proposed framework nevertheless possesses a number of limitations, including sensitivity to algorithm parameters. As a number of algorithms and algorithm parameters could be substituted at each stage in the proposed data processing chain, sensitivity analysis would be necessary for robust statistical conclusions. In this paper, the proposed technique is applied to nine public datasets using a two-sample design, and an ANCOVA design is then applied to a synthetic dataset to demonstrate how the proposed method generalizes to the family of classical hypothesis tests. Extension to the analysis of 3D shapes is discussed.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The statistical analysis of shape variation is relevant to a wide variety of academic fields including: evolutionary biology <ns0:ref type='bibr' target='#b43'>(Mitteroecker and Gunz, 2009)</ns0:ref>, biomechanics <ns0:ref type='bibr'>(Pedoia et al., 2017), computer vision (Murphy-Chutorian and</ns0:ref><ns0:ref type='bibr' target='#b47'>Trivedi, 2008)</ns0:ref>, and many others <ns0:ref type='bibr' target='#b21'>(Da Costa and Cesar, 2000;</ns0:ref><ns0:ref type='bibr' target='#b66'>Rohlf and Marcus, 1993;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adams et al., 2004</ns0:ref><ns0:ref type='bibr'>Adams et al., , 2013))</ns0:ref>. A key methodological framework for the statistical analysis of shape to have emerged in the literature is Geometric Morphometrics <ns0:ref type='bibr' target='#b19'>(Corti, 1993;</ns0:ref><ns0:ref type='bibr' target='#b10'>Bookstein, 1996;</ns0:ref><ns0:ref type='bibr' target='#b69'>Slice, 2007;</ns0:ref><ns0:ref type='bibr' target='#b78'>Zelditch et al., 2012)</ns0:ref>. Geometric Morphometrics consists of a variety of statistical techniques, ranging from classical hypothesis testing (e.g. <ns0:ref type='bibr' target='#b29'>Goodall, 1991)</ns0:ref> and classical dimensionality reduction techniques like principal component analysis <ns0:ref type='bibr' target='#b1'>(Adams et al., 2004)</ns0:ref> to machine learning techniques like unsupervised clustering <ns0:ref type='bibr' target='#b59'>(Renaud et al., 2005)</ns0:ref>. This paper is concerned primarily with classical hypothesis testing as it pertains to shape analysis.</ns0:p><ns0:p>A common geometric morphometric approach to classical hypothesis testing regarding group differences (depicted in Fig. <ns0:ref type='figure'>1a</ns0:ref>), consists of: (1) landmark definition, (2) spatial registration, and (3) Procrustes ANOVA <ns0:ref type='bibr' target='#b29'>(Goodall, 1991)</ns0:ref>. Landmark definition refers to the manual identification and digitizing (i.e., XYZ coordinate specification) of homologous points on multiple objects, for example the corners on polyhedra. Spatial registration refers to the optimal, non-shearing affine alignment of a set of landmarks; that is, the optimal translation, rotation and scaling of each set of landmarks is calculated so that the PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science the landmarks are optimally aligned in space. Procrustes ANOVA is effectively equivalent to classical ANOVA, where Procrustes distance is the dependent variable <ns0:ref type='bibr' target='#b78'>(Zelditch et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Landmarks with evolutionary, developmental or functional homology are essential for accurate interpretation of results <ns0:ref type='bibr' target='#b31'>(Hallgrimsson et al., 2015)</ns0:ref>, especially for biological studies which seek to understand morphological variation in the context of evolution (e.g. 
<ns0:ref type='bibr' target='#b70'>Stayton, 2005;</ns0:ref><ns0:ref type='bibr' target='#b45'>Morgan, 2009;</ns0:ref><ns0:ref type='bibr' target='#b14'>Casanovas-Vilar and Van Dam, 2013;</ns0:ref><ns0:ref type='bibr' target='#b24'>Dumont et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b51'>Page and Cooper, 2017)</ns0:ref>, ontogeny (e.g <ns0:ref type='bibr' target='#b40'>Klingenberg and McIntyre, 1998;</ns0:ref><ns0:ref type='bibr' target='#b44'>Mitteroecker et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b68'>Singleton, 2015)</ns0:ref> or function (e.g. <ns0:ref type='bibr' target='#b73'>Terhune et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b74'>Toro-Ibacache et al., 2016)</ns0:ref>. A key practical advantage of landmark approaches is that they impose problem tractability; they convert abstract, usually high-dimensional shape representations including images, scans and line contours, to a relatively small set of numeric coordinates which can be assembled into readily processable data formats like text files and spreadsheets. This practical advantage is reinforced by well-established statistical theory (e.g. <ns0:ref type='bibr' target='#b30'>Gower, 1975;</ns0:ref><ns0:ref type='bibr' target='#b36'>Kendall, 1977</ns0:ref><ns0:ref type='bibr' target='#b37'>Kendall, , 1984</ns0:ref><ns0:ref type='bibr' target='#b38'>Kendall, , 1985;;</ns0:ref><ns0:ref type='bibr' target='#b39'>Kent, 1994;</ns0:ref><ns0:ref type='bibr' target='#b61'>Rohlf, 1999)</ns0:ref> which describes a comprehensive solution for dealing with shape data's inherent dimensionality problem <ns0:ref type='bibr'>(Rohlf, 2000b,a;</ns0:ref><ns0:ref type='bibr' target='#b18'>Collyer et al., 2015)</ns0:ref>.</ns0:p><ns0:p>A common approach to landmark-based hypothesis testing is Procrustes ANOVA. While landmark data themselves are multivariate (i.e., multiple landmarks, each with multiple coordinates are used to describe a single shape), Procrustes ANOVA uses a univariate metric (Procrustes distance) to test shaperelevant hypotheses. One problem with this approach is that a single value is likely inadequate to fully characterize shape effects. Many other shape descriptors exist <ns0:ref type='bibr' target='#b41'>(Kurnianggoro et al., 2018)</ns0:ref>, including both univariate metrics like eccentricity and multivariate metrics like geometric moments <ns0:ref type='bibr' target='#b79'>(Zhang and Lu, 2004)</ns0:ref>.</ns0:p><ns0:p>It has been argued that focus on relatively low dimensional shape metrics like these is necessary in order to achieve suitable statistical power, with the assumption that too many variables relative to the number of phenotypes can preclude hypothesis testing via parametric methods, especially for small samples <ns0:ref type='bibr' target='#b18'>(Collyer et al., 2015)</ns0:ref>; one aim of this paper is to challenge that assertion, and to show that hypothesis testing is indeed possible for even high-dimensional representations of shape, and with suitably high statistical power for even relatively small sample sizes.</ns0:p><ns0:p>A related sample size-relevant theoretical limitation of Procrustes ANOVA is that there is no known parametric solution to the underlying Procrustes distance probability distributions. Consequently, statistical inference is conducted nonparametrically, often using bootstrapping or permutation techniques <ns0:ref type='bibr'>(Zelditch et al., 2012, pp.248-259)</ns0:ref>. 
These nonparametric procedures are inherently poor for small sample sizes <ns0:ref type='bibr' target='#b6'>(Anderson and Braak, 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Brombin and Salmaso, 2009)</ns0:ref> because the probability distributions are constructed empirically and numerically, using the actual data, and both the precision and accuracy of these nonparametrically constructed distributions can decrease substantially with small sample sizes.</ns0:p><ns0:p>A variety of landmark-free or landmark-minimal methods also exist, including for example techniques that fit mathematical curves to shape outlines <ns0:ref type='bibr' target='#b60'>(Rohlf, 1990)</ns0:ref>. One technique that has been particularly widely used is elliptical Fourier analysis <ns0:ref type='bibr' target='#b17'>(Claude, 2013;</ns0:ref><ns0:ref type='bibr' target='#b9'>Bonhomme et al., 2014)</ns0:ref>, which considers the spatial relations amongst neighboring points, and characterizes the spatial frequencies along the contour perimeter as a change-relevant representation of shape. Elliptical Fourier analysis has been frequently employed to analyse structures on which few homologous landmarks can be identified such as fins, jaws and teeth (e.g. <ns0:ref type='bibr' target='#b27'>Fu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Hill et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Cullen and Marshall, 2019)</ns0:ref>. These methods are highly relevant to the methods described in this paper, in that they deal with original, high-dimensional shape data like 2D contours and 3D surface scans.</ns0:p><ns0:p>While landmark-free or landmark-minimal methods initially operate on original high-dimensional shape data, they tend to use much lower-dimensional representations of shape when conducting classical hypothesis testing. For example, elliptical Fourier analysis tends to conduct hypothesis testing using a relatively small number (fewer than ten) harmonic coefficients <ns0:ref type='bibr' target='#b9'>(Bonhomme et al., 2014)</ns0:ref>. Common landmark and landmark-free methods are thus similar from a hypothesis testing perspective in that the hypothesis tests ultimately pertain to relatively low-dimensional shape metrics.</ns0:p><ns0:p>The main aim of this paper was to show that classical hypothesis testing is possible on original, high-dimensional shape data, and in particular on continuous surfaces, without the need for low-dimensional shape representations, and with suitably high power even for analyses of relatively small samples. The methodology, which we refer to as 'continuous, mass-multivariate analysis' consists of a number of previously described techniques including: (1) point set registration, (2) correspondence, and (3) mass-multivariate hypothesis testing. This combination of techniques allows one to conduct landmark-free hypothesis testing on original surface shapes. For interpretive convenience we limit focus to 2D contours</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>Analyses were conducted in Python 3.6.10 (van Rossum, 2019) using <ns0:ref type='bibr'>Anaconda 3.6.10 (Anaconda, 2020)</ns0:ref> and in R 3.6.2 (R Core Team, 2019) . Data processing scripts are available along with all original and processed data in this project's public repository at: https://github.com/0todd0000/ lmfree2d.</ns0:p></ns0:div>
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>Nine datasets were analyzed (Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>). All datasets were taken from the the open-source 2D Shape Structure database <ns0:ref type='bibr' target='#b13'>(Carlier et al., 2016)</ns0:ref> (http://2dshapesstructure.github.io). The database consists of 70 different shape classes. Inclusion criteria for shape class were: (i) qualitatively similar geometry in at least 10 shapes (Fig. <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>), and (ii) at least four readily identifiable landmarks for all contour shapes.</ns0:p><ns0:p>Each dataset consisted of 20 contour shapes, where a 'dataset' represents a shape class (e.g., 'Bell' or 'Face') and individual shapes represent morphological variation within that shape class. We manually selected ten shapes from each dataset in a pseudo-random manner in order to span a range of effect sizes; in the Results, note that p values span a wide range (p < 0.001 to p > 0.9). We selected just ten shapes primarily because it has been suggested that parametric procedures are unsuitable for the morphological analyses of small samples <ns0:ref type='bibr' target='#b18'>(Collyer et al., 2015)</ns0:ref>, and we wished to demonstrate that the proposed parametric technique is indeed sufficiently powerful for small-sample analyses. Secondary reasons for considering just 10 shapes included: (1) qualitatively different within-class geometry, implying that statistical comparisons would be dubious if all 20 shapes were used, (2) inconsistent curvature characteristics (e.g., some with sharp corners, others with no discernible corners), implying landmarking difficulties, and (3) untrue contour data (e.g., internal loops and thus non-convex polygons) implying that contour parameterization was not possible for all shapes. Two-sample tests were conducted on each dataset using the four approaches as described below. For replicability, the final set of ten shapes selected for analysis from each class are redistributed in this project's repository at: https://github.com/0todd0000/lmfree2d. Note that the ultimately selected contours had a variable number of contour points within each dataset (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). </ns0:p></ns0:div>
<ns0:div><ns0:head>Data processing</ns0:head><ns0:p>The 2D contour shape data were analyzed using four related approaches, consisting of the four combinations of (i) landmarks vs. contours, and (ii) univariate (UV) vs. mass-multivariate (mass-MV). These four approaches are summarized in Fig. <ns0:ref type='figure'>1</ns0:ref>. The Landmarks-UV approach (Fig. <ns0:ref type='figure'>1a</ns0:ref>) is common in the literature, none of the other approaches is common. The primary purpose of this study was to compare and contrast the Landmarks-UV and Contours-MassMV approaches (Fig. <ns0:ref type='figure'>1a,d</ns0:ref>). We also employed intermediary approaches (Fig. <ns0:ref type='figure'>1b,c</ns0:ref>) to more clearly highlight the differences between the two main approaches.</ns0:p></ns0:div>
<ns0:div><ns0:head>Landmarks univariate (UV) analysis</ns0:head><ns0:p>Landmarks were defined for each dataset as depicted in Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. Both the number of landmarks (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>) and their locations were selected in an ad hoc manner, with the qualitative requirement of readily Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b13'>(Carlier et al., 2016)</ns0:ref>. For each dataset in this figure, one representative shape is highlighted, along with its numbered landmarks. Note that shape variance ranges from relatively small (e.g. Bell, Face) to relatively large (e.g. Device8, Heart). One operator used a mouse to manually digitize the landmarks for each of the 90 shapes (10 shapes 141 for each of 9 datasets). The operator was ignorant of the final shape groupings for the ultimate two-sample 142 tests (see below), implying that the landmarking was performed without grouping bias.</ns0:p></ns0:div>
<ns0:div><ns0:head>143</ns0:head><ns0:p>The landmarks were spatially registered using Generalized Procrustes Analysis (GPA) <ns0:ref type='bibr' target='#b30'>(Gower, 1975)</ns0:ref>, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b29'>(Goodall, 1991)</ns0:ref> -a method which considers the variance in the Procrustes distance across a dataset.</ns0:p><ns0:p>Note that the Procrustes distance is a scalar quantity that summarizes shape difference, and thus that this method is univariate. GPA and Procrustes ANOVA were both conducted using the geomorph package for R <ns0:ref type='bibr' target='#b0'>(Adams and Otárola-Castillo, 2013)</ns0:ref>.</ns0:p></ns0:div>
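For readers who prefer Python, the scalar quantity underlying this univariate approach can be sketched as follows. This uses SciPy's pairwise (not generalized) Procrustes routine purely for illustration; it is not the geomorph (R) implementation used for the reported GPA and Procrustes ANOVA results.

```python
import numpy as np
from scipy.spatial import procrustes

def procrustes_distance(A, B):
    """Scalar dissimilarity between two (k, 2) landmark configurations after
    optimal translation, scaling and rotation (SciPy's 'disparity')."""
    _, _, disparity = procrustes(A, B)
    return disparity

# e.g., a unit square vs. a noisily perturbed square
A = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
B = A + np.random.default_rng(2).normal(0, 0.05, A.shape)
d = procrustes_distance(A, B)
```

Procrustes ANOVA then tests whether between-group variation in such distances exceeds within-group variation.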
<ns0:div><ns0:head>Landmarks mass-multivariate (mass-MV) analysis</ns0:head><ns0:p>This approach was identical to the Landmarks-UV approach described above, except for statistical analysis.</ns0:p><ns0:p>The two-sample Hotelling's T^2 statistic was calculated for each landmark according to its definition:</ns0:p><ns0:formula xml:id='formula_0'>T^2_i = \frac{n_1 n_2}{n_1 + n_2} \left( \bar{r}_{1i} - \bar{r}_{2i} \right)^\top W_i^{-1} \left( \bar{r}_{1i} - \bar{r}_{2i} \right)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where i indexes landmarks, the subscripts '1' and '2' index the two groups, n is sample size, \bar{r}_i is the mean position vector of landmark i, and W_i is the pooled covariance matrix for landmark i:</ns0:p><ns0:formula xml:id='formula_1'>W_i = \frac{1}{n_1 + n_2 - 2} \left[ \sum_{j=1}^{n_1} (r_{1ij} - \bar{r}_{1i})(r_{1ij} - \bar{r}_{1i})^\top + \sum_{j=1}^{n_2} (r_{2ij} - \bar{r}_{2i})(r_{2ij} - \bar{r}_{2i})^\top \right]<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where the i index is dropped for convenience in Eqn. 2.</ns0:p><ns0:p>Statistical inference was conducted in a mass-multivariate manner, using Statistical Parametric Mapping (SPM) <ns0:ref type='bibr' target='#b25'>(Friston et al., 2007)</ns0:ref>. SPM bases statistical inferences on the distribution of the maximum T^2 value T^2_max, which can be roughly interpreted as the largest landmark effect, and which is defined as:</ns0:p><ns0:formula xml:id='formula_2'>T^2_{max} \equiv \max_{i \in \{1, \ldots, L\}} T^2_i<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where L is the number of landmarks.</ns0:p><ns0:p>SPM provides a parametric solution to the distribution of T^2_max under the null hypothesis, so significance can be assessed by determining where in this distribution the observed T^2_max lies. Classical hypothesis testing involves the calculation of a critical threshold (T^2)_critical, defined as the (1 − α)th percentile of this distribution, and all landmarks whose T^2 values exceed (T^2)_critical are deemed significant at a Type I error rate of α. This is a correction for multiple comparisons (i.e., across multiple landmarks) that is 'mass-multivariate' in the following sense: 'mass' refers to a family of tests, in this case a family of landmarks, and 'multivariate' refers to a multivariate dependent variable, in this case a two-component position vector. This is similar to traditional corrections for multiple comparisons like Bonferroni corrections, with one key exception: rather than using the total number of landmarks L as the basis for the multiple comparisons correction, as the Bonferroni correction does, SPM instead solves the mass-MV problem by assessing the correlation amongst neighboring landmarks or semilandmarks, and using the estimated correlation to provide a less severe correction than the Bonferroni correction, unless there is no correlation, in which case the SPM and Bonferroni corrections are equivalent.</ns0:p></ns0:div>
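A direct NumPy transcription of Eqns. (1)-(3) is sketched below for illustration; the function name and array layout are our own assumptions, and the critical threshold used for inference in the paper comes from SPM's parametric solution, which this sketch does not reproduce.

```python
import numpy as np

def landmark_T2(A, B):
    """Per-landmark two-sample Hotelling's T2 (Eqns. 1-2) and its maximum (Eqn. 3).
    A: (n1, L, 2) and B: (n2, L, 2) registered landmark coordinates."""
    n1, L, _ = A.shape
    n2 = B.shape[0]
    T2 = np.empty(L)
    for i in range(L):
        d = A[:, i].mean(axis=0) - B[:, i].mean(axis=0)
        W = (np.cov(A[:, i].T) * (n1 - 1) + np.cov(B[:, i].T) * (n2 - 1)) / (n1 + n2 - 2)
        T2[i] = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(W, d)
    return T2, T2.max()
```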
<ns0:div><ns0:head>Contours univariate (UV) analysis</ns0:head><ns0:p>Similar to the Landmarks UV approach, this approach ultimately conducted Procrustes ANOVA, but did so on contour data rather than landmark data. This was achieved through two main processing steps: coherent point drift (CPD) point set registration (Fig. <ns0:ref type='figure'>4</ns0:ref>) and optimum roll correspondence (Fig. <ns0:ref type='figure'>5</ns0:ref>).</ns0:p><ns0:p>Coherent point drift (CPD) <ns0:ref type='bibr' target='#b48'>(Myronenko and Song, 2010)</ns0:ref> is a point set registration algorithm that spatially aligns two sets of points that belong to the same or a similar object. Neither an equal number of points nor homologous points are required (Fig. <ns0:ref type='figure'>4</ns0:ref>), making this approach useful for contours that have an arbitrary number of points.</ns0:p><ns0:p>Since contour points from arbitrary datasets may generally be unordered (Fig. <ns0:ref type='figure'>5a</ns0:ref>), we started our analyses by randomly ordering all contour points, then applying CPD to the unordered points. We acknowledge that many 2D contour datasets consist of ordered points -including those in the database used for this study <ns0:ref type='bibr' target='#b13'>(Carlier et al., 2016)</ns0:ref> -but since 3D surface points are much more likely to be unordered, we regard unordered point support as necessary for showing that the proposed method is generalizable to 3D analyses. Following CPD, we re-ordered the points using parametric surface modeling</ns0:p></ns0:div>
<ns0:div><ns0:p><ns0:ref type='bibr' target='#b8'>(Bingol and Krishnamurthy, 2019)</ns0:ref>, which fits a curved line to the contour, and parameterizes the contour using position u, where u ranges from zero to one (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>). This contour parameterization results in a continuous representation of the contour, from which an arbitrary number of ordered points (Fig. <ns0:ref type='figure'>5b</ns0:ref>) can be used to discretize the contour of each shape for subsequent analysis. We used NURBS parameterization with B-spline interpolation <ns0:ref type='bibr' target='#b8'>(Bingol and Krishnamurthy, 2019)</ns0:ref> to calculate specific contour point locations.</ns0:p><ns0:p>We then applied an optimum roll transformation, which found the value of u for one contour that minimized the deformation energy across the two contours (Fig. <ns0:ref type='figure'>5c,d</ns0:ref>).</ns0:p><ns0:p>We repeated contour parameterization, ordering, and optimum roll correspondence across all contour shapes, using the shape with the maximum number of contour points in each dataset as the template shape to which the nine other shapes were registered. Note that this registration procedure is unrelated to the traditional landmark analyses described in 'Landmark UV analysis' above, for which an equal number of points is a requirement of registration and analysis. The correspondence analysis step resulted in an equal number of contour points, upon which we conducted Procrustes ANOVA.</ns0:p></ns0:div>
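The resampling idea described above (an arbitrary number of ordered points taken from a continuous contour parameterization) can be sketched in base R as follows; the NURBS/B-spline fitting and the point re-ordering of the cited pipeline are not reproduced, and the function assumes the contour points are already ordered.

# Sketch of the resampling step only: given an ordered closed contour (m x 2 matrix `xy`),
# parameterize it by cumulative arc length u in [0, 1] and return k evenly spaced points.
# The NURBS/B-spline machinery of the cited pipeline is not reproduced here.
resample_contour <- function(xy, k = 101) {
  xy_closed <- rbind(xy, xy[1, ])                          # close the contour
  seg   <- sqrt(rowSums(diff(xy_closed)^2))                # segment lengths
  u     <- c(0, cumsum(seg)) / sum(seg)                    # arc-length parameter in [0, 1]
  u_new <- seq(0, 1, length.out = k + 1)[-(k + 1)]         # k evenly spaced positions
  cbind(approx(u, xy_closed[, 1], xout = u_new)$y,
        approx(u, xy_closed[, 2], xout = u_new)$y)
}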
<ns0:div><ns0:head>Contours mass-multivariate (mass-MV) analysis</ns0:head><ns0:p>This approach was identical to the Contours-UV approach, with the exception of statistical analysis, which we conducted using SPM as outlined above. Unlike the landmark data above, which are generally spatially disparate, contour points are spatially proximal, and neighboring points tend to displace in a correlated manner. For example, if one contour point in a specific shape lies above the mean point location, its immediate neighbors also tend to lie above the mean location. SPM leverages this correlation to reduce the severity of the multiple comparisons correction, and SPM solutions converge to a common (T 2 ) critical regardless of the number of contour points, provided the number of contour points is sufficiently large to embody the spatial frequencies of empirical interest, as outlined in classical signal processing theory <ns0:ref type='bibr' target='#b50'>(Nyquist, 1928)</ns0:ref>.</ns0:p><ns0:p>As SPM uses parametric inference to calculate the critical T 2 threshold, and Procrustes ANOVA uses nonparametric inference, we also conducted Contours Mass-MV analysis using statistical non-parametric mapping <ns0:ref type='bibr' target='#b49'>(Nichols and Holmes, 2002)</ns0:ref>, which uses permutation to numerically build the T 2 max distribution under the null hypothesis. This permutation approach converges to the parametric solution when the residuals are normally distributed (i.e., point location variance follows an approximately bivariate Gaussian distribution). All SPM analyses were conducted in spm1d <ns0:ref type='bibr' target='#b52'>(Pataky, 2012)</ns0:ref>; note that one-dimensional SPM is sufficient because the contour domain (U) is one-dimensional (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>).</ns0:p></ns0:div>
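The permutation (non-parametric) route described above can be sketched in base R as follows, reusing the landmark-wise T 2 function from the earlier sketch; the array and factor names are illustrative assumptions, and the simple p-value estimator is a sketch rather than the exact procedure of the cited software.

# Sketch of the permutation (SnPM-style) approach: build the null distribution of T2max
# by repeatedly shuffling group labels, then take the (1 - alpha) percentile as the
# critical threshold. `Y` is assumed to be an n x p x 2 array of registered points
# and `grp` a two-level factor.
perm_T2max <- function(Y, grp, n_perm = 1000, alpha = 0.05) {
  obs  <- max(hotelling_T2_per_landmark(Y[grp == levels(grp)[1], , , drop = FALSE],
                                        Y[grp == levels(grp)[2], , , drop = FALSE]))
  null <- replicate(n_perm, {
    g <- sample(grp)                            # permute group labels
    max(hotelling_T2_per_landmark(Y[g == levels(g)[1], , , drop = FALSE],
                                  Y[g == levels(g)[2], , , drop = FALSE]))
  })
  list(T2max    = obs,
       critical = quantile(null, 1 - alpha),    # (1 - alpha)th percentile of the null
       p        = mean(null >= obs))
}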
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The four analysis approaches produced a range of p values from very low (p < 0.001) to very high (p > 0.9), and even yielded a large range of p values for single datasets (e.g. Heart: 0.016 < p < 0.940) (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). Of the nine datasets, only two yielded consistent hypothesis testing conclusions (at α = 0.05) across the four analysis approaches: for the Comma dataset all approaches failed to reject the null hypothesis, and for the Flatfish dataset all approaches rejected the null hypothesis. The seven other datasets showed a range of disagreement among the methods. For example, for the Key dataset neither Landmarks approach reached significance, but both Contours approaches did reach significance. For the Hammer dataset, three approaches failed to reach significance, but the Contours Mass-MV approach produced a very low p value (p < 0.001). The Landmarks approaches executed comparatively rapidly (~50 ms) compared to the Contours approaches (~2 s) (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>).</ns0:p><ns0:p>Since Procrustes ANOVA results are commonly used in the literature, and are summarized for the current study in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>, the remainder of the results considers the Mass-MV approaches' results.</ns0:p><ns0:p>First, the Landmarks Mass-MV approach indicated a wide range of T 2 statistic values at each landmark (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). For example, Landmark 5 in the Horseshoe dataset (Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>) had a very high T 2 value, and all other landmarks had comparatively low T 2 values (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). This suggests that (a) shape differences can be highly localized, and that (b) univariate methods that employ an overall shape change metric, like Procrustes ANOVA, may not be able to detect these changes, even when the landmarks are identical (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>).</ns0:p><ns0:p>The Contour Mass-MV results showed little qualitative difference between parametric and nonparametric inference (Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>), with minor exceptions regarding specific locations and spatial extent of supra-threshold contour points (e.g. Key, Horseshoe). Since this Contour Mass-MV approach is sensitive to point-specific variation, it was generally more sensitive at detecting changes, as shown in the relatively high rate of null hypothesis rejection relative to the other approaches (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>); that is, even though the Contours-UV and Contours Mass-MV approaches consider the same data, the latter reached significance more often than the former, implying that it is more sensitive to location-specific effects. Whether this sensitivity is a benefit or not is considered in the Discussion.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head></ns0:div><ns0:div><ns0:head>Main findings</ns0:head><ns0:p>This study's main result is the demonstration that it is possible to conduct fully automated, landmark-free, parametric hypothesis testing regarding whole 2D contour shapes, irrespective of the number of points and point ordering in the original contour data. These analyses can be executed relatively quickly; the current non-optimized implementation required less than 2 s for all analysis steps (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). The proposed analysis framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) consists of families of previous techniques including: point set registration (e.g. <ns0:ref type='bibr' target='#b48'>Myronenko and Song, 2010)</ns0:ref>, point correspondence algorithms (e.g. <ns0:ref type='bibr' target='#b42'>Loy et al., 2000;</ns0:ref> <ns0:ref type='bibr' target='#b48'>Myronenko and Song, 2010)</ns0:ref>, and mass-multivariate testing <ns0:ref type='bibr' target='#b25'>(Friston et al., 2007;</ns0:ref> <ns0:ref type='bibr' target='#b72'>Taylor and Worsley, 2008;</ns0:ref> <ns0:ref type='bibr' target='#b15'>Chung et al., 2010)</ns0:ref>, and some of these techniques have been used for classical hypothesis testing regarding shapes in the past <ns0:ref type='bibr' target='#b72'>(Taylor and Worsley, 2008;</ns0:ref> <ns0:ref type='bibr' target='#b15'>Chung et al., 2010)</ns0:ref>. A variety of landmark-free techniques have also been previously proposed (e.g. <ns0:ref type='bibr' target='#b76'>Wuhrer et al., 2011;</ns0:ref> <ns0:ref type='bibr' target='#b72'>Taylor and Worsley, 2008;</ns0:ref> <ns0:ref type='bibr' target='#b15'>Chung et al., 2010)</ns0:ref>. Nevertheless, these techniques have not, to our knowledge, been previously combined into a general hypothesis testing framework -from raw data to statistical results -as depicted in Fig. <ns0:ref type='figure'>1d</ns0:ref>. The main novelty of this paper is thus the demonstration that it is possible to fully automate data processing from raw 2D contour data to final hypothesis testing results.</ns0:p><ns0:p>The second main novelty of this paper is the demonstration that parametric hypothesis testing is possible when conducted at the whole-contour level. We stress that 'possible' implies neither 'valid' nor 'appropriate'; demonstrating the validity and appropriateness of the proposed method would require substantial empirical efforts over a range of datasets, data modalities, experimental designs, and applications, and likely also simulation studies, and as such assessing validity and appropriateness is beyond the scope of this paper. We also stress that 'possible' does not imply that one should use the proposed technique in isolation. We believe that the proposed technique offers unique information that is complementary to other techniques, and that ideally the results of multiple analysis techniques should be corroborated to build interpretive robustness.</ns0:p><ns0:p>The proposed analysis framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) offers various improvements over landmark analysis (Fig.
<ns0:ref type='figure'>1a</ns0:ref>) including: (1) the modeling flexibility of classical hypothesis testing, (2) increased objectivity due to avoidance of subjective landmark definition and selection, (3) increased speed due to avoidance of manual work, and (4) unique, implicit morphological meaning in hypothesis testing results. We acknowledge that each of these improvements also involves limitations, and we address these limitations below. We stress that 'objectivity' implies none of 'accurate', 'useful' or 'interpretable'. We use 'objective' instead primarily to mean 'algorithmic'.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical Parametric Mapping (SPM)</ns0:head><ns0:p>SPM, like most parametric tests, assumes normality, so in this case SPM assumes that the spatial variability of all contour points are distributed in a bivariate Gaussian manner. This distributional assumption could be directly tested using distributional tests in a point-by-point manner. In this paper, instead of directly testing for distributional adherence, we instead tested the assumption indirectly, by conducting nonparametric tests (Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>), which do not assume bivariate normality. In this case there were minor quantitative differences between the parametric and nonparametric results, but overall the qualitative interpretations were largely unaffected by the use of parametric vs. nonparametric analysis. This represents relatively strong (albeit indirect) evidence that the parametric approach's distributional assumptions are appropriate at best, or largely inconsequential at worst, for these particular datasets. This however does not imply that parametric inference is appropriate for all datasets, so distributional assumptions should generally be tested for all datasets, possibly indirectly through nonparametric tests like those conducted in this paper.</ns0:p><ns0:p>Although this paper considered only two-sample tests, SPM supports all classical hypothesis testing procedures, ranging from simple linear regression to MANCOVA <ns0:ref type='bibr' target='#b25'>(Friston et al., 2007)</ns0:ref>, thereby making the proposed framework highly flexible to arbitrary experimental designs. To emphasize this point, and how it may be valuable for general shape analysis, we conducted a set of supplementary analyses using synthetic data involving simple, circular shapes with controlled morphological effects (Fig. <ns0:ref type='figure' target='#fig_10'>9a,b</ns0:ref>). The controlled effects included a size-dependent signal, which was modeled using a Gaussian contour pulse that increased in amplitude with increasing shape size (as defined by the shape's average radius) (Fig. <ns0:ref type='figure' target='#fig_10'>9a</ns0:ref>), and a group-dependent signal, which was modeled similarly, but which was applied to just one of two hypothetical groups (Fig. <ns0:ref type='figure' target='#fig_10'>9b</ns0:ref>). To isolate and emphasize design flexibility, and to eliminate registration and correspondence as potential sources of error, we controlled both by sampling at 101 evenly distributed Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>angular displacements with respect to the horizontal axis. We considered two MANCOVA possibilities:</ns0:p><ns0:p>analysis of the original, unscaled dataset (Fig. <ns0:ref type='figure' target='#fig_10'>9a</ns0:ref>), and analysis of the scaled / registered dataset (Fig. <ns0:ref type='figure' target='#fig_10'>9b</ns0:ref>).</ns0:p><ns0:p>We applied a single MANCOVA model, which modeled both shape size (i.e., mean shape radius) and group, and which thereby afforded consideration of both (1) size effects, with group effects linearly removed, and (2) group effects, with size effects linearly removed. Size effects for the original, unscaled data naturally showed very large test statistic values at all contour points (Fig. <ns0:ref type='figure' target='#fig_10'>9c</ns0:ref>). In contrast, size effects for the registered data correctly isolated the modeled size-dependent signal (Fig. <ns0:ref type='figure' target='#fig_10'>9d</ns0:ref>). Group effects were practically identical for both the original, unscaled data and the registered data (Fig. <ns0:ref type='figure' target='#fig_10'>9e,f</ns0:ref>), emphasizing the point that MANCOVA can be used to remove size-related effects in lieu of registration.</ns0:p><ns0:p>More generally, this analysis shows that the proposed framework is highly flexible, and can be used with arbitrary continuous and categorical independent variables, provided these variables adhere to the requirements of classical linear design modeling. We nevertheless caution readers that the (Fig. <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>) analyses consider close-to-ideal data, for which registration and correspondence are near-perfectly controlled. For real dataset analysis, both registration and correspondence generally introduce errors that may or not affect the ultimate hypothesis testing results. Results' sensitivity to data processing algorithms and their parameters must be considered in general analyses.</ns0:p></ns0:div>
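The point-wise MANCOVA described above can be sketched with base R's manova(); the object names, the array layout, and the choice of the Hotelling-Lawley statistic are illustrative assumptions rather than the study's exact implementation.

# Sketch of a point-wise MANCOVA with size and group as predictors.
# `contours` is assumed to be an n x k x 2 array of corresponding contour points,
# `size` a numeric vector of mean radii, and `grp` a two-level factor; names are illustrative.
pointwise_mancova <- function(contours, size, grp) {
  k <- dim(contours)[2]
  sapply(seq_len(k), function(i) {
    xy  <- contours[, i, ]                     # n x 2 dependent variable at point i
    fit <- manova(xy ~ size + grp)             # size effects removed linearly
    stats <- summary(fit, test = "Hotelling-Lawley")$stats
    stats[c("size", "grp"), "approx F"]        # approximate F for each effect at point i
  })
}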
<ns0:div><ns0:head>Comparison with landmarking and other methods</ns0:head><ns0:p>The proposed methodology partially overcomes limitations of landmark selection, and the corresponding susceptibility to bias <ns0:ref type='bibr' target='#b7'>(Arnqvist and Martensson, 1998;</ns0:ref><ns0:ref type='bibr' target='#b65'>Rohlf, 2003;</ns0:ref><ns0:ref type='bibr' target='#b26'>Fruciano, 2016)</ns0:ref>; shape-to-shape landmark identification is often manual and therefore subjective. Algorithmic landmark identification is nevertheless possible <ns0:ref type='bibr' target='#b16'>(Claes et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b71'>Strait and Kurtek, 2016)</ns0:ref>, and indeed modern machine learning techniques have been shown to substantially improve landmark detection, with the promise of eliminating landmark-associated subjectivity <ns0:ref type='bibr' target='#b46'>(Morris, 2003;</ns0:ref><ns0:ref type='bibr' target='#b77'>Young and Maga, 2015;</ns0:ref><ns0:ref type='bibr' target='#b71'>Strait and Kurtek, 2016;</ns0:ref><ns0:ref type='bibr' target='#b23'>Devine et al., 2020)</ns0:ref>. Like automated landmarking, the proposed method can be used with little-to-no subjective intervention, implying generally more repeatable results. Here 'objective' does not necessarily mean 'accurate' or 'appropriate'; it simply means that results are expected to be more reproducible than the results from more subjective methods. Determining the accuracy and appropriateness of all methods, including the proposed one, requires substantial empirical effort across a range of data modalities and applications.</ns0:p><ns0:p>We also note that the proposed landmark-free approach is just one end of the spectrum, where manual landmark definition is the other, and that a variety of alternative techniques occupy positions between these two extremes. For example, semilandmarks <ns0:ref type='bibr' target='#b43'>(Mitteroecker and Gunz, 2009)</ns0:ref> provide an objective way to fill spatial gaps between landmarks, thereby creating a continuous surface. From the perspective of the proposed method, semilandmarks represent the results of piecewise registration over the domain u (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>), or equivalently a hybrid registration method consisting of both algorithmic and manual components <ns0:ref type='bibr' target='#b58'>(Ramsay and Li, 1998)</ns0:ref>. As there are a plethora of automated techniques for geometrical matching <ns0:ref type='bibr' target='#b35'>(Holden, 2008)</ns0:ref>, the proposed framework regards these techniques each as objective, substitutable, yet each imperfect components, whose assumptions and parameters could ultimately affect the final results.</ns0:p><ns0:p>From this perspective, a second layer of objectivity could be added to the proposed framework, whereby different techniques and/or parameters are iteratively substituted in a sensitivity framework, to objectively discern the numerical stability of the final results, as well as the boundaries of that stability <ns0:ref type='bibr' target='#b54'>(Pataky et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Landmarks and other low-dimensionality representations of shape -including harmonic coefficients from elliptic Fourier analysis <ns0:ref type='bibr' target='#b9'>(Bonhomme et al., 2014)</ns0:ref> -embody a second important limitation: a potentially over-simplified representation of shape. 
In the case of landmarks, a danger of over-simplification arises from the Nyquist theorem: under-sampling a continuous process (including the continuous spatial surface of an object) can lead to aliasing, whereby the under-sampled measurement can misrepresent the true characteristics of the underlying object <ns0:ref type='bibr' target='#b50'>(Nyquist, 1928)</ns0:ref>, and can even reverse statistical interpretations through mechanisms such as regional conflation <ns0:ref type='bibr' target='#b53'>(Pataky et al., 2008)</ns0:ref>. This latter problem of shape simplification can nevertheless be solved by the use of semi-landmarks <ns0:ref type='bibr' target='#b11'>(Bookstein, 1997;</ns0:ref> <ns0:ref type='bibr' target='#b1'>Adams et al., 2004)</ns0:ref> which, as argued above, can be regarded as a specific approach to shape registration, implying that semi-landmark approaches could interface easily with the proposed technique.</ns0:p><ns0:p>An advantage of the proposed method is processing speed. The current, non-optimized analyses executed in under 2 s, with statistical inference itself requiring well under 100 ms (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). We acknowledge that other data processing steps, including image segmentation and registration for example, can require substantial effort, so we caution readers that the reported execution speeds do not necessarily translate to reduced laboratory hours. The primary advantage in our view is instead the promotion of sensitivity analysis: since the entire data processing chain can be executed relatively rapidly, it would be possible to systematically adjust algorithm parameters, and even swap algorithms, in a sensitivity loop, to probe the robustness of particular results.</ns0:p></ns0:div>
<ns0:div><ns0:p>Another advantage of the proposed method is implicit morphological information. The proposed method yields results that are rich in morphological detail (Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>) which, much like a highlighted photograph or x-ray image, can be readily interpreted at a glance. Since SPM operates directly on (registered) contours, without reducing the object-of-hypothesis-testing to a single abstract metric (like Procrustes ANOVA), or to a small handful of abstract metrics (like elliptical Fourier analysis), SPM results embody morphological meaning insofar as contours themselves embody morphological meaning.</ns0:p><ns0:p>While individual contour points do not necessarily embody meaning, one could argue that the set of all contour points collectively embodies substantial morphological meaning. This perspective is analogous to a pixel-and-image argument. The color of a single pixel is largely irrelevant to the overall interpretation and meaning of an image. Similarly, the test statistic value at a single contour point is itself largely irrelevant to the overall morphological interpretation of SPM results; morphological meaning is instead encapsulated implicitly in the overall excursion set, where 'excursion set' means the set of supra-threshold contour points, like those in Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>. Regardless of the quality of morphological meaning, SPM results must be viewed as just one set of results, which may or may not embody useful morphological information, and which should be considered along with other, more explicit morphological methods like Procrustes ANOVA and elliptical Fourier analysis.</ns0:p><ns0:p>Finally, considering specific results from this paper, a particularly unintuitive set of results was observed for the Device8 dataset, for which UV analysis yielded the smallest p value (0.022), and for which no other method yielded significance (p > 0.2) (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). This result was likely caused by widespread but relatively small-magnitude mean-shape differences (Fig. <ns0:ref type='figure' target='#fig_8'>8c</ns0:ref>); since the deformation is widespread it would be detected by a general deformation metric like Procrustes distance, but since the deformation magnitude is relatively small it would not be detected by local contour-point methods like SPM. This interpretation is emphasized in the Flatfish dataset, where general deformations were similarly broadly distributed across the contour, but maximal local deformations were greater (Fig. <ns0:ref type='figure' target='#fig_8'>8e</ns0:ref>), which yielded significance in all methods (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). Nevertheless, this interpretation appears to be inconsistent with the Horseshoe dataset, which exhibited both large and widely distributed deformation (Fig. <ns0:ref type='figure' target='#fig_8'>8h</ns0:ref>), but which also failed to yield significant UV results (Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). This apparent inconsistency may, however, be resolved by considering the large variability in the Horseshoe dataset, particularly at the selected landmarks (Fig. <ns0:ref type='figure' target='#fig_0'>2h</ns0:ref>).</ns0:p><ns0:p>To more completely resolve such apparent inconsistencies, and more generally to understand the nature of landmark-vs.
contour-based methods, it would be necessary to consider individual contour points, their deformations, and their covariances.</ns0:p></ns0:div>
<ns0:div><ns0:head>Generalization to 3D analysis</ns0:head><ns0:p>While this paper was limited to 2D analysis, it should be noted that the proposed analysis framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) is readily extendable to the morphological analysis of 3D surfaces. Similar to the unwrapping of 2D contours onto a 1D domain u (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>), 3D surfaces can be unwrapped onto a 2D domain uv (Fig. <ns0:ref type='figure' target='#fig_12'>10</ns0:ref>), and methods like SPM <ns0:ref type='bibr' target='#b25'>(Friston et al., 2007)</ns0:ref> can be used to conduct domain-level hypothesis testing regarding these unwrapped data. This domain-wide testing is possible due to the underlying model of domain-level variance, which SPM models as smooth, Gaussian random fields, and which can be extended to arbitrarily high-dimensional domains with arbitrary geometry <ns0:ref type='bibr' target='#b3'>(Adler and Taylor, 2007)</ns0:ref>. For the current paper involving 2D shapes, the (flattened) domain is one-dimensional, and the dependent variable is a two-component position vector; that is, a two-component position is defined at all locations u along the contour. Similarly, for 3D surfaces, the (flattened) domain is two-dimensional and the dependent variable is a three-component position vector, where position is defined at all locations uv across the surface. A variety of computational tools exist for 3D geometry flattening (e.g. <ns0:ref type='bibr' target='#b22'>Dale et al., 1999;</ns0:ref> <ns0:ref type='bibr' target='#b67'>Sawhney and Crane, 2017)</ns0:ref>, so 3D implementations of the proposed method could presumably proceed in a fully automated manner.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>The proposed mass-multivariate framework (Fig. <ns0:ref type='figure'>1d</ns0:ref>) has a number of limitations. The most severe of these is sensitivity to algorithmic specifics. For example, simply by randomly changing the order of the points, it is possible to yield qualitatively different results (Fig. <ns0:ref type='figure'>11</ns0:ref>). Systematic, random variations of point ordering would be necessary for assessment of the results' sensitivity, but in our view this would be insufficient because ultimate results may also be sensitive to other particulars including, for example, specific parameter values used in contour parameterization, registration, and correspondence algorithms.</ns0:p><ns0:p>In other words, one should regard the results as potentially sensitive to all data processing steps, and not just to point ordering. The current paragraph considers just one example (point ordering) as a potential source of sensitivity concern. In (Fig. <ns0:ref type='figure'>11</ns0:ref>), the qualitative change in results can be attributed to a minor shift in point correspondence (Fig. <ns0:ref type='figure'>11a-b</ns0:ref>), which created a small shift in pointwise covariance, but a shift that was large enough to alter the hypothesis rejection decision at α = 0.05. That is, point-specific covariance is direction dependent, so small changes in point-deformation direction can yield qualitative changes in test statistics <ns0:ref type='bibr' target='#b54'>(Pataky et al., 2014)</ns0:ref>. Nevertheless, we observed this type of sensitivity to random point ordering only occasionally, with most randomizations resulting in qualitatively similar results. Also, in most cases we noticed that probability results, while variable, were generally stable. The problem only emerged qualitatively when that variability spanned α=0.05, as depicted in Fig. <ns0:ref type='figure'>11</ns0:ref>). This problem of probability value variability <ns0:ref type='bibr' target='#b32'>(Halsey et al., 2015)</ns0:ref> partially reflects a weakness of classical hypothesis testing, which has a binary interpretation of continuous probability. We acknowledge that we did not systematically conduct sensitivity testing, and also that each stage of processing involves a variety of components or parameters that could be subjected to sensitivity analysis. Comprehensive consideration of this sensitivity would require a large research effort, so we leave this for future work.</ns0:p><ns0:p>The datasets and analyses presented in this paper also have limitations. We analyzed shapes from just one database <ns0:ref type='bibr' target='#b13'>(Carlier et al., 2016)</ns0:ref> and, for each dataset, we selected only ten shapes for analysis, and only conducted two-sample tests. While we do not expect analysis of datasets from other databases to appreciably affect this paper's messages, we acknowledge that analyses of relatively small samples, and just one simple experimental design, fully exposes neither the advantages nor disadvantages of the proposed analysis framework. We selected just ten shapes for each dataset primarily to emphasize that the proposed parametric procedure is sufficiently sensitive to detect morphological effects for small sample sizes. The specific ten shapes were selected in an ad hoc manner to emphasize particular concepts including, for example: interpretation agreement between the proposed and landmark methods' results, and the opposite: interpretation disagreement. 
Since these datasets were selected in an ad hoc manner, from a single database, and with only two-sample analyses, the reader is left to judge the relevance of these results to other datasets and experimental designs. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Overview of analyzed datasets. All contour data are available in the 2D Shape Structure Dataset<ns0:ref type='bibr' target='#b13'>(Carlier et al., 2016)</ns0:ref>. For each dataset in this figure, one representative shape is highlighted, along with its numbered landmarks. Note that shape variance ranges from relatively small (e.g. Bell, Face) to relatively large (e.g. Device8, Heart).</ns0:figDesc></ns0:figure>
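The sensitivity loop suggested in the Limitations section could be organized as in the sketch below; process_and_test() is a hypothetical wrapper standing in for the registration, correspondence and hypothesis-testing chain, and shapes is an assumed list of raw contour matrices, neither of which is part of any existing package.

# Sketch of a point-ordering sensitivity loop: randomly permute the initial point ordering,
# re-run the full processing chain, and record the resulting p value.
p_values <- replicate(50, {
  shuffled <- lapply(shapes, function(xy) xy[sample(nrow(xy)), ])   # random point ordering
  process_and_test(shuffled)$p        # hypothetical wrapper around the full pipeline
})
summary(p_values)                     # spread of p values indicates sensitivity to ordering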
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Shape class exclusion examples. Shape classes were excluded if they contained shapes with qualitatively different contour geometry. For example: (a) the 'cup' class was excluded because some shapes had unattached handles with holes and others had attached handles without holes. (b) The 'octopus' class was excluded because the eight appendages appeared in non-homologous locations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>140</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>144</ns0:head><ns0:label /><ns0:figDesc>and the resulting registered landmarks were analyzed in a univariate manner, using Procrustes ANOVA145 5/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. Example point set registration using the coherent point drift (CPD) algorithm<ns0:ref type='bibr' target='#b48'>(Myronenko and Song, 2010)</ns0:ref>. Note that CPD requires neither corresponding points, nor an equal number of points.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Example parametric representations of 2D contour shape. Dots represent manually defined landmarks, and are shown as visual references. Left panel (XY plane): the spatial plane in which shape data are conventionally presented. The three colors represent different shapes. Bottom panel (UX plane) and right panel (UY plane): abstract planes in which U represents the parametric position (from 0 to 1) along the contour; positions U=0 and U=1 are equivalent.</ns0:figDesc><ns0:graphic coords='9,141.73,385.78,413.57,232.63' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Landmark results from mass-multivariate testing. Landmark-specific T 2 values are presented along with the critical threshold at α=0.05, and probability values for the overall mass-multivariate test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure8. Contours mass-multivariate results using Statistical Parametric Mapping (SPM). Results for both parametric and nonparametric inference are shown. P values represent the probability that random variation in the Mean A contour would produce a deformation as large as in the observed Mean B, given the estimated contour variance. Dots on the Mean B contour represent contour points whose T 2 values exceeded the threshold for significance at α=0.05; if the maximum T 2 value did not reach this threshold, the p value is greater than α, and no dots are shown.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure9. Example MANCOVA using synthetic data; for simplicity, data were generated to have (i) a relatively large signal:noise ratio, and (ii) close-to-perfect correspondence, by sampling at 101 equally spaced angular distances around the contour. (a) The original contour dataset, consisting of five noisy circles for each of two groups, with systematically different mean radii, and also with both group-and size-dependent signal, where 'size' was considered to be the mean radius, and where 'signal' implies true morphological difference. Note that the size-dependent signal is more easily perceived in panel (a), and that the group-dependent signal is more easily perceived in the next panel. (b) Registered contours. (c,d) Size effects from MANCOVA for the original and registered data; the test statistic is presented as √ T 2 because a linear T 2 scale would result in imperceivable color differences (i.e., the panel (c) points would be all white, and the points in the other panels would all be close-to-black). (e,f) Group effects from MANCOVA for the original and registered data; note that the (e) and (f) results are similar because MANCOVA accounts for size-related effects in the 'Original' data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Example 3D surface unwrapping. (a) Original 3D geometry. (b) Unwrapped geometry; this is a 2D parametric (UV) representation of the original geometry. Colors represent changes in surface normal direction. The thick black line in panel (a) represents a seam along which the 3D geometry is cut so that it can be flattened into a 2D shape. Unwrapping was performed here using boundary first flattening (Sawhney and Crane, 2017).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset count summary. Point counts refer to the original data from<ns0:ref type='bibr' target='#b13'>Carlier et al. (2016)</ns0:ref>.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>Shapes</ns0:cell><ns0:cell /><ns0:cell>Points</ns0:cell><ns0:cell /><ns0:cell>Landmarks</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Min Median Max</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bell</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>185</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>Comma</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Device8</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>8</ns0:cell></ns0:row><ns0:row><ns0:cell>Face</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>103</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Flatfish</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Hammer</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Horseshoe</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>128</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>Key</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>103</ns0:cell><ns0:cell>106</ns0:cell><ns0:cell>115</ns0:cell><ns0:cell>5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Statistical results summary, probability values. As nonparametric inference yielded similar p values (see Results), only parametric p values are reported in this table for brevity.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell cols='2'>Landmarks</ns0:cell><ns0:cell cols='2'>Contours</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>UV</ns0:cell><ns0:cell>Mass-MV</ns0:cell><ns0:cell>UV</ns0:cell><ns0:cell>Mass-MV</ns0:cell></ns0:row><ns0:row><ns0:cell>Bell</ns0:cell><ns0:cell>0.130</ns0:cell><ns0:cell>0.302</ns0:cell><ns0:cell>0.084</ns0:cell><ns0:cell>0.041</ns0:cell></ns0:row><ns0:row><ns0:cell>Comma</ns0:cell><ns0:cell>0.155</ns0:cell><ns0:cell>0.294</ns0:cell><ns0:cell>0.719</ns0:cell><ns0:cell>0.327</ns0:cell></ns0:row><ns0:row><ns0:cell>Device8</ns0:cell><ns0:cell>0.022</ns0:cell><ns0:cell>0.214</ns0:cell><ns0:cell>0.433</ns0:cell><ns0:cell>0.681</ns0:cell></ns0:row><ns0:row><ns0:cell>Face</ns0:cell><ns0:cell>0.025</ns0:cell><ns0:cell>0.103</ns0:cell><ns0:cell>0.052</ns0:cell><ns0:cell>0.013</ns0:cell></ns0:row><ns0:row><ns0:cell>Flatfish</ns0:cell><ns0:cell>0.023</ns0:cell><ns0:cell>0.016</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Hammer</ns0:cell><ns0:cell>0.708</ns0:cell><ns0:cell>0.206</ns0:cell><ns0:cell>0.417</ns0:cell><ns0:cell>< 0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart</ns0:cell><ns0:cell>0.940</ns0:cell><ns0:cell>0.976</ns0:cell><ns0:cell>0.544</ns0:cell><ns0:cell>0.016</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Horseshoe 0.084</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Key</ns0:cell><ns0:cell>0.532</ns0:cell><ns0:cell>0.270</ns0:cell><ns0:cell>0.013</ns0:cell><ns0:cell>0.022</ns0:cell></ns0:row></ns0:table><ns0:note>9/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Execution</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Procedure</ns0:cell><ns0:cell cols='2'>Landmarks</ns0:cell><ns0:cell cols='2'>Contours</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>UV Mass-MV</ns0:cell><ns0:cell>UV</ns0:cell><ns0:cell>Mass-MV</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CPD</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>414.1</ns0:cell><ns0:cell>414.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Point Ordering</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>327.9</ns0:cell><ns0:cell>327.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Registration</ns0:cell><ns0:cell>Interpolation</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>835.1</ns0:cell><ns0:cell>835.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Correspondence</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>40.9</ns0:cell><ns0:cell>40.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GPA</ns0:cell><ns0:cell>6.7</ns0:cell><ns0:cell>6.7</ns0:cell><ns0:cell>8.5</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Hypothesis test</ns0:cell><ns0:cell>Proc-ANOVA SPM</ns0:cell><ns0:cell>60.0 -</ns0:cell><ns0:cell>-39.3</ns0:cell><ns0:cell>99.0 -</ns0:cell><ns0:cell>-66.8</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell /><ns0:cell>66.7</ns0:cell><ns0:cell>46.0</ns0:cell><ns0:cell>1725.5</ns0:cell><ns0:cell>1684.8</ns0:cell></ns0:row></ns0:table><ns0:note>durations (unit: ms). Averages across the nine datasets. Procrustes ANOVA (Proc-ANOVA) involved 1000 iterations for each dataset. Average SnPM durations (not shown in this table) were 344.0 and 6336.0 ms for Landmarks Mass-MV and Contours Mass-MV, respectively.10/20PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021)</ns0:note></ns0:figure>
<ns0:note place='foot' n='20'>/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53545:2:0:NEW 30 Mar 2021) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "PeerJ Computer Science - Responses to Reviews
Article 53545
Title: “Landmark-free, parametric hypothesis tests regarding two-dimensional contour shapes
using coherent point drift registration and statistical parametric mapping”
We thank the Editors and Referees once again for your time. Please find that we have reproduced your
comments below in blue text, numbered each comment for cross-referencing purposes, and responded
immediately below each comment. Please also find that all main changes to the main manuscript are
colored red.
Thank you for re-considering our work,
Todd Pataky, Masahide Yagi, Noriaki Ichihashi and Phil Cox
_______________________________________________________________________
REVIEWER #1
Basic reporting
{1.1} 113-131: More details on group definition and shape choice are appreciated. However, as
written, information in these three paragraphs is presented in a confusing order. Please reorder and
edit to improve reader understanding. For example, at one point, it is claimed that shapes are assigned
to groups in a pseudo-random manner. Presumably, this is after 10 shapes were chosen randomly for
each object type (as described in the next paragraph). Then, in the last paragraph is described
explicitly or is this something else?
Response: We agree that the order of presentation was poor. Please find that we have updated these
paragraphs and removed redundant and apparently contradictory details.
{1.2} Figure 8: Based on other tables, it appears that the parametric p-values are found in dark grey
shapes and the nonparametric p-values are found in light grey shapes. However, this should probably
be indicated explicitly within the Figure caption.
Response: We agree, and we apologize for the lack of clarity. We have decided to add a second
legend to the figure which clarifies that the darker and lighter shapes represent parametric and
nonparametric results, respectively. Due to this change we believe it is not necessary to update the
caption.
{1.2} 280: imply instead of apply?
Response: We agree and apologize for this typo. We have made the suggested change.
{1.4} Discussion: Perhaps subsection headings of the discussion would be useful. For example (~Line
374), to separate more general discussion of the SPM method from discussion of specific results
presented in the paper?
Response: We agree. Please find that we have added subheadings, including at the specified
demarkation point. This has necessitated some content shuffling, but please note that the previously
submitted text remains intact so we have not indicated these shuffles in the revised manuscript.
{1.5} 433: ad hoc instead of add hoc?
Response: We agree and apologize for this typo. We have made the suggested change.
Experimental design
{1.6} 201: The clarification of contour points is helpful. However, the statement that the shape with
the maximum number of points is used as a template for shape registration is still potentially
confusing within the context of a standard Procrustes ANOVA analysis. Within GM, Procrustes
superimposition of shapes would be based on an equal number of 1:1 homologous points per shape.
The number of points is one of several factors that will lead to different Procrustes distance
estimations. So, if a standard Procrustes ANOVA is being conducted, the reader assumption is likely
to be that Procrustes distances were estimated between shapes represented by equal numbers of
points. The author response to previous comment 1.11 is comforting to the reviewer, but will not be
seen by other readers. The statement that the shape with maximum points (even if this means
maximum number of points before parameterization) will be off-putting to GM readers. Can this
implication of shapes with different numbers of points within Procrustes ANOVA be clarified?
Response: We agree. Please find that we have edited the cited passage to more clearly describe how
the described procedure can be used for Procrustes ANOVA.
Validity of the findings
{1.7} 114-116 & other places: The clarification that one major goal is to show statistical power to
identify differences using parametric methods is helpful here and elsewhere. As someone who applies
existing methods more often than he develops statistical methods, this reviewer previously assumed
that phrases like “suitability” and “possible” did imply empirical appropriateness of methods rather
than being specifically related to statistical power. The explicit clarification of meaning that have been
made throughout the paper are much appreciated.
Response: Thank you very much for this feedback, we have no response.
{1.8} However, in this reviewer’s experience/interpretations of various readings, the main concern
about parametric testing methods for multivariate morphometric data from small datasets has been
that the assumptions of parametric methods (including distribution assumptions) may not be met. This
leads to a concern that statistically significant signals generated with a parametric test are invalid and
may not reflect a “real” difference between groups under study. Of course, “real” means different
things in different contexts. But, the idea of statistical power seems secondary to the idea of empirical
validity to this reviewer (who is acknowledges a potential blindspot on this subject). That being
stated, the explicit statements of word meaning and research goals added by the authors in the revised
manuscript are certainly adequate to prevent reader confusion. This potential philosophical difference
is no reason to delay publication and requires no practical changes to the manuscript.
Response: Thank you very much for sharing these insights. We agree with these interpretations. Since
no specific revisions have been suggested we have not modified the manuscript.
Comments for the Author
{1.9} The authors have made major revisions and welcome additions to the Introduction and
Discussion session that address most previous reviewer comments. These changes will make this
work more accessible and increase its impact among practicing morphometricians. The in-depth
comments, clarifications, and illustrations of statistical concepts within the author response are helpful
and much appreciated. A few potential issues remain, but they are minor in nature.
Response: Thank you very much for this feedback, we have no response.
" | Here is a paper. Please give your review comments after reading it. |
104 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Virtual reality (VR) technology is an emerging tool that is supporting the connection between conservation research and public engagement with environmental issues. The use of VR in ecology consists of interviewing diverse groups of people while they are immersed within a virtual ecosystem to produce better information than more traditional surveys. However, at present, the relatively high level of expertise in specific programming languages and disjoint pathways required to run VR experiments hinder their wider application in ecology and other sciences. We present R2VR, a package for implementing and performing VR experiments in R. The package provides functions for rendering VR scenes on web browsers with A-Frame that can be viewed by multiple users on smartphones, laptops, and VR headsets. It also provides instructions on how to retrieve answers from an online database in R. Three published ecological case studies are used to illustrate the R2VR workflow, and show how to run a VR experiments and collect the resulting datasets. By tapping into the popularity of R among ecologists, the R2VR package creates new opportunities to address the complex challenges associated with conservation, improve scientific knowledge, and promote new ways to share better understanding of environmental issues. The package could also be used in other fields outside of ecology.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>The emergence of digital technologies, including Virtual Reality (VR), facilitates connections between the public and the scientific community and creates innovative pathways for environmental conservation research <ns0:ref type='bibr' target='#b18'>(Mazumdar et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Fauville et al., 2020)</ns0:ref>. In general, VR uses a combination of immersive technology via head-mounted devices, hand controllers and stereoscopic sound to replace natural sensory input with inputs from a computer system, such that a person is exposed to vivid virtual scenes <ns0:ref type='bibr' target='#b29'>(Riva et al., 2007)</ns0:ref>. In the field of ecology, VR experiences are used as a research tool to (1) increase understanding about the complexity of environmental issues associated with climate change, (2) influence empathy, and (3) promote environmental behavior changes <ns0:ref type='bibr' target='#b17'>(Markowitz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b10'>Herrera et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Nelson et al., 2020)</ns0:ref>. Despite promising results, the small number of published studies that have used VR approaches in ecology shows that there remain opportunities for further research in environmental education <ns0:ref type='bibr' target='#b28'>(Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Fauville et al., 2020)</ns0:ref> and for the development of programming tools that ease the integration of VR with applied science fields <ns0:ref type='bibr' target='#b25'>(Okamoto et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jangraw et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b33'>Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Loup et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Brookes et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bexter and Kampa, 2020)</ns0:ref>.</ns0:p><ns0:p>VR experiments for environmental conservation involve the elicitation of information while people are immersed in virtual scenes of natural ecosystems, such as 360-degree images. VR experiments include PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:1:1:NEW 22 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science multimodal features of text, images, sounds and haptic feedback to create a rich and engaging environment to expose people to more complete and complex information <ns0:ref type='bibr' target='#b8'>(Fauville et al., 2020)</ns0:ref>.</ns0:p><ns0:p>However, many current VR research applications use only a subset of these features, depending on the available technology. For example, audio is often excluded, despite the growing body of literature showing that it increases the sense of presence and realism <ns0:ref type='bibr' target='#b13'>(Kern and Ellermeier, 2020)</ns0:ref>. Notwithstanding this, even the restricted VR environment settings have the potential to produce better qualitative and quantitative elicited information. 
In the fields of ecology and conservation, VR has the potential to support greater understanding of complex ecological processes such as coral bleaching <ns0:ref type='bibr' target='#b21'>(Minocha et al., 2017)</ns0:ref>, and new forms of thinking about ecosystem dynamics <ns0:ref type='bibr' target='#b9'>(Grotzer et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Queiroz et al., 2018)</ns0:ref>. VR experiments solve the difficulty of accessing ecosystems that are situated in remote locations and might be potentially dangerous or expensive to survey. Continuous access to these ecosystems opens up new opportunities for ecologists to fill the gaps in current scientific knowledge related to the paucity of data and ecological consequences of major changes in ecosystems health and species composition.</ns0:p><ns0:p>Combined with modern techniques in statistical ecology, elicited information collected from VR experiments can produce new types of ecological insights that complement environmental monitoring and conservation efforts. For example, VR experiments with 360-degree images were used to develop aesthetic indicators based on people's perception of the beauty of a coral reef <ns0:ref type='bibr' target='#b34'>(Vercelloni et al., 2018)</ns0:ref>.</ns0:p><ns0:p>They were also used to predict the presence or absence of emblematic species threatened by habitat loss and fragmentation, such as koalas (Phascolarctos cinereus, <ns0:ref type='bibr' target='#b15'>(Leigh et al., 2019)</ns0:ref>, Australian rock wallabies (Petrogale penicillata, <ns0:ref type='bibr' target='#b5'>(Brown et al., 2016)</ns0:ref>, and jaguars (Panthera onca) <ns0:ref type='bibr' target='#b1'>(Bednarz et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Mengersen et al., 2017)</ns0:ref>. In these experiments, opinions and knowledge were extracted from the responses given by experts, indigenous communities, scuba-divers and non-expert participants. This information was then incorporated into quantitative statistical models and used to improve understanding of complex ecological systems and to inform the development of future management and conservation strategies. Such strategies included the creation of a jaguar conservation corridor across the Amazon rainforest <ns0:ref type='bibr' target='#b35'>(Zeller et al., 2013)</ns0:ref> and supporting the Australian government in their reporting to UNESCO on the status of the Great Barrier Reef World Heritage Area <ns0:ref type='bibr' target='#b34'>(Vercelloni et al., 2018)</ns0:ref>.</ns0:p><ns0:p>VR experiments in ecology are often conducted using generic VR experiences such as Google Expeditions or pre-made 360-degree movies <ns0:ref type='bibr' target='#b19'>(McMillan et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b26'>Parmaxi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Nelson et al., 2020)</ns0:ref>, which are primarily developed for educational purposes <ns0:ref type='bibr' target='#b17'>(Markowitz et al., 2018)</ns0:ref>. These tools are not designed to be adapted for specific research purposes, therefore a collaboration with VR developers and accessibility to bespoke VR software is required to repurpose them for research applications <ns0:ref type='bibr' target='#b16'>(Loup et al., 2018)</ns0:ref>. 
Common VR programming environments such as C#/Unity 1 , C++/Unreal Engine 2 and React 360 3 require specific programming expertise, which ecologists and other scientists may lack.</ns0:p><ns0:p>The R2VR package development was motivated with the goal of providing greater access to VR experiments and the associated research benefits of using the R statistical software environment, a top ten popular programming language 4 extensively used by quantitative ecologists <ns0:ref type='bibr' target='#b14'>(Lai et al., 2019)</ns0:ref>. The purpose of R2VR is to implement and perform VR experiments, and record and analyse data for scientists while minimizing the need for different tools and expertise beyond the R language. We adopted a similar approach to that of <ns0:ref type='bibr' target='#b16'>Loup et al. (2018)</ns0:ref>, which allows non-VR developers to create VR experiences without the need for VR programming expertise. Their approach is based on the development of an open-access pipeline in which non-VR programmers can generate and use versatile VR scripts for their own purposes. The pipeline simplifies the development of VR environments by connecting game engines with VR assistance tools. Similarly, the R2VR package uses the WebVR to generate VR experiences for non-developers and to collect data from R. The technical challenges relate to (1) the ability for an R user to interact with a VR scene via WebSocket connections between R and a WebXR Device API (see R2VR description) and (2) the creation of a database to store and retrieve data from VR experiments, which, in the present case, is achieved via a Node API.</ns0:p><ns0:p>In this paper, we first describe the functions of R2VR to assist in the creation of VR experiments and its applications in environmental conservation research. We then present a comparative review of analogous studies from three different perspectives: the user, the developer and the quantitative ecologist.</ns0:p><ns0:p>Following this, we present three case studies in which we have implemented the R2VR package. The paper concludes with a general discussion.</ns0:p><ns0:p>The R2VR package opens up many new directions of enquiry among quantitative ecologists and software developers. These include the elicitation of expert information, the analysis of elicited responses and the validation of these data. It is beyond the scope of this paper to discuss these issues, although we point the reader to <ns0:ref type='bibr' target='#b6'>Choy et al. (2009)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Bednarz et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Brown et al. (2016)</ns0:ref>; <ns0:ref type='bibr'>Santos-Fernandez et al. (2020)</ns0:ref>; <ns0:ref type='bibr'>Santos-Fernandez and Mengersen (2020)</ns0:ref> for further reading.</ns0:p></ns0:div>
<ns0:div><ns0:head>R2VR DESCRIPTION</ns0:head><ns0:p>The R2VR package uses A-Frame 5 and WebXR Device API platforms 6 (Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>) for building VR experiences. These are open-source and make the VR functionality accessible to people with basic knowledge in web programming <ns0:ref type='bibr' target='#b30'>(Santos and Cardoso, 2019)</ns0:ref>. VR experiences are composed of assets (a asset) that can be an image, texture or model; entities (a entity) indicating the placeholder for an object; and scenes (a scene) composed of all the created objects. The R2VR package uses the A-Frame architecture which allows VR scenes to be composed and served directly from an R script. It The R package is hosted by a Github repository: 7 and can be installed using the command: devtools::install_github('ACEMS/r2vr')</ns0:p><ns0:p>The package functionality is composed of five generic functions, which are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The rendering of VR scenes in the web browser is started and stopped using the functions start() and </ns0:p></ns0:div>
<ns0:div><ns0:head>COMPARATIVE STUDIES</ns0:head><ns0:p>In this section, we embed the R2VR package in the body of related literature. We first note the merit of VR compared with 2D technologies, then focus on a comparative review of other platforms. Noting the intended audience for R2VR, we address the latter from three perspectives: the user, the ecologist and the quantitative ecologist.</ns0:p></ns0:div>
<ns0:div><ns0:head>VR versus 2D technologies</ns0:head><ns0:p>The benefits of using 3D technologies including VR and Augmented Reality (AR) compared to 2D</ns0:p><ns0:p>environments have been widely demonstrated in the literature. <ns0:ref type='bibr' target='#b0'>Akpan and Shanker (2019)</ns0:ref> performed a meta-analysis to compare 3D and traditional 2D technologies and found enhanced performance and quality in 3D settings. VR technology has been found effective for training, gamification and collaboration;</ns0:p><ns0:p>resulting in more suitable experiences and motivated participants <ns0:ref type='bibr' target='#b12'>(Kavanagh et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>R2VR versus other platforms</ns0:head><ns0:p>The package R2VR uses the A-Frame platform 9 to create the VR environments and R to run the experiment and read the data. The A-Frame platform is a common choice for development of VR environments <ns0:ref type='bibr' target='#b7'>(Dibbern et al., 2018)</ns0:ref>, although other 3D game engines such as Unity are popular for the integration of VR experiments within other research fields, including neuroscience <ns0:ref type='bibr' target='#b33'>(Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jangraw et al., 2014)</ns0:ref> and human behaviour <ns0:ref type='bibr' target='#b4'>(Brookes et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Here, we compare positive and negative characteristics of R2VR (A-Frame embedded in R) and Unity from the perspective of the user, the developer and the quantitative ecologist (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>). The user is defined as a participant in the VR experiments with the duty of answering interview questions. The developer is the programmer that focus on providing the immersive environments. We define the category quantitative ecologist as researchers that use VR experiments as a tool to answer research questions. They hold the responsibility of developing interview questions and immersive environments, performing experiments, and collecting data for analyses.</ns0:p><ns0:p>The comparison was based on two sources: first hand experience and published literature. The first hand experience was based on elicited information from users that participated in the ecological case studies (see below), developers in Web-and Unity-programming, and quantitative ecologists that Manuscript to be reviewed</ns0:p><ns0:p>Computer Science conceptualized these studies. The published literature was based on key references cited in the following papers <ns0:ref type='bibr' target='#b7'>(Dibbern et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b22'>Nebeling and Speicher, 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Nguyen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Santos and Cardoso, 2019)</ns0:ref>. To our knowledge, existing VR packages for applied sciences require specialized hardware and substantial programming knowledge to customise the experiment beyond the case study for which the tool was developed. The tools that we found in the related literature require downloading, setting up, and interfacing with Unity, an additional step that many ecologists wish to avoid. Some of these packages focus on customising 3d objects from models with pre-fabricated environments <ns0:ref type='bibr' target='#b33'>(Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jangraw et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bexter and Kampa, 2020)</ns0:ref>, in comparison to R2VR which gives the freedom to change the environments, customise expected data responses, and interact with VR to change images or ask questions in real time. Some are customizable, but extensive knowledge in C# and Unity knowledge are needed to re-purpose them <ns0:ref type='bibr' target='#b4'>(Brookes et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>ECOLOGICAL CASE STUDIES</ns0:head><ns0:p>The R2VR package was used to replicate parts of VR experiments developed in previous studies. We used this approach to demonstrate stepwise how to build a VR environment directly from R. The content for reproducing these case studies is composed of twelve 360-degree images, R2VR functions and R scripts with interview questions. These resources are included with the R2VR package.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/10</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:1:1:NEW 22 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science from volunteers, sightings estimated from thermal imagery collected using drones, and data from experts elicited using VR technology. Experts were immersed in 360-degree images of forested areas and asked about (1) the likelihood of koalas being present and (2) habitat suitability for koalas, with associated confidence in their estimates. Answers were manually recorded and transferred to CSV files. Probabilities obtained from elicitation were modeled using a beta regression and subsequently integrated with presenceabsence data obtained from volunteers and thermal images within logistic regression models. The results demonstrated that incorporating data elicited using VR into the statistical models produced better predictions of koala distribution and better characterisation of their habitats.</ns0:p><ns0:p>For the purpose of the present study, co-authors and collaborators were immersed in a sample of these 360-degree images of forested areas and were asked: 'Do you see a koala?'. They responded by selecting 'yes' or 'no' within the VR scenes (Fig. <ns0:ref type='figure' target='#fig_4'>2A</ns0:ref>). The associated data table koala is composed of five variables and populated with the user's responses (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). User names and answers were retrieved by using the starting and ending times of the elicitation (recordedOn in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>) in the R script. VR scenes were restarted for each participant between case studies. </ns0:p></ns0:div>
<ns0:div><ns0:head>Case study 2: Jaguar</ns0:head><ns0:p>A team of QUT researchers and collaborators conducted a study in Peru to help the development of a jaguar conservation corridor across South America <ns0:ref type='bibr' target='#b35'>(Zeller et al., 2013)</ns0:ref>. Part of the study involved modelling the distribution of jaguars (Panthera onca) using jaguar sightings from the Shipibo and Conibo indigenous communities. <ns0:ref type='bibr' target='#b20'>Mengersen et al. (2017)</ns0:ref> considered several occupancy and abundance models, which included environmental covariates to account for uncertainty associated with different types of jaguar sightings (e.g. visual sighting, vocalisation, tracks, etc.) elicited from community members. The elicited information was then combined with visual and sound recordings to create immersive environments of the Peruvian jungle 10 and interview jaguar experts on several aspects of jaguar habitat <ns0:ref type='bibr' target='#b1'>(Bednarz et al., 2016)</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Here, we used the R2VR package to show four 360-degrees images of the Peruvian jungle. Co-authors and collaborators were asked to consider characteristics known to affect jaguar habitat suitability, such as the presence/absence of water, jaguar tracks, jaguar scratches on trees, and dense vegetation. They could select more than one indicator by clicking on the associated boxes within the VR scenes (Fig. <ns0:ref type='figure' target='#fig_4'>2B</ns0:ref>). The jaguar data table (Fig. <ns0:ref type='figure' target='#fig_1'>S1</ns0:ref>) is composed of eight variables (four relating to presence/absence of indicators, four for metadata, id, image id, image file and, recordedOn -shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Case study 3: Coral reef</ns0:head><ns0:p>The reef VR experiments was originally developed to estimate indicators of coral reef aesthetics based on people's perception of reef beauty <ns0:ref type='bibr' target='#b34'>(Vercelloni et al., 2018)</ns0:ref>. Three groups of people (marine scientists, experienced divers and the general public) were interviewed while immersed in 360-degree images of the Great Barrier Reef, Australia. The presence/absence of variables that represent different ecological characteristics of coral reefs and their opinions about reef beauty were used to parameterize a logistic regression model in order to gain knowledge about what makes a coral reef beautiful. The model results suggested that a structurally complex reef with diverse colours had a positive influence on reef aesthetics.</ns0:p><ns0:p>We asked co-authors and collaborators to look at four virtual reefs and answer the question 'Do the live corals on the reef form structurally complex habitats?' (Fig. <ns0:ref type='figure' target='#fig_4'>2C</ns0:ref>). After the classification of the images, we obtained a data table called coral reef (Fig. <ns0:ref type='figure' target='#fig_1'>S1</ns0:ref>) composed of five variables with binary responses (0 for non-complex reef and 1 for complex reef) to the question.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The directly to R allows researchers to process, manipulate and visualize data, and access to the latest statistical methods in ecology. The generic, multipurpose and unique programming language of R2VR is key to increasing the uptake of VR as an accessible research tool for ecologists and other applied scientists.</ns0:p><ns0:p>Previous efforts to simplify VR development for non-programmers allowed the integration of VR in different research fields <ns0:ref type='bibr' target='#b11'>(Jangraw et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b33'>Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Brookes et al., 2019)</ns0:ref> but those remained focused on specific purposes and are not easily adaptable to other research questions. Ecologists have different needs, including the flexibility to modify VR scenes and collect data for analyses in a language that they can understand. These needs are different from users' and developers' perspectives.</ns0:p><ns0:p>The current implementation of R2VR uses A-Frame to create VR environments written in the R language. Another R package 'shinyframe' uses this combination to visualise 3D plots 11 . Further developments could include using Unity instead of A-Frame within the R2VR package. Similarly, VR environments could be coded in a different language other than R including Python 12 which is an even more popular language among applied scientists 13 , but not as developed in terms of statistical capabilities.</ns0:p><ns0:p>The availability of such tools will greatly help to fulfill the need of ecologists and ultimately increase the adoption of immersive experiments in applied science.</ns0:p><ns0:p>It is increasingly being shown that VR technology facilitates the public involvement in environmental conservation by creating engaging learning environments <ns0:ref type='bibr' target='#b28'>(Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Fauville et al., 2020)</ns0:ref>.</ns0:p><ns0:p>R2VR provides a fundamental framework for citizen science projects that could use VR to perform different activities including online data collection, data extraction from existing records, and knowledge sharing <ns0:ref type='bibr' target='#b18'>(Mazumdar et al., 2018)</ns0:ref>. It can also facilitate a natural link between VR experiments and citizen science projects by offering an open-access tool for research scientists to build their own VR environments.</ns0:p><ns0:p>In this way, members of the public can perform tasks in the same perceptual environment as might an ecologist and collect useful data. The web framework that is associated with the R2VR package means that online citizen science projects could be developed at low cost. However, modifications of the R2VR server (from local to web server) and the automation of package functions are required to support this goal. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To date, R2VR has only been used to elicit information from static 360-degree images without audio.</ns0:p><ns0:p>However, the A-Frame software offers additional VR experiences, such as the inclusion of soundscapes 14 and 3D mesh from Geographical Information System layers 15 that could be easily integrated into the R2VR workflow. While this development will offer a greater level of virtual immersion, further research and development is required to understand how to increase the knowledge gained from VR experiments <ns0:ref type='bibr' target='#b8'>(Fauville et al., 2020)</ns0:ref>. By having the capability to design their own experiments or being interviewed, experts in ecology may enhance the potential of VR to support new scientific discoveries due to the priming of visual memories from known environments and ecological knowledge <ns0:ref type='bibr' target='#b5'>(Brown et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Vercelloni et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Further package developments will also enhance the access and security of the database. The current db4free database connected to R2VR is a free hosting provider. In this implementation we used db4free</ns0:p><ns0:p>to avoid the payments associated with data hosting provider and the burden of installing a local database, but we acknowledge that this choice may cause other issues. We suggest that users check the db4free website to ensure that it is a suitable host provider for their experiments, locate the codes within the package that connect to the online database and modify them if necessary. We also recommend that they regularly save their data on their own machine using the read() function from the R2VR package and write.csv() or save() functions from R. Additional security improvements would include adding authentication/authorization to secure API endpoints. Whilst the Node server is using HTTPS, the R server is currently using the HTTP protocol. The current implementation contains anonymous and non-sensitive data. However, improvements to the Fiery server's security may be beneficial for use cases with non-anonymous sensitive data. Another development goal is to support the creation of more flexible data tables.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In conclusion, we have demonstrated that it is now possible to create VR experiments in a seamless statistical programming environment that is highly popular and hence accessible among ecologists. This tool offers new horizons for ecological research as data generated from VR experiments can be used by researchers themselves, but might also be integrated with information collected by other technologies.</ns0:p><ns0:p>This provides a new tool for filling in data gaps in ecosystems with poor data density or coverage, and allowing for a better understanding of ecological systems. R2VR is also applicable to other ecosystems as well as directly generalisable to non-ecological VR experiments.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>then displays them in a web browser via a local IP (Internet Protocol). VR experiments are performed by typing functions directly into the R console. VR experiments can be performed from any device connected to the same network as R2VR, including smartphones, laptops and VR headsets (e.g. HTC Vive, Oculus Rift, Oculus Quest, Oculus Go, Google Daydream, Samsung GearVR and HTC Vive Focus). Once the VR scenes have been created data can be collected from users immersed in the scene, stored in an online database, and retrieved directly in R. The R2VR package does this via a RESTful Node.JS Application Programming Interface (APIRest, Fig. 1). Instructions on setting up the package and examples of VR scene creation using 360-degree images are given in the next section.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Workflow of the R2VR package. A function package is used to start a Fiery server from the R console and render WebXR Device API scenes via harnessing Mozilla's A-Frame framework. This allows for the scene to be composed through the R interface and served into HTML and JavaScript which displays the VR scene in a WebVR environment (web browser and/or VR headset). There is a WebSocket connection between the Fiery server and the client which allows for R console commands to directly communicate with the user (e.g. display a question with the pop() function) in the VR environment. The recorded data is stored in an online MySQL database through a RESTful MVC NodeJS Application Programming Interface (APIRest). The Node API endpoints are made accessible for data fetching into R so all user responses can be analysed. There is an interoperable flow of data between R and VR through the implementation of the WebSocket and an API connections.</ns0:figDesc><ns0:graphic coords='4,249.31,357.46,198.43,136.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>5 https://aframe.io/ 6 https://www.w3.org/TR/webxr/ 7 https://github.com/ACEMS/r2vr3/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:1:1:NEW 22 Feb 2021)Manuscript to be reviewed Computer Science stop(), respectively. The toggling of questions (on or off) is controlled by the pop() function and images are changed via the function go(). Answers are automatically saved within the online database hosted 8 . Data are retrieved using the function read(). From the users' point of view, there is no need for external installation and manipulation since this is automatically done by the R2VR package. The data collected during an experiment are then curated and can be visualised in real time from the R console.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>8 https://www.db4free.net/ 9 https://aframe.io/ 4/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:1:1:NEW 22 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2. Case studies developed using the R2VR package with A) Koala, B) Jaguar and C) Coral reef studies. The screenshots show the questions that were asked as part of the framework testing. Coral reef images were provided by Underwater Earth / XL Catlin Seaview Survey / Christophe Bailhache. Short videos of the virtual reality scenes can be seen at: https://youtu.be/el08HKysZX8.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.56,137.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>10 http://vis.stats.technology/ 6/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:1:1:NEW 22 Feb 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>package R2VR offers open access to VR experiments for the scientific community. It supports the development of new research tools by creating a more complex environment for participants and lowers the technical barriers for ecologists to integrate VR into scientific projects. There are several advantages to embedding VR experiments in R. Connecting VR experimental results</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Description of the main functions included in the package. See the help files for more details about the function arguments.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Function</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>start() starts the VR server on the web browser</ns0:cell></ns0:row><ns0:row><ns0:cell>end()</ns0:cell><ns0:cell>kills the VR server</ns0:cell></ns0:row><ns0:row><ns0:cell>pop()</ns0:cell><ns0:cell>displays the question on the image</ns0:cell></ns0:row><ns0:row><ns0:cell>go()</ns0:cell><ns0:cell>jumps to another image</ns0:cell></ns0:row><ns0:row><ns0:cell>read()</ns0:cell><ns0:cell>retrieves the data from the database</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>To ensure that the framework is fully operational, VR experiments were conducted using the R2VR</ns0:cell></ns0:row></ns0:table><ns0:note>package installed on local computers at Queensland University of Technology (QUT). Co-authors and several collaborators were immersed into three virtual ecosystems composed of four different 360-degree images and then asked to answer questions using a Samsung GearVR headset and Oculus Quest. The data collected during these experiments were used to create the visualisations in the R2VR package vignette.The 'Interaction' vignette is included in the R2VR package and contains instructions on how to reproduce the case studies. It is also displayed in the Supplementary Material.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparisons R2VR (A-Frame embedded R) and Unity.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>R2VR</ns0:cell><ns0:cell>Unity</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Description of the data obtained from the elicitation stored online in table koala.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>id</ns0:cell><ns0:cell>classification id</ns0:cell></ns0:row><ns0:row><ns0:cell>image id</ns0:cell><ns0:cell>unique identifier of the image</ns0:cell></ns0:row><ns0:row><ns0:cell>image file</ns0:cell><ns0:cell>image's file name</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>binary response 0 (absence) or 1 (presence)</ns0:cell></ns0:row><ns0:row><ns0:cell>recordedOn</ns0:cell><ns0:cell>date-time of the classification event</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://unity3d.com 2 https://www.unrealengine.com/en-US/ 3 https://opensource.facebook.com/ 4 https://www.tiobe.com/tiobe-index/ 2/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:1:1:NEW 22 Feb 2021)Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Rebuttal letter
Connecting virtual reality and ecology: a new tool to run seamless immersive
experiments in R.
February 8, 2021
We thank the editor and reviewers for their careful reading of the manuscript and their helpful comments.
The suggestions greatly improved our paper. Below we provide a detailed description of the changes we
made in response to each comment (in gray) and in the revised manuscript (in blue).
In the revised manuscript, we demonstrate that despite attempts to simplify the workflow for non-programmers and engage with VR technology, the small number of peer-reviewed publications that use
VR in ecology shows that these simplifications may still not be accessible enough to the large scientific
community including ecologists. The comparative evaluation study that we performed shows that this may
be due to a lack of a generic and seamless tool written in a popular language of applied scientists. Our
concept is applicable to the broader field of computer science because further developments could be easily implemented, including the use of the Unity game engine to create VR environments, or environments coded in popular languages such as Python. The availability of such tools for research will fulfill the needs of applied scientists
and increase the adoption of VR experiments.
Best Regards,
Julie Vercelloni
Reviewer 1
This article is very clear thanks to its good structure and the right expressions used. Literature references
used have a quantity adapted to this paper. The majority of the articles cited are very recent and from
reputable publishers.
Thank you very much for your positive feedback.
1. PeerJ recommends a title between 20 and 60 words to be more concise. In your case, it might be
interesting to add some details. While reading the current title, the reader might think of a solution to see
the results of R in a virtual reality headset.
The longer-term perspective of R2VR is to effectively be able to see the results of R, including outputs from statistical models, in VR. The team is currently experimenting with a new R2VR function where
participants can visualise their answers and check their performance compared to a larger cohort in almost
real-time. This work has not yet been finalised and is not ready for discussion. In this study, we are focusing
on the concept and application of R2VR with the aim of encouraging the R community to engage with VR
without the current technical barriers. We agree with the reviewer that the title can be more precise to better
reflect the paper aims. We have therefore proposed a new title:
Connecting virtual reality and ecology: a new tool to run seamless immersive experiments in R.
2. A-Frame has many differences with other solutions such as Unity3d and Unreal. Both development
environments also try to be accessible to as many designers as possible. Unity is the leader in virtual reality
applications and Unreal Engine offers a Blueprint interface to be more accessible to novices (Dickson,
2017). It would therefore be preferable to add some justification for your choice of A-Frame for ecologists.
This comment is addressed in the evaluation study section where we compared positive and negative characteristics of Unity and A-Frame + R (R2VR) from the perspectives of the user, the developer and the quantitative ecologist. R2VR exists because, despite great improvements to simplify the workflow for non-programmers, a generic and seamless tool, written in the popular language of ecologists, is not currently available. The lack of such a tool is reflected in the small number of peer-reviewed publications that use
VR in ecology despite its significant potential to tackle important ecological issues. The concept of R2VR
does not rely solely on A-Frame, and future versions could use other environments such as Unity and Unreal Engine. In the same way, the concept can be developed using Python, another popular language among
applied scientists. These notions have been added in the revised manuscript in the discussion and reported
below.
L93-102: In this paper, we first describe the functions of R2VR to assist in the creation of VR experiments and its applications in environmental conservation research. We then present a comparative review of
analogous studies from three different perspectives: the user, the developer and the quantitative ecologist.
Following this, we present three case studies in which we have implemented the R2VR package. The paper
concludes with a general discussion.
The R2VR package opens up many new directions of enquiry among quantitative ecologists and software developers. These include the elicitation of expert information, the analysis of elicited responses and
the validation of these data. It is beyond the scope of this paper to discuss these issues, although we point
the reader to [5, 1, 4, 16, 15] for further reading.
L144-162: Here, we compare positive and negative characteristics of R2VR (A-Frame embedded in
R) and Unity from the perspective of the user, the developer and the quantitative ecologist (Table 2). The
user is defined as a participant in the VR experiments with the duty of answering interview questions. The
developer is the programmer that focus on providing the immersive environments. We define the category quantitative ecologist as researchers that use VR experiments as a tool to answer research questions.
They hold the responsibility of developing interview questions and immersive environments, performing
experiments, and collecting data for analyses.
The comparison was based on two sources: first hand experience and published literature. The first hand
experience was based on elicited information from users that participated in the ecological case studies (see
below), developers in Web- and Unity- programming, and quantitative ecologists that conceptualized these
studies. The published literature was based on key references cited in the following papers [6, 11, 12, 14].
L163-171: To our knowledge, existing VR packages for applied sciences require specialized hardware
and substantial programming knowledge to customise the experiment beyond the case study for which the
tool was developed. The tools that we found in the related literature require downloading, setting up, and interfacing with Unity, an additional step that many ecologists wish to avoid. Some of these packages focus on
customising 3d objects from models with pre-fabricated environments (Vasser et al., 2017; Jangraw et al., 2014; Bexter and Kampa, 2020),
in comparison to R2VR which gives the freedom to change the environments, customise expected data responses, and interact with VR to change images or ask questions in real time. Some are customizable, but
extensive knowledge in C# and Unity knowledge are needed to re-purpose them (Brookes et al., 2019).
L225-240: There are several advantages to embedding VR experiments in R. Connecting VR experimental results directly to R allows researchers to process, manipulate and visualize data, and access to
the latest statistical methods in ecology. The generic, multipurpose and unique programming language of
R2VR is key to increasing the uptake of VR as an accessible research tool for ecologists and other applied
scientists. Previous efforts to simplify VR development for non-programmers allowed the integration of
VR in different research fields [8, 17, 3] but those remained focused on specific purposes and are not easily
adaptable to other research questions. Ecologists have different needs, including the flexibility to modify
VR scenes and collect data for analyses in a language that they can understand. These needs are different
from users’ and developers’ perspectives.
The current implementation of R2VR uses A-Frame to create VR environments written in the R language. Another R package “shinyaframe” uses this combination to visualise 3D plots 1. Further developments could include using Unity instead of A-Frame within the R2VR package. Similarly, VR environments
could be coded in a different language other than R including Python 2 which is an even more popular language among applied scientists3 , but not as developed in terms of statistical capabilities. The availability of
such tools will greatly help to fulfill the need of ecologists and ultimately increase the adoption of immersive
experiments in applied science.
1 https://cran.r-project.org/web/packages/shinyaframe/shinyaframe.pdf
2 https://www.python.org/
3 https://www.tiobe.com/tiobe-index/
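To make the point above concrete, the following minimal R sketch illustrates how responses elicited in a VR scene might be pulled back into R and analysed with standard statistical tools. It is only a sketch under stated assumptions: the read() function is listed in Table 1 of the manuscript, but the table-name argument and the column names used below (image_file, binary_response) are illustrative and may differ from the actual package interface.

# Illustrative sketch only: retrieve elicited responses and analyse them in R.
# Assumption: read() accepts a table name and returns a data frame with
# columns resembling those described for the koala table in the manuscript.
library(r2vr)

koala <- read("koala")                                   # fetch stored classifications

# Proportion of "koala present" answers for each 360-degree image
agg <- aggregate(binary_response ~ image_file, data = koala, FUN = mean)
print(agg)

# A simple logistic regression relating detections to the image shown
fit <- glm(binary_response ~ image_file, data = koala, family = binomial)
summary(fit)

Any of R's modelling packages (for example, betareg for the beta regressions mentioned above) could be substituted for glm() at this point.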
Table 1: Comparisons R2VR (A-Frame embedded in R) and Unity.

User
  R2VR Pros: Accessibility; runs on the web; compatible with most VR headsets
  R2VR Cons: Not as mature or as commercially refined
  Unity Pros: Mature; ongoing development by a large firm and massive community; compatible with most VR headsets
  Unity Cons: App access, compatibility, and maintenance

Developer
  R2VR Pros: Open-access sources; relatively easy to implement; accessible to the vast pool of web developers; popular programming language
  R2VR Cons: Background in web programming; not as flexible
  Unity Pros: Flexible; customizable; extensive documentation and community; easily integrated with other software; mature tool support and high-quality Integrated Developer Environment tools; Asset Store resources are large and complete
  Unity Cons: Very specific programming language(s); complex environment; need licence for research projects

Quantitative ecologist
  R2VR Pros: Generic; multipurpose; use of a unique programming language; collects data in a flexible format
  R2VR Cons: Internet access mandatory; potential issues with the free hosting provider
  Unity Pros: Can produce refined user experiences for non-domain specialists
  Unity Cons: Specific purpose; use of more than one platform to perform experiments; manipulation of more than one programming language
3. Ideally, we would have liked to know the significant differences for designers or end-users with
another solution. If no measurement was possible, then it would be preferable to detail your observations
on R2VR’s designers in these 3 cases.
Table 2 in the revised manuscript and reported above reflects the reviewer comments. Pros and cons
were elicited from two sources: published literature and first-hand reviews. The latter were obtained from
colleagues who were involved in the case studies and thus have experience with A-Frame in the R2VR
package, and who have comparative experience using Unity. It is clear that the developers preferred the Unity app for numerous reasons, whereas the ecologists tended to prefer R2VR because of its usability and the ability to manipulate different aspects of the experiment, especially the development of the VR scenes, which can be written directly in R.
4. At the end of the discussion, advice on how to use it is given. Improvements and problems concerning
the security and confidentiality of data storage could be discussed.
The following sentences have been added in the discussion to reflect the reviewer’s comment.
L268-272: Additional security improvements would include adding authentication/authorization to secure API endpoints. Whilst the Node server is using HTTPS, the R server is currently using the HTTP
protocol. The current implementation contains anonymous and non-sensitive data. However, improvements
to the Fiery server’s security may be beneficial for use cases with non-anonymous sensitive data.
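Purely as an illustration of the kind of improvement we have in mind, the sketch below shows how an authenticated request to a secured API endpoint could be issued from R using the httr package; the endpoint URL, the token handling and the response format are hypothetical and are not part of the current R2VR implementation.

# Hypothetical example of calling a token-protected Node API endpoint from R.
# The URL and the environment variable holding the token are assumptions.
library(httr)
library(jsonlite)

api_url <- "https://example-r2vr-api.net/koala"          # hypothetical endpoint
token <- Sys.getenv("R2VR_API_TOKEN")                     # keep credentials out of scripts

resp <- GET(api_url, add_headers(Authorization = paste("Bearer", token)))
stop_for_status(resp)                                     # fail on 401/403 or other errors

koala <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
head(koala)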
5. This well-written article proposes an innovative and accessible solution for a whole community.
Three applications described have been created using the R2VR package. However, it would be interesting
to define the criteria on which the situation has improved in order to facilitate experimentation by ecologists.
The purpose of the three applications is reusability, with the intention of showing ecologists some real-life applications of R2VR for environmental conservation. Without them, the study would
remain very conceptual and technical, and won’t necessarily be explicit enough for applied scientists to
understand how R2VR could be used for their own research. It is also a common approach in the R world
to use case-studies to showcase the package purpose and code. See examples in [13] and https://www.
meganstodel.com/posts/no-to-iris/ for interesting perspectives about the use of case studies
in R. The comparative studies section in the revised manuscript and reported above addresses the reviewer’s
comment on how R2VR can facilitate the experimentation for ecologists compared to Unity.
Reviewer 2
The manuscript is well written and professional, and I enjoyed reading it.
Thank you.
The background section begins with a very narrow interpretation of virtual reality, namely that of an
audiovisual system substituting for real world sensory input. This description is not adhered to throughout
the manuscript: audio is never mentioned again. I checked the supplementary material and the videos were
silent there too. A further problem with this interpretation is it fails to account for dynamic interaction in
VR, in particular where users may move and interact with the environment, with such actions resulting in
an enhanced sense of presence [1] and a more immersive, ’vivid’ experience. The authors should be more
critical of their interpretation and aware of the limitations this imposes on how others may find their system
useful.
The definition and use of VR in general and in our study are now clarified in the introduction and discussion sections of the revised manuscript.
L45-52: VR experiments include multimodal features of text, images, sounds and haptic feedback to
create a rich and engaging environment to expose people to more complete and complex information [7].
However, many current VR research applications use only a subset of these features, depending on the
available technology. For example, audio is often excluded, despite the growing body of literature showing
that it increases the sense of presence and realism [10]. Notwithstanding this, even the restricted VR
environment settings have the potential to produce better qualitative and quantitative elicited information
L252-254: To date, R2VR has only been used to elicit information from static 360-degree images
without audio. However, the A-Frame software offers additional VR experiences, such as the inclusion of
soundscapes 4 and 3D mesh from Geographical Information System layers 5 that could be easily integrated
into the R2VR workflow.
The second issue I have is the lack of a clear contribution behind this work. Many prior work has tackled
the issue of simplifying work flows for non programmers/experts so they may engage with VR technology
(for example, see [2], [3], and [4]). Its unclear how R2VR differs from these other attempts both in its implementation and its goal: there is overlap at a high level between psychologists, social sciences, and applied
sciences (as in this work) to elicit responses from participants while immersed in a virtual environment.
Therefore I question the utility of R2VR over past attempts/existing solutions. Furthermore, no comparison
is made between R2VR and other systems: I recommend the authors perform a feature comparison to help
convey *why* their system is both needed and *how* it compares to, or better yet improves upon, prior
art/existing solutions. This being said I very much welcome the open source nature of R2VR! It is also
worth considering the premise behind elicitation within versus outside of virtual environments. This is not
a trivial matter: please see [5] for discussion on administering questionnaires inside virtual environments.
[1] Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535),
3549–3557. https://doi.org/10.1098/rstb.2009.0138
[2] Vasser, M., Kängsepp, M., Magomedkerimov, M., Kilvits, K., Stafinjak, V., Kivisik, T., Vicente,
R., Aru, J. (2017). VREX: an open-source toolbox for creating 3D virtual reality experiments. BMC
Psychology, 5(1), 4. https://doi.org/10.1186/s40359-017-0173-4
[3] Jangraw, D. C., Johri, A., Gribetz, M., Sajda, P. (2014). NEDE: An open-source scripting suite
for developing experiments in 3D virtual environments. Journal of Neuroscience Methods, 235, 245–251.
https://doi.org/10.1016/j.jneumeth.2014.06.033
[4] Brookes, J., Warburton, M., Alghadier, M., Mon-Williams, M., Mushtaq, F. (2020). Studying
human behavior with virtual reality: The Unity Experiment Framework. Behavior Research Methods,
52(2), 455–463. https://doi.org/10.3758/s13428-019-01242-0
[5] Regal, G., Voigt-Antons, J.-N., Schmidt, S., Schrammel, J., Kojić, T., Tscheligi, M., Möller, S.
(2019). Questionnaires embedded in virtual environments: Reliability and positioning of rating scales in
virtual environments. Quality and User Experience, 4(1), 5. https://doi.org/10.1007/s41233-019-0029-1
The purpose of R2VR is to provide a malleable tool that the scientific community can access to easily
create their own VR environments that collect data. The tool is particularly targeted to the very large
community of R users, noting that R is one of the top ten programming languages in the world and the
most popular statistical package in many applied science disciplines, including ecology. R2VR provides
the ability for researchers to customise the data retrieved from experiments which is made accessible into
the statistical programming language, R. While this purpose may sound similar to that of other studies, we did not find in the literature a tool that enables the creation of VR experiments, that is easy to use for non-programmers,
that can generate data for use in data analyses, and is generic enough to be re-purposed.
As the reviewer mentioned, some software exists (see references above) for specific applications in
the field of psychology and social science; however, these either require specialized hardware or require
solid programming knowledge to customise the experiment beyond the case study for which the tool was
developed.
The tools that we found in our search require downloading, setting up, and interfacing with Unity, an
additional step that many scientific researchers wish to avoid. Some focus on customising 3D objects from
models with pre-fabricated environments [17, 8, 2], in comparison to R2VR which gives the freedom to
easily change the environment, customise expected data responses, and interact with VR to change images
or ask questions in real time. Some are customizable, but in the sense that you need deep C# and Unity knowledge, since it gives you the power of the entire language and framework [3].
VREX [17] is a free open-source toolbox for virtual reality research in the field of experimental psychology and neuroscience. Experiments test for change blindness or memory. This involves a user clicking
4 https://www.8thwall.com/playground/aframe-audio-and-media-recorder
5 https://milesmcbain.xyz/posts/r2vr3-shading-meshes-in-webvr/
on an object they think has changed or remember seeing, respectively. This software is very specific to the field of psychology and is not intended to be customised beyond adding 3D models with Unity into a room. There is also no link to statistical software. VREX is only supported by the Oculus Rift headset due to how
the virtual camera works.
Researchers using NEDE [8] can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with
a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other
recording modalities. NEDE again focuses on using Unity to create models as opposed to changing the environment itself. NEDE has a complicated and expensive setup requiring three computers, Unity Pro software, eye-tracking hardware and software, and EEG hardware and software. Whilst NEDE does generate data relating to human physiology, it does not generate data from interacting with VR directly; rather, it matches the time elapsed in the VR environment to the physiological response. It also requires MATLAB to access the
data on EEG activity and eye-position from the hardware. In addition to these setup requirements, it is very
specific to the field of neuroscience and isn’t easily transferable to other fields in science.
UXF [3] consists of a set of components which simplify human behaviour experiments. UXF 2.0 supports Desktop, PCVR, Standalone VR, as well as Web-based experiments for full remote data collection, with different data output modes. UXF focuses on simplifying VR experiments consisting of a session/block/trial setup. That is, the software allows the user to repeat a task in VR, change something minor/major or present new stimuli, and perform the task again to collect data. UXF has the benefit of being able to create versatile behaviour experiments relating to time (e.g. reaction time, time to perform a task) or memory (e.g. selecting between two objects). UXF is the most abstract of the VR systems mentioned as it focuses heavily on the methodology as opposed to the environment itself. It is built on Unity; hence a higher level of proficiency in C#/Unity is required to extend the software. This is currently out of scope for many applied scientists. It may also require some AWS knowledge to use the web-based approach.
MazeMaster [2] was created with Python and comes as a bundle that connects a 3D engine (Blender
game engine, included in the package) with a powerful maze experiment design and control tool. With this,
it is possible to create and modulate complex mazes, simple tunnels or other forms of three-dimensional
constructs. The tool can be used to design and conduct behavioural experiments intended for mice and
rats. MazeMaster [6] requires special hardware, including 5 monitors, solid graphics card, custom spherical
treadmill input device, digital trigger for water devices and lick detection. In addition to the technology
requirements and setup, MazeMaster appears to be very specific to the field of animal science and not
easily customizable to other fields.
R2VR is open source and appropriate for the scientific community. R is a very popular statistical
language that researchers use to analyse data. R2VR comes with very simple R scripts that an R-user can
customize. Hence, R2VR is a tool that researchers in many fields of applied science can readily use and
modify.
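As an indication of how simple these scripts are, a typical elicitation session might look like the sketch below. It uses the installation command and the five functions documented in the manuscript (start(), pop(), go(), read(), end()); the argument names shown for pop(), go() and read() are assumptions for illustration, since the exact signatures are described in the package help files.

# Minimal sketch of an R2VR session; argument names are illustrative only.
# devtools::install_github("ACEMS/r2vr")                  # one-off installation
library(r2vr)

start()                                                   # serve the VR scene to the browser/headset

# Ask the interview question inside the scene (arguments are assumptions)
pop(question = "Do you see a koala?", answers = c("Yes", "No"))

go(2)                                                     # move to the next 360-degree image
pop(question = "Do you see a koala?", answers = c("Yes", "No"))

responses <- read("koala")                                # pull recorded answers back into R
head(responses)

end()                                                     # stop the VR server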
To address the reviewer’s comment, we undertook a small comparative evaluation study between 2D
and 3D technologies as well as the Unity platform and R2VR. The resulting table (Table 2 in the revised
manuscript and reported below) shows positive and negative characteristics of these two platforms from
the view of the user, the developer and the quantitative ecologist. These pros and cons were elicited from
the literature (including the references recommended by the reviewer) as well as from co-authors who used both R2VR and the original app created using Unity to interview people, WebVR developers, and ecologists/analysts. We also involved two external VR developers, international experts in Unity programming, who developed the original app, to contribute to the comparative evaluation study.
L134-143: In this section, we embed the R2VR package in the body of related literature. We first note
the merit of VR compared with 2D technologies, then focus on a comparative review of other platforms.
Noting the intended audience for R2VR, we address the latter from three perspectives: the user, the ecologist
and the quantitative ecologist.
VR versus 2D technologies
The benefits of using 3D technologies including VR and Augmented Reality (AR) compared to 2D environments have been widely demonstrated in the literature. Akpan and Shanker (2019) performed a meta-analysis
to compare 3D and traditional 2D technologies and found enhanced performance and quality in 3D settings. VR technology has been found effective for training, gamification and collaboration; resulting in
more suitable experiences and motivated participants [9].
L144-162: Here, we compare positive and negative characteristics of R2VR (A-Frame embedded in
R) and Unity from the perspective of the user, the developer and the quantitative ecologist (Table 2). The
user is defined as a participant in the VR experiments with the duty of answering interview questions. The
developer is the programmer that focus on providing the immersive environments. We define the category quantitative ecologist as researchers that use VR experiments as a tool to answer research questions.
They hold the responsibility of developing interview questions and immersive environments, performing
experiments, and collecting data for analyses.
The comparison was based on two sources: first hand experience and published literature. The first hand
experience was based on elicited information from users that participated in the ecological case studies (see
below), developers in Web- and Unity- programming, and quantitative ecologists that conceptualized these
studies. The published literature was based on key references cited in the following papers [6, 11, 12, 14].
L163-171: To our knowledge, existing VR packages for applied sciences require specialized hardware
and substantial programming knowledge to customise the experiment beyond the case study for which the
tool was developed. The tools that we found in the related literature require downloading, setting up, and interfacing with Unity, an additional step that many ecologists wish to avoid. Some of these packages focus on
customising 3d objects from models with pre-fabricated environments (Vasser et al., 2017; Jangraw et al., 2014; Bexter and Kampa, 2020),
in comparison to R2VR which gives the freedom to change the environments, customise expected data responses, and interact with VR to change images or ask questions in real time. Some are customizable, but
extensive knowledge in C# and Unity knowledge are needed to re-purpose them (Brookes et al., 2019).
L225-240: There are several advantages to embedding VR experiments in R. Connecting VR experimental results directly to R allows researchers to process, manipulate and visualize data, and access to
the latest statistical methods in ecology. The generic, multipurpose and unique programming language of
R2VR is key to increasing the uptake of VR as an accessible research tool for ecologists and other applied
scientists. Previous efforts to simplify VR development for non-programmers allowed the integration of
VR in different research fields [8, 17, 3] but those remained focused on specific purposes and are not easily
adaptable to other research questions. Ecologists have different needs, including the flexibility to modify
VR scenes and collect data for analyses in a language that they can understand. These needs are different
from users’ and developers’ perspectives.
The current implementation of R2VR uses A-Frame to create VR environments written in the R language. Another R package “shinyaframe” uses this combination to visualise 3D plots 6. Further developments could include using Unity instead of A-Frame within the R2VR package. Similarly, VR environments
could be coded in a different language other than R including Python 7 which is an even more popular language among applied scientists8 , but not as developed in terms of statistical capabilities. The availability of
such tools will greatly help to fulfill the need of ecologists and ultimately increase the adoption of immersive
experiments in applied science.
6 https://cran.r-project.org/web/packages/shinyaframe/shinyaframe.pdf
7 https://www.python.org/
8 https://www.tiobe.com/tiobe-index/
Table 2: Comparisons R2VR (A-Frame embedded in R) and Unity.

User
  R2VR Pros: Accessibility; runs on the web; compatible with most VR headsets
  R2VR Cons: Not as mature or as commercially refined
  Unity Pros: Mature; ongoing development by a large firm and massive community; compatible with most VR headsets
  Unity Cons: App access, compatibility, and maintenance

Developer
  R2VR Pros: Open-access sources; relatively easy to implement; accessible to the vast pool of web developers; popular programming language
  R2VR Cons: Background in web programming; not as flexible
  Unity Pros: Flexible; customizable; extensive documentation and community; easily integrated with other software; mature tool support and high-quality Integrated Developer Environment tools; Asset Store resources are large and complete
  Unity Cons: Very specific programming language(s); complex environment; need licence for research projects

Quantitative ecologist
  R2VR Pros: Generic; multipurpose; use of a unique programming language; collects data in a flexible format
  R2VR Cons: Internet access mandatory; potential issues with the free hosting provider
  Unity Pros: Can produce refined user experiences for non-domain specialists
  Unity Cons: Specific purpose; use of more than one platform to perform experiments; manipulation of more than one programming language
My final issue revolves around the utility of the case studies. Insufficient detail is provided, making it
difficult to understand the point of their inclusion.
The inclusion of the case studies is for the ecologists/R users, with the intention of showing some real-life applications of R2VR for environmental conservation. Without them, the study would remain very
conceptual and technical, and won’t necessarily inspire ecologists to re-purpose R2VR for their own research. It is also a common approach in the R world to use case-studies to showcase the package purpose
and code. See examples in [13] and https://www.meganstodel.com/posts/no-to-iris/ for
additional perspectives about the use of case studies in R.
There is no discussion regarding how R2VR was perceived as a benefit (again, no comparison to other
workflows/tools are provided), and also all 3 case studies were performed by the same research group
involved in developing R2VR. It is best to trial the system with other researchers not involved in the development/production of R2VR, and perform some qualitative analysis, perhaps involving lengthy debrief
interviews with other researchers to get their perspective and opinion on R2VR as a useful tool to add
to their repertoire. I would like further discussion in the paper around the problems of accessibility and
non-power user (programmer) use, including how the authors intend to tackle this problem.
The reviewer's comment is addressed in the comparative evaluation study in Table 2 of the revised manuscript, reported above. R2VR code and the VR rendering were shown to two external Unity developers in order to elicit their perspectives on R2VR versus Unity, and these perspectives are reported in Table 2. We did not report their opinions beyond this comparison because the study focuses not on the use of R2VR by developers but on quantitative ecologists.
It is unclear what the ’findings’ from this work are. The material is quite light, giving a brief business
case for R2VR, offering a high level overview of the system design, and then finally 3 case studies involving
its use. The authors conclude that they ’have demonstrated that it is now possible to create VR experiments
via a unique software’, but as I have pointed out in the ’Basic Reporting’ section, the scientific community
already knew this.
In the revised manuscript, the finding of our study is that, despite attempts to simplify the workflow for non-programmers wishing to engage with VR technology, the small number of peer-reviewed publications using VR in the field of ecology shows that these simplifications may still not be accessible enough to the large scientific community, including ecologists, even though the potential of VR to tackle major environmental issues is acknowledged. We found that this may be due to the lack of a generic and seamless tool written in a language popular among applied scientists, and we developed R2VR to close this gap.
These findings are highlighted in several sections of the revised manuscript including in lines:
L18-20: However, at present, the relatively high level of expertise in specific programming languages
and disjoint pathways required to run VR experiments hinder their wider application in ecology and other
sciences.
L80-82: The R2VR package development was motivated with the goal of providing greater access to
VR experiments and the associated research benefits of using the R statistical software environment, a top
ten popular programming language 9 extensively used by quantitative ecologists (Lai et al., 2019).
L93-102: In this paper, we first describe the functions of R2VR to assist in the creation of VR experiments and its applications in environmental conservation research. We then present a comparative review of
analogous studies from three different perspectives: the user, the developer and the quantitative ecologist.
Following this, we present three case studies in which we have implemented the R2VR package. The paper
concludes with a general discussion.
The R2VR package opens up many new directions of enquiry among quantitative ecologists and software developers. These include the elicitation of expert information, the analysis of elicited responses and
the validation of these data. It is beyond the scope of this paper to discuss these issues, although we point
the reader to [5, 1, 4, 16, 15] for further reading.
L163-171: To our knowledge, existing VR packages for applied sciences require specialized hardware
and substantial programming knowledge to customise the experiment beyond the case study for which the
tool was developed. The tools that we found in the related literature require downloading, setting up, and
interfacing with Unity, an additional step that many ecologists wish to avoid. Some of these packages focus
on customising 3d objects from models with pre-fabricated environments [17, 8, 2], in comparison to R2VR
which gives the freedom to change the environments, customise expected data responses, and interact with
9 https://www.tiobe.com/tiobe-index/
VR to change images or ask questions in real time. Some are customizable, but extensive knowledge of C# and Unity is needed to re-purpose them [3].
The current implementation of R2VR uses A-Frame to create VR environments written in the R language. Another R package “shinyframe” uses this combination to visualise 3D plots 10 . Further developments could include using Unity instead of A-Frame within the R2VR package. Similarly, VR environments
could be coded in a different language other than R including Python 11 which is an even more popular language among applied scientists 12, but not as developed in terms of statistical capabilities. The availability of
such tools will greatly help to fulfill the need of ecologists and ultimately increase the adoption of immersive
experiments in applied science.
There is a typo on line 45, and line 198. With respect, we did not find a typo in line 45. However, we
changed ”user’s” to ”users’” in line 146. Line 198 has been modified in the revised manuscript.
L 256-257: While this development will offer a greater level of virtual immersion, further research and
development is required to understand how to increase the knowledge gained from VR experiments [7].
Please consider using footnotes for URLs. URLs are now indicated using footnotes following the reviewer’s suggestion.
The link provided in line 197 returns a 404. Thank you, this has been fixed.
References
[1] T. Bednarz, J. Kim, R. Brown, A. James, K. Burrage, S. Clifford, J. Davis, K. Mengersen, E. Peterson,
S. Psaltis, and J. Vercelloni. Virtual reality for conservation. In Proceedings of the 21st International Conference on Web3D Technology, Web3D ’16, page 177–178, New York, NY, USA, 2016.
Association for Computing Machinery.
[2] A. Bexter and B. M. Kampa. Mazemaster: an open-source python-based software package for controlling virtual reality experiments. bioRxiv, 2020.
[3] J. Brookes, M. Warburton, M. Alghadier, M. Mon-Williams, and F. Mushtaq. Studying human behavior with virtual reality: The unity experiment framework. Behavior research methods, pages 1–9,
2019.
[4] R. Brown, P. Bruza, W. Heard, K. Mengersen, and J. Murray. On the (virtual) getting of wisdom:
Immersive 3d interfaces for eliciting spatial information from experts. Spatial Statistics, 18:318–331,
2016.
[5] S. L. Choy, R. O’Leary, and K. Mengersen. Elicitation by design in ecology: using expert opinion to
inform priors for Bayesian statistical models. Ecology, 90(1):265–277, 2009.
[6] C. Dibbern, M. Uhr, D. Krupke, and F. Steinicke. Can webvr further the adoption of virtual reality?
Mensch und Computer 2018-Usability Professionals, 2018.
[7] G. Fauville, A. C. M. Queiroz, and J. N. Bailenson. Virtual reality as a promising tool to promote
climate change awareness. In J. Kim and H. Song, editors, Technology and Health, pages 91 – 108.
Academic Press, 2020.
[8] D. C. Jangraw, A. Johri, M. Gribetz, and P. Sajda. Nede: An open-source scripting suite for developing
experiments in 3d virtual environments. Journal of neuroscience methods, 235:245–251, 2014.
[9] S. Kavanagh, A. Luxton-Reilly, B. Wuensche, and B. Plimmer. A systematic review of virtual reality
in education. Themes in Science and Technology Education, 10(2):85–119, 2017.
10 https://cran.r-project.org/web/packages/shinyaframe/shinyaframe.pdf
11 https://www.python.org/
12 https://www.tiobe.com/tiobe-index/
[10] A. C. Kern and W. Ellermeier. Audio in vr: Effects of a soundscape and movement-triggered step
sounds on presence. Frontiers in Robotics and AI, 2020.
[11] M. Nebeling and M. Speicher. The trouble with augmented reality/virtual reality authoring tools.
In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct),
pages 333–337. IEEE, 2018.
[12] V. T. Nguyen, R. Hite, and T. Dang. Web-based virtual reality development in classroom: From
learner’s perspectives. In 2018 IEEE International Conference on Artificial Intelligence and Virtual
Reality (AIVR), pages 11–18. IEEE, 2018.
[13] S. Pavoine. adiv: An R package to analyse biodiversity in ecology. Methods in Ecology and Evolution,
11(9):1106–1112.
[14] S. G. Santos and J. C. Cardoso. Web-based virtual reality with a-frame. In 2019 14th Iberian Conference on Information Systems and Technologies (CISTI), pages 1–2. IEEE, 2019.
[15] E. Santos-Fernandez and K. Mengersen. Bayesian item response models for citizen science ecological
data. arXiv preprint arXiv:2003.06966, 2020.
[16] E. Santos-Fernandez, E. E. Peterson, J. Vercelloni, E. Rushworth, and K. Mengersen. Correcting
misclassification errors in crowdsourced ecological data: A bayesian perspective. Journal of the Royal
Statistical Society: Series C (Applied Statistics), 2020.
[17] M. Vasser, M. Kängsepp, M. Magomedkerimov, K. Kilvits, V. Stafinjak, T. Kivisik, R. Vicente, and
J. Aru. Vrex: an open-source toolbox for creating 3d virtual reality experiments. BMC psychology,
5(1):1–8, 2017.
" | Here is a paper. Please give your review comments after reading it. |
105 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Virtual reality (VR) technology is an emerging tool that is supporting the connection between conservation research and public engagement with environmental issues. The use of VR in ecology consists of interviewing diverse groups of people while they are immersed within a virtual ecosystem to produce better information than more traditional surveys. However, at present, the relatively high level of expertise in specific programming languages and disjoint pathways required to run VR experiments hinder their wider application in ecology and other sciences. We present R2VR, a package for implementing and performing VR experiments in R with the aim of easing the learning curve for applied scientists including ecologists. The package provides functions for rendering VR scenes on web browsers with A-Frame that can be viewed by multiple users on smartphones, laptops, and VR headsets. It also provides instructions on how to retrieve answers from an online database in R. Three published ecological case studies are used to illustrate the R2VR workflow, and show how to run a VR experiments and collect the resulting datasets. By tapping into the popularity of R among ecologists, the R2VR package creates new opportunities to address the complex challenges associated with conservation, improve scientific knowledge, and promote new ways to share better understanding of environmental issues. The package could also be used in other fields outside of ecology.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>The emergence of digital technologies, including Virtual Reality (VR), facilitates connections between the public and the scientific community and creates innovative pathways for environmental conservation research <ns0:ref type='bibr' target='#b17'>(Mazumdar et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Fauville et al., 2020)</ns0:ref>. In general, VR uses a combination of immersive technology via head-mounted devices, hand controllers and stereoscopic sound to replace natural sensory input with inputs from a computer system, such that a person is exposed to vivid virtual scenes <ns0:ref type='bibr' target='#b28'>(Riva et al., 2007)</ns0:ref>. In the field of ecology, VR experiences are used as a research tool to (1) increase understanding about the complexity of environmental issues associated with climate change, (2) influence empathy, and (3) promote environmental behavior changes <ns0:ref type='bibr' target='#b16'>(Markowitz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b10'>Herrera et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b22'>Nelson et al., 2020)</ns0:ref>. Despite promising results, the small number of published studies that have used VR approaches in ecology shows that there remain opportunities for further research in environmental education <ns0:ref type='bibr' target='#b27'>(Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Fauville et al., 2020)</ns0:ref> and for the development of programming tools that ease the integration of VR with applied science fields <ns0:ref type='bibr' target='#b25'>(Okamoto et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jangraw et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b32'>Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Loup et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b4'>Brookes et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bexter and Kampa, 2020)</ns0:ref>.</ns0:p><ns0:p>VR experiments for environmental conservation involve the elicitation of information while people are immersed in virtual scenes of natural ecosystems, such as 360-degree images. VR experiments include multimodal features of text, images, sounds and haptic feedback to create a rich and engaging environment to expose people to more complete and complex information <ns0:ref type='bibr' target='#b8'>(Fauville et al., 2020)</ns0:ref>.</ns0:p><ns0:p>In the fields of ecology and conservation, VR has the potential to support greater understanding of complex ecological processes such as coral bleaching <ns0:ref type='bibr' target='#b20'>(Minocha et al., 2017)</ns0:ref>, and new forms of thinking about ecosystem dynamics <ns0:ref type='bibr' target='#b9'>(Grotzer et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b27'>Queiroz et al., 2018)</ns0:ref>. VR experiments solve the difficulty of accessing ecosystems that are situated in remote locations and might be potentially dangerous or expensive to survey. 
Continuous access to these ecosystems opens up new opportunities for ecologists to fill the gaps in current scientific knowledge related to the paucity of data and ecological consequences of major changes in ecosystems health and species composition.</ns0:p><ns0:p>Combined with modern techniques in statistical ecology, elicited information collected from VR experiments can produce new types of ecological insights that complement environmental monitoring and conservation efforts. For example, VR experiments with 360-degree images were used to develop aesthetic indicators based on people's perception of the beauty of a coral reef <ns0:ref type='bibr' target='#b33'>(Vercelloni et al., 2018)</ns0:ref>.</ns0:p><ns0:p>They were also used to predict the presence or absence of emblematic species threatened by habitat loss and fragmentation, such as koalas (Phascolarctos cinereus, <ns0:ref type='bibr' target='#b14'>(Leigh et al., 2019)</ns0:ref>, Australian rock wallabies (Petrogale penicillata, <ns0:ref type='bibr' target='#b5'>(Brown et al., 2016)</ns0:ref>, and jaguars (Panthera onca) <ns0:ref type='bibr' target='#b2'>(Bednarz et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Mengersen et al., 2017)</ns0:ref>. In these experiments, opinions and knowledge were extracted from the responses given by experts, indigenous communities, scuba-divers and non-expert participants. This information was then incorporated into quantitative statistical models and used to improve understanding of complex ecological systems and to inform the development of future management and conservation strategies. Such strategies included the creation of a jaguar conservation corridor across the Amazon rainforest <ns0:ref type='bibr' target='#b34'>(Zeller et al., 2013)</ns0:ref> and supporting the Australian government in their reporting to UNESCO on the status of the Great Barrier Reef World Heritage Area <ns0:ref type='bibr' target='#b33'>(Vercelloni et al., 2018)</ns0:ref>.</ns0:p><ns0:p>VR experiments in ecology are often conducted using generic VR experiences such as Google Expeditions or pre-made 360-degree movies <ns0:ref type='bibr' target='#b18'>(McMillan et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b26'>Parmaxi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b22'>Nelson et al., 2020)</ns0:ref>, which are primarily developed for educational purposes <ns0:ref type='bibr' target='#b16'>(Markowitz et al., 2018)</ns0:ref>. These tools are not designed to be adapted for specific research purposes, therefore a collaboration with VR developers and accessibility to bespoke VR software is required to repurpose them for research applications <ns0:ref type='bibr' target='#b15'>(Loup et al., 2018)</ns0:ref>. Common VR programming environments such as C#/Unity 1 , C++/Unreal Engine 2 and React 360 3 require specific programming expertise, which ecologists and other scientists may lack.</ns0:p><ns0:p>The R2VR package development was motivated with the goal of providing greater access to VR experiments and the associated research benefits of using the R statistical software environment, a top ten popular programming language 4 extensively used by quantitative ecologists <ns0:ref type='bibr' target='#b13'>(Lai et al., 2019)</ns0:ref>. The purpose of R2VR is to implement and perform VR experiments, and record and analyse data for scientists while minimizing the need for different tools and expertise beyond the R language. We adopted a similar approach to that of <ns0:ref type='bibr' target='#b15'>Loup et al. 
(2018)</ns0:ref>, which allows non-VR developers to create VR experiences without the need for VR programming expertise. Their approach is based on the development of an open-access pipeline in which non-VR programmers can generate and use versatile VR scripts for their own purposes. The pipeline simplifies the development of VR environments by connecting game engines with VR assistance tools. Similarly, the R2VR package uses the WebXR to generate VR experiences for non-developers and to collect data from R. The technical challenges relate to (1) the ability for an R user to interact with a VR scene via WebSocket connections between R and a WebXR Device API (see R2VR description) and (2) the creation of a database to store and retrieve data from VR experiments, which, in the present case, is achieved via a Node API.</ns0:p><ns0:p>In this paper, we first describe the functions of R2VR to assist in the creation of VR experiments and its applications in environmental conservation research. We then present a comparative review of analogous studies from three different perspectives: the user, the developer and the quantitative ecologist.</ns0:p><ns0:p>Following this, we present three case studies in which we have implemented the R2VR package. The paper concludes with a general discussion.</ns0:p><ns0:p>The R2VR package opens up many new directions of enquiry among quantitative ecologists and software developers. These include the elicitation of expert information, the analysis of elicited responses and the validation of these data. It is beyond the scope of this paper to discuss these issues, although we point the reader to <ns0:ref type='bibr' target='#b6'>Choy et al. (2009)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Bednarz et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Brown et al. (2016)</ns0:ref>; <ns0:ref type='bibr'>Santos-Fernandez et al. (2020)</ns0:ref>; <ns0:ref type='bibr'>Santos-Fernandez and Mengersen (2020)</ns0:ref> for further reading.</ns0:p></ns0:div>
<ns0:div><ns0:head>R2VR DESCRIPTION</ns0:head><ns0:p>The R2VR package uses A-Frame 5 and WebXR Device API platforms 6 (Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>) for building VR experiences. These are open-source and make the VR functionality accessible to people with basic knowledge in web programming <ns0:ref type='bibr' target='#b29'>(Santos and Cardoso, 2019)</ns0:ref>. VR experiences are composed of assets (a asset) that can be an image, texture or model; entities (a entity) indicating the placeholder for an object; and scenes (a scene) composed of all the created objects. The R2VR package uses the A-Frame architecture which allows VR scenes to be composed and served directly from an R script. It This allows for the scene to be composed through the R interface and served into HTML and JavaScript which displays the VR scene in a WebXR environment (web browser and/or VR headset). There is a WebSocket connection between the Fiery server and the client which allows for R console commands to directly communicate with the user (e.g. display a question with the pop() function) in the VR environment. The recorded data is stored in an online MySQL database through a RESTful MVC NodeJS Application Programming Interface (APIRest). The Node API endpoints are made accessible for data fetching into R so all user responses can be analysed. There is an interoperable flow of data between R and VR through the implementation of the WebSocket and an API connections.</ns0:p><ns0:p>The R package is hosted by a Github repository: 7 and can be installed using the command:</ns0:p><ns0:formula xml:id='formula_0'>devtools::install_github('ACEMS/r2vr')</ns0:formula><ns0:p>The package functionality is composed of five generic functions, which are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The rendering of VR scenes in the web browser is started and stopped using the functions start() and stop(), respectively. The toggling of questions (on or off) is controlled by the pop() function and images are changed via the function go(). Answers are automatically saved within the online database Manuscript to be reviewed Computer Science hosted 8 . Data are retrieved using the function read(). From the users' point of view, there is no need for external installation and manipulation since this is automatically done by the R2VR package. The data collected during an experiment are then curated and can be visualised in real time from the R console. </ns0:p></ns0:div>
<ns0:div><ns0:head>COMPARATIVE STUDIES</ns0:head><ns0:p>In this section, we embed the R2VR package in the body of related literature. We first note the merit of VR compared with 2D technologies, then focus on a comparative review of other platforms. Noting the intended audience for R2VR, we address the latter from three perspectives: the user, the ecologist and the quantitative ecologist.</ns0:p></ns0:div>
<ns0:div><ns0:head>VR versus 2D technologies</ns0:head><ns0:p>The benefits of using 3D technologies including VR and Augmented Reality (AR) compared to 2D</ns0:p><ns0:p>environments have been widely demonstrated in the literature. <ns0:ref type='bibr' target='#b0'>Akpan and Shanker (2019)</ns0:ref> performed a meta-analysis to compare 3D and traditional 2D technologies and found enhanced performance and quality in 3D settings. VR technology has been found effective for training, gamification and collaboration;</ns0:p><ns0:p>resulting in more suitable experiences and motivated participants <ns0:ref type='bibr' target='#b12'>(Kavanagh et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>R2VR versus other platforms</ns0:head><ns0:p>The purpose of R2VR is to provide a malleable tool that the scientific community can access to easily create their own VR environments that collect data. The tool is particularly targeted to the very large community of R users, noting that R is a very popular programming languages in the world with an open-access to many statistical packages for data analyses.</ns0:p><ns0:p>The package R2VR uses the A-Frame platform 9 to create the VR environments and R to run the experiment and read the data. The A-Frame platform is a common choice for development of VR environments <ns0:ref type='bibr' target='#b7'>(Dibbern et al., 2018)</ns0:ref>, although other 3D game engines such as Unity are popular for the integration of VR experiments within other research fields, including neuroscience <ns0:ref type='bibr' target='#b32'>(Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jangraw et al., 2014)</ns0:ref> and human behaviour <ns0:ref type='bibr' target='#b4'>(Brookes et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Here, we compare positive and negative characteristics of R2VR (A-Frame embedded in R) and Unity from the perspective of the user, the developer and the quantitative ecologist (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>). The user is defined as a participant in the VR experiments with the duty of answering interview questions. The developer is the programmer that focus on providing the immersive environments. We define the category quantitative ecologist as researchers that use VR experiments as a tool to answer research questions. They hold the responsibility of developing interview questions and immersive environments, performing experiments, and collecting data for analyses.</ns0:p><ns0:p>The comparison was based on two sources: first hand experience and published literature. The first hand experience was based on elicited information from users that participated in the ecological case studies (see below), developers in Web-and Unity-programming, and quantitative ecologists that conceptualized these studies. Information from the published literature was extracted using a systematic reading on which positive and negative characteristics of VR experiments in applied sciences are discussed by the authors. The published literature was based on key references cited in the following papers <ns0:ref type='bibr' target='#b7'>(Dibbern et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Nebeling and Speicher, 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Nguyen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Santos and Cardoso, 2019)</ns0:ref>. R2VR provides the ability for researchers to customise the data retrieved from experiments which is made accessible into the statistical programming language, R. While this purpose may sounds similar in other studies, we did not find in the literature a tool that enables to create VR experiments, that is easy to use for non-programmers, that can generate data for use in data analyses, and is generic enough to be re-purposed. To our knowledge, existing VR packages for applied sciences require specialized hardware and substantial programming knowledge to customise the experiment beyond the case study for which the tool was developed. The tools that we found in the related literature require downloading, setting up, and interfacing with Unity, an additional step that many ecologists wish to avoid. 
Some of these packages focus on customising 3d objects from models with pre-fabricated environments <ns0:ref type='bibr' target='#b32'>(Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jangraw et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bexter and Kampa, 2020)</ns0:ref>, in comparison to R2VR which gives the freedom to change the environments, customise expected data responses, and interact with VR to change images or ask questions in real time. Some are customizable, but extensive knowledge in C# and Unity knowledge are needed to re-purpose them <ns0:ref type='bibr' target='#b4'>(Brookes et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/10</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:2:0:NEW 9 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head>ECOLOGICAL CASE STUDIES</ns0:head><ns0:p>The R2VR package was used to replicate parts of VR experiments developed in previous studies. We used this approach to demonstrate stepwise how to build a VR environment directly from R. The content for reproducing these case studies is composed of twelve 360-degree images, R2VR functions and R scripts with interview questions. These resources are included with the R2VR package. from volunteers, sightings estimated from thermal imagery collected using drones, and data from experts elicited using VR technology. Experts were immersed in 360-degree images of forested areas and asked about (1) the likelihood of koalas being present and (2) habitat suitability for koalas, with associated confidence in their estimates. Answers were manually recorded and transferred to CSV files. Probabilities obtained from elicitation were modeled using a beta regression and subsequently integrated with presenceabsence data obtained from volunteers and thermal images within logistic regression models. The results demonstrated that incorporating data elicited using VR into the statistical models produced better predictions of koala distribution and better characterisation of their habitats.</ns0:p><ns0:p>For the purpose of the present study, co-authors and collaborators were immersed in a sample of these 360-degree images of forested areas and were asked: 'Do you see a koala?'. They responded by selecting 'yes' or 'no' within the VR scenes (Fig. <ns0:ref type='figure' target='#fig_3'>2A</ns0:ref>). The associated data table koala is composed of five variables and populated with the user's responses (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). User names and answers were retrieved by using the starting and ending times of the elicitation (recordedOn in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>) in the R script. VR scenes were restarted for each participant between case studies. </ns0:p></ns0:div>
<ns0:div><ns0:head>Case study 2: Jaguar</ns0:head><ns0:p>A team of QUT researchers and collaborators conducted a study in Peru to help the development of a jaguar conservation corridor across South America <ns0:ref type='bibr' target='#b34'>(Zeller et al., 2013)</ns0:ref>. sightings (e.g. visual sighting, vocalisation, tracks, etc.) elicited from community members. The elicited information was then combined with visual and sound recordings to create immersive environments of the Peruvian jungle 10 and interview jaguar experts on several aspects of jaguar habitat <ns0:ref type='bibr' target='#b2'>(Bednarz et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Here, we used the R2VR package to show four 360-degrees images of the Peruvian jungle. Co-authors and collaborators were asked to consider characteristics known to affect jaguar habitat suitability, such as the presence/absence of water, jaguar tracks, jaguar scratches on trees, and dense vegetation. They could select more than one indicator by clicking on the associated boxes within the VR scenes (Fig. <ns0:ref type='figure' target='#fig_3'>2B</ns0:ref>). The jaguar data table (Fig. <ns0:ref type='figure' target='#fig_1'>S1</ns0:ref>) is composed of eight variables (four relating to presence/absence of indicators, four for metadata, id, image id, image file and, recordedOn -shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Case study 3: Coral reef</ns0:head><ns0:p>The reef VR experiments was originally developed to estimate indicators of coral reef aesthetics based on people's perception of reef beauty <ns0:ref type='bibr' target='#b33'>(Vercelloni et al., 2018)</ns0:ref>. Three groups of people (marine scientists, experienced divers and the general public) were interviewed while immersed in 360-degree images of the Great Barrier Reef, Australia. The presence/absence of variables that represent different ecological characteristics of coral reefs and their opinions about reef beauty were used to parameterize a logistic regression model in order to gain knowledge about what makes a coral reef beautiful. The model results suggested that a structurally complex reef with diverse colours had a positive influence on reef aesthetics.</ns0:p><ns0:p>We asked co-authors and collaborators to look at four virtual reefs and answer the question 'Do the live corals on the reef form structurally complex habitats?' (Fig. <ns0:ref type='figure' target='#fig_3'>2C</ns0:ref>). After the classification of the images, we obtained a data table called coral reef (Fig. <ns0:ref type='figure' target='#fig_1'>S1</ns0:ref>) composed of five variables with binary responses (0 for non-complex reef and 1 for complex reef) to the question.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The There are several advantages to embedding VR experiments in R. Connecting VR experimental results directly to R allows researchers to process, manipulate and visualize data, and access to the latest statistical methods in ecology. The generic, multipurpose and unique programming language of R2VR is key to increasing the uptake of VR as an accessible research tool for ecologists and other applied scientists.</ns0:p><ns0:p>Previous efforts to simplify VR development for non-programmers allowed the integration of VR in different research fields <ns0:ref type='bibr' target='#b11'>(Jangraw et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b32'>Vasser et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Brookes et al., 2019)</ns0:ref> but those remained focused on specific purposes and are not easily adaptable to other research questions. Ecologists have different needs, including the flexibility to modify VR scenes and collect data for analyses in a language that they can understand. These needs are different from users' and developers' perspectives.</ns0:p><ns0:p>The current implementation of R2VR uses A-Frame to create VR environments written in the R language. Another R package 'shinyframe' uses this combination to visualise 3D plots 11 . Further developments could include using Unity instead of A-Frame within the R2VR package. Similarly, VR environments could be coded in a different language other than R but familiar to the ecologists. For example, R2VR could be rewritten for Python 12 which is an even more popular language among applied scientists 13 , but not as developed in terms of statistical capabilities. The availability of such tools will greatly help to fulfill the need of ecologists and ultimately increase the adoption of immersive experiments in applied science.</ns0:p><ns0:p>It is increasingly being shown that VR technology facilitates the public involvement in environmental conservation by creating engaging learning environments <ns0:ref type='bibr' target='#b27'>(Queiroz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Fauville et al., 2020)</ns0:ref>.</ns0:p><ns0:p>R2VR provides a fundamental framework for citizen science projects that could use VR to perform different activities including online data collection, data extraction from existing records, and knowledge Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>sharing <ns0:ref type='bibr' target='#b17'>(Mazumdar et al., 2018)</ns0:ref>. It can also facilitate a natural link between VR experiments and citizen science projects by offering an open-access tool for research scientists to build their own VR environments.</ns0:p><ns0:p>In this way, members of the public can perform tasks in the same perceptual environment as might an ecologist and collect useful data. The web framework that is associated with the R2VR package means that online citizen science projects could be developed at low cost. However, modifications of the R2VR server (from local to web server) and the automation of package functions are required to support this goal.</ns0:p><ns0:p>To date, R2VR has only been used to elicit information from static 360-degree images without audio.</ns0:p><ns0:p>However, the A-Frame software offers additional VR experiences, such as the inclusion of soundscapes 14 and 3D mesh from Geographical Information System layers 15 that could be easily integrated into the R2VR workflow. While this development will offer a greater level of virtual immersion, further research and development is required to understand how to increase the knowledge gained from VR experiments <ns0:ref type='bibr' target='#b8'>(Fauville et al., 2020)</ns0:ref>. By having the capability to design their own experiments or being interviewed, experts in ecology may enhance the potential of VR to support new scientific discoveries due to the priming of visual memories from known environments and ecological knowledge <ns0:ref type='bibr' target='#b5'>(Brown et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b33'>Vercelloni et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Further package developments will also enhance the access and security of the database. The current db4free database connected to R2VR is a free hosting provider. In this implementation we used db4free</ns0:p><ns0:p>to avoid the payments associated with data hosting provider and the burden of installing a local database, but we acknowledge that this choice may cause other issues. We suggest that users check the db4free website to ensure that it is a suitable host provider for their experiments, locate the codes within the package that connect to the online database and modify them if necessary. We also recommend that they regularly save their data on their own machine using the read() function from the R2VR package and write.csv() or save() functions from R. Additional security improvements would include adding authentication/authorization to secure API endpoints. Whilst the Node server is using HTTPS, the R server is currently using the HTTP protocol. The current implementation contains anonymous and non-sensitive data. However, improvements to the Fiery server's security may be beneficial for use cases with non-anonymous sensitive data. Another development goal is to support the creation of more flexible data tables.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In conclusion, we have demonstrated that it is now possible to create VR experiments in a seamless statistical programming environment that is highly popular and hence accessible among ecologists. This tool offers new horizons for ecological research as data generated from VR experiments can be used by researchers themselves, but might also be integrated with information collected by other technologies.</ns0:p><ns0:p>This provides a new tool for filling in data gaps in ecosystems with poor data density or coverage, and allowing for a better understanding of ecological systems. R2VR is also applicable to other ecosystems as well as directly generalisable to non-ecological VR experiments.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>then displays them in a web browser via a local IP (Internet Protocol). VR experiments are performed by typing functions directly into the R console. VR experiments can be performed from any device connected to the same network as R2VR, including smartphones, laptops and VR headsets (e.g. HTC Vive, Oculus Rift, Oculus Quest, Oculus Go, Google Daydream, Samsung GearVR and HTC Vive Focus). Once the VR scenes have been created data can be collected from users immersed in the scene, stored in an online database, and retrieved directly in R. The R2VR package does this via a RESTful Node.JS Application Programming Interface (APIRest, Fig. 1). Instructions on setting up the package and examples of VR scene creation using 360-degree images are given in the next section.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Workflow of the R2VR package. A function package is used to start a Fiery server from the R console and render WebXR Device API scenes via harnessing Mozilla's A-Frame framework. This allows for the scene to be composed through the R interface and served into HTML and JavaScript which displays the VR scene in a WebXR environment (web browser and/or VR headset). There is a WebSocket connection between the Fiery server and the client which allows for R console commands to directly communicate with the user (e.g. display a question with the pop() function) in the VR environment. The recorded data is stored in an online MySQL database through a RESTful MVC NodeJS Application Programming Interface (APIRest). The Node API endpoints are made accessible for data fetching into R so all user responses can be analysed. There is an interoperable flow of data between R and VR through the implementation of the WebSocket and an API connections.</ns0:figDesc><ns0:graphic coords='4,249.31,324.74,198.43,136.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>5 https://aframe.io/ 6 https://www.w3.org/TR/webxr/ 7 https://github.com/ACEMS/r2vr3/10PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:2:0:NEW 9 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2. Case studies developed using the R2VR package with A) Koala, B) Jaguar and C) Coral reef studies. The screenshots show the questions that were asked as part of the framework testing. Coral reef images were provided by Underwater Earth / XL Catlin Seaview Survey / Christophe Bailhache. Short videos of the virtual reality scenes can be seen at: https://youtu.be/el08HKysZX8.</ns0:figDesc><ns0:graphic coords='7,141.73,140.31,413.56,137.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Part of the study involved modelling the distribution of jaguars (Panthera onca) using jaguar sightings from the Shipibo and Conibo indigenous 6/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:2:0:NEW 9 Apr 2021) Manuscript to be reviewed Computer Science communities. Mengersen et al. (2017) considered several occupancy and abundance models, which included environmental covariates to account for uncertainty associated with different types of jaguar</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>package R2VR offers open access to VR experiments for the scientific community. It supports the development of new research tools by creating a more complex environment for participants and lowers the technical barriers for ecologists by easing the learning curve for VR programming and uptake VR technologies.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Description of the main functions included in the package. See the help files for more details about the function arguments.To ensure that the framework is fully operational, VR experiments were conducted using the R2VR package installed on local computers at Queensland University of Technology (QUT). Co-authors and several collaborators were immersed into three virtual ecosystems composed of four different 360-degree images and then asked to answer questions using a Samsung GearVR headset and Oculus Quest. The data collected during these experiments were used to create the visualisations in the R2VR package vignette.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Function</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>start() starts the VR server on the web browser</ns0:cell></ns0:row><ns0:row><ns0:cell>end()</ns0:cell><ns0:cell>kills the VR server</ns0:cell></ns0:row><ns0:row><ns0:cell>pop()</ns0:cell><ns0:cell>displays the question on the image</ns0:cell></ns0:row><ns0:row><ns0:cell>go()</ns0:cell><ns0:cell>jumps to another image</ns0:cell></ns0:row><ns0:row><ns0:cell>read()</ns0:cell><ns0:cell>retrieves the data from the database</ns0:cell></ns0:row></ns0:table><ns0:note>The 'Interaction' vignette is included in the R2VR package and contains instructions on how to reproduce the case studies. It is also displayed in the Supplementary Material.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparisons R2VR (A-Frame embedded R) and Unity.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>R2VR</ns0:cell><ns0:cell>Unity</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Description of the data obtained from the elicitation stored online in table koala.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>id</ns0:cell><ns0:cell>classification id</ns0:cell></ns0:row><ns0:row><ns0:cell>image id</ns0:cell><ns0:cell>unique identifier of the image</ns0:cell></ns0:row><ns0:row><ns0:cell>image file</ns0:cell><ns0:cell>image's file name</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>binary response 0 (absence) or 1 (presence)</ns0:cell></ns0:row><ns0:row><ns0:cell>recordedOn</ns0:cell><ns0:cell>date-time of the classification event</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://unity3d.com 2 https://www.unrealengine.com/en-US/ 3 https://opensource.facebook.com/ 4 https://www.tiobe.com/tiobe-index/ 2/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:2:0:NEW 9 Apr 2021)Manuscript to be reviewed</ns0:note>
<ns0:note place='foot' n='8'>https://www.db4free.net/ 9 https://aframe.io/ 4/10 PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55483:2:0:NEW 9 Apr 2021)</ns0:note>
</ns0:body>
" | "Rebuttal letter 2
Connecting virtual reality and ecology: a new tool to run seamless immersive
experiments in R.
April 9, 2021
We thank the editor and reviewers for their second reviews. Below we provide a detailed description of
the changes we made in response to each comment (in gray) and in the revised manuscript (in blue).
Best Regards,
Julie Vercelloni
Editor
Please make sure to highlight how this paper falls into the aims and scope of PeerJ.
The paper falls within the aims and scope of PeerJ, as the research underpinning the R2VR package is rigorous, novel and of wide interest in the areas of Computer Science. R2VR addresses the current lack of a seamless tool for creating VR experiments that is easy to use for ecologists and other applied scientists, can generate data for use in statistical analyses, and is generic enough to be repurposed. We also propose further extensions, including the use of different VR technologies that could be embedded in programming software other than R. The important conclusion of our study is that R2VR better fulfills the needs of applied scientists and has the potential to increase the uptake of VR for research purposes and promote cross-disciplinary collaboration.
Reviewer 2
L45 to 52: In my review I criticized the narrow interpretation of VR. To say many current VR research
applications use only a subset of features, meaning multisensory stimuli, is an unfounded statement. For
example, a recent survey paper concluded 84% of application use at least one additional sensory modality
(haptics) to vision [1]. Please remove this statement or establish a basis for it.
[1]: Melo, M., Gonçalves, G., Monteiro, P., Coelho, H., Vasconcelos-Raposo, J., Bessa, M. (2020).
Do Multisensory stimuli benefit the virtual reality experience? A systematic review. IEEE Transactions on
Visualization and Computer Graphics, 1–1. https://doi.org/10.1109/TVCG.2020.3010088
Thank you for your comment. In the revised version, we removed the following sentences:
However, many current VR research applications use only a subset of these features, depending on the
available technology. For example, audio is often excluded, despite the growing body of literature showing
that it increases the sense of presence and realism (Kern and Ellermeier, 2020). Notwithstanding this,
even the restricted VR environment settings have the potential to produce better qualitative and quantitative
elicited information.
L144 to 171, 225 to 240, and Table 2: These additions are very welcome. However, they require
further refinement. For example, the authors make the case of the existing tools requiring knowledge of
C and Unity. This would be a valid stance to take if it weren’t for the fact that the authors then discuss
the use of technologies and languages beyond base R. Based on my understanding of the r2vr framework–
so please correct me if I am wrong–for end users to do anything more than pop-up existing assets in a VR
environment, they would need to familiarize themselves with other technologies and languages e.g., python,
HTML.
The goal of discussing the use of other VR technologies and languages (discussion section, L 242-247 in the revised manuscript) is to highlight potential future expansions of the r2vr concept to reach a broader scientific community that does not necessarily use R. We assume here that ecologists and other applied scientists will use the language that they are already familiar with. Based on this concept, future packages could propose different combinations of VR and analytical software, including A-Frame embedded in Python, Unity embedded in R and Unity embedded in Python. To clarify this point, we modified the following paragraph in the revised manuscript:
L 242-249: The current implementation of R2VR uses A-Frame to create VR environments written in
the R language. Another R package “shinyframe” uses this combination to visualise 3D plots 1 . Further
developments could include using Unity instead of A-Frame within the R2VR package. Similarly, VR
environments could be coded in a different language other than R but still familiar to the ecologists. For
example, R2VR could be rewritten for Python 2 which is an even more popular language among applied
scientists 3, but not as developed in terms of statistical capabilities. The availability of such tools will
greatly help to fulfill the need of ecologists and ultimately increase the adoption of immersive experiments
in applied science.
In my original review, my point was for the authors to state clearly ”convey *why* their system is
both needed and *how* it compares to, or better yet improves upon, prior art/existing solutions.” Based
on my reading of this rebuttal, the business case behind r2vr is to encourage uptake of VR technologies by
ecologists through the strategy of easing the learning curve, e.g., the non-power users I referred to in my original
review. If this is the case, then the authors should state this clearly in the manuscript.
Yes, several statements throughout the manuscript highlight the novelty of r2vr in easing applied scientists' access to VR technologies. We made this aspect even clearer by adding the following sentences to the revised manuscript:
L 20-22: We present R2VR, a package for implementing and performing VR experiments in R with the
aim of easing the learning curve for applied scientists including ecologists.
L 143-144: The purpose of R2VR is to provide a malleable tool that the scientific community can access
to easily create their own VR environments that collect data.
L 229-232: The package R2VR offers open access to VR experiments for the scientific community. It
supports the development of new research tools by creating a more complex environment for participants
and lowers the technical barriers for ecologists by easing the learning curve for VR programming and the uptake of VR technologies.
With respect to Table 2, there is a lot of information justifying its inclusion in the rebuttal that is missing
from the proposed changes to the manuscript e.g., the first paragraph in gray font on page 6 of the rebuttal.
Can the authors please include this detail in the manuscript along with detail regarding how pros and cons
1 https://cran.r-project.org/web/packages/shinyaframe/shinyaframe.pdf
2 https://www.python.org/
3 https://www.tiobe.com/tiobe-index/
were elicited from the literature?
Following the reviewer’s suggestions, the following paragraphs were modified in the revised manuscript:
L 143-146: The purpose of R2VR is to provide a malleable tool that the scientific community can access
to easily create their own VR environments that collect data. The tool is particularly targeted to the very
large community of R users, noting that R is a very popular programming language in the world with open access to many statistical packages for data analyses.
L 166-178: R2VR provides the ability for researchers to customise the data retrieved from experiments
which is made accessible into the statistical programming language, R. While this purpose may sound similar in other studies, we did not find in the literature a tool that enables the creation of VR experiments, that is
easy to use for non-programmers, that can generate data for use in data analyses, and is generic enough to
be re-purposed. To our knowledge, existing VR packages for applied sciences require specialized hardware
and substantial programming knowledge to customise the experiment beyond the case study for which the
tool was developed. The tools that we found in the related literature require downloading, setting up, and
interfacing with Unity, an additional step that many ecologists wish to avoid. Some of these packages focus
on customising 3d objects from models with pre-fabricated environments [8, 4, 1], in comparison to R2VR
which gives the freedom to change the environments, customise expected data responses, and interact with
VR to change images or ask questions in real time. Some are customizable, but extensive knowledge of C# and Unity is needed to re-purpose them [2].
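As an illustration of the kind of downstream analysis this passage refers to, the sketch below shows how classifications retrieved with read() might be summarised and modelled in R. The column names follow the koala data table described in the manuscript (image id, binary response, recordedOn), but the exact structure of the returned data frame, and the simple model fitted, are assumptions for illustration only, not the authors' analysis.

# Hypothetical sketch; column names are assumed from the manuscript's data table.
koala <- read()                          # fetch the koala classifications into R

# proportion of "koala present" answers per image
aggregate(binary_response ~ image_id, data = koala, FUN = mean)

# illustrative logistic regression of presence on image identity,
# in the spirit of the models described in the case studies
fit <- glm(binary_response ~ factor(image_id), data = koala, family = binomial)
summary(fit)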
L 159-165: The comparison was based on two sources: first hand experience and published literature.
The first hand experience was based on elicited information from users that participated in the ecological
case studies (see below), developers in Web- and Unity- programming, and quantitative ecologists that
conceptualized these studies. Information from the published literature was extracted using a systematic
reading on which positive and negative characteristics of VR experiments in applied sciences are discussed
by the authors. The published literature was based on key references cited in the following papers [3, 5, 6, 7].
References
[1] A. Bexter and B. M. Kampa. Mazemaster: an open-source python-based software package for controlling virtual reality experiments. bioRxiv, 2020.
[2] J. Brookes, M. Warburton, M. Alghadier, M. Mon-Williams, and F. Mushtaq. Studying human behavior
with virtual reality: The unity experiment framework. Behavior research methods, pages 1–9, 2019.
[3] C. Dibbern, M. Uhr, D. Krupke, and F. Steinicke. Can webvr further the adoption of virtual reality?
Mensch und Computer 2018-Usability Professionals, 2018.
[4] D. C. Jangraw, A. Johri, M. Gribetz, and P. Sajda. Nede: An open-source scripting suite for developing
experiments in 3d virtual environments. Journal of neuroscience methods, 235:245–251, 2014.
[5] M. Nebeling and M. Speicher. The trouble with augmented reality/virtual reality authoring tools.
In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct),
pages 333–337. IEEE, 2018.
[6] V. T. Nguyen, R. Hite, and T. Dang. Web-based virtual reality development in classroom: From learner’s
perspectives. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality
(AIVR), pages 11–18. IEEE, 2018.
[7] S. G. Santos and J. C. Cardoso. Web-based virtual reality with a-frame. In 2019 14th Iberian Conference
on Information Systems and Technologies (CISTI), pages 1–2. IEEE, 2019.
[8] M. Vasser, M. Kängsepp, M. Magomedkerimov, K. Kilvits, V. Stafinjak, T. Kivisik, R. Vicente, and
J. Aru. Vrex: an open-source toolbox for creating 3d virtual reality experiments. BMC psychology,
5(1):1–8, 2017.
" | Here is a paper. Please give your review comments after reading it. |